NLP scans my Ektar 100 photos red and black for some reason?

When posting about Image Conversion Issues, please include the following information:

  1. Which version of Negative Lab Pro are you using?

V3.02

  2. If using DSLR scanning, please include: 1) camera make/model, 2) lens make/model, 3) light source make/model

Sony A7RIV, Olympus Zuiko 50mm F3.5 Macro, CS-LITE Camera Scanning Light Source

  3. Please add the conversion you are having difficulty with, along with a short description of what you are seeing wrong with it.

NLP has been having this issue where I get crazy color casts in my images. Sometimes I have to reduce the black point to -20 to make the image look less crunchy; that fixes the problem in some cases, but other times I can't seem to salvage the picture. I have tried including the film border and using roll analysis; neither seems to help.

  4. It’s not required, but it’s very helpful if you can provide a link to the original RAW or TIFF negative before conversion. If you don’t want to share this file publicly, you can also email it to me at nate@natephotographic.com

https://drive.google.com/drive/folders/1Z7Wg_DXkyKqr598e-VYCf_Eak1pAWW4z?usp=sharing

Welcome to the forum, @EthanK

Tested your images with the following settings:
Noritsu colour model, Pre-Saturation set to 1, margin = 25%

From top:
  • Originals
  • Converted, no WB, no crop, Roll Analysis
  • Converted, no WB, slot crop, Roll Analysis
  • Converted, Daylight WB, slot crop, no Roll Analysis

Depending on these basic settings, conversion results can be quite different. Sometimes, Roll Analysis can work wonders, but it can err too. Same for WB, and cropping the brightest parts off changes results too.

My lessons learned are that NLP works most reliably with fairly multicoloured images, and that it needs special attention with images that are low in one of R, G or B. Anything close to burnt-out should be cropped away before converting. In cases where the conversion doesn't give me results I want, or want to use as starting points, I test systematically - if the images seem worth it.
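To illustrate why cropping the near-burnt areas matters - purely as a toy model, not NLP's actual algorithm - imagine a conversion that derives its per-channel black and white points from whatever pixels it is given. A bright clear-film margin or a blown highlight drags those points around, and the whole result shifts with them:

```python
# Toy illustration (not NLP's actual algorithm) of why bright borders or
# blown areas skew a conversion that derives per-channel end points from
# the pixels it is given.
import numpy as np

def toy_invert(neg, lo_pct=0.5, hi_pct=99.5):
    """Invert a linear RGB negative by stretching each channel between
    its low/high percentiles and flipping it. Purely illustrative."""
    out = np.empty_like(neg, dtype=np.float64)
    for c in range(3):
        lo = np.percentile(neg[..., c], lo_pct)   # "black" end of this channel
        hi = np.percentile(neg[..., c], hi_pct)   # "white" end of this channel
        norm = np.clip((neg[..., c] - lo) / max(hi - lo, 1e-6), 0, 1)
        out[..., c] = 1.0 - norm                  # negative -> positive
    return out

# Synthetic "negative": an image area plus a near-clipped clear-film margin.
rng = np.random.default_rng(0)
image_area = rng.uniform(0.2, 0.7, size=(100, 100, 3))
margin = np.full((100, 20, 3), 0.95)

with_margin = toy_invert(np.concatenate([image_area, margin], axis=1))[:, :100]
without_margin = toy_invert(image_area)

# The bright margin drags the high percentile up in every channel, so the
# very same image pixels come out with different tonality/colour balance.
print("mean difference:", np.abs(with_margin - without_margin).mean())
```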

I have a series of thin negatives that I scanned with UniWB settings, and I get the easiest-to-work-with results without setting WB. Converting photos from the same roll with Roll Analysis improves some images, while others get off track.
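In the same toy terms - and again only as an assumption about the general idea, not NLP's internals - the difference between per-frame analysis and Roll Analysis is roughly whether each frame gets its own channel end points or shares them with the rest of the roll:

```python
# Toy sketch (again, not NLP's internals) of per-frame analysis versus a
# roll-wide, "Roll Analysis"-style analysis: with shared end points a thin
# (low-density, low-contrast) frame is no longer stretched to full range
# on its own.
import numpy as np

def channel_endpoints(frames, lo_pct=0.5, hi_pct=99.5):
    """Low/high percentiles per channel over one or more frames."""
    stack = np.concatenate([f.reshape(-1, 3) for f in frames], axis=0)
    return np.percentile(stack, lo_pct, axis=0), np.percentile(stack, hi_pct, axis=0)

def invert(frame, lo, hi):
    return 1.0 - np.clip((frame - lo) / np.maximum(hi - lo, 1e-6), 0, 1)

rng = np.random.default_rng(1)
well_exposed = rng.uniform(0.15, 0.85, size=(80, 120, 3))
# Thin negative: little density, values crowd toward the bright clear-film end.
thin = rng.uniform(0.60, 0.85, size=(80, 120, 3))

per_frame = invert(thin, *channel_endpoints([thin]))
roll_wide = invert(thin, *channel_endpoints([well_exposed, thin]))

# Per-frame analysis stretches the thin frame much harder than the shared,
# roll-wide end points do - which can help or hurt, depending on the frame.
print("contrast (std), per-frame:", per_frame.std(), "roll-wide:", roll_wide.std())
```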

Summary: Test, learn - and be prepared for the occasional surprise.

I like your pics btw…

Digitizer,
Thank you so much for responding and giving me new insights on how this program works. I have been working with film for a few years now, but I am still a bit of a newbie when it comes to getting professional results from DIY scanning. It's crazy how different your scans are by tweaking a few small settings. You said you had some thin negatives you scanned with “UniWB settings.” Do you mean scanning all the negatives with the same white balance instead of leaving them unaltered? Thanks for the compliment about my pictures by the way, I am really trying to up my game with my film landscape work and I am hoping these pics from Great Smoky Mountains NP might make some nice prints :slight_smile:

There are two parts to this:

  1. When I took the pictures of the negatives, I set the camera to UniWB, and you can read about UniWB in all the articles published out there, e.g. here.
  2. Normally, we set WB from the film rebate before converting. I found that thin (underexposed) negatives cause NLP to produce positives that are easier to adjust…when I don't change WB and start conversions with the hulkish, green-looking images that UniWB creates (see the sketch below).
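In case the UniWB part is unclear, here is a minimal sketch of what white-balance multipliers do to the raw channels; the numbers are invented for illustration, and the real multipliers depend on the camera and the light source:

```python
# Minimal sketch of what white-balance multipliers do to raw channel data.
# The values are invented; real multipliers depend on camera and light.
import numpy as np

raw_rgb = np.array([0.28, 0.55, 0.33])   # hypothetical raw R, G, B of a grey patch

daylight_wb = np.array([2.0, 1.0, 1.7])  # typical shape: boost R and B relative to G
uni_wb      = np.array([1.0, 1.0, 1.0])  # "UniWB": leave the raw data untouched

print(raw_rgb * daylight_wb)  # roughly equal channels -> looks neutral
print(raw_rgb * uni_wb)       # green dominates -> the "hulkish green" look
```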

Anyways, the most important things to remember with NLP are that it

  • produces a starting point - don’t expect a perfect conversion of anything you throw in
  • can deliver easier-to-work-with positives if we deviate from the recommendations of the guide
  • can take a few trials and iterations to get a positive that is easy to adjust.

Hi Ethan,

I downloaded your file 11378 to test whether anything unusual is happening when implementing the normal NLP workflow, and I determined that there is not. The procedure I adopted is as follows:

  1. Grey-balance the neg in Lr by clicking the WB eyedropper on the border at the right edge of the image content. That neutralizes the “orange mask” (see the sketch after this list).
  2. Crop out all border material in Lr leaving only the image content visible.
  3. Open NLP and click on convert.
  4. The first result you see is the initial NLP rendition, which is pretty good I think (without having seen the scene - so by “pretty good”, I mean “credible”). I show all my NLP edit settings as they were upon this conversion. The conversion settings are the usual Camera Basic for NLP.
  5. Then in Lr I made a rough sky mask and tweaked exposure and colour balance in the sky to strengthen it a bit - maybe I overdid it, but I wanted to see how the image reacted. This is the second result you see.
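Conceptually, step 1 amounts to scaling the channels so that the sampled film rebate becomes neutral, which removes the orange mask's overall cast before conversion. A toy sketch with invented values (Lightroom actually works through temperature/tint, so this only shows the idea):

```python
# Toy illustration of step 1 (grey-balancing on the film rebate): sample the
# unexposed border to get the colour of the orange mask, then scale each
# channel so that sample becomes neutral. All values are invented.
import numpy as np

negative = np.array([
    [[0.80, 0.55, 0.35], [0.60, 0.40, 0.28]],   # hypothetical linear RGB pixels
    [[0.78, 0.52, 0.33], [0.50, 0.35, 0.25]],
])
border_sample = np.array([0.80, 0.55, 0.35])     # "orange mask" colour from the rebate

balanced = negative / border_sample              # the border now maps to [1, 1, 1]
print(balanced)
```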

So, bottom line: it's not clear to me why you should be having problems. The negs seem eminently treatable using a very basic procedure.

Mark


@Mark_Segal, what were your pre-saturation and colour model settings?

Color Model “Basic”; Pre-saturation 4

@Mark_Segal's sky edit and first tab info made me test again:

  • Tuning the sky in the negative
  • Using different colour models
  • Using different pre-saturation levels

From left: pre-saturation = 1, 2, …
From top: Colour model = None, Basic, Frontier, Noritsu

With this negative, results look fairly similar (at least here) but the histograms look different. Also, the different colour models seem to be rendered differently. “None” creates the most distinctive sky details. Here are the views of the different models converted with pre-saturation default values:

The interesting thing is that none of these renditions are necessarily “correct” out of the box and none of them are poor, so the challenge is to determine which is closest to the intent for the scene (only Ethan would know that) and would therefore need the least amount of post-conversion tinkering.
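For anyone wondering what the pre-saturation control does at a conceptual level: as far as I understand the guide, it is the amount of saturation applied to the negative before conversion. A rough sketch of that general idea - an assumption for illustration only, not NLP's actual implementation:

```python
# Rough, assumed sketch of a "pre-saturation"-style adjustment: scale each
# pixel's colour away from (or toward) its own luminance before the negative
# is inverted. This is NOT NLP's actual code, just the general idea.
import numpy as np

def scale_saturation(rgb, factor):
    """Scale chroma around per-pixel luminance (Rec. 709 weights)."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    return np.clip(luma[..., None] + factor * (rgb - luma[..., None]), 0.0, 1.0)

neg_pixel = np.array([[0.70, 0.50, 0.30]])       # one hypothetical negative pixel
for factor in (1.0, 2.0, 3.0):                   # lower vs higher "pre-saturation"
    print(factor, scale_saturation(neg_pixel, factor))
```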

As a BTW, I also noticed that on magnifying the converted photo to 100%, the sharpness leaves something to be desired. The rendition of film grain is nice and sharp, and quite uniformly so - clearly visible upon darkening the exposure and upping Clarity in Lr to accentuate it (viz the attached screen grab at 200%); this is quite a fine-grained film for colour neg material. So it seems any unsharpness happened when the camera made the original photograph. This is not unusual - we can be very well set up to scan the film beautifully, only to learn upon major magnification that our photos were not as sharp as we may have thought they'd be.

Yes, and “correct” is not something that I'm after. My goal is to get a starting point that is close to what I want so that I don't have to tweak too much. In that respect, all the conversions you and I did are usable and far from the reddish conversions in the OP. I've tried to reproduce that look and never even got close. I have no idea what causes it.

During my tests, I often remove images from LrC, optimise the catalog and import the photos again. This provides a clean slate for further testing, and I’d recommend that @EthanK do the same.

Note that I'm still on LrC 13.4 - and that Vincent van Gogh never cared for sharp and correct; that is not what made him crazy, but what made him a highly regarded artist. Sadly, he did not live to know it.

Yup, really unclear from our testing what is causing the OP’s reddish results. I can’t help thinking it’s pilot error somewhere along the line; that is why I provided my procedure steps and settings. Ethan should jump in at this point and review with us the procedure and settings he used that produced these results. We may find the solution to his problem therein.

I'm on Lr 13.2 on macOS 12.6, using NLP 3.

As for sharpness, OT of course, but I treat photography and painting as different art forms and believe photos should be sharp, at least for the main subject matter unless otherwise intended. It’s still considered a fundamental property of the medium no matter what we choose to do about it when we’re being creative. :slight_smile: But of course, “chacun à son goût”.