Until now I’ve been using NLP mostly on black-and-white negatives with good results. Recently I tried shooting and converting color negatives, and while some were fine, others gave me results with peculiarly uneven tones.
I started my troubleshooting process by shooting a white-balanced, exposure-metered image on my scanning setup with no film in the gate. This resulted in the attached image named ‘positive-1200.’ It may not be perfectly even, but it’s pretty close, as you can see by the histogram in the upper-right corner.
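For anyone who wants to put a number on "pretty close to even" instead of eyeballing the histogram, a rough numpy/Pillow sketch like this works; the filename is a placeholder for the blank-frame export, and the per-channel coefficient of variation is just one convenient evenness metric:

```python
# Rough evenness check for a blank backlight frame.
# The filename is a placeholder for an 8-bit export of the capture.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("positive-1200.jpg"), dtype=np.float64)

for ch, name in enumerate("RGB"):
    plane = img[..., ch]
    cv = plane.std() / plane.mean()  # coefficient of variation
    print(f"{name}: mean={plane.mean():.1f}  std={plane.std():.2f}  CV={cv:.4f}")
```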
I then converted the test image using NLP at a variety of settings, all of which produced results similar to the attachment named ‘converted-1200’ – a very uneven conversion with a vaguely uterus-shaped pattern of tones.
Is this an expected result caused by NLP’s application of conversion logic to an almost completely uniform “image,” or is some kind of error occurring? If this is how NLP is intended to behave, I’m not sure how to proceed with my troubleshooting, since I won’t know whether anomalies are coming from my process or from its interaction with NLP.
…kind of. NLP needs something to work on. With an input histogram that narrow, output can get unpredictable and strange.
Look at the R, G, and B tone curves of the converted file. The curves are steep, which means that slight differences are exaggerated. Whether the output is meaningful or not, I'll let you decide.
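NLP's actual conversion logic is proprietary, so this is only a toy stand-in, but a simple min/max stretch shows the mechanism: when the histogram is only a few levels wide, the stretch gain is enormous and tiny backlight ripples become the whole picture.

```python
# Toy stand-in for any auto-levels step (not NLP's actual algorithm):
# stretching a nearly uniform frame to full range amplifies tiny ripples.
import numpy as np

rng = np.random.default_rng(0)
# Simulated blank-frame capture: level ~180 with +/-2 levels of
# backlight unevenness, i.e. a histogram only a few levels wide.
flat = 180 + 2 * rng.standard_normal((100, 100))

# Simple min/max stretch to the full 8-bit range.
stretched = (flat - flat.min()) / (flat.max() - flat.min()) * 255

print(f"input range:  {flat.max() - flat.min():.1f} levels")
print(f"output range: {stretched.max() - stretched.min():.1f} levels")
# The few-level ripple now spans 0-255: invisible unevenness becomes
# the dominant "subject" of the conversion.
```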
So presumably if I shot a real photo with a very limited range of colors and tones — maybe a gray cat in the fog? — NLP would likewise try to expand it and produce a result different from what I would expect?
That explanation jibes with the situations in which I’ve observed the issue: a roll that had been in a camera for a couple of years and which had a very “gray” overall appearance, and some underexposed frames from a point-and-shoot camera in which all the tones were faint. “Meaty”-looking negatives made with more ample exposure on fresher films have not exhibited this effect.
I’m still left wondering what I should do if I want to photograph gray cats in the fog and have them come out looking like gray cats in the fog… use transparency film, I suppose!
Follow-up: I went back and reviewed some problem vs. non-problem negatives and confirmed that negatives with a narrow tonal range are more likely to yield bizarre results. My theory is that this happens because NLP's expansion of the negative's tonal range has the side effect of magnifying minor film variations and processing flaws (or, in the case of my original example, the very slight unevenness of my backlight illumination).
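To put rough numbers on that theory (my own back-of-the-envelope arithmetic, not anything from NLP's internals): if a full-range stretch maps the input histogram onto all 255 levels, the amplification of any flaw scales inversely with histogram width.

```python
# Back-of-the-envelope: a full-range stretch amplifies a flaw by
# roughly 255 / (input histogram width).
for width in (30, 60, 120, 240):  # input tonal range in 8-bit levels
    gain = 255 / width
    print(f"histogram {width:3d} levels wide -> flaws amplified ~{gain:.1f}x")
```

That squares with the measurement below: halving the histogram width doubles how hard every flaw gets pushed.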
How narrow is too narrow? I had no direct way to measure this except by the blunt approach of measuring the widths of Lightroom histograms with a caliper! I found that my no-problem negatives had a width roughly twice that of my biggest-problem negatives; in the attachments below, the top screenshot had big problems, while the bottom screenshot had no problems.
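If anyone wants to retire the caliper, the same width can be read off digitally. This is just a sketch with placeholder filenames, and the 1st/99th percentile cutoffs are an arbitrary choice to ignore stray outlier pixels:

```python
# Digital stand-in for the caliper: histogram width as the spread
# between the 1st and 99th percentiles of luminance.
import numpy as np
from PIL import Image

def histogram_width(path, lo=1, hi=99):
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    p_lo, p_hi = np.percentile(gray, [lo, hi])
    return p_hi - p_lo

for path in ("problem-neg.jpg", "no-problem-neg.jpg"):  # placeholder names
    print(f"{path}: {histogram_width(path):.0f} levels")
```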
Try a shot of a negative that actually contains something real… it makes more sense than converting flat grey objects. I've tried converting blank frames, and it's a waste of time unless you want to see the structure of the backlight (a Kaiser plano in this case).
Upper left: shot of a backlight.
Upper right: same as above with a very steep tone curve, revealing a diamond-shaped pattern -> the backlight LEDs are arranged in a diamond pattern (and a lot of dust).
Lower row: NLP conversions of the above.
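For anyone who wants to reproduce the steep-curve trick outside Lightroom, a high-contrast curve pivoted around the frame's mean does the same reveal. A minimal sketch; the filename and the slope of 20 are placeholders:

```python
# Reproduce the "very steep tone curve" diagnostic in code: a
# high-slope curve pivoted on the frame's mean exposes backlight
# structure (LED pattern, dust) that is invisible at normal contrast.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("backlight.jpg").convert("L"), dtype=np.float64)
pivot = img.mean()
steep = np.clip((img - pivot) * 20 + 128, 0, 255)  # slope 20 around the mean
Image.fromarray(steep.astype(np.uint8)).save("backlight-steep.png")
```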