I’m just getting started with NLP, doing a lot of mucking about before I start scanning/converting in earnest. It’s recommended that we white balance based on the edge of the film strip, and then apply the NLP conversion. My initial results were not very encouraging—overly cool colour tones, weirdly oversaturated greens, lacking orange warmth. On a lark I tried converting without applying a white balance first. Voila! Much better! The warmth is back, and overall the colours are richer and more natural.
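For anyone unfamiliar with the order of operations here, this is a toy sketch of what "white balance off the film border, then convert" means (numpy, with my own naive inversion standing in for the conversion; none of this is NLP's actual math). The pre-WB step samples the unexposed border, scales the channels until the orange film base reads neutral, and only then inverts; skipping it leaves the orange mask in place for the converter's own analysis to handle.

```python
import numpy as np

def white_balance_from_border(neg, border_px=8):
    """Scale channels so the film-base border averages to neutral gray.

    `neg` is an HxWx3 float array in [0, 1]; the outer `border_px` ring
    stands in for the unexposed film base. Conceptual stand-in only.
    """
    edge = np.ones(neg.shape[:2], dtype=bool)
    edge[border_px:-border_px, border_px:-border_px] = False
    base = neg[edge].mean(axis=0)      # per-channel mean of the border ring
    gains = base.mean() / base         # multipliers that neutralize the mask
    return np.clip(neg * gains, 0.0, 1.0)

def invert(neg):
    """Naive negative-to-positive flip; real converters add curves and more."""
    return 1.0 - neg

# Synthetic "negative" that is pure orange film base:
neg = np.full((32, 32, 3), (0.9, 0.6, 0.3))
balanced = white_balance_from_border(neg)   # border now neutral gray
```

The point of the sketch is just that the WB multipliers change every pixel before the converter ever sees the data, so the two workflows hand the conversion step genuinely different inputs.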
I realize that negative conversion is essentially highly subjective, so in a sense it doesn’t matter what settings are used as long as the end result “looks good”. Perhaps I’ll find that for other film stocks or rolls, doing a white balance first will yield better looking results. But I have to wonder if anyone else is finding this in their experiences.
For what it’s worth, I’m using the Valoi easy35 set to the coolest colour tone, shooting with a Nikon D7200 and a Nikon 60mm 2.8 Macro lens. The film stock in this case is not especially good; it’s from a disposable panoramic camera from the late 80s.
Other than WB vs No WB, the conversion settings were identical, just NLP Standard with no modifications.
Here’s a side-by-side comparison image. You can see (maybe?) the intense but dynamically flat greens on the left, and how much more rich, natural, and continuous the colour tones are on the right.
Ok now I’m lost again. I don’t know what happened between then and now, but I’m trying to reconvert these images, and I’m taking the same DNG and applying the same simple workflow: No pre-WB, NLP Standard settings unmodified. And now they’re coming out much more saturated and with waaay more contrast.
I had actually noticed the negative scan was slightly cropped, so I added one more short tube to the easy35 and reshot the negative, and it was that new scan that converted looking like this. But when I went back to the original scan DNG that got the results above, it still came out looking radically different. I can’t think of what’s different. The workflow is so very simple: no pre-WB done, convert using default NLP settings (NLP Standard).
Yes I have reset the conversion before reconverting. Yes it’s the same cropping as before, nothing beyond the frame border showing.
In all my experiments and tidying up, I seem to have lost the copy of the DNG that has the “No WB” look that I was so happy with above. And now I cannot recreate it. I’ve tried various settings, and they all come out much more saturated and contrasty than I was getting before, from the very same source DNG.
Here’s what it looks like now (uncropped). It is obviously much darker. Does anyone have any idea what’s going on?
UPDATE: I seem to have figured it out, although the huge difference in results still surprises me. The earlier conversion was done with 0% cropping applied at conversion time, since I’d already cropped the image entirely within the image bounds. But at some point I switched that to 10% cropping. That 10% crop resulted in the much darker image. I suppose it’s because of the vignetting, with the darker edges shifting the black point and compressing the overall tonal range? The difference is still much greater than I would have thought.
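The black-point guess can be made concrete: if a converter sets its endpoints from histogram percentiles of whatever region it analyzes, then including or excluding the vignetted edges moves those endpoints, and that remaps every tone in the frame, not just the edges. A toy demonstration (percentile stretching is my assumption here, not NLP's documented algorithm):

```python
import numpy as np

def normalize(frame, lo_pct=0.1, hi_pct=99.9):
    """Stretch levels between histogram percentiles -- a stand-in for how a
    converter might pick black/white points from the analyzed region."""
    lo, hi = np.percentile(frame, [lo_pct, hi_pct])
    return np.clip((frame - lo) / (hi - lo), 0.0, 1.0)

# Flat mid-gray frame with a radial vignette darkening the corners.
h, w = 100, 100
yy, xx = np.mgrid[0:h, 0:w]
dist = np.hypot(yy - h / 2, xx - w / 2) / (h / 2)
frame = 0.5 * np.clip(1.2 - 0.5 * dist, None, 1.0)

full  = normalize(frame)                  # analysis includes the dark corners
inner = normalize(frame[10:-10, 10:-10])  # analysis crop excludes them
```

Because the two analysis regions have different darkest pixels, the same mid-tone pixel ends up at a noticeably different output value in the two conversions, which is the kind of whole-image shift I saw from a mere crop-percentage change.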
Man. Just when I think I’ve got an approach that will get me quickly and easily to within striking distance of my desired result, I get whammied by the complex variability of negative conversion.
Here’s the “good” result: