One negative comes out perfectly, another nearly identical neg looks awful—can't figure out what's going wrong

When posting about Image Conversion Issues, please include the following information:

  1. Which version of Negative Lab Pro are you using? V3.1.1

  2. If using DSLR scanning, please include: 1) camera make/model, 2) lens make/model, 3) light source make/model Sony A7III with Sigma 70mm 2.8 DG ART and a Cinestill CS-Lite

Hi everyone, this is my first post so hopefully this is an OK place to ask. I’ve been going through the process of digitizing 10,000+ negatives from my grandfather, and up until last summer, I was manually converting them in Lightroom. The results were fine, but I quickly realized they could be much better with NLP.

That said, I’m still learning how best to manipulate NLP to consistently get the results I want, and sometimes what it outputs is totally confounding. Enter exhibit A:

The photo on the left turned out pretty much perfectly with very little tweaking, but then the very next frame turns out like this.

The settings used to scan were identical. It appears that the second photo was shot with flash, and there’s no sky visible which is bound to make it more difficult for NLP to properly process. I would think that dropping the black clip and fiddling with the WB would fix it, and that does help, but no matter what I do, the greens look radioactive and the image looks overall much worse than the previous frame. I figured maybe I could sync the analysis/settings of the previous frame, but when I try, there is zero change. Not sure if I’m maybe approaching that wrong.

If anyone has advice on what to do, I’d be very grateful! Here are the RAWs: NLP - Google Drive

Thanks,

Finn

I had the same issue when I batch converted (over 60 pics at once). Now I do a couple at a time and it is waaaaay better. I think it has something to do with white balance or something. I could be wrong though, since I’m still green at negative scanning and converting.

If all else fails, you could just convert that one picture on its own and see if it behaves better.

When I first started using NLP (and other conversion methods too, btw), I also had issues with neon colors, especially red and green. Someone, it might have been Nate himself, gave a tip to use the little targeted-adjustment dot in the Color Mixer tab to adjust hue, saturation, and luminance at a chosen spot. Just click the little circle, point your mouse at a problem area, and while keeping the left mouse button pressed, move the cursor up and down. While it won’t change anything about the way NLP converts images, it can help when adjusting individual images after conversion.

@finn_l

“nearly identical” is probably how you feel about the two images, and this is where we have to look closer: “nearly”.

From NLP’s point of view, the two images are quite different. Putting it simply: one is nothing but green and brown, the other has some sky too. It does not matter that it’s “sky”, only that it adds different colour and tonality. The wider the range of colours and tones, the easier it is for NLP to produce a balanced conversion. Roll Analysis can help too.

Roll Analysis takes all selected images into consideration, so the sky in one image can help another image, and an image with some red helps in return. For RA to work best, all selected scans should have been exposed with identical aperture and shutter-speed settings. There’s some slack, but past a certain point, NLP will drop RA and convert the images one by one.

You can see the effect of present/absent hues and tones by cropping off the sky. The converted image might look closer to the converted skyless image. Try different Border Buffer widths and see what they do. Basically, NLP reacts to what is defined (by crop and buffer) to be used for preparing the conversion.
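To see why the sampled area matters, here is a toy sketch in plain Python (this is not NLP’s actual algorithm, and the pixel values are made up): if a conversion is driven by per-channel statistics taken from the cropped area, then including or excluding the sky shifts those statistics, and the same midtone pixel lands in a different place after normalization.

```python
# Toy illustration: per-channel normalization depends on which pixels are
# sampled, so the crop/buffer changes the whole conversion. Not NLP's code.

def channel_range(pixels):
    """Return (min, max) per RGB channel for a list of (r, g, b) tuples."""
    return [(min(p[c] for p in pixels), max(p[c] for p in pixels))
            for c in range(3)]

def normalize(pixel, ranges):
    """Stretch one pixel's channels to 0..1 using the sampled ranges."""
    return tuple((pixel[c] - lo) / (hi - lo) if hi > lo else 0.0
                 for c, (lo, hi) in enumerate(ranges))

# Hypothetical samples: a foliage-only crop vs. the same crop plus sky.
foliage = [(60, 110, 50), (70, 120, 55), (80, 130, 65)]
with_sky = foliage + [(180, 190, 230), (170, 185, 225)]

pixel = (70, 120, 55)                            # the same midtone pixel...
print(normalize(pixel, channel_range(foliage)))  # ...normalizes differently
print(normalize(pixel, channel_range(with_sky)))  # in each case
```

With the sky pixels included, the channel ranges widen and the foliage pixel is mapped much darker, which is loosely why cropping the sky off one frame can make its conversion resemble the skyless frame.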

Maybe try some of the other tone profiles and work back from there. I have found that this can help eliminate “radioactive” saturation and tonality in weird conversions when all else fails. Unfortunately, the “Match” feature will not work properly unless the negatives are nearly identical in terms of exposure.

Thanks to everyone for the suggestions. I realized that I was foolishly using the Sync setting, not the Match setting; Match worked perfectly in this instance. I had incorrectly thought that, because Sync supposedly copies the analysis data, it would have the same effect, but that isn’t the case.

It is still frustrating that the inner workings of NLP exist in a black box with little to no discernible reason (sometimes the WB sliders seem to be reversed in their effect; why? who knows). Even trying different tone profiles, individual colour adjustments, and white-balance adjustments didn’t come close to the correct outcome, so it’s lucky that I had an accurate frame to ‘match’ from in this case.

Thanks for bringing your problem to the forum. I love to read these, and it helps me to hear the solutions that are proposed.

I don’t think of NLP as a “black box” of mystery! The core of its process is to intercept the way you move the sliders and invert the effect, because you are processing a negative image in Lightroom, which only processes positive images. It might be useful for you to try processing a negative manually; there are countless guides out there. That method is slower and sometimes frustrating, but it can give you some insight.
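For anyone curious what “processing a negative manually” boils down to, here is a minimal sketch in plain Python (the film-base colour and pixel values are invented for illustration; a real workflow operates on linear RAW data in Lightroom or similar): neutralize the orange mask by dividing by the film-base colour, invert, then stretch the black and white points per channel.

```python
# Minimal sketch of a manual negative-to-positive conversion.
# Values are illustrative; not any particular tool's implementation.

def invert_negative(pixels, film_base):
    """Convert negative RGB pixels (0..1) to positive, per channel:
    1) divide by the film-base colour to cancel the orange mask,
    2) invert (1 - value),
    3) stretch so the darkest/brightest samples hit 0 and 1."""
    masked = [tuple(min(p[c] / film_base[c], 1.0) for c in range(3))
              for p in pixels]
    inverted = [tuple(1.0 - v for v in p) for p in masked]
    lo = [min(p[c] for p in inverted) for c in range(3)]
    hi = [max(p[c] for p in inverted) for c in range(3)]
    return [tuple((p[c] - lo[c]) / (hi[c] - lo[c]) if hi[c] > lo[c] else 0.0
                  for c in range(3)) for p in inverted]

# Hypothetical scan: orange-ish film base sampled from the rebate,
# plus two image pixels (a thin shadow and a dense highlight).
film_base = (0.9, 0.6, 0.4)
scan = [(0.85, 0.55, 0.35), (0.30, 0.25, 0.20)]
positive = invert_negative(scan, film_base)
```

Step 3 is the per-channel black/white clip that the sliders in a manual Lightroom inversion are doing for you, which is also why their effect feels reversed when you work on a negative.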