Huge colour difference after DSLR scan

Using NLP 2.3
DSLR scanning with a7III, Sigma Art 70 Macro, Viltrox L116T

Hi everyone,
New to the home scanning journey, I have to say that I've generally been surprised by how close to a lab scan I could get by scanning at home with my camera.

Usually, I just have to play a bit with the controls in the NLP panel (the one that opens when you hit ^+N) to get a result that's not far from a lab edit.

However, for this image that I scanned (on the right), I'd have to get a lot deeper into a personalised edit to really reach something like what the lab gave me (on the left), which, in the end, I personally prefer in terms of colours. The only changes I've made to the TIFF file are light tweaks to WB, exposure, highlights and shadows. I haven't even touched the HSL panel.

Admittedly the negative is about 2 years old, but I haven't seen such differences on other images from the same roll. I've tried different exposures during the scanning process, but the difference is always really noticeable, especially in the stretch of water in the middle, which tends towards a dark purple colour.

My goal is to learn how NLP works its magic on my negatives, so that I can eventually scan negatives for friends and others, aiming for a neutral and natural look.

Maybe some of you have something to teach me about this kind of issue? Surely I'm not the first to run into it.

Thank you for reading, and looking forward to your replies!

Hi Hugo and welcome!

I deal with this issue fairly often, as I have a lot of photos with water in nice destinations, which translates to a lot of blue and cyan.

As you can kind of see, your shadows and darks are tending towards red. It's an inevitability of how NLP works: its algorithm tries to find a balance, so when there's too much blue or cyan it charges towards the opposite colors. This is, at least, what I believe it's doing.
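NLP's actual analysis is proprietary, so this is only my guess at the principle, but a classic "gray world" balance (plain numpy, nothing NLP-specific) shows the overcompensation I mean: when the frame is mostly cyan/blue water, equalising the channel averages boosts red hard, and anything that was truly neutral drifts warm.

```python
import numpy as np

# Gray-world assumption: the scene average should be neutral, so scale
# each channel until the channel means are equal. (Illustrative only;
# not NLP's real algorithm.)

def gray_world_gains(img):
    """img: float RGB array in [0, 1]. Returns per-channel gains."""
    means = img.reshape(-1, 3).mean(axis=0)
    return means.mean() / means

# A frame dominated by water: weak red, strong green and blue.
water = np.full((100, 100, 3), [0.15, 0.45, 0.60])

gains = gray_world_gains(water)
print(np.round(gains, 2))               # ~[2.67, 0.89, 0.67] -> red boosted hard

# A patch that was truly neutral in that frame comes out red-shifted:
gray_patch = np.array([0.4, 0.4, 0.4])
print(np.round(gray_patch * gains, 2))  # ~[1.07, 0.36, 0.27]
```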

To correct this, go to the Mids tab; for me, dragging the top slider towards Cyan usually does the trick. But play with the others a bit too. If you need more refining, jump into the Highs and Shadows tabs.

Hope this helps!
Nuno

A different crop can give you a different balance.
Try a tighter crop, e.g. pulling the upper and lower crop edges inwards.

Thanks for the tips, Nuno!

I've tried it, and yes, it does get closer to what the lab scan looks like, but it still requires quite some time. I can't stop thinking that if I hadn't had the lab scan as a reference, I would've assumed I could rely on NLP's conversion abilities, without knowing how far I was from a result that's actually quite close to the scene I had in front of me that day.

My concern, and my interest at the same time, is understanding how to get the most accurate conversion, one closer to reality in its raw form in terms of colour reproduction, onto which I can then obviously layer my own interpretation.

Thanks, Digitizer!

I've tried your trick, which I think makes sense. However, for this scene it doesn't help at all…

You are welcome.

It does require some work, yes indeed. If you have people in a water scene, I've found that the skin tones are incredibly difficult to tackle. Another thing you can try is copying the settings from another scene where you think the conversion was nailed: a more balanced scene, ideally with the same kind of light.

Nevertheless, it will be work: when one color predominates in the scene, NLP will overcompensate and create an unbalanced color look. I have a similar experience with sunsets, although those are always closer and require less tweaking. I guess this could be a future feature request for NLP, to better handle azure-blue water scenes. /cc @nate

I love to work with a reference in some cases, and I'm lucky enough to own a Pakon F135+ scanner that helps create those for me. I confess that for sunset or water scenes I sometimes end up just using its scans, because I can never get as close as those target images.

Have you tried the new saturation control in the NLP edit menu? It was a very good and useful addition to the photo-correction tool-set. I ask because it strikes me that the right image is generally a bit more saturated than the left, so desaturating a little should take you closer to your objective. If the NLP saturation control doesn't do enough, another option is to try the various gamma, white balance and "lab look" options available in that interface; there are many combinations of these presets that can take you closer to, or further from, the objective. I also observe that there is more contrast in the right version of the photo, so you may wish to try reducing contrast.

In the final analysis, you can amend whatever you finished with in NLP using some of the controls in the Lightroom Develop Module to modify contrast, hue and saturation, though these are of somewhat limited use on an NLP-converted photo and usually require thinking in reverse.
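If it helps to picture what a saturation slider is doing, here's a minimal sketch of the usual principle (blend each pixel towards its own luminance). This is generic image maths, not NLP's or Lightroom's actual implementation:

```python
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])  # standard luminance weights

def adjust_saturation(img, amount):
    """img: float RGB array in [0, 1].
    amount: 0.0 = fully desaturated, 1.0 = unchanged, >1.0 = boosted."""
    luma = img @ REC709                        # per-pixel luminance
    gray = np.stack([luma] * 3, axis=-1)       # grayscale version
    return np.clip(gray + amount * (img - gray), 0.0, 1.0)

# e.g. pulling the right-hand scan back towards the softer lab look:
# softer = adjust_saturation(scan, 0.85)
```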

But let's spend a moment on the objective itself. I'm not convinced that one should concentrate too much these days on the results delivered by commercial film processors. I take them as indicative and move on from there. It's not the case in your example, but I often find them over-saturated, with blocked-up highlights and shadows, relative to what the negatives can reveal when correctly imaged with a decent digital camera set-up. We need to start thinking more in terms of what we imagine the photo should look like, rather than another machine's interpretation of what we photographed.

NLP does a color analysis on the frame you give it (including the crop). When you give NLP a frame that doesn't have a full range of brightness and colors, it can be misled trying to find the right balance. For many images, crop to the area with the important colors, convert, then recrop. For this image, I don't see such an area. I would take another frame shot in the same light, convert it to taste, then use "Sync Scene."
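To make the crop effect concrete, here's a rough sketch of the assumed behaviour (NLP's analysis is proprietary): if the balance statistics are computed only over the cropped region, leaving a dominant expanse of one color out of the crop changes what the conversion sees.

```python
import numpy as np

def channel_stats(img, crop=None):
    """Per-channel min/mean/max, optionally over a (top, bottom, left, right) crop.
    img: float RGB array in [0, 1]."""
    if crop is not None:
        t, b, l, r = crop
        img = img[t:b, l:r]
    flat = img.reshape(-1, 3)
    return flat.min(axis=0), flat.mean(axis=0), flat.max(axis=0)

# Hypothetical frame: top half sky/land, bottom half blue water.
# channel_stats(frame)                                   # means skewed blue by the water
# channel_stats(frame, (0, frame.shape[0] // 2,
#                       0, frame.shape[1]))              # analysis sees only the top half
```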

Is your light source for camera scanning color-accurate? It's important to use a very high CRI source (with high R9) that is HIGHLY diffused for color negative work. Red/yellow response is critical for cyan/blue reproduction, because a negative records scene colors as their complements under the orange mask. (Look up CRI on Wikipedia for details.)

Frankly, I don't think either of the scans you posted is accurate, but this is a scene that you can render however you like. When I process camera scans, I go for a pleasing effect or interpretation when I don't have a "clinical" reference target of some sort.

No problem with the light source, I guess; it's rated at a CRI of 95+.

To me it looks like NLP has tried too hard to get the histogram touching the left side. Maybe try raising the BlackClip, and turn on soft whites and blacks. You could also use the Lab Fade option; I'm not usually a fan of a raised black level, but that's what the lab scan has, and it definitely contributes to the softness of the image.
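In case the black-level idea is unclear, here's a toy tone-mapping sketch. BlackClip and Lab Fade are NLP control names, but the maths below is only my guess at the general principle, not NLP's implementation:

```python
import numpy as np

def shadow_mapping(img, black_clip=0.0, fade=0.0):
    """img: float array in [0, 1].
    black_clip: fraction of the shadow range forced to pure black.
    fade: lifts the output black level, softening shadows like a lab scan."""
    out = (img - black_clip) / (1.0 - black_clip)   # crush shadows to black
    out = np.clip(out, 0.0, 1.0)
    return fade + (1.0 - fade) * out                # raise the black floor

tones = np.linspace(0.0, 1.0, 6)
print(shadow_mapping(tones, black_clip=0.05))   # harder, clipped blacks
print(shadow_mapping(tones, fade=0.08))         # lifted, "faded" blacks
```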

I do find NLP to be a little overzealous on the contrast, and am really not a fan of the highlight and shadow sliders when used to pull back. For me, using the clip tools and cropping to certain sections before converting usually does the trick.

~ Ed

I had this problem too. I didn't crop the frames, and on conversion I got a terrible blue color. The problem was the framing. From the start you need to crop the image, then use the white balance eyedropper (I clicked on a gray sweater), and only then run the NLP conversion. That worked for me. Good luck to you.
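For anyone curious what the eyedropper step amounts to, here's a minimal sketch of the generic gray-point principle: scale the channels so a patch you know is neutral comes out equal in R, G and B. Lightroom's actual white balance works in a camera-specific space, so this is only the idea, and the sample coordinates below are hypothetical:

```python
import numpy as np

def white_balance_from_patch(img, patch_rgb):
    """img: float RGB array in [0, 1].
    patch_rgb: average RGB of a region that should be neutral (the gray sweater)."""
    patch = np.asarray(patch_rgb, dtype=float)
    gains = patch.mean() / patch            # equalise the patch's channels
    return np.clip(img * gains, 0.0, 1.0)

# Hypothetical usage: sample the sweater region, then balance the frame.
# sweater = img[620:660, 410:450].reshape(-1, 3).mean(axis=0)
# balanced = white_balance_from_patch(img, sweater)
```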