Get Off The Correct Horse

Looking through the many posts in the forum, I often find statements like “colours aren’t correct”, “looks different than the lab scan” etc.

This, together with my current testing of NLP 3 with Lightroom 13 and Lightroom 14, made me start this thread about the notions of “correct”, “different” and other comparative adjectives.

But first, let’s have a look at the following:

This is a screen capture of several conversions made from virtual copies of one and the same image. Even though the differences don’t show as clearly here, we can observe the following.

  • The two images on the left look similar; both have a pinkish cast in the foreground and slightly different hues in the dry vegetation part of the image.
  • The two images on the right share similar vegetation colours. The foreground is quite different, pinkish vs. greenish (or whatever we might call it).

Now, how can conversions of this one image look so different?

  • The left image was converted with the border buffer set to 25%
  • The middle image was converted with the same settings, but with the border buffer set to 5%
  • The image on the right was converted like the one in the middle, but with roll analysis engaged as well

What can we learn from this test?

  • The look of converted images can change when the border buffer setting changes
  • The look of converted images can change when roll analysis is used (a toy sketch below illustrates why)
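To make it plausible why a border buffer can shift the result at all, here is a minimal toy sketch. NLP’s internals are not public, so this is only my assumption of what the setting conceptually does (all function and variable names are made up): the buffer shrinks the region of the frame that feeds the analysis, and with fewer rebate/border pixels in the sample, the estimated per-channel endpoints move.

```python
# Illustration only -- NOT NLP's actual code; a generic auto-levels-style model.
import numpy as np

def analysis_region(image, border_buffer):
    """Central crop left after excluding `border_buffer` (a fraction, e.g. 0.25
    for 25%) of the frame on every edge."""
    h, w = image.shape[:2]
    dy, dx = int(h * border_buffer), int(w * border_buffer)
    return image[dy:h - dy, dx:w - dx]

def channel_endpoints(region, low_pct=0.1, high_pct=99.9):
    """Per-channel 'black' and 'white' points taken as percentiles of the
    sampled region (a common auto-levels idea, not NLP's actual method)."""
    return [np.percentile(region[..., c], [low_pct, high_pct]) for c in range(3)]

# Synthetic "scan": a mid-grey frame surrounded by a bright film rebate.
rng = np.random.default_rng(0)
frame = rng.normal(0.45, 0.10, size=(600, 900, 3)).clip(0, 1)
frame[:60, :, :] = 0.95   # top rebate
frame[-60:, :, :] = 0.95  # bottom rebate
frame[:, :60, :] = 0.95   # left rebate
frame[:, -60:, :] = 0.95  # right rebate

for bb in (0.05, 0.25):
    region = analysis_region(frame, bb)
    print(f"border buffer {bb:.0%}:")
    for name, (lo, hi) in zip("RGB", channel_endpoints(region)):
        print(f"  {name}: black point {lo:.3f}, white point {hi:.3f}")
```

With the 5% buffer the bright rebate is still inside the sampled region and pulls the white points up; with 25% it is excluded and the endpoints come from the frame interior alone. Different endpoints, different conversion.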

What we can’t learn from this test!

In summary, the lessons to be learnt imo are the following:

  • Change conversion setting(s) to get different result(s)
  • Try until you find the result(s) you like
  • Re-try with each film or photo … or work on converted images

Photo: Tetons, captured with M645 on Kodak Safety Film 6014 (Kodacolor II) in August 1978, and yes, it was snowing and well below freezing at night.

Very interesting, and I know that you often present variations in NLP settings as a kind of ‘colour wheel’ contact sheet, though that description probably does it an injustice, or is just plain wrong.

I’ve hardly used NLP, just dabbled a few years ago. I’m not scanning colour negative and am waiting until v3.1 settles down before I commit myself one way or another. So I’m wondering if there is an easy wholesale change that would make all the images above look like one of the others, say the one on the left. It’s difficult to see in the small comparison above, but to me the middle one is slightly darker, so the colour tints in the foreground and background are strengthened slightly but are essentially the same. Similarly, the third image is darker still but needs some magenta.

Alternatively, is there some ‘adaptive’ processing going on that changes the relationships between the colours in the image simply by virtue of changing the amount of border buffer?

That is my point: No

NLP analyses each image (on its own or as part of a roll analysis) and determines how the R, G, and B tone curves are set in order to produce a conversion with “clean” black and white points as well as what I’d call balanced colours and tonality.
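To make that concrete, here is a minimal sketch of the general idea: estimate per-channel black and white points, stretch each channel between them, and invert. This is a generic auto-levels-and-invert illustration under my own assumptions, not NLP’s actual algorithm; the function name and parameters are invented for the example.

```python
# Toy model of "analyse, set R/G/B curves, invert" -- not NLP's actual code.
import numpy as np

def convert_negative(neg, low_pct=0.1, high_pct=99.9):
    """neg: float RGB array in 0..1, i.e. a linear scan of a colour negative."""
    pos = np.empty_like(neg)
    for c in range(3):
        # Estimate the channel's black and white points from the image itself.
        lo, hi = np.percentile(neg[..., c], [low_pct, high_pct])
        # Stretch the channel so those points land on the endpoints...
        stretched = np.clip((neg[..., c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
        # ...then invert: dense areas of the negative become shadows in the positive.
        pos[..., c] = 1.0 - stretched
    return pos
```

Because the endpoints are estimated from the pixels being analysed, anything that changes those pixels - a crop, a different border buffer, or pooling frames of a roll - shifts the three curves and, with them, the colour balance.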

In dark room (intended spelling) ages, images were cast onto paper with whatever exposure was needed to produce a print according to our taste. We did the analysis ourselves, judged contrast and exposure, and changed development times for whatever the desired effect was. NLP does that for us, without knowing our tastes or intentions, and delivers conversions whose looks we can influence by the provided selections, checkboxes etc.

Indeed, and as written above, image content can, and often will, change the look of a converted image. Adapting to image content, NLP can get off track occasionally, and following what @nate has written in the guide helps to prevent that. Testing with just one image, I found that NLP can work miracles nevertheless, like in this case. Whether it is representative of NLP’s current abilities remains to be seen, but here the conversion is as good as it gets - even with the super-lazy scan and without a crop!


Note that I was really surprised to get such a good result!
(image converted with NLP 3.1 Beta 9)

Well, not really, unless you’re talking about messing about with the strict C41 processing parameters for developing colour negative film. That should be a closed process within very strict tolerances, although admittedly I have had rubbish processing from high street minilab operators in the past, but very rarely - poor replenishment perhaps. Actually, I think this may be happening a lot when people use these home processing kits, but unintentionally, and then they might think that NLP is causing them problems.

The RA4 printing process is also very strictly defined, so there is not much scope for experimentation there unless it happens accidentally, which I suppose it will because of the home processing drums and the relatively high specified temperatures. It’s probably why I saw a Jobo Autolab go for almost £1000 a couple of weeks ago.

So maybe NLP is also great for rescuing the results from sub-optimal processing and development, just as it seems to be with faded, ageing transparencies. That’s a good thing of course.

Going back to your images, they look to me as if there is a simple but slight difference in exposure combined with an equally slight colour cast, which, had it been in the darkroom, would have been simple to correct - and I quite like that idea, in fact. However, I accept that NLP does a lot more than that if you want it to, otherwise there wouldn’t be a ‘Frontier’ look, or ‘Noritsu’, etc.

Ah well, colour was well beyond my financial possibilities, and my experience is therefore based on fairly bare-bones B&W film processing and printing. The Kodak multigrade papers were a big step, but I never liked working with them.

NLP has an almost overwhelming number of degrees of freedom, and that is also the reason I convert those contact-sheet-like mosaics. As I see it, NLP 3.0.2 and 3.1 are different beasts from the NLP 2 versions.

I’ll probably stick to either “Basic” or “Noritsu”, not for their respective looks, but because they get me close to what I want. And every now and then, I do reconvert, because it seems less obnoxious than adjusting converted images.

If we only cared for “correct”, there’d be even more lawyers and far fewer art galleries :wink:

YMMV


Yes, the point that I failed to get across is that when printing colour negative there is a simplicity to looking at a resulting print in the darkroom and then gauging the change in filtration and exposure that would produce the print you wanted. Not just a simplicity, but a clear scientific logic to it.

I should say that I’m only seeing stuff about NLP through this and the FB forum, so I can’t speak for other colour negative software offerings; perhaps they work differently, and I imagine we’re not encouraged to discuss those here anyway. My impression is that some of this simplicity is lost when there are so many options within the software. In this particular case, surely there could be a way to look at the different results obtained by altering the border buffer and simply correct for them within the software instead. Changing the border buffer seems rather haphazard.

Back in the day, if I was printing images from the same roll, then perhaps I had photographed a close view of the scene, whatever it was. I could then print a wide view, perhaps including a large area of blue sky, knowing that the filtration and exposure would be identical. That’s clearly not true when using NLP, as it analyses every frame individually and so is fooled by the blue sky; presumably, though, this is the basis of the batch processing feature.
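As a rough sketch of that difference, the following toy code contrasts per-frame analysis with a pooled, roll-wide analysis, in the spirit of printing a whole roll with one filter pack and exposure. It is my own simplified model with invented names, not NLP’s actual implementation of roll analysis.

```python
# Per-frame vs. roll-wide analysis -- a conceptual model, not NLP's code.
import numpy as np

def endpoints(pixels, low_pct=0.1, high_pct=99.9):
    """Per-channel black/white point estimates from whatever pixels we feed in."""
    return [np.percentile(pixels[..., c], [low_pct, high_pct]) for c in range(3)]

def convert(frame, eps):
    """Stretch and invert each channel between the given endpoints."""
    out = np.empty_like(frame)
    for c, (lo, hi) in enumerate(eps):
        out[..., c] = 1.0 - np.clip((frame[..., c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out

def convert_per_frame(roll):
    # Each frame is analysed on its own: a frame that is mostly blue sky gets
    # very different curves from its neighbours.
    return [convert(f, endpoints(f)) for f in roll]

def convert_per_roll(roll):
    # One analysis over the pooled pixels of the whole roll, then the same
    # curves applied to every frame - closer to the darkroom workflow of one
    # filtration and exposure for the whole roll.
    pooled = np.concatenate([f.reshape(-1, 3) for f in roll], axis=0)
    eps = endpoints(pooled)
    return [convert(f, eps) for f in roll]
```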

I don’t mean to hark back to the ‘good old days’, just that I think there are lessons to be learnt. I get the impression sometimes when people post ‘problems with NLP’ that they’ve never even looked at the negative, as if somehow that didn’t matter. Oh well.

Actually, I didn’t often print my personal stuff. I was a photographer and had, in effect, a contract to print hundreds of large 12" x 8" and 16" x 12" colour prints of my pictures on a regular basis over quite a few years - many thousands in total. It just seemed silly to get them done at my local professional lab at great expense and then have to put up with their interpretation; I did get the films processed, though. So I bought a Durst Printo setup - washer, dryer, the works. What a fantastic system.

Digitizer:
Logical, practical, and well reasoned.
I sometimes read the problematic questions posted here, and would suggest your thought process as a good, factual method vs. trying to outguess the software. Nicely done.
