How to get “analogue” print colors?

Hi everyone,

I’ve been using NLP happily for a few years now, but every time I convert a roll I ask myself the same questions, which I’ll list here in the hope that someone can clarify them:

  • Since NLP analyzes each negative individually, does it also balance the colors of each image individually?

  • If the colors are balanced per image, does that mean the balance will not be consistent throughout the roll?

  • I can use the “copy scene” function, but how do I know which image has the correct balance to copy from?

  • How can I prevent NLP from correcting away the characteristic behavior of a particular film stock?

  • How can I simulate what happened in chemical printing, where all the images shared the same tonal balance? Granted, changing the paper changed the prints, but printing many rolls the same way kept all the images consistent with each other.

  • Could NLP have a feature that converts the negative independently of the individual image’s content, perhaps balancing only the exposure, as in analogue printing?

I hope I have explained myself :wink:

Thank you so much!

Your question is quite similar to this one. It’s probably quite a difficult one to answer; I certainly can’t, since I’ve done very few colour negatives as yet, but the OP of that question did seem to get what he wanted by using the ‘Frontier’ LUT.

To explain myself better: my concern comes from the fact that if I convert two images with the same exposure but with different predominant scene colors, the color balance changes a lot.

Obviously I’m not using automatic white balance. The digital camera was in M mode, with the same exposure for the entire roll. I’m converting with the “Frontier” color model, Pre-Saturation “3” (changing these doesn’t alter the results much anyway); Edit Settings: Tones “LAB - Standard”, WB “Standard”.

For example, if I convert photo A and then copy the scene to B, I get warmer colors:

[images: conversion of A copied to B]

If instead I convert photo B and then copy the conversion (copy-scene) to A, I get colder colors instead:

[images: conversion of B copied to A]

These are the negatives:

As there were no exposure problems in either of them, I expected the colors to match, but the color balance depends a lot on the scene. This puzzles me precisely because in analogue printing the white balance would be fixed.

Would it be possible to tell NLP to base its color balance only on the film mask, without taking the colors of the image into account?

By the way, I prefer the conversion based on A, but how can I choose the “right” photo to copy the conversion from?

Thank you so much!!!

It’s partly objective and partly subjective. The objective part is that when you click the film border with the LR eyedropper, the orange mask is neutralized. That is the same for the whole roll of film, as it should be. For the images themselves, it is less obvious to me how NLP balances colours, but I expect it finds a white point and a black point in the photo and neutralizes them (Nate, please jump in if I am wrong here). This should pretty well produce correct colour balance. Whether or not you like that particular colour balance is the subjective part. The scene that you photographed may not want to be colour balanced, an extreme example being a sunset: you normally want that orange/yellow cast in the sky and reflected off landscapes or buildings. So it would be perfectly normal to alter the colour balance of each photo that needs it in order to reproduce your intention for the scene’s appearance.
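As an aside, the scene-dependence being discussed can be illustrated with a toy model. The sketch below is not NLP’s actual algorithm (which isn’t public); it just assumes the kind of per-channel white/black-point stretch described above, implemented as a percentile stretch. The point is that such a stretch cannot tell a warm subject from warm light:

```python
import numpy as np

def neutralize_image(rgb, low_pct=1, high_pct=99):
    """Toy per-image auto balance (NOT NLP's actual algorithm):
    stretch each channel so its dark/bright percentiles map to 0..1."""
    out = np.empty_like(rgb, dtype=float)
    for c in range(3):
        lo = np.percentile(rgb[..., c], low_pct)
        hi = np.percentile(rgb[..., c], high_pct)
        out[..., c] = np.clip((rgb[..., c] - lo) / (hi - lo), 0.0, 1.0)
    return out

rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, (64, 64, 3))   # a roughly neutral scene
red_cast = scene.copy()
red_cast[..., 0] *= 1.2                      # same scene, red-dominant

# The per-channel stretch is invariant to a uniform channel gain,
# so the red dominance is "corrected away" and both results match.
assert np.allclose(neutralize_image(scene), neutralize_image(red_cast))
```

Because the stretch cancels any uniform per-channel gain, two frames whose only difference is a colour dominance convert to the same result, which is exactly the behaviour the OP is seeing when the dominance was intentional.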

Thank you for your reply, Mark. My concern is: why should the software try to balance every image when the negative is already balanced in itself? Wouldn’t it be better to leave it “standard”, not applying any automatic per-image balancing, and let the user do it as a personal choice?

By “standard balance” I mean something like the classic chemical prints from a negative.

In Italy we say that sometimes “the patch is worse than the hole” or “the cure is worse than the disease” :wink:

We can always do exactly that. I like the cinematic styles as well as linear+gamma and use them depending on how the negative looks.

Nevertheless, I mostly target something I like rather than emulating something else.

> We can always do exactly that. I like the cinematic styles as well as linear+gamma and use them depending on how the negative looks.

Aren’t those styles about the brightness curve? How can I manage the color balance? How can I use the same “base” color balance for all images?

> Nevertheless, I mostly target something I like rather than emulating something else.

Oh yes, I agree. But one of the great things about analogue photography is that everything is in “manual mode”, so you can always learn how to get a specific result. My whole workflow is “known”, everything except that “automatic” color balance in NLP’s analysis :wink:

Because the negative is not balanced to begin with. There are (at least) three factors at play: (1) the orange mask, (2) the lighting conditions of the scenes we photograph and (3) the consistency of film development conditions. #1 requires only one setting for the entire roll. I would redo it for each roll unless they were all the same film batch and developed at the same time together (re #3). For #2, the normal situation is that lighting and reflectance from the objects being photographed change from one photo to the next, even if only slightly. Therefore a sensible algorithm would seek to find the numerically correct colour balance for each one. Whether or not we like that result is a separate and subjective matter, which is why it is good to have the re-balancing tools available.
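For what it’s worth, the roll-wide behaviour of #1 alone, which is roughly what the OP is asking for, can be sketched in a few lines. This is only a conceptual illustration under simplified assumptions (linear data, no inversion or tone curve, and hypothetical function names), not how NLP works internally: one set of gains measured on the film border is applied to every frame, and only a single exposure factor is shared by the whole roll, so no per-image colour analysis happens at all.

```python
import numpy as np

def mask_gains(border_rgb):
    """One roll-wide correction: divide out the orange mask as measured
    on the unexposed film border (simplified linear model, not NLP)."""
    return 1.0 / np.asarray(border_rgb, dtype=float)

def roll_consistent_convert(frames, gains, target_gray=0.5):
    """Apply identical mask gains to every frame, then one shared
    exposure factor for the whole roll; since there is no per-image
    analysis, relative balance between frames is preserved."""
    corrected = [f * gains for f in frames]
    roll_mean = np.mean([f.mean() for f in corrected])
    scale = target_gray / roll_mean      # single exposure factor
    return [f * scale for f in corrected]

rng = np.random.default_rng(1)
border = np.array([0.9, 0.5, 0.3])               # orange-ish mask sample
frames = [rng.uniform(0.1, 0.4, (8, 8, 3)) for _ in range(3)]
out = roll_consistent_convert(frames, mask_gains(border))

# Every frame received exactly the same per-channel multiplier, so any
# colour difference left between frames is scene content, not rebalancing.
assert np.allclose(out[0] / frames[0], out[1] / frames[1])
```

The trade-off is visible in the sketch itself: with one correction for the roll, a frame shot under unusual light (factor #2 above) keeps its cast, which is precisely why a per-image algorithm exists in the first place.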

Turning to the tools in NLP, you look after #1 before you open NLP by clicking the LR colour balance eyedropper on the film border and then cropping it out. Next, when you open NLP to make a conversion, there is no “Standard” option; the choices of colour model are None, Basic, Frontier, Noritsu or B&W. These are profiles that control a number of colour and gamma settings. As I’m not the least bit interested in emulating historical appearances, but only achieving what works best for my photos here and now, I use “Basic” because None deprives me of LUT adjustments at the next stage, and B&W is only for B&W film.

Then we convert, during which each photo is analyzed (that cannot be turned off as far as I know, nor can I see why or how we would want to; it would defeat the purpose of the application), and we get to the NLP image adjustment toolset. The item under discussion here is the WB options tool. The film options, including Standard, are not image specific. As such they are, in my experience, more prone to producing unsatisfactory outcomes. And again, I couldn’t care less about a historical film look; I’m only interested in making the photo’s colours look correct (relative to my recollection of what I photographed) here and now. So I use Auto-Neutral, which generally produces the colour balance closest to correct. I selected one photo at random to demonstrate what I mean here, and provide a screen grab of the mask-neutralized negative, the result from Auto-Neutral and the result from Standard. There is no doubt in my mind that Standard is too yellow and Auto-Neutral is closer to correct; I may warm it up just a little after I see what it looks like once luminance (contrast, brightness) is better adjusted. This is my experience over and over again, and it makes sense: the WB is image specific and no two photos are the same, so as far as I’m concerned, this is preferred behaviour for this application.

Sorry Mark, I understand that you are not interested in “emulating historical appearances”, but for me it is different. Even in digital photography I can keep the color balance fixed. I just want all my photos balanced the same way, or in a way that I can control, without automatic processing.

When I look at my negatives from 30+ years ago, I can see differences in exposure and development (always had film developed, wherever I was) - and the respective prints were adjusted by the labs, either automatically or manually on request (and extra cost). Somehow, Kodak looked like Kodak and Fuji like Fuji, but after the negatives have been sitting in storage for many years, I take what I can get by camera scanning, NLP and Lightroom.

I also found that differences can be negligible or visible, depending on how I have NLP convert the negatives, and on what I do after the conversion. Some things can be achieved with the necessary knowledge and effort; some things cannot be achieved, no matter the knowledge and effort. Most of the time we can balance our effort against the desired results, especially when we know when to stop pushing sliders… I mostly stay within my personal 20/80 boundaries; your mileage may vary.