I’m not sure if there are any intellectual property concerns around this topic, but I would still like to ask about the general working principles of Negative Lab Pro. Some aspects of the software’s workflow have confused me during actual use.
Recently, I have been experimenting with the intriguing Phoenix 200 film, which led me to examine its technical documentation carefully. I noticed that since both the film base mask and the color shifts stem from the shifting and overlapping of the three photosensitive layers’ response curves, wouldn’t calculating the mask and the shifts from the official response curves be the most theoretically accurate way to restore the film’s inherent color character? After all, preserving that unique color character is one of the main reasons to shoot on film today.
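In principle, that calculation could start from something like the sketch below, assuming you have digitised the published spectral sensitivity curves and a light-source SPD (every curve and value here is a hypothetical placeholder, not Phoenix 200 data):

```python
import numpy as np

# Minimal sketch: estimate the relative response of the three emulsion
# layers under a given light source, using digitised curves from a film
# datasheet. All arrays are placeholders -- you would sample the published
# spectral sensitivity curves and an illuminant SPD at the same wavelengths.

wavelengths = np.arange(400, 701, 10)                # nm, visible range
illuminant = np.ones_like(wavelengths, dtype=float)  # placeholder: flat SPD

# Hypothetical stand-ins for the digitised layer sensitivities (linear):
sens_r = np.exp(-0.5 * ((wavelengths - 640) / 40.0) ** 2)
sens_g = np.exp(-0.5 * ((wavelengths - 545) / 35.0) ** 2)
sens_b = np.exp(-0.5 * ((wavelengths - 450) / 30.0) ** 2)

def channel_response(sensitivity, spd, step_nm=10.0):
    """Riemann-sum integral of sensitivity x illuminant over wavelength."""
    return float(np.sum(sensitivity * spd) * step_nm)

responses = np.array([channel_response(s, illuminant)
                      for s in (sens_r, sens_g, sens_b)])

# Normalising to the green layer exposes the built-in cast that a
# datasheet-faithful conversion would either preserve or neutralise:
print("relative R/G/B response:", responses / responses[1])
```

Whether this alone would also capture the base mask is another matter, since the mask density comes from the film’s colored dye couplers rather than from the sensitivity curves by themselves, but the relative layer responses would at least be grounded in the manufacturer’s data.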
After studying NLP’s workflow and several of its new features, I began to wonder: does NLP’s color mask removal work in a way similar to Photoshop’s automatic color correction? That is, does it analyze the image’s post-exposure colors, black points, white points, and neutral tones to adjust the colors, rather than restoring colors from the spectral sensitivity data provided by the manufacturer? This is what I would like to understand: if it is permissible and does not infringe on any intellectual property rights, I hope to gain a deeper insight into NLP’s working principles.
This question becomes particularly relevant when processing films like the Phoenix 200, where the blue sensitivity curve is not parallel to the red and green curves. If the software simply calibrates the colors of the scanned image, does that mean the unique characteristics of the film itself are lost? On the other hand, if a mathematical reconstruction method is used, could any resulting color shifts be interpreted as an authentic representation of the film’s inherent response characteristics?
From using NLP for a while now, I deduce that NLP converts adaptively, based on image content, and uses metadata to adjust conversions to the characteristics of the camera sensor that captured the negative (a generic sketch of what such content-adaptive inversion can look like follows the list below).
With this approach, NLP is able to convert any negative, old or new, independently of
a) whether film characteristics are (still) available,
b) the chemical processing, which might have changed the documented characteristics due to deviations from official processing conditions, and
c) the storage conditions of the film before it was used and processed.
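To make “converts adaptively, based on image content” concrete, here is a generic sketch of content-adaptive inversion. This is not NLP’s actual algorithm, just the common idea: estimate per-channel black and white points from the negative itself, then invert and rescale.

```python
import numpy as np

# Generic sketch of content-adaptive inversion (not NLP's actual code):
# estimate per-channel black and white points from the negative itself,
# then invert and rescale. Percentile clipping keeps dust and the film
# rebate from skewing the estimates.

def adaptive_invert(neg, low_pct=0.1, high_pct=99.9):
    """neg: linear RGB array in [0, 1], as captured by the scanning camera."""
    pos = np.empty_like(neg)
    for c in range(3):
        ch = neg[..., c]
        lo = np.percentile(ch, low_pct)   # deepest shadows on the negative
        hi = np.percentile(ch, high_pct)  # close to film base + fog
        pos[..., c] = (hi - ch) / max(hi - lo, 1e-6)  # invert and normalise
    return np.clip(pos, 0.0, 1.0)
```

Because each channel is scaled from the image’s own statistics, the orange mask drops out automatically, but part of the stock’s native cast can drop out with it, which is exactly the trade-off raised earlier in this thread.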
Yes, and as of now, this has not been the objective of NLP. NLP easily provides a starting point for further processing to one’s own taste, which, however, does not mean that one cannot use NLP for reproductions and for higher precision in, e.g., rendering today’s film characteristics. Again, due to the variability of chemical processing, each take would need to be calibrated end to end, and only the photographer has the possibility of, and the responsibility for, all the steps that come before Lightroom and NLP.
I’ve barely scratched the surface with NLP, but I am very conversant with printing from colour negative in the darkroom. There you can only change the exposure, the colour filtration and possibly the brand of paper, though the choice was always pretty limited. I always printed on Fuji Crystal Archive, and the chemistry (and replenishment) was taken care of by Fuji in a Durst Printo setup, which was excellent in my view.
So it struck a chord for me when I saw Nate say the following in answer to questions in this thread:
Q. Our digital workflow includes so many more levers. But shouldn’t our default be as simple as RA-4? Beyond that, pull the digital levers for creative adventures.
Nate:
"I could perhaps introduce a “mode” where you only see adjustment tools in Negative Lab Pro that were possible in RA-4. Or, you could try the following:
Set the tone profile to “Linear-Flat”
Only adjust the “Brightness” and “Temp/Tint” sliders in Negative Lab Pro.
But many people use Negative Lab Pro for different reasons and expecting different outputs, so I try to make it as flexible as possible to work with many different setups and desired results."
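Out of curiosity, here is what such a restricted mode could look like as a sketch. The function and parameter names are purely illustrative, not NLP’s: after inversion, only the controls an enlarger offers are allowed, i.e. overall exposure and colour filtration.

```python
import numpy as np

# Hypothetical "RA-4 mode" sketch: after conversion, permit only the
# controls an enlarger offers, overall exposure plus colour filtration.
# Names and conventions are illustrative, not NLP's.

def ra4_style_adjust(pos, exposure_stops=0.0, temp=0.0, tint=0.0):
    """pos: linear positive RGB in [0, 1].
    temp > 0 warms the print (red up, blue down);
    tint > 0 shifts it toward magenta (green down)."""
    gains = np.array([
        2.0 ** (exposure_stops + temp),   # red
        2.0 ** (exposure_stops - tint),   # green
        2.0 ** (exposure_stops - temp),   # blue
    ])
    return np.clip(pos * gains, 0.0, 1.0)
```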
I don’t know if this is still true with the latest versions, but I liked the sound of a setting where you could switch off the ‘Adaptive’ side of things.
Just flip the tone curve and you’re there, but you’ll probably not like the output.
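To make that concrete, the bluntest possible flip, with no mask removal and no per-channel scaling, is a one-liner, and it shows why the raw output disappoints: the orange base comes back as a heavy cyan-blue cast.

```python
import numpy as np

# The bluntest inversion possible: one global flip, no per-channel black
# and white points, no mask removal. The orange base of colour negative
# film then turns into a strong cyan/blue cast, hence "you'll probably
# not like the output".

def naive_flip(neg):
    """neg: linear RGB in [0, 1]. Returns the uncorrected positive."""
    return 1.0 - neg

# e.g. an orange-mask pixel stays heavily blue after the flip:
print(naive_flip(np.array([0.85, 0.60, 0.35])))  # -> [0.15, 0.40, 0.65]
```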
If there were a single set of settings that worked on all the negatives out there, we would have seen it in the wild long ago! Decently exposed negatives are relatively easy to convert, even manually. Negatives (and positives) that have suffered from age, storage conditions, etc., take more effort, though, and then we need to adapt our processes somehow. So far, NLP 3 does a good job of widening the Goldilocks zone.
Hi there… it would be interesting to see what you got from NLP and your Phoenix negatives. Could you share a pic?
The other question is whether any lab would do it the way you described. I thought that the response curves just tell you at which wavelengths the negative is more sensitive and therefore collects more light from that part of the spectrum. I mean, if the film records more red tones, then they are stored in the negative, so flipping with NLP shouldn’t be able to delete this information so easily, should it? I’m not sure you could flip the negative purely with respect to the response curves. But maybe I don’t understand this whole topic completely.
Well, with respect, that isn’t true. The question is: can software, combined with a suitable light source and camera profile, replicate the colours that would have been obtained with RA-4 processing in the darkroom? I can see that it’s not easy, and that it’s not what everyone would want. Noritsu and Frontier scanners were able to do this, and then the operator could put their particular slant on it, but they were (are) very sophisticated machines.
…in this case, adaptability lies in how the lights are mixed, which profiles are used, etc., and yes, this should be possible and practicable with this setup and the respective procedures, which you can also read about here: Let's see your DSLR film scanning setup! - #293 by seklerek.
My point is - and the flipped tone curve was meant as a somewhat rough hint - that without adapting, the results will be all over the place. Today, most things are implemented in software because software is easy to correct, while older systems had tactile controls, e.g. for colour balance in a scanner. Nevertheless, not all parameters can be changed easily. Changing the power of the R, G and B lights is easy (change the distance between the lights and the diffusor; see the sketch below), whereas changing contrast (overall or per channel) is more difficult and requires some electrical/electronic gear.
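For completeness, the arithmetic behind “change the distance” is just the inverse-square law, assuming a reasonably small source behind the diffusor:

```python
import math

# Rough sketch: for a small source, illuminance falls off roughly with the
# square of the distance, so one stop less light (a factor of 2) needs the
# distance multiplied by sqrt(2). Values are illustrative.

def distance_for_stops(current_cm, stops):
    """Distance that changes the light by `stops` (positive = dimmer)."""
    return current_cm * math.sqrt(2.0 ** stops)

print(distance_for_stops(10.0, 1.0))  # ~14.1 cm for one stop less light
```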
We can see that we need to adapt the parameters of a conversion, and we can also see that several ways of doing so are available. Doing things in software (NLP in our case) is state of the art, but of course, taking the hardware route as @seklerek does is a cool remix of possibilities… and if we could flip the tone curve in camera, his approach would be really neat indeed!
I think that we’re probably talking about the same thing from different directions, with different priorities. For now I’m not concerned with rescuing aged or mistreated negatives, just with being able to reproduce, simply and easily, the colour palette of the prints I made in the darkroom from my own well-exposed negatives. You could ‘read’ a contact sheet and get very close to the right filtration and exposure the first time.
There is, I suppose, an analogy for NLP in darkroom work: those ‘colour analysers’, where the light from the enlarger was diffused and fed into a sensor, which then promised to give you a starting point for the C, M and Y values to dial into the enlarger. I never used one because I found it easy to get to a starting point with a simple test print, and I was using a fairly small range of films. The analogy falls down, though, because there was nothing adaptive about the rest of the chain: the colour paper didn’t adapt, nor did the chemistry; you dialled in CMY, set the exposure, and that print would come out just the same now as it did then on the same Fuji Crystal Archive paper.
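The software counterpart of that integrate-everything-into-one-reading idea would be something like a grey-world estimate; a rough sketch, not how any particular analyser was calibrated:

```python
import numpy as np

# A colour analyser effectively integrated the whole frame into one
# reading. The digital analogue is a grey-world estimate: average the
# converted frame and read the starting correction off the channel means.

def analyser_starting_point(pos):
    """pos: linear positive RGB in [0, 1]. Returns per-channel gains
    that would neutralise the frame's average colour."""
    means = pos.reshape(-1, 3).mean(axis=0)
    return means.mean() / np.maximum(means, 1e-6)  # gains toward grey
```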
To be fair, I suppose there is another analogy: if Nate simply had to tailor his software around the same light source and the same camera and lens, things would be a lot more straightforward.
Partially agreed, but there’s some variation in the negatives, therefore, a few things need to be dialed in (adapted). It’s about words and whether one size fits all.