Optimize NLP for High CRI AND RGB light sources?
Reading the NLP forum it looks like there is an active debate about High CRI vs. RGB light sources. I think that I understand the source of the disagreement and see an opportunity for NLP to be much better.
I have scanned about 1/3 of a lifetime collection of 30,000 color negatives using a Sony NEX-7 and NLP. This was intended to be just an index of the collection. The best images were to be rescanned with a Nikon Coolscan LS4000 ED. The Nikon makes beautiful scans but too slowly to create a 30,000 neg index.
But I find that I truly enjoy the NLP interface, and occasionally I get a very well scanned image. More often I get crossovers that are impossible to correct in NLP. I get images that are too magenta AND too green, or that just don’t have the subtle blues and greens I expect. They are useful for an index but not a final product.
I think that with an additional feature in NLP our scans could become more reliably excellent. Or perhaps NLP’s Fuji Frontier setting could be optimized for the Frontier’s actual light source:
Thanks @NateWeatherly
I have yet to find a parameter set in NLP that is consistently successful with an RGB source. If you use a narrow-spectrum RGB source like the Frontier, or a late-model iPad or phone, with NLP, you tend to get radically saturated images. Is NLP using camera profiles that were designed for full-spectrum daylight illuminating full-spectrum scenes? A narrow-spectrum RGB light source radically changes the digital camera system’s spectral sensitivity, so an input camera profile made for white light can be radically wrong.
Nate Weatherly explains this far better than I can: here
But why not just use white light with NLP?
It takes far too much tweaking to get a good image. And as film dyes fade the situation gets worse.
Why do all the “big guns” in the scanning world use or recommend RGB light sources?
- They want to render colors that are faithful to the intent of the originating film manufacturer.
- They get easier processing, with less noise in the data.
- Their scans are less sensitive to dyes faded by years of storage. They want to represent the image in each layer of the film as it was originally made, regardless of the color the CN dyes have become.
To get faithful color rendition, simulate the color print material
Edward J. Giorgianni and Thomas E. Madden are color scientists, each with over 30 years of teaching, inventing, and creating standards for representing color across disparate components. They were responsible for the Kodak Photo CD and other specifications. In their very readable textbook Digital Color Management: Encoding Solutions, one point they make is:
- To digitally represent the scene as intended by the film manufacturer you need to scan the color negative using spectral sensitivity that matches the sensitivity of the print material intended for that film.
Fuji Frontier’s light source, with peaks at B=460nm, G=550nm, and R=630nm, matches the print materials pretty well. If its red were up at 690nm it would match even better.
The curves below are for motion picture color print film; I have been unable to find spectral sensitivity curves for print paper.
Academy Printing Density Standards
Kodak Vision color print film sensitivity curves:
But a typical digital camera’s spectral sensitivity has peaks at B=460nm, G=530nm, and R=590nm. In particular, the camera’s red sensitivity is far from the red sensitivity of the print materials.
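To see how much the illuminant alone can shift a channel’s effective sensitivity, here is a toy numerical sketch. All peaks and widths are illustrative assumptions modeled as Gaussians, not measured data: a broad camera red response peaking at 590nm, multiplied by a narrowband 630nm LED spectrum, ends up with an effective peak most of the way to the LED’s wavelength.

```python
import numpy as np

wl = np.arange(400.0, 720.0, 1.0)  # wavelength grid in nm

def gaussian(center_nm, width_nm):
    """Unnormalized Gaussian over the wavelength grid."""
    return np.exp(-0.5 * ((wl - center_nm) / width_nm) ** 2)

# Broad red channel of a hypothetical typical camera, peaking at 590 nm.
camera_red = gaussian(590, 40)

# Two illuminants: flat white light vs. a narrowband 630 nm red LED.
white = np.ones_like(wl)
red_led = gaussian(630, 10)

# The effective sensitivity of the scanning system is the product of
# camera response and illuminant power at each wavelength.
eff_white = camera_red * white
eff_led = camera_red * red_led

print("effective red peak, white light:", wl[np.argmax(eff_white)], "nm")
print("effective red peak, red LED:   ", wl[np.argmax(eff_led)], "nm")
```

With these assumed numbers the white-light peak stays at the camera’s 590nm, while the LED pulls the effective peak to roughly the high 620s, closer to where the print materials are sensitive.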
Camera scans with white light are more ambiguous
There is significant overlap in these camera sensitivities. For example, the green record is affected by more than just the magenta dye. Worse, the red channel is strongly affected by both the cyan and magenta dyes. This results in significant ambiguity in the resulting data.
I believe that this ambiguity requires brittle algorithms to tease subtle colors out of the data. (It is my sense that this situation was made worse with NLP version 2.)
The situation gets far worse as color negatives age
Researchers focused on motion picture preservation have studied the effect of aging on color negatives and how best to make digital copies.
They make the following recommendation for scanning spectral sensitivity. Notice that by moving the red sensitivity so far out they have minimized the effect of the magenta layer on the red record.
Barbara Flueckiger et al., “Investigation of Film Material–Scanner Interaction”.
Thank you Nate Weatherly
Nate Weatherly opened my eyes to this situation. His graphic here summarizes his thorough explanation of the reason for RGB light sources:
Discussion
As I see it, the challenge for our scans of color negatives is to deliver the interpretation of the scene as intended by the origination emulsion maker. At least that is what we should get as the default, before we make aesthetically driven changes.
We have no control over negatives processed years ago or over the spectral sensitivity of our cameras. When we use a digital camera to scan negatives, each CMY layer in the filmstrip should be represented in a digital RGB channel. But because of the overlap in the camera’s sensitivity curves and the significant overlap in the filmstrip dyes, there is significant ambiguity, a.k.a. crosstalk, between channels.
But we can control the illuminant. With sharp-cutting colored LEDs we can reduce the crosstalk recorded by the camera. Essentially, we can make the camera behave more like a densitometer.
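One way to put a number on this is to model the dye-to-channel coupling as a 3×3 mixing matrix and compare its condition number under the two illuminants. The entries below are illustrative assumptions I made up to mimic the overlap described above, not measured values; the point is only that a near-diagonal (narrowband) mixing matrix amplifies noise far less when inverted to recover per-layer densities.

```python
import numpy as np

# Rows: camera R, G, B channels; columns: the film's cyan, magenta, yellow
# dye layers. Entry [i, j] is how strongly dye j modulates channel i.
# Values are illustrative assumptions, not measurements.

# Broadband white light: heavy crosstalk, especially magenta into red.
mix_white = np.array([
    [1.00, 0.45, 0.08],   # red channel sees cyan AND a lot of magenta
    [0.25, 1.00, 0.30],
    [0.05, 0.35, 1.00],
])

# Narrowband RGB LEDs: each channel probes mostly one dye layer.
mix_rgb = np.array([
    [1.00, 0.05, 0.01],
    [0.03, 1.00, 0.05],
    [0.01, 0.05, 1.00],
])

# The condition number bounds how much measurement noise is amplified
# when the mixing is inverted to recover per-dye densities.
print("white-light condition number:", np.linalg.cond(mix_white))
print("narrowband condition number:", np.linalg.cond(mix_rgb))
```

With these assumed couplings the narrowband matrix is close to perfectly conditioned, while the white-light matrix amplifies noise severalfold on inversion. That is the “densitometer” advantage in linear-algebra terms.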
Will NLP work well with RGB illuminants? I am assuming that NLP is designed to tease colors out of data heavy with crosstalk, i.e. data made with white light. @nate, I am assuming that from your reply to @DrBormental regarding profiles for different digital cameras. It sounded like you are using the image’s camera metadata to select input camera profiles in NLP. (Of course those profiles might be designed for full-spectrum images and work best with white-light illuminated scans.)
Wouldn’t color channels that were spectrally separated by the illuminant, not by the filters in the camera, be independent of the camera used? Wouldn’t the camera profile then be close to the same for all cameras?
It seems to me that the least ambiguous color separations will be the easiest to convert to positives accurately. Could NLP work even better if it knew the light source? You could have a light-source variable in NLP, which would allow conversion to be optimized for both high-CRI and RGB illuminants.
I propose a new user input for NLP: Light source used
- White light (High CRI)
- RGB (like Fuji Frontier, iPhone, iPad)
- Max-separation RGB (Flueckiger et al., Weatherly, Giorgianni and Madden, Academy Printing Density, and Kodak chromogenic print paper)
The deep red around 700nm needed for max separation may be too long a wavelength for typical cameras to record reliably. Some experimentation is needed; the IR blocking filter in the camera might need to be removed.
But maybe something short of the max-separation red would still make clean separations across various film stocks, processing, and storage conditions.
I make this proposal as a fan of NLP
I look forward to discussion. Please correct me where you find me wrong or confused. And a special thanks to Nate Weatherly and Edward Giorgianni for helping me to understand these issues.