Optimize NLP for High CRI AND RGB light sources?

Reading the NLP forum, it looks like there is an active debate about high-CRI vs. RGB light sources. I think I understand the source of the disagreement and see an opportunity for NLP to be much better.

I have scanned about one third of a lifetime collection of 30,000 color negatives using a Sony NEX-7 and NLP. This was intended to be just an index of the collection; the best images were to be rescanned with a Nikon Coolscan LS-4000 ED. The Nikon makes beautiful scans, but far too slowly to create a 30,000-negative index.

But I find that I truly enjoy the NLP interface, and occasionally I get a very well scanned image. More often I get crossovers that are impossible to correct in NLP. I get images that are too magenta AND too green, or that just don’t have the subtle blues and greens I expect. They are useful for an index but not as a final product.

I think that with an additional feature in NLP our scans could become more reliably excellent. Or perhaps the existing Fuji Frontier setting in NLP could be optimized for an actual Fuji Frontier light source:

Fuji Frontier SP3000 illuminant spectrum

Thanks @NateWeatherly

I have yet to find a parameter set in NLP that is consistently successful with an RGB source. If you try using a narrow-spectrum RGB source like the Frontier, or a late-model iPad or phone, with NLP, you tend to get radically saturated images. Is NLP using camera profiles that were designed for full-spectrum daylight illuminating full-spectrum scenes? A narrow-spectrum RGB light source radically changes the digital camera system’s effective spectral sensitivity, so an input camera profile built for white light can be badly wrong.
Nate Weatherly explains this far better than I can: here

But why not just use white light with NLP?

It takes far too much tweaking to get a good image. And as film dyes fade the situation gets worse.

Why do all the “big guns” in the scanning world use or recommend RGB light sources?

  • They want to render colors that are faithful to the intent of the manufacturer of the origination film.
  • They get easier processing and less noise in the data.
  • Their scans are less sensitive to dyes faded by years of storage. They want to represent the image in each layer of the film as it was originally made, regardless of the color the color negative (CN) dyes have become.

To get faithful color rendition, simulate color print material

Edward J. Giorgianni and Thomas E. Madden are color scientists, each with over 30 years of teaching, inventing, and creating standards for representing color across disparate components. They were responsible for the Kodak Photo CD and other specifications. In their very readable textbook Digital Color Management: Encoding Solutions, one point they make is:

  • To digitally represent the scene as intended by the film manufacturer, you need to scan the color negative using spectral sensitivities that match the sensitivities of the print material intended for that film.

The Fuji Frontier’s light source, with peaks at B=460nm, G=550nm, and R=630nm, matches the print materials pretty well. If its red were up at 690nm, it would match even better.

The curves below are for color print motion picture film. I have been unable to find spectral sensitivity curves for print paper.

Academy Printing Density Standards
Academy Printing Density

thanks Nate Weatherly

Kodak Vision color print film sensitivity curves:

Kodak Vision color print film sensitivity curves

But the typical digital camera’s spectral sensitivity has peaks at roughly B=460nm, G=530nm, and R=590nm. In particular, the camera’s red sensitivity is far from the red sensitivity of print materials.

Camera scans with white light are more ambiguous

Sensitivity curves for ILCE-7RM2

thanks Nate Weatherly

There is significant overlap in these camera sensitivities. For example, the green record is affected by more than just the magenta dye. Worse, the red channel is strongly affected by both the cyan and the magenta dyes. This results in significant ambiguity in the resulting data.

I believe that this ambiguity requires brittle algorithms to tease out subtle colors from the data. (It is my sense that this situation was made worse in NLP version 2.)
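To make the crosstalk concrete, here is a toy numerical sketch in Python. The curves are entirely invented Gaussians (peaks loosely taken from the figures above), not measured data, so only the qualitative behavior matters:

```python
# Toy sketch only: invented Gaussian curves, not measured sensitivities.
import numpy as np

wl = np.arange(400, 701, 5)  # wavelength grid in nm

def band(peak, width):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Hypothetical camera channel sensitivities (peaks as quoted above)
cam = {"B": band(460, 25), "G": band(530, 35), "R": band(590, 40)}
# Hypothetical absorption bands of the three colour-negative dyes
dye = {"yellow": band(440, 40), "magenta": band(545, 45), "cyan": band(650, 55)}

# Under flat "white" light, how much of each channel's signal comes from each dye?
for ch, s in cam.items():
    resp = {d: (s * a).sum() for d, a in dye.items()}
    total = sum(resp.values())
    print(ch, {d: round(v / total, 2) for d, v in resp.items()})
# With these invented curves the R channel responds to the magenta dye about as
# strongly as to the cyan dye: that is the crosstalk/ambiguity being described.
```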

The situation gets far worse as color negatives age

Researchers focused on motion picture preservation have studied the effect of aging on color negatives and how best to make digital copies.

They make the following recommendation for scanning spectral sensitivities. Notice that by moving the red sensitivity so far out, they have minimized the effect of the magenta layer on the red record.

Flueckiger et al. max-separation

Barbara Flueckiger et al., “Investigation of Film Material–Scanner Interaction”.

Thank you Nate Weatherly

Nate Weatherly opened my eyes to this situation. His graphic here summarizes his thorough explanation of the reason for RGB light sources:

see at pixls.us and also see

Discussion

As I see it, the challenge for our scans of color negatives is to deliver the interpretation of the scene as intended by the origination emulsion maker. At least that is what we should get as the default, before we make aesthetically driven changes.

We have no control over negatives processed years ago, or over the spectral sensitivity of our cameras. When we use a digital camera to scan negatives, each CMY layer in the filmstrip should be represented in its own digital RGB channel. But because of the overlap in the camera’s sensitivity curves and the significant overlap in the filmstrip’s dyes, there is significant ambiguity, a.k.a. crosstalk, between channels.

But we can control the illuminant. With sharp-cutting colored LEDs we can reduce the crosstalk recorded by the camera. Essentially, we can make the camera behave more like a densitometer.
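Extending the same toy sketch from earlier: swap the flat illuminant for three hypothetical narrow-band LEDs near the Frontier peaks quoted above, and the crosstalk terms shrink. Again, these are invented curves and the result is qualitative only:

```python
# Same invented curves as the earlier sketch, now with narrow LED bands.
import numpy as np

wl = np.arange(400, 701, 5)
band = lambda peak, width: np.exp(-0.5 * ((wl - peak) / width) ** 2)

cam = {"B": band(460, 25), "G": band(530, 35), "R": band(590, 40)}
dye = {"yellow": band(440, 40), "magenta": band(545, 45), "cyan": band(650, 55)}
led = {"B": band(460, 10), "G": band(550, 10), "R": band(630, 10)}  # hypothetical LEDs

for ch in ("R", "G", "B"):
    s = cam[ch] * led[ch]  # the sensor now only "sees" the LED's narrow band
    resp = {d: (s * a).sum() for d, a in dye.items()}
    total = sum(resp.values())
    print(ch, {d: round(v / total, 2) for d, v in resp.items()})
# Each channel is now dominated by a single dye, i.e. the camera acts much
# more like a three-channel densitometer than it does under white light.
```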

Will NLP work well with RGB illuminants? I am assuming that NLP is designed to tease out colors from data heavy with crosstalk, i.e. data made with white light. @nate, I am assuming that from your reply to @DrBormental regarding profiles for different digital cameras. It sounded like you are using the image’s camera metadata to select the input camera profile in NLP. (Of course those profiles might be designed for full-spectrum images and work best with white-light-illuminated scans.)

Wouldn’t color channels that were spectrally separated by the illuminant, and not by the filters in the camera, be independent of the camera used? Wouldn’t the camera profile be nearly the same for all cameras?

It seems to me that the least ambiguous color separations will be the easiest to convert to positives accurately. Could NLP work even better if it knew the light source? You could have a variable in NLP for the light source, which would allow you to optimize the conversion for high-CRI and for RGB illuminants.

I propose a new user input for NLP: Light source used

  • White light (High CRI)
  • RGB (like Fuji Frontier, iPhone, iPad)
  • Max-separation RGB (Flueckiger et al., Weatherly, Giorgianni and Madden, Academy Printing Density, and Kodak chromogenic print paper)

The deep red around 700nm needed for max separation may be too long a wavelength for typical cameras to record reliably. Some experimentation is needed; the IR-blocking filter in the camera might need to be removed.

But maybe something short of the max-separation red is still quite good at making clean separations from various film stock, processing, and storage conditions.

I make this proposal as a fan of NLP

I look forward to discussion. Please correct me where you find me wrong or confused. And a special thanks to Nate Weatherly and Edward Giorgianni for helping me to understand these issues.


Great post! I’ve been having very similar thoughts, and my conclusion is that we’re looking at an impossible situation: the 2-dimensional matrix of cameras and light sources. People in some “cells” of that matrix are much happier with NLP than others! :slight_smile: Just comparing the default output between a Canon 5D Mk4 and a Sony a7R IV, I can tell that Sony users don’t know what they’re missing! (assuming they have the same light source as I do)

If I were to suggest something to improve the consistency of NLP’s default output, it would be to include a physical color calibration target with every copy of NLP, supplemented by a calibration feature inside the plugin. That way users would shoot the target using their light source + camera combination, and NLP would be able to create a personalized profile on the spot, because it knows the true colors (probably just patches of grey) of the target.

That’s what I do for my manual inversions anyway.

Actually, the NLP calibration kit could be a separate product. I will gladly pay another $100 for it. :wink:

@DrBormental Do you think a 2D matrix would do it? What about all the different emulsions and the effects of ageing? Your proposal for a profiling tool would take you part of the way, but I think there will still be too many uncontrolled variables.

That is the reason that I think that the “maximum separation” light source is the only answer.

BTW, I would happily have the IR-blocking filter removed from the NEX-7 if that were necessary for the camera to see a source with 700nm red, AND IF NLP were optimized for such a light source.

I do not think that different emulsions make much difference. A good scanning rig needs to behave like “digital RA-4 paper”, and the paper doesn’t care about emulsions.

TBH I am quite happy with everything except speed. It is not hard to produce great scans; just get a color target and practice manual inversions. They allow you to get a feel for each emulsion. I think earlier I posted this sample of Fuji 400H Pro inverted manually. I make these for all films I use. It is hard to get better than that, and I then use these as references for tuning my NLP parameters.

What can be improved is speed and efficiency. NLP does not produce results like the above by default; it requires tweaking. The amount of tweaking is significant, but it is still faster than manual inversion, which is why I use it. As Nate gets closer to manual quality with each update, the efficiency improves.

The fact that manual inversions can look perfect tells me that a high-CRI light source is good enough for algorithms to work their magic.

Speaking of old and faded film, I have no experience with it, sorry.

Thanks @davidS for your post!

Yes, this is something I have talked about doing and am planning to do in the future.

What I haven’t seen are good, commercially available RGB light sources that are optimized for film reproduction (other than, of course, repurposing iPhones and iPads, which are less than ideal for other reasons). I have suggested to several manufacturers (Kaiser, Negative Supply, etc.) that they should make one, but I have yet to see any. If you are aware of any widely available RGB light sources that match film’s sensitivity curves, I would be happy to add them to my recommended list of camera scanning light tables! Or if there is a Kickstarter, I will be the first to invest!

Until then, the best advice I can give is to use either a high-CRI white light source or a modern iPad or iPhone as the light source. Telling users that theoretically there should be a better light source is not helpful if it is not available to them.

But again, I’d be very happy to update this with better options as they become available, and help with testing/calibrating.

I think there is maybe a lack of understanding of how digital camera sensors work and what is happening under the hood with raw camera profiles (specifically, the color matrix and forward matrix). Yes, with the color filter array in a digital camera, there is overlap between the spectral curves. But this overlap is mathematically known and solvable via the color matrix and forward matrix (and even more finely tuned via the HueSatDelta tables and the final LookTable). This is why, despite the imperfection of the color separation filters in a digital camera, it is possible to get extremely accurate color reproduction (or to fine-tune the color reproduction to be more ideal for film).
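As a toy illustration of what I mean (a simplification, not the actual DNG pipeline): if the overlap is known and fixed for a given illuminant, it can be undone with a 3x3 matrix, which is conceptually what the profile’s matrices do. The numbers below are invented:

```python
import numpy as np

# Hypothetical mixing matrix: rows = camera R,G,B; columns = "true" R,G,B
M = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.05, 0.25, 0.70]])

true_rgb = np.array([0.6, 0.3, 0.1])   # made-up scene colour
raw = M @ true_rgb                     # what the overlapping channels record
recovered = np.linalg.solve(M, raw)    # profile-style linear correction

print(np.allclose(recovered, true_rgb))  # True: known overlap alone is solvable
# Caveats: the correction amplifies noise as the matrix becomes ill-conditioned,
# and it is only exact for the illuminant and conditions it was derived for.
```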

(BTW, it’s unclear to me if every digital camera has as much overlap as the Sony ILCE-7RM2 shown here, or even how accurate this data is. I will say, anecdotally, that the majority of users who experience very “off” colors when digitizing with a digital camera are using a Sony. I have received virtually no major color complaints from Fuji users, as an example. But I have found ways to improve Sony renderings via the raw camera profile, which will be released in v2.3 shortly.)

It’s also unclear to me what exactly it is suggested we do with this sensor overlap that isn’t already being done internally via the camera profile. I’ve seen it suggested to take three separate digital shots, cycling through red-only, green-only, and blue-only lighting, and then separate and recombine the color channels in post. But in this case, you still haven’t changed the imperfection of the RGB filtering on the sensor, or the peaks of the curves. All you’ve done is make it more difficult to account for these things in post.
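For concreteness, here is roughly what that three-exposure recombination amounts to; the arrays, names, and sizes below are just made-up stand-ins for three aligned, linear frames:

```python
import numpy as np

h, w = 400, 600  # hypothetical frame size
red_shot   = np.random.rand(h, w).astype(np.float32)  # frame lit by the red LED only
green_shot = np.random.rand(h, w).astype(np.float32)  # frame lit by the green LED only
blue_shot  = np.random.rand(h, w).astype(np.float32)  # frame lit by the blue LED only

# Stack one channel from each exposure into a single RGB image,
# then crudely balance the channels against each other.
rgb = np.dstack([red_shot, green_shot, blue_shot])
rgb /= rgb.reshape(-1, 3).max(axis=0)

print(rgb.shape)  # (400, 600, 3): still recorded through the same sensor filters
```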

This may be due to some issues in general with the Sony profiles that I have been working to improve. If you’d be willing to send RAW samples to me at nate@natephotographic.com, I’d be happy to take a look!

I’m also hopeful that the new profiles and calibrations coming in v2.3 should greatly improve your experience.

And just to summarize again:

I’m absolutely all for improving light sources, and getting more refined calibrations into Negative Lab Pro. But for most users, you should be able to get amazing colors from a good, high CRI light source. And if you aren’t getting good colors, then something has gone wrong along the way and I can help figure out what that is and help improve it!

Thanks!
-Nate Johnson
Creator of Negative Lab Pro

1 Like

dxomark.com publishes sensor response data measured with CIE D50 and CIE A illuminants.
Check out their camera analyses; they might help.

How to get started?

I think we have a bit of a “chicken vs. egg” problem here. I am contending that, along with having no commercial RGB light sources, we have no good commercial software for using them. If that is to change, someone is going to have to produce a product in somewhat of a vacuum. For that to happen, there would have to be significant interest in the new direction and confidence in its success.

I think that work with breadboards can supply that confidence. I have built a breadboard light source using commercial LEDs that matches the spectral distribution of the Fuji Frontier light source. But I don’t know of software that will work well with it.

Will NLP disadvantage “RGB scans” from that breadboard?

Does NLP apply D65-based input camera profiles to all scans? If so, @nate, do you disagree with Nate Weatherly? His context is the negadoctor module in darktable, but the concepts should be the same.

  • The iPad is an excellent light source for scanning negatives (better than flash, tungsten, or any “white” LED panel) but when you start using narrow band RGB light sources normal camera profiles will result in extreme saturation and clipping, especially when inverted. Camera profiles can only deal with color, they don’t work when you’re using narrow band illumination to make your camera into a sort of densitometer. Full post at pixls.us

Nate Weatherly describes my experience when using my “Frontier light source” breadboard with NLP: extreme oversaturation and difficult crossovers.

How can input camera profiles control sensor overlap ambiguity?

Obviously there is much here that I don’t understand. Can someone recommend a reference I should read? My many hours with Giorgianni and Madden have taught me a great deal about the digitization of negatives, but nothing about the mathematical reduction of sensor spectral overlap ambiguity. It seems to me that in a severe overlap situation, once you have only RGB channels, you cannot tell whether the red channel was attenuated by the cyan or the magenta dye, and so on.

I would contend that a narrow-spectrum RGB light source does more than reduce sensor overlap. Such a light source confines digitization to spectral bands where little overlap exists in the dyes of our color negatives.
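One back-of-the-envelope way to quantify that ambiguity (my own framing, with invented numbers) is the condition number of the dye-to-channel mixing matrix: the larger it is, the more any linear correction amplifies noise.

```python
import numpy as np

# Hypothetical mixing of (yellow, magenta, cyan) dye densities into camera B, G, R
broadband = np.array([[0.75, 0.20, 0.05],
                      [0.15, 0.65, 0.20],
                      [0.02, 0.48, 0.50]])   # R row: magenta nearly as strong as cyan

narrowband = np.array([[0.85, 0.13, 0.02],
                       [0.05, 0.85, 0.10],
                       [0.01, 0.15, 0.84]])  # each channel dominated by one dye

print(np.linalg.cond(broadband))   # larger: inverting it amplifies noise more
print(np.linalg.cond(narrowband))  # smaller: the separation is better behaved
```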

Do we need an input camera profile for the camera-and-light-source system?

Is it enough to just turn off the camera input profile in NLP? I don’t know enough to say. Is it possible to trick NLP into not applying D65 input camera profiles? Do we need a profile for the system including the light source?

Glenn Butcher over at pixls.us provided a design for a single-image spectrometer, and software that uses it to create input camera profiles.
DSZ_5034
He describes its intended use: to create daylight profiles. But it strikes me that it could create system profiles for other camera + light source combinations. For instance, use your DSLR of choice and your RGB light source.

Is white-light scanning good enough?

I know that many NLP users are quite happy with their scans using high-CRI light sources. I suggest that there are three opportunities here:

  • More faithful color reproduction, matching the intent of the film manufacturer.
  • Less tweaking needed to get a good scan.
  • Better behavior with old and faded emulsions.

Faithful reproduction

I have already discussed the first item extensively. But I just found spectral sensitivity curves for Kodak paper with RA-4 processing:
Kodak RA-4 Spectral sensitivity curves
Note: its cyan-forming layer’s peak is well above digital camera red-channel peaks, and the paper is almost blind at 600nm.

Less tweaking

@DrBormental described RA-4 as a system designed to work well with negatives from most emulsions. That simplifies the task of making photographic prints without having to consider the film stock used for the color negative. But also note that in the darkroom you really have only two levers, exposure and color filtration, and for most prints one filter pack in the color head for the whole image.

Our digital workflow includes so many more levers. But shouldn’t our default be as simple as RA-4? Beyond that, pull the digital levers for creative adventures.

Better scans from old negatives

@DrBormental, you and I live in very different worlds. I have not exposed film for about 18 years; it sounds like you are actively shooting film now. (There is no way for me to get a Macbeth chart into my old scenes.)

I won’t keep beating this drum, so well researched by the motion picture preservation folks, here, but read the comments in my original post. Suffice it to say that our ideal for old and faded emulsions is to digitize the densities of the CN dyes without being affected by the colors they have become. And the proposed narrow-spectrum light source is those folks’ proposed method for achieving that goal.

BTW, thank you @Digitizer for the dxomark suggestion. Even their data says that the red channel on my camera, the Sony A7R2, is strongly influenced by green light, suggesting that it may peak near 600nm, as shown in the graph in my post.

I used an iPad as a backlight for this proof of concept:

@Digitizer, have you scanned the same negative with a high-CRI light source and NLP?

Yes, and it worked too, but with a result that was not as good as the one from the RGB scan. I would not say that the RGB scan is better because of shortcomings in NLP with the high-CRI scan, though. A single trial is not enough to prove anything.

Improving the overall colour balance would have required more adjustments on a TIFF positive, plus local adjustments to remove the casts on the background and the skin.

Using a true B&W camera could have helped. But I’m not spending money on a Leica Monochrom or on having the filter array removed from my camera…

Real Life Examples - Negative Lab Pro v2.3 + iPad as Backlight

Just to show some real-life examples… all of the below were digitized using a Fuji X-T2 with an iPad as the light source. They were converted using Negative Lab Pro v2.3 (currently in beta testing) and adjusted to taste, with the final settings shown.

Version 2.3 uses a new camera profile that simply works better across the board (for both white light and for RGB light). So I’ll be interested to see if you are able to get better results with your breadboard light source in v2.3 than you did previously.

Examples of Negative Lab Pro v2.3 with an iPad light source:

To my eyes, at least, the colors in these are quite good, on par with or better than the same negatives shot on the Kaiser light table. Good skin tones, good sky hue, good distinctions between yellows and greens, etc…

And of course, there is a LOT of adjustability within them.


But why am I getting bad colors with my RGB light table?

First, there are some major improvements in v2.3, so I’ll be interested to see if you get better results with that… but in general, there are a few reasons that colors can get wacky:

  1. The importance of color balance. Just as you would in a darkroom setting, a lot of the color balance will come down to the operator. Almost all images will need some tweaking of the “temp/tint” settings.
  2. The unpredictability of deteriorated film. The older your film negatives are, the more damaged the colors in them will be. This is especially true if they have not been stored properly, and it is true regardless of whether you are using a white light source or an RGB light source. Will using a perfectly calibrated RGB light source help? Perhaps it will improve the starting point. But even then, you will need to make serious adjustments to get correct colors in the final image.
  3. The default settings in Negative Lab Pro. The defaults in Negative Lab Pro are not set in stone. The default setting, for instance, is to use the “LAB - Standard” tone profile, which adds a fair amount of contrast, which increases perceived saturation and makes it appear more like a lab scan. If you wanted to begin at a more “neutral” starting point, you could use something like the “Linear - Flat” tone profile and save that as your default setting.
  4. The variability of the scene itself. Most film conversion software (including Negative Lab Pro) works by analyzing the scene of the negative itself. This works well in most cases, but it can fail in certain types of scenes, especially scenes with either one dominant color or very low dynamic range. I’m working to improve this by doing more contextual analysis within a roll, but it is an important consideration in why some scenes may not have as good an initial conversion as others.
  5. Issues with setup. Since camera scanning is a DIY adventure, there are a lot of potential mistakes that can be made along the way. Many, many issues I see with color reproduction come down to various errors in the setup that cause uneven light, lens flare, or sensor flare. Most users who experience color issues will benefit from a good examination of every part of their setup. Often it is as simple as masking out light, or upgrading to a new macro lens with better coatings.

In my humble opinion, each of the above five variables will play a larger role in the final colors of your image than the difference between a high-CRI white light source and an RGB light source.

I say this having converted tens of thousands of negatives from various cameras and light sources, and having spent the last 5 years or so working almost exclusively on this project.

But again, I’m not at all opposed to finding ways to improve calibrations for various light sources, and I do believe that good RGB light sources would be wonderful tools to have. I would just caution against thinking that they would be any kind of silver bullet. Any combination of camera, light source, software, etc. will need adjustments to get the results the user wants.


It would be helpful to have samples to test in Negative Lab Pro to see for myself what is happening and if it could benefit from adjustments to the profile. If you’re up for it, you can email them to me at nate@natephotographic.com

The Adobe DNG Specification is a good place to start to understand what is happening inside a camera profile (at least in Lightroom).

And here is a really good multi-part series on understanding how color works in digital cameras…

No, that’s not how it works. There always needs to be a camera profile when working with RAW. Otherwise, there would be no instructions on how to turn the RAW sensor data into white-balanced sRGB data.

I could perhaps introduce a “mode” where you only see adjustment tools in Negative Lab Pro that were possible in RA-4. Or, you could try the following:

  1. Set the tone profile to “Linear-Flat”
  2. Only adjust the “Brightness” and “Temp/Tint” sliders in Negative Lab Pro.

But many people use Negative Lab Pro for different reasons and expect different outputs, so I try to make it as flexible as possible to work with many different setups and desired results.

Again, thank you for this discussion!

I’m always looking to improve Negative Lab Pro and make it a great experience for all!


@nate, I wonder what you think about a camera + light source calibration routine. I used to be an extremely happy NLP user until I “upgraded” to a Sony a7R IVa and, besides waiting for v2.3, there’s nothing I can do: the results are awful, and tweaking NLP settings is much slower than inverting manually.

I wonder if providing users with a tool to calibrate their camera + light combination against a well-known standard (IT8 targets like Kodak’s Q-60 or similar) could be an option. Wolf Faust sells transparency targets, for example. I suspect that’s what SilverFast comes with.

First, it would allow users of any camera to get more predictability, and it could also create an additional income stream for NLP.

I love the idea.

I have already experimented with a methodology to do this. The hardest thing, I think, is actually getting a truly neutral negative target. Targets that I’ve shot on Kodak Portra, for instance, are influenced by the color palette of Portra, and so the calibration tends to correct away the uniqueness of Portra’s color palette, if that makes sense? Maybe someone else has already thought through a way to solve this…
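For what it’s worth, the core of such a target-based calibration could be as simple as a least-squares fit of a correction matrix from shot patches to the target’s published reference values. This is just a generic sketch with invented patch data, not NLP’s internal method:

```python
import numpy as np

# Invented data: 24 patches as read (linear RGB) from a shot of the target
measured = np.random.rand(24, 3)

# Pretend the "published" reference values relate to the measurements linearly
true_mix = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.8, 0.1],
                     [0.0, 0.1, 0.9]])
reference = measured @ true_mix

# Fit a 3x3 correction M such that measured @ M ~= reference
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
corrected = measured @ M

print(np.abs(corrected - reference).max())  # ~0 on this synthetic data
```

A real version would of course have to deal with the orange mask and per-film differences, which is exactly the neutral-target problem described above.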

This is because the Sony a7R IVa was not available at the time v2.2 launched, so there is no NLP profile for it in the package. When this happens, you will see a warning in Lightroom when you first launch Negative Lab Pro that says “profile missing” underneath the profile name, and Lightroom will use the default profile (Adobe Color), which is not designed for converting negatives and will introduce many problems.

The new profile will be included in v2.3.

In the meantime though, you can download this profile, and then add it to Lightroom (file > import develop profiles and presets).

https://www.dropbox.com/s/m8bqucpv0mjmxkz/Negative%20Lab%20-%20Sony%20ILCE-7RM4A%20v2.0.dcp?dl=0

Just note that after adding this profile, you will need to use Negative Lab Pro to unconvert, and then reconvert, any previous conversions you’ve made with your Sony a7R IVa. The reason is that once you add the profile, Lightroom will look at conversions where this profile should already have been applied but was missing, and start applying it. That will throw off the image, since the analysis was done against the wrong profile (Adobe Color) when it should have been done against this new one. Does that make sense?

@nate, first of all, what beautiful images. I am salivating for your v2.3. Of course I still worry that my 30-plus-year-old negatives will not perform that well.

And thank you for your thoughtful response and references for my post.

Exploring with @DrBormental the possibility of using NLP with a camera input profile made for the camera-plus-light-source system: is there a way to get NLP to use a user-supplied profile? What do you think about Glenn Butcher’s method of using a spectrometer and software to construct the profile, and about its use with NLP? If it works, it would not need a neutral negative target (or any negative) to characterize the system.

I know that his method is not easily duplicated by others, but maybe we would learn something by trying it. In particular, I am trying to get the best out of old and faded negatives.

Of course! :raised_hands:

Yes! If you set the “color model” to “none” before converting, then it will use whatever camera profile and camera calibration settings you had set prior to opening Negative Lab Pro.

It’s cool to see how he has set up his own spectrometer, but I really don’t see how this would be any better than making a profile by shooting an IT8 target… because the spectral data of the IT8 target is known to a high degree of accuracy.

In fact, in the follow-up post, he looks at how close an IT8 target gets to the spectral profile. Based on the single sample image (which is not nearly enough to evaluate), there isn’t that much difference: “The Quest for Good Color - 3. How Close Can IT8 Come to SSF?” - Processing - discuss.pixls.us

In both cases, they are simply trying to get the camera profile to accurately render what it is seeing for a given illuminant. Both an IT8 target and a spectral response function will get you there.

@nate, for calibration to work and maintain the unique look of every emulsion, you have to use 3rd-party color targets like those from Wolf Faust. They are actually meant for scanning slides, but that’s perfect for our use case: the goal of calibration is to cancel out the camera manufacturer’s RAW fuckery and the imperfections of a light source, normalizing toward a well-known state. I have also spent a considerable amount of time studying the color/film/scanning literature, and TBH I am convinced that without “normalizing” the RAW input, NLP (or any similar product) will never work equally well for all camera + light combinations. Some users will be luckier than others.

P.S. Nate, I just tried your new profile and indeed there are significant improvements! (I was using the a7R IV profile before, just re-tagged as “IVa”.)

Creating profiles with today’s films, shot with today’s cameras and today’s processing, will not necessarily improve the results we get with old negatives.

NLP provides good starting points with my negatives. Some are pretty much spot on, some take more effort or even tweaking of a positive copy, and some will still get me at least a decent B&W copy.

Also, how well do I remember what the colours were when I shot the scenes 30-plus years ago?

Maybe this light will help?