Optimize NLP for RGB light sources?

Great post! I’ve been having very similar thoughts, and my conclusion is that we’re looking at an impossible situation: a two-dimensional matrix of cameras and light sources. People in some “cells” of that matrix are much happier with NLP than others! :slight_smile: Just comparing the default output between a Canon 5D Mk4 and a Sony a7R IV, I can tell that Sony users don’t know what they’re missing! (assuming they have the same light source as I do)

If I were to suggest something to improve the consistency of NLP’s default output, it would be to include a physical color calibration target with every copy of NLP, supplemented by a calibration feature inside the plugin. This way, users would shoot the target using their light source + camera combination, and NLP would be able to create a personalized profile on the spot, because it knows the true colors (probably just patches of grey) of the target.

That’s what I do for my manual inversions anyway.
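A minimal sketch of what such a gray-patch calibration step could look like (made-up numbers; not NLP’s method, and not necessarily my exact procedure): derive per-channel gains from a photographed patch that is known to be neutral.

```python
# Minimal sketch of the gray-patch idea (made-up numbers, not NLP's method):
# derive per-channel gains from a photographed patch known to be neutral,
# using linear RGB values averaged over the patch area.
import numpy as np

def gray_patch_gains(patch_rgb):
    """Per-channel gains that neutralize a patch known to be gray."""
    patch = np.asarray(patch_rgb, dtype=float)
    return patch.mean() / patch      # a neutral patch should read R = G = B

# Example: the camera + light source combination renders gray slightly warm.
gains = gray_patch_gains([0.52, 0.48, 0.43])
print(gains)                                   # multiply every pixel by these gains
print(np.array([0.52, 0.48, 0.43]) * gains)    # all three channels now match
```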

Actually, the NLP calibration kit could be a separate product. I would gladly pay another $100 for it. :wink:

@DrBormental Do you think a 2D matrix would do it? What about all the different emulsions and the effects of ageing? Your proposal for a profiling tool would take you part of the way, but I think there would still be too many uncontrolled variables.

That is the reason that I think that the “maximum separation” light source is the only answer.

BTW, I would happily have the IR blocking filter removed from the NEX-7 if that were necessary for the camera to see a source with 700 nm red, AND IF NLP were optimized for such a light source.

I do not think that different emulsions make much difference. A good scanning rig needs to behave like “digital RA-4 paper”, and the paper doesn’t care about emulsions.

TBH I am quite happy with everything except speed. It is not hard to produce great scans: just get a color target and practice manual inversions; they let you get a feel for each emulsion. I think I posted earlier this sample of Fuji 400H Pro inverted manually. I make these for all the films I use; it is hard to get better than that, and I then use them as a reference for tuning my NLP parameters.

What can be improved is speed and efficiency. NLP does not produce results like the above by default; it requires tweaking. The amount of tweaking is significant, but it is still faster than manual inversion, which is why I use it. As Nate gets closer to manual quality with each update, the efficiency improves.

The fact that manual inversions can look perfect tells me that a high-CRI light source is good enough for algorithms to work their magic.

Speaking of old and faded film, I have no experience with it, sorry.

Thanks @davidS for your post!

Yes, this is something I have talked about doing and am planning to do in the future.

What I haven’t seen are good, commercially available RGB light sources that are optimized for film reproduction (other than, of course, re-purposing iPhones and iPads, which are less than ideal for other reasons). I have suggested to several manufacturers (Kaiser, Negative Supply, etc.) that they should make one, but have yet to see any. If you are aware of any widely available RGB light sources that match film’s sensitivity curves, I would be happy to add one to my recommended list of camera scanning light tables! Or if there is a kickstarter, I will be the first to invest!

Until then, the best advice I can give is to use either a high CRI white light source, or a modern iPad or iPhone as the light source. Telling users that theoretically there should be a better light source is not helpful if it is not available to them.

But again, I’d be very happy to update this with better options as they become available, and help with testing/calibrating.

I think there is maybe a lack of understanding of how digital camera sensors work and what is happening under the hood with raw camera profiles (specifically, the color matrix and forward matrix). Yes, with the filter in a digital camera, there is overlap between the spectral curves. But this overlap is mathematically known and solvable via the color matrix and forward matrix (and even more fine-tuned via HueSatDelta tables and the final LookTable). This is why, despite the imperfection of the color separation filters in a digital camera, it is possible to get extremely accurate color reproduction (or to fine-tune the color reproduction to be more ideal for film).
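To make that concrete, here is a toy illustration (made-up numbers, not the contents of any real ColorMatrix or ForwardMatrix): channel overlap acts as a linear mixing, so a known 3×3 matrix can largely undo it, which is the role the profile matrices play.

```python
# Toy illustration (made-up numbers, not a real ColorMatrix/ForwardMatrix):
# sensor channel overlap is a linear mixing, so a known 3x3 matrix can
# largely undo it, which is what the profile matrices are doing.
import numpy as np

sensor_mix = np.array([
    [0.80, 0.15, 0.05],   # camera "red" also responds a little to green/blue
    [0.10, 0.80, 0.10],   # camera "green"
    [0.05, 0.20, 0.75],   # camera "blue"
])

scene_rgb  = np.array([0.6, 0.3, 0.1])   # the color actually in front of the lens
camera_rgb = sensor_mix @ scene_rgb      # what the overlapping filters record

recovered = np.linalg.inv(sensor_mix) @ camera_rgb   # the role of the profile matrix
print(camera_rgb)   # desaturated by the overlap
print(recovered)    # approximately back to scene_rgb
```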

(BTW, it’s unclear to me if every digital camera has as much overlap as the Sony ILCE-7RM2 camera shown here, or even how accurate this data is. I will say anecdotally that the majority of users who experience very “off” colors when digitizing with a digital camera are using a Sony. I have received virtually no reports of major color issues from Fuji users, as an example. But I have found ways to improve Sony renderings via the raw camera profile, which will be released in v2.3 shortly.)

It’s also unclear to me what exactly is being suggested that we do with this sensor overlap that isn’t already being done internally via the camera profile. I’ve seen it suggested to take three separate digital shots, cycling through red-only, green-only, and blue-only lighting, and then separate and recombine the color channels in post. But in this case, you still haven’t changed the imperfection of the RGB filtering on the sensor, or the peaks of the curves. All you’ve done is make these things more difficult to account for in post.
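For reference, here is roughly what that three-exposure recombination amounts to (a hypothetical helper; it assumes the three raw files have already been demosaiced to linear arrays, e.g. with rawpy):

```python
# Rough sketch of the three-exposure idea (hypothetical helper; assumes the
# three raw files were demosaiced to linear HxWx3 arrays, e.g. with rawpy).
import numpy as np

def recombine(red_lit, green_lit, blue_lit):
    """Take R from the red-lit shot, G from the green-lit, B from the blue-lit."""
    return np.stack(
        [red_lit[..., 0], green_lit[..., 1], blue_lit[..., 2]], axis=-1
    )

# Note: the sensor's own RGB filter curves are unchanged by this, so each
# picked channel still carries whatever crosstalk those filters allow; the
# overlap has not been removed, just made harder to model with one profile.
```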

This may be due to some issues in general with the Sony profiles that I have been working to improve. If you’d be willing to send RAW samples to me at nate@natephotographic.com, I’d be happy to take a look!

I’m also hopeful that the new profiles and calibrations coming in v2.3 should greatly improve your experience.

And just to summarize again:

I’m absolutely all for improving light sources, and getting more refined calibrations into Negative Lab Pro. But for most users, you should be able to get amazing colors from a good, high CRI light source. And if you aren’t getting good colors, then something has gone wrong along the way and I can help figure out what that is and help improve it!

Thanks!
-Nate Johnson
Creator of Negative Lab Pro


dxomark.com publishes sensor response data measured with CIE-D50 and CIE-A illuminants.
Check out their camera analyses; they might help.

How to get started?

I think we have a bit of a “chicken vs. egg” problem. I am contending that along with having no commercial RGB light sources, we have no good commercial software for their use. If that is to change, someone is going to have to produce a product in somewhat of a vacuum. For that to happen there would have to be significant interest in the new direction and confidence in its success.

I think that work with breadboards can supply that confidence. I have built a breadboard light source using commercial LEDs that matches the spectral distribution of the Fuji Frontier light source. But I don’t know of software that will work well with it.

Will NLP disadvantage “RGB scans” from that breadboard?

Does NLP apply D65-based input camera profiles to all scans? If so, @nate, do you disagree with Nate Weatherly? His context is the Negadoctor module in Darktable, but the concepts should be the same.

  • The iPad is an excellent light source for scanning negatives (better than flash, tungsten, or any “white” LED panel) but when you start using narrow band RGB light sources normal camera profiles will result in extreme saturation and clipping, especially when inverted. Camera profiles can only deal with color, they don’t work when you’re using narrow band illumination to make your camera into a sort of densitometer. Full post at pixls.us

Nate Weatherly describes my experience when using my “Frontier light source” breadboard with NLP: extreme oversaturation and difficult crossovers.

How can input camera profiles control sensor overlap ambiguity?

Obviously there is much here that I don’t understand. Can someone recommend a reference that I should read? My many hours with Giorgianni and Madden have taught me so much about digitization of negatives, but nothing about mathematical reduction of sensor spectral overlap ambiguity. It seems to me that in a severe overlap situation, once you only have RGB channels, you cannot tell whether the red channel was attenuated by cyan or magenta dye, and so on.

I would contend that a narrow-spectrum RGB light source does more than reduce sensor overlap. Such a light source confines digitization to spectral bands where little overlap exists between the dyes of our color negatives.
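One way to picture that (purely illustrative, with made-up coupling numbers): treat dye-to-channel crosstalk as a 3×3 matrix. Recovering the three dye densities means inverting it, and the better conditioned that matrix is, the less noise and error get amplified.

```python
# Purely illustrative (made-up coupling numbers): treat dye-to-channel
# crosstalk as a 3x3 matrix. Recovering the three dye densities means
# inverting it, and a badly conditioned matrix amplifies noise and errors.
import numpy as np

broadband = np.array([
    [1.00, 0.45, 0.10],   # red channel sees cyan dye, plus a lot of magenta
    [0.30, 1.00, 0.35],   # green channel
    [0.05, 0.25, 1.00],   # blue channel
])
narrowband = np.array([
    [1.00, 0.08, 0.01],   # narrow bands land where one dye dominates
    [0.05, 1.00, 0.06],
    [0.01, 0.04, 1.00],
])

print(np.linalg.cond(broadband))    # larger: small errors blow up more
print(np.linalg.cond(narrowband))   # near 1: almost a per-channel problem
```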

Do we need an input camera profile for the camera-and-lightsource system?

Is it enough to just turn off the camera input profile in NLP? I don’t know enough to say. Is it possible to trick NLP into not applying D65-based input camera profiles? Do we need a profile for the whole system, including the light source?

Gary Bucher over at pixls.us provided a design for a single-image spectrometer, along with software that uses it to create input camera profiles.


He describes its intended use: to create daylight profiles. But it strikes me that it would also create system profiles for other camera + light-source combinations. For instance, use your DSLR of choice and your RGB light source.

Is white-light scanning good enough?

I know that many NLP users are quite happy with their scans using high-CRI light sources. I suggest that there are three opportunities here:

  • More faithful color reproduction, matching the intent of the film manufacturer.
  • Less tweaking needed to get a good scan.
  • Better behavior with old and faded emulsions.

Faithful reproduction

I have already discussed the first item extensively. But I just found spectral sensitivity curves for Kodak paper with RA-4 processing:
Kodak RA-4 Spectral sensitivity curves
Note: its cyan-forming layer’s peak is well above digital camera red-channel peaks, and the paper is almost blind at 600 nm.

Less tweaking

@DrBormental described RA-4 as a system designed to work well with negatives from most emulsions. That simplifies the task of making photographic prints without considering the film stock used for the CN. But also note that in the darkroom you really have only two levers, exposure and color filtration, and for most prints a single filtration setting in the color head applies to the whole image.

Our digital workflow includes so many more levers. But shouldn’t our default be as simple as RA-4? Beyond that, pull the digital levers for creative adventures.

Better scans from old negatives

@DrBormental you and I live in very different worlds. I have not exposed film for about 18 years. Sounds like you are actively shooting film now. (There is no way for me to get a Macbeth chart into my old scenes.)

I won’t keep beating this drum, so well researched by the motion picture preservation folks, but read the comments in my original post. Suffice it to say that our ideal for old and faded emulsions is to digitize the densities of the CN dyes without being affected by the colors that they have become. And the proposed narrow-spectrum light source is those folks’ proposed method for achieving that goal.

BTW, thank you @Digitizer for the dxomark suggestion. Even there they say that the red channel on my camera, the Sony A7R2, is strongly influenced by green light, suggesting that it may peak near 600 nm as shown in the graph in my post.

I used an iPad as a backlight for this proof of concept:

@Digitizer Have you scanned the same negative with a high-CRI light source and NLP?

Yes, and it worked too, but with a result that was not as good as the one from the RGB scan. I wouldn’t say that the RGB scan is better due to shortcomings in NLP with the high-CRI scan, though. A single trial is not enough to prove anything.

Improving the overall colour balance would have required more adjustments on a TIFF positive, plus local adjustments to remove the casts on the background and the skin.

Using a true B&W camera could have helped. But I’m not spending money on a Leica Monochrom or on removing the filter array from my camera…

Real Life Examples - Negative Lab Pro v2.3 + iPad as Backlight

Just to show some real-life examples… all of the below were digitized using a Fuji X-T2 with an iPad as the light source. They were converted using Negative Lab Pro v2.3 (currently in beta testing) and adjusted to taste, with the final settings shown.

Version 2.3 uses a new camera profile that simply works better across the board (for both white light and for RGB light). So I’ll be interested to see if you are able to get better results with your breadboard light source in v2.3 than you did previously.

Examples of Negative Lab Pro v2.3 with an iPad light source:

To my eyes, at least, the colors in these are quite good, on par with or better than the same negatives on the Kaiser light table. Good skin tones, good sky hue, good distinction between yellows and greens, etc.

And of course, there is a LOT of adjustability within them.


But why am I getting bad colors with my RGB light table?

First, there are some major improvements in v2.3, so I’ll be interested to see if you get better results with that… but in general, there are a few reasons that colors can get wacky:

  1. The importance of color balance. Just as in a darkroom setting, a lot of the color balance will come down to the operator. Almost all images will need some tweaking of the “temp/tint” settings.
  2. The unpredictability of deteriorated film. The older your film negatives are, the more damaged their colors will be, especially if they have not been stored properly. This is true regardless of whether you are using a white light source or an RGB light source. Will using a perfectly calibrated RGB light source help? Perhaps it will improve the starting point. But even then, you will need to make serious adjustments to get correct colors in the final image.
  3. The default settings in Negative Lab Pro. The defaults in Negative Lab Pro are not set in stone. The default, for instance, is to use the “LAB - Standard” tone profile, which adds a fair amount of contrast; that increases perceived saturation and makes the result appear more like a lab scan. If you wanted a more “neutral” starting point, you could use something like the “Linear - Flat” tone profile and save that as your default setting.
  4. The variability of the scene itself. Most film conversion software (including Negative Lab Pro) works by analyzing the scene of the negative itself. This works well in most cases, but can fail in certain types of scenes, especially scenes with either one dominant color or very low dynamic range. I’m working to improve this by doing more contextual analysis within a roll, but it is an important reason why some scenes may not have as good an initial conversion as others.
  5. Issues with setup. Since camera scanning is a DIY adventure, there are a lot of potential mistakes that can be made along the way. Many of the issues I see with color reproduction come down to errors in the setup that cause uneven light, lens flare, or sensor flare. Most users who experience color issues will benefit from a good examination of every part of their setup. Often it is as simple as masking out stray light or upgrading to a macro lens with better coatings.

In my humble opinion, each of the above five variables will play a larger role in the final colors of your image than the difference between a high-CRI white light source and an RGB light source.

I say this having converted tens of thousands of negatives from various cameras and light sources, and having spent the last 5 years or so working almost exclusively on this project.

But again, I’m not at all opposed to finding ways to improve calibrations for various light sources, and I do believe that good RGB light sources would be a wonderful tool to have. I would just caution against thinking that it would be any kind of silver bullet. Any combination of camera, light-source, software, etc. will need adjustments to get the results the user wants.


It would be helpful to have samples to test in Negative Lab Pro to see for myself what is happening and if it could benefit from adjustments to the profile. If you’re up for it, you can email them to me at nate@natephotographic.com

The Adobe DNG Specification is a good place to start to understand what is happening inside a camera profile (at least in Lightroom).

And here is a really good multi-part series on understanding how color works in digital cameras

No, that’s not how it works. There always needs to be a camera profile when working with RAW. Otherwise, there would be no instructions on how to turn the RAW sensor data into white-balanced sRGB data.

I could perhaps introduce a “mode” where you only see the adjustment tools in Negative Lab Pro that correspond to what was possible with RA-4. Or, you could try the following:

  1. Set the tone profile to “Linear-Flat”
  2. Only adjust the “Brightness” and “Temp/Tint” sliders in Negative Lab Pro.

But many people use Negative Lab Pro for different reasons and expect different outputs, so I try to make it as flexible as possible to work with many different setups and desired results.

Again, thank you for this discussion!

I’m always looking to improve Negative Lab Pro and make it a great experience for all!


@nate I wonder what you think about a camera + light-source calibration routine. I used to be an extremely happy NLP user until I “upgraded” to a Sony a7R IVa and, besides waiting for v2.3, there’s nothing I can do: the results are awful, and tweaking NLP settings is much slower than inverting manually.

I wonder if providing users with a tool to calibrate their camera + light combination against a well-known standard (IT8 targets like Kodak’s Q-60 or similar) could be an option. Wolf Faust sells transparency targets, for example. I suspect that’s what SilverFast comes with.

First, it would allow users of any camera to get more predictability, and it could also create an additional income stream for NLP.

I love the idea.

I have already experimented with a methodology to do this. The hardest part, I think, is actually getting a truly neutral negative target. Targets that I’ve shot on Kodak Portra, for instance, are influenced by the color palette of Portra, and so the calibration tends to correct away the uniqueness of Portra’s color palette, if that makes sense? Maybe someone else has already thought through a way to solve this…

This is because the Sony a7R IVa was not available at the time v2.2 launched, so there is no NLP profile for it in the package. When this happens, you will see a warning in Lightroom when you first launch Negative Lab Pro that says “profile missing” underneath the profile name, and Lightroom will use the default profile (Adobe Color), which is not designed for use in converting negatives and will introduce many problems.

The new profile will be included in v2.3.

In the meantime though, you can download this profile, and then add it to Lightroom (file > import develop profiles and presets).

Just note that after adding this profile, you will need to use Negative Lab Pro to unconvert, and then reconvert, any previous conversions you’ve made with your Sony a7R IVa. The reason for this is that once you add the profile, Lightroom will look at the conversions where this profile should already have been applied but was missing, and start applying it. That will throw off the image, since the analysis was done against the wrong profile (Adobe Color) when it should have been done against this new profile. Does that make sense?

@nate, first of all, what beautiful images. I am salivating for your v2.3. Of course I still worry that my 30-plus-year-old negs will not perform that well.

And thank you for your thoughtful response and references for my post.

Exploring with @DrBormental the possibility of using NLP with a camera input profile made for the camera-plus-light-source system: is there a way to get NLP to use a user-supplied profile? What do you think about Glen Bucher’s method of using a spectrometer and software to construct the profile, and about using it with NLP? If it works, it would not need a neutral negative target (or any negative) to characterize the system.

I know that his method is not easily duplicated by others but maybe we would learn something by trying it. In particular I am trying to get the best out of old and faded negatives.

Of course! :raised_hands:

Yes! If you set the “color model” to “none” before converting, then it will use whatever camera profile and camera calibration settings you had set prior to opening Negative Lab Pro.

It’s cool to see how he has set up his own spectrometer, but I really don’t see how this would be any better than making a profile by shooting an IT8 target… because the spectral data of the IT8 target is known to a high degree of accuracy.

In fact, in the follow-up post, he looks at how close an IT8 target gets to the spectral profile. Based on the single sample image (which is not nearly enough to evaluate), there isn’t that much difference. The Quest for Good Color - 3. How Close Can IT8 Come to SSF? - Processing - discuss.pixls.us

In both cases, they are simply trying to get the camera profile to accurately render what it is seeing for a given illuminant. Both an IT8 target and a spectral response function will get you there.

@nate for calibration to work and maintain the unique look of every emulsion, you have to use third-party color targets like those from Wolf Faust. They are actually meant for scanning slides, but that’s perfect for our use case: the goal of calibration is to cancel out the camera manufacturer’s RAW fuckery and the imperfections of the light source, normalizing toward a well-known state. I have also spent a considerable amount of time studying the color/film/scanning literature, and TBH I am convinced that without “normalizing” the RAW input, NLP (or any other similar product) will never work equally well for all camera + light combinations. Some users will be luckier than others.

P.S. Nate, I just tried your new profile and indeed there are significant improvements! (I was using the a7R IV profile before, just re-tagged as “IVa”.)

Creating profiles with today’s films, shot with today’s cameras and today’s processing, will not necessarily improve the results we get with old negatives.

NLP provides good starting points with my negatives. Some are pretty much spot on, some take more effort or even tweaking of a positive copy, and some will still only get me a decent B&W copy.

Also, how well do I remember what the colours were when I shot the scenes 30-plus years ago?

Maybe this light will help?

Digitizer, I don’t think that NLP should care about old negatives. At all. In my view those are two separate steps. NLP’s focus is quality color inversion: being a perfect “electronic RA-4 paper”.

I would argue that restoring faded or shifted color is better done before or after the NLP step. In fact, “auto-color” in Photoshop is probably all you really need at that point.

Actually, if I were in your shoes I’d want this too, i.e. to have NLP provide a “reference look”, so you could evaluate whether colors have shifted and by how much.

The purpose of a calibration is to get the RAW image into a well-known state. Currently, NLP’s algorithms can make very few assumptions about color, because everyone’s light source is different, and camera manufacturers tinker with RAW data trying to make non-inverted colors more “pleasing”. The Adobe Color camera profile tries to normalize across all camera models, but it is just as guilty of optimizing for “pleasing”, and it does not account for variations in light sources.

In fact, I believe that NLP is actually trying too hard. Having quality camera profiles would allow the algorithm to be dramatically simplified: no need to analyze picture data, no channel clipping, no border cropping; just adjust gain + gamma for each channel to match the paper response. That’s all I need.
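A minimal sketch of such a per-channel gain + gamma pipeline might look like this (placeholder numbers a user would tune per film stock; not NLP’s algorithm, and the exact form of the paper-response curve is just one assumption):

```python
# Minimal per-channel gain + gamma inversion (placeholder numbers; gain and
# gamma would be tuned per film stock, not derived from the picture itself).
import numpy as np

def simple_invert(linear_neg, gain, gamma, exposure=1.0):
    """linear_neg: HxWx3 linear RGB of the negative scan, values in 0..1."""
    t = np.clip(linear_neg * gain, 1e-6, None)   # per-channel gain (mask removal)
    positive = exposure * t ** (-gamma)          # paper-like per-channel contrast
    return np.clip(positive, 0.0, 1.0)           # no picture analysis, no cropping

# Example: stronger gain on green/blue to cancel the orange mask, gammas
# roughly mimicking RA-4 paper contrast (all numbers illustrative):
# positive = simple_invert(scan, gain=np.array([1.0, 1.6, 2.4]),
#                          gamma=np.array([2.0, 2.1, 2.2]), exposure=0.02)
```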