Optimize NLP for RGB light sources?

Thanks @davidS for your post!

Yes, this is something I have talked about doing and am planning to do in the future.

What I haven’t seen are good, commercially available RGB light sources that are optimized for film reproduction (other than, of course, re-purposing iPhones and iPads, which are less than ideal for other reasons). I have suggested to several manufacturers (Kaiser, Negative Supply, etc.) that they should make one, but have yet to see any. If you are aware of any widely available RGB light sources that match film’s sensitivity curves, I would be happy to add one to my recommended list of camera scanning light tables! Or if there is a Kickstarter, I will be the first to invest!

Until then, the best advice I can give is to use either a high CRI white light source, or a modern iPad or iPhone as the light source. Telling users that theoretically there should be a better light source is not helpful if it is not available to them.

But again, I’d be very happy to update this with better options as they become available, and help with testing/calibrating.

I think there may be some misunderstanding of how digital camera sensors work and what is happening under the hood with raw camera profiles (specifically, the color matrix and forward matrix). Yes, with the filters in a digital camera, there is overlap between the spectral curves. But this overlap is mathematically known and solvable via the color matrix and forward matrix (and can be fine-tuned further via HueSatDelta tables and a final LookTable). This is why, despite the imperfection of the color separation filters in a digital camera, it is possible to get extremely accurate color reproduction (or to fine-tune the color reproduction to be more ideal for film).
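To make the idea concrete, here is a minimal sketch (with a made-up mixing matrix, not any real camera's data) of why known spectral overlap is linearly correctable. If we model the overlap as a matrix mixing the "true" channels into the sensor's channels, then a correction matrix (the role the color matrix / forward matrix play in a raw profile) is just its inverse:

```python
import numpy as np

# Hypothetical sensor mixing matrix: each row shows how much of each
# "true" channel leaks into a sensor channel (the curve overlap).
sensor_mix = np.array([
    [0.80, 0.15, 0.05],   # sensor R also picks up some G and B
    [0.10, 0.75, 0.15],   # sensor G also picks up some R and B
    [0.05, 0.20, 0.75],   # sensor B also picks up some R and G
])

true_rgb = np.array([0.6, 0.3, 0.9])   # scene color we want to recover
camera_raw = sensor_mix @ true_rgb     # what the sensor actually records

# Because the overlap is known, inverting it recovers the original values.
correction = np.linalg.inv(sensor_mix)
recovered = correction @ camera_raw

print(np.allclose(recovered, true_rgb))  # True
```

Real profiles are more involved (the matrices map to XYZ, and the per-hue tables handle what a single 3x3 can't), but this is the core reason overlap by itself doesn't prevent accurate color.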

(BTW, it’s unclear to me whether every digital camera has as much overlap as the Sony ILCE-7RM2 shown here, or even how accurate this data is. I will say anecdotally that the majority of users who experience very “off” colors when digitizing with a digital camera are using a Sony. I have received virtually no reports of major color issues from Fuji users, as an example. But I have found ways to improve Sony renderings via the raw camera profile, which will be released in v2.3 shortly.)

It’s also unclear to me what exactly is being suggested that we do with this sensor overlap that isn’t already being done internally via the camera profile. I’ve seen it suggested to take three separate digital shots, cycling through red-only, green-only, and blue-only lighting, and then separate and recombine the color channels in post. But in this case, you still haven’t changed the imperfection of the RGB filtering on the sensor, or the peaks of the curves. All you’ve done is make it more difficult to account for these things in post.
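For reference, the workflow described above boils down to something like this sketch (array shapes and values are placeholders, not real RAW data): one channel is kept from each single-light exposure, and the rest is discarded. Note that each kept channel still passed through the same on-sensor filter, with the same spectral response:

```python
import numpy as np

# Stand-ins for three demosaiced frames, each shot under one color of light.
# Each still passes through the same Bayer filters on the sensor.
shot_red   = np.full((2, 2, 3), 0.5)
shot_green = np.full((2, 2, 3), 0.4)
shot_blue  = np.full((2, 2, 3), 0.3)

# "Separate and recombine in post": take the R channel from the red-lit
# shot, G from the green-lit shot, B from the blue-lit shot.
combined = np.stack([shot_red[..., 0],
                     shot_green[..., 1],
                     shot_blue[..., 2]], axis=-1)

print(combined.shape)  # (2, 2, 3)
```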

This may be due to some issues in general with the Sony profiles that I have been working to improve. If you’d be willing to send RAW samples to me at nate@natephotographic.com, I’d be happy to take a look!

I’m also hopeful that the new profiles and calibrations coming in v2.3 should greatly improve your experience.

And just to summarize again:

I’m absolutely all for improving light sources, and getting more refined calibrations into Negative Lab Pro. But for most users, you should be able to get amazing colors from a good, high CRI light source. And if you aren’t getting good colors, then something has gone wrong along the way and I can help figure out what that is and help improve it!

Thanks!
-Nate Johnson
Creator of Negative Lab Pro
