RGB Light vs. High-CRI Light (possible project)

Hey Forum –

I’ve been reading along on RGB vs. high-CRI light sources for scanning negatives and am still a bit unsure whether a good RGB light source would be more beneficial than a CRI95 or CRI99 one.

Would you mind shedding some light on the current situation and letting me know if it’s worth building an RGB light at all? High-end commercial products, such as the Phase One Capture Table or the DT Photon, rely on high-CRI light sources, while lab scanners (e.g. the Fuji Frontier SP-3000) use RGB light. The main difference is surely that Phase One and DT both use digital camera backs, while the Frontier performs trichromatic film scanning.

I found RGB LEDs that operate in the same spectrum as the Fuji Frontier, so …… is it worth it?

Happy to have your opinions!

Quite a few posts deal with the topic already. I’ve not seen the definitive answer to that question though. You might like to read through this thread:

I’ve tried it.

  • I like electronic flash for illumination; it’s very good light at daylight color temperature.
  • I’ve tried various LED and other light sources. Daylight color balance, I think, works best for color negatives. I’ve tried a number of small video lights, all with good results.
  • Narrow-band RGB light sources do quite well and, I think, add extra saturation.
  • And then there’s the three-shot approach: one each for R, G, and B, combining the channels in post. Adds punch and saturation.

I’d say give it a try.

Tip: Recent iPhones and iPads create a white screen from pretty narrow-band R, G, and B cells, so they’re an easy way to try out an RGB backlight. Take a look with a magnifying glass.


Hey – Thanks for the feedback!

I’d read the thread before posting, and it didn’t give me a definitive answer either. I’ve also seen the tests you made, @Richard1Karash (quite impressive).

I guess a lot depends on the inner workings of NLP, how it deals with camera RAW data, and whether RGB light sources will be specifically supported going forward. That’s something only @nate can really answer.

There are quite a few DIY projects for RGB LED enlarger head replacements, and also commercially available (though quite expensive) products from Heiland Electronic: https://heilandelectronic.de/led_kaltlicht/lang:en

The projects working on RGB LED enlarger light sources suggest using WS2812B LED panels; these peak at 630 nm (R), 525 nm (G), and 475 nm (B), which matches the Frontier light exactly.

Besides satisfying my own curiosity, maybe we’d end up with something we could use to build an affordable commercial product meeting the specific needs of analogue enthusiasts?

Seriously, give it a try with a current iPhone or iPad to see if you like the results.

I went ahead and scanned two of my latest negatives with both CRI99 and RGB light sources (the latter done with a recent iPad Pro). Same scanning settings; white balance set as needed. NLP settings identical (except WB).

Some thoughts on the photos:

  • Both negatives were overexposed; cyan-ish sky colours are very much expected with overexposed Portra 400
  • Greens are definitely clearer with the RGB scans; they look a bit yellow-ish with CRI99
  • Generally more vibrant (and more saturated) colors with the RGB scans, confirming @Richard1Karash’s findings

Not sure I can draw massive conclusions from just two negatives, but it’s worth exploring RGB light sources further and feeding the nerd in me.

Please ignore the yellow cast on the left side of the Palm Tree RGB scan. Human error with the negative holder.

I repeated an old test, this time with NLP 2.3, and posted it on the NLP Facebook group.

Conclusions:

  • Exposure: ±1 or 2 EV in the scan camera doesn’t matter
  • RGB (iPhone) changes the reds
  • Kaiser and video lights (at 5500 K) are fine
  • Avoid 3000 K and 4000 K light

Red is the one thing I found affected. Overall, it doesn’t seem to make much difference for camera scanning with NLP 2.3 whether a high-CRI or RGB light is used, except for the reds.

Though I’m a bit surprised this is the case, as presumably the red band should be broader with high-CRI lights than the narrow band of RGB light sources.

Could this be related to something inside NLP?

In a very unscientific approach to seeing differences, I took existing photos (scanned with a Fuji Frontier), inverted the RGB curves, and re-processed them through NLP as TIFFs. I expected the output to be (almost) the same image as before. Everything looked right, except the reds.

Here is Flash vs iPhone. More on the Facebook page.

Would this be the Negative Lab Pro Facebook page? I don’t really ‘do’ Facebook, but in a moment of weakness I did apply to join that private group, and never received a reply. I just wanted to ask whether you profile these shots using the ColorChecker and, if so, how that affects the colours. Have these two been profiled, in fact?

I’m in the same boat here – not a big FB fan, but I did apply for the group too. Still waiting for feedback. I’d prefer to keep the discussion in this forum though.

Phooey, sorry to hear you’ve had issues getting into the FB group. To avoid confusion, I’m talking about this group:

 https://www.facebook.com/groups/negativelabpro

No, I have not attempted a profile based on the color checker, and the test results shown are not from custom-profiled images; just the standard NLP and LR profiles were applied.

In case it helps anyone: I had another go at applying for access to the NLP Facebook group and found that I already had access, so my problem was just that for some reason I didn’t receive the notification. I suppose that if I were a regular FB user I would have discovered this sooner, but really I never go on.

Looking forward to having a look round.

The end goal of a scanning system for color negative is to produce printing density. Yes, RGB LED light sources isolate the dyes better, but actually achieving printing density is not as simple as achieving better dye isolation. The real goal is to achieve a system grey scale with as little non-linear interpolation as possible. A printing system, such as RA4 paper under an enlarger color head illuminant, is able to achieve a system grey scale simply by adjusting printer points, which is the equivalent of a channel offset adjustment. Printer lights can balance a negative onto print linearly, so if a scanner were designed to replicate the spectral responsivities of the printing system, you would only require analog gain to achieve a perfectly balanced scan.

So a few requirements need to be met.

  1. You need to know exactly what the spectral responsivities are for each channel of the printing system intended for those color negatives.

  2. You need to replicate those spectral responsivities physically, as closely as you can, with the scanner system. No scanner does this perfectly, but the closer you get, the less interpolation will be required for a perfect calibration to printing density.

  3. You need to encode those printing densities into an appropriate gamut matching the CIEXYZ colorimetry of the reflection print viewed under a standard illuminant.

Why is it better to achieve printing density physically rather than digitally?

Because achieving printing density physically allows you to take full advantage of the dynamic range of each channel of your digital sensor. When you expose color negative under a white illuminant (high-CRI or not), you are effectively filling the well of each photosite with a lot of light you don’t need, which will have to be interpolated down to a lower code value, and not enough of the light you do need, which you have to interpolate up to a higher code value. In the case of the orange mask, it should be neutralised through simple analog gain on each channel, rendering it a neutral white, which once inverted will sit at an appropriately low code value. If the orange mask is captured with equal printing densities per channel, and the same is true for a neutral grey scale, then the actual light filling each photosite is taking advantage of the dynamic range of the sensor. Thus a more faithful color rendition is achieved.
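To make that analog-gain argument concrete, here’s a toy sketch with hypothetical numbers; the raw channel levels are invented for illustration, not measured from any real scanner:

```python
import numpy as np

# Hypothetical raw channel means for a blank (D-min) frame of a masked
# color negative under equal white illumination, as fractions of sensor
# full scale. Illustrative values only.
dmin_raw = np.array([0.80, 0.45, 0.25])   # R, G, B

# Per-channel scale factors (via analog gain or per-channel LED
# intensity) that land every channel's D-min at the same level,
# using the strongest channel's headroom as the target.
gains = dmin_raw.max() / dmin_raw
print(np.round(gains, 2))                 # [1.   1.78 3.2 ]
```

With those gains applied before the A/D conversion, the mask reads as neutral and each channel actually uses the sensor’s full well depth, rather than being stretched up or squashed down in post.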

Steps to achieve printing density:

  1. Obtain the spectral sensitivity data of a print material intended for the color negatives you wish to scan. This can be found in Kodak’s spec sheets. Tabulate the spectral data for each dye layer in Excel.

  2. Rent a Sekonic SpectroMaster and a darkroom with a color enlarger, and measure the spectral power distribution of the light required for a balanced print. Tabulate this spectral data in the same Excel spreadsheet as the spectral sensitivity data, ensuring both are in the same increments, e.g. 2 nm.

  3. Multiply the spectral power distribution by the spectral sensitivity spectrum of each dye layer. This produces the spectral responsivities of the printing system. This per-channel spectral response is what you are trying to match with your scanning system.

  4. Obtain the spectral sensitivity data of your scanner’s camera sensor and tabulate it in the same spreadsheet.

  5. Divide the print spectral responsivities by the spectral sensitivity spectra of your scanner’s camera sensor. This produces spectral data representing the exact spectral power distribution your illumination requires in order to achieve the print spectral responsivities with your specific sensor.

  6. Buy whatever filters or LED light sources you require to meet this target spectrum. The closer you get, the better. You won’t be able to achieve printing density perfectly, so a calibration 3D LUT should be applied to correct for the remaining errors of your specific scanner. Polynomial equations are usually required.

There are further equations to convert spectral responsivities into actual printing densities, which I can share if desired.
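For anyone who prefers code to Excel, here’s a rough numpy sketch of steps 1–5. The file names are placeholders, everything is assumed to be tabulated on one common wavelength grid, and the data is assumed already linear (Kodak publishes log sensitivities, so convert with 10**logS first):

```python
import numpy as np

wl = np.arange(380, 732, 2)                        # 380-730 nm in 2 nm steps

paper_sens = np.loadtxt("ra4_sensitivity.csv", delimiter=",")     # (len(wl), 3)
enlarger_spd = np.loadtxt("enlarger_spd.csv", delimiter=",")      # (len(wl),)
sensor_sens = np.loadtxt("camera_sensitivity.csv", delimiter=",") # (len(wl), 3)

# Step 3: printing-system spectral responsivities, per channel.
print_resp = paper_sens * enlarger_spd[:, None]

# Step 5: the illuminant SPD that makes *this* sensor respond like the
# printing system -- print responsivities divided by sensor sensitivities.
with np.errstate(divide="ignore", invalid="ignore"):
    target_spd = np.where(sensor_sens > 0, print_resp / sensor_sens, 0.0)

# Normalise per channel for comparison against candidate LEDs/filters.
target_spd = target_spd / target_spd.max(axis=0)
```

The normalised `target_spd` per channel is what you’d hold your candidate LED or filter spectra against in step 6.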


Thanks for your reply – much appreciated!
I’m absolutely up for sharing more knowledge on the topic. Every bit of help is highly welcome!

My research pointed towards the results/references you posted here. My idea is to build a light source which is universally usable with every camera and film on the market. Given that the distribution of blue/green/red pixels on sensors is basically the same for every camera, it should be possible to come up with a good average across the range of film/development/sensor combinations, allowing for good results while minimising the need for digital correction.

Over the next weeks I’m going to test different setups and research LED controllers, which would allow for presets (for colour / B&W / slide film) and should ideally also allow custom settings, for anyone who wants to dig into that subject during use. This would definitely help when working with more experimental film, e.g. Lomo film. It would also allow for look-up tables, so settings could be shared.

When it comes to neutralising the orange mask in practice, it’s important to note that printing density is metameric across the range of color negative stocks intended for a particular print medium. What this means is that every stock designed for that print medium (assuming effective processing) will result in the same printing densities. The only relevant component that differs across color negative stocks within a printing system is the printing density of D-min, which is specific to each stock.

If you have a scanner which achieves printing spectral responsivities, all you have to do is capture D-min and then subtract it. So let’s say D-min for the positive has 10-bit code values of R 22, G 60, B 87. Just subtract 22 from every code value in the red channel, 60 from every code value in the green channel, and 87 from every code value in the blue channel, then offset all densities above D-min so the lowest density sits with enough room above 0 IRE to avoid potential clipping. Motion picture scanners typically encode a printing density of 0.000 above D-min at a code value of 95. Repeat this process for each stock and build an ICC profile for each transform.

This can only be achieved linearly if you have achieved printing spectral responsivities with your scanner. If you haven’t, you will have to deal with non-linear interpolation; a simple, elegant offset won’t cut it.
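As a minimal sketch of that subtract-and-offset, assuming a 10-bit density-encoded (log) scan, where a code-value offset corresponds to a fixed density shift, and using the example values from above:

```python
import numpy as np

DMIN_CV = np.array([22, 60, 87])  # per-channel 10-bit D-min code values (R, G, B)
BASE_CV = 95                      # CV assigned to 0.000 printing density above D-min

def subtract_dmin(img, dmin_cv=DMIN_CV, base_cv=BASE_CV, bits=10):
    """Per-channel D-min subtraction, re-anchored at base_cv.

    Assumes img is an (..., 3) array of density-encoded code values,
    so that subtracting a code value subtracts a fixed density.
    """
    out = img.astype(np.int32) - dmin_cv + base_cv
    return np.clip(out, 0, 2**bits - 1).astype(np.uint16)
```

The `astype(np.int32)` step just avoids unsigned underflow before the clip; everything else is the plain per-channel offset described in the post.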

Published characteristic curves for various color negatives are typically measured in Status M density. These sensitometric tests typically use Kodak-supplied control strips containing various neutral grey patches across the entire dynamic range of a given stock. The reason the blue channel captures the most density, then the green channel, then the red channel, is that this is an inverted representation of the orange mask of that particular color negative. And if you recall what I said above, the goal is to subtract D-min. Representing that visually on a characteristic curve would mean offsetting all densities linearly until they start at the same density, because of course, in order to achieve a perfect grey scale, all channel densities / code values need to be identical per patch.

[Image: published Status M characteristic curves for a color negative stock]

Okay, if we try that for Status M density, this is the result.

[Image: the same Status M curves after a linear offset; the channels still do not align]

As you can see, the problem is that because the dye density is not represented proportionately, a non-linear transformation will be required to neutralise every grey patch.

Why does this occur?

Like printing density and your scanner density, Status M density has its own set of spectral responsivities. The peak spectral response of the red channel for Status M sits around 640 nm. For example, the peak spectral dye absorption of the cyan dye layer in Kodak Vision3 stocks typically sits around 690 nm. This is why, if you look at the characteristic curve shown above for 200T, the red channel is not perfectly parallel with the green and blue channels; it has a lower Status M density because Status M is not effectively seeing the peak cyan dye absorption.

Here’s a visual representation of what that looks like. The dotted spectral responsivities represent Status M, the streaked spectral responsivities represent printing density, and the black spectrum represents the spectral dye absorption of 200T including the orange mask. Each hump in the 200T curve represents one of the individual yellow, magenta, and cyan dyes. Everything is tabulated in 10 nm increments.

[Image: Status M (dotted) and printing-density (streaked) spectral responsivities overlaid on the 200T spectral dye absorption, 10 nm increments]

What do you notice? As you can see, the cyan dye absorbs most light around 690 nm, but the Status M spectral response of the red channel peaks around 640 nm, compared to the printing density spectral response of the red channel, which is dead on 690 nm.

If one were to create a characteristic curve in printing density, this is what you’d see for every grey patch.

[Image: characteristic curves replotted in printing density]

Each underlying dye density is represented proportionately; the curves are parallel to one another. Now just subtract D-min via a simple offset, and you’ve achieved a balanced representation of that color negative.

[Image: printing-density curves after subtracting D-min via a simple offset]


Thanks for sharing! I’m actually building one right now and would love to get some suggestions from you.

LED Design

Red 660 nm, Green 530 nm, Blue 450 nm. I would love to get a monochrome sensor eventually for the best result, but that’s a bit out of my budget for a DIY project for now. Any tips or suggestions for getting the best result on the software side? How should I go about calibrating the grey scale for a particular negative stock and potentially generating a 3D LUT for non-linear correction?

Thank you in advance!

Software-wise, you should take a look at ColourSpace by Light Illusion. The software alone is quite expensive, but you’re able to rent it for a fair price. ColourSpace is the most advanced ICC profiling software on the market; it can even be used to accurately create film emulation LUTs from print materials.

For a grey scale test target, you could either purchase a color sensitometer on eBay and expose a grey scale onto the various stocks you want to profile, or expose a color checker chart with as many graduated grey patches as possible. But to cover the full density range completely, you’d be better off buying an 18% grey card and exposing it across the entire density range, so that you completely reach D-min and D-max. So find out what 0 EV is for the grey card, then start the exposure test at −8 EV and go all the way up to +9 EV, in either full-stop or half-stop increments. If you take this route, just ensure you’re illuminating the target at the color temperature the film is made for; if it’s daylight balanced, use high-CRI white light in a controlled environment to illuminate the test target, possibly even an ARRI HMI.
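If it helps, here’s a trivial sketch of that bracketing series in half-stop increments, assuming the ramp is shot by varying shutter speed around a hypothetical metered base exposure (the 1/125 s is a placeholder):

```python
# -8 EV to +9 EV grey-card ramp, half-stop increments, aperture held fixed.
base_shutter = 1 / 125                        # seconds at 0 EV -- placeholder

for half_stops in range(-16, 19):             # -8.0 EV ... +9.0 EV
    ev = half_stops / 2
    shutter = base_shutter * 2 ** ev          # +1 EV doubles the exposure time
    print(f"{ev:+5.1f} EV -> {shutter:.6g} s")
```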

Scan those frames, then in ColourSpace read the RGB code values of those exposures across the entire density range. Assuming you can either increase or decrease the analog gain per channel on your sensor, or dim the individual RGB LEDs, do so until the RGB code values are as consistent as they can be across the entire density range per stock. When you get as close as you can, note the illumination intensity settings or analog gain adjustments used for that specific stock, so you can use them as a preset later. After getting as close as you can linearly, go back into ColourSpace and transform each code value across the density range until every patch has identical RGB code values. The analog gain adjustment will do most of the work; the final non-linear transformation is just a touch-up. All transforms should be conducted as a 3D LUT.
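A sketch of the “as consistent as you can linearly” step, done numerically instead of by eye. The patch readings below are placeholders; the single gain per channel is a least-squares fit against the patch-wise neutral aim:

```python
import numpy as np

# Placeholder readings: linear 10-bit code values per grey patch (R, G, B).
patches = np.array([
    [112.0,  98.0,  74.0],
    [231.0, 204.0, 160.0],
    [455.0, 409.0, 331.0],
    [892.0, 815.0, 677.0],
])

target = patches.mean(axis=1)        # neutral aim: patch-wise channel mean

# One gain per channel, least-squares: g = sum(c*t) / sum(c*c).
gains = (patches * target[:, None]).sum(axis=0) / (patches**2).sum(axis=0)
balanced = patches * gains

residual = balanced - target[:, None]       # what the 3D LUT has to absorb
print("gains:", np.round(gains, 3))
print("max residual (CV):", round(float(np.abs(residual).max()), 1))
```

The fitted gains correspond to the analog gain or LED intensity presets per stock; whatever residual remains is the minor non-linear error left for the ColourSpace 3D LUT.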

I strongly suggest you profile each stock with the same test conducted several times, because there will always be development offsets. What you should do is create an ICC profile per batch; then, when you have enough samples of that color negative exposed, developed, and corrected in scan, go through each profile and average them together into the most usable profile, to account for processing offsets. If you’re able to program, you could alternatively try to write something similar to NLP which automatically corrects for non-linear offsets. That would automatically help account for processing errors; if that’s not an option, just find the best average profile per color negative.

I do have ColourSpace CAL and an i1 Pro spectrometer w/ i1 Display Pro. I will have to investigate, but generally I like to use Nuke for most of my R&D. Would it make more sense to scan D-min just below clipping on all three channels and figure out the best lift/gain values for the individual channels of the linear raw in Nuke, just so I can get the best SNR out of the sensor? Once I have all three channels as close as I can linearly, should I use a LAD aim as reference when generating the non-linear 3D LUT?
When you talk about automatically correcting for non-linear offsets, are you referring to using the film base as a reference for automatic correction?

Sorry for all the questions; it’s really hard to find relevant info online regarding this subject, and I really appreciate you sharing your knowledge with us.


If your scanner density is unambiguous in its attempt to match printing density, then a LAD aim would be ideal. Assuming D-min is already subtracted, you could use aims for a 2% black target, 18% grey, and 90% white. Given a density encoding of, say, 0.002 density per code value, you could associate the expected printing densities with these targets and simply ensure they fall on the correct code values for every stock you scan, after subtracting D-min for each.
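As a back-of-the-envelope sketch of that encoding: with 0.002 density per code value, and 0.000 density above D-min pinned at CV 95 (the convention mentioned earlier in the thread), aim densities map to code values as below. The 18% grey aim of 0.70 above D-min reproduces the familiar LAD value of 445; the 2% black and 90% white densities are placeholders:

```python
BASE_CV = 95        # code value for 0.000 printing density above D-min
STEP = 0.002        # printing density per code value

def density_to_cv(d_above_dmin: float) -> int:
    """Map a printing density above D-min to a 10-bit code value."""
    return round(BASE_CV + d_above_dmin / STEP)

# 18% grey at 0.70 above D-min gives the classic LAD aim of CV 445.
# The 2% black and 90% white densities below are illustrative only.
aims = {"2% black": 0.15, "18% grey": 0.70, "90% white": 1.10}
for name, density in aims.items():
    print(f"{name}: density {density:.2f} -> CV {density_to_cv(density)}")
```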

“When you talk about automatically correcting for non-linear offsets, are you referring to using the film base as a reference for automatic correction?”

Subtracting D-min via a code value offset is only a linear process, and it assumes you’ve already achieved printing spectral responsivities with your scanner. Non-linear interpolation will require channel-independent gamma and gain adjustments; since D-min will be fixed to a minimum code value for each channel, you wouldn’t be playing around with lift. Non-linear corrections should be minor. For the most part, you will want to ensure D-min and D-max reach code values as close to identical across channels as possible after subtracting D-min, assuming D-max is an exposure of a neutral grey target. This should ideally be achieved from the capture of light itself, before any encoding occurs, i.e. before the A/D conversion. So achieve it in the physical and analog domain; that way you can utilise the dynamic range of your sensor per channel.
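A sketch of what that channel-independent gamma-and-gain touch-up could look like numerically, fitted on the grey ramp after D-min subtraction. All the data below is synthetic, and the model deliberately has no lift term, so D-min stays pinned at zero:

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_gain(x, gain, gamma):
    """Channel correction with no lift term, so D-min stays pinned at 0."""
    return gain * np.power(x, gamma)

ramp = np.linspace(0.02, 0.95, 12)               # normalised neutral aim values
measured = np.stack([ramp**1.04 * 0.97,          # synthetic per-channel readings
                     ramp**0.99 * 1.01,          # after the linear gain pass --
                     ramp**1.06 * 0.95], axis=1) # the residual error to absorb

for ch, name in enumerate("RGB"):
    (gain, gamma), _ = curve_fit(gamma_gain, measured[:, ch], ramp, p0=[1.0, 1.0])
    print(f"{name}: gain {gain:.3f}, gamma {gamma:.3f}")
# In practice these minor corrections get baked into the calibration 3D LUT.
```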
