Integrating sphere as a uniform backlight

Hey all, just wanted to make a note here that I am the author of the Tricolor scanning article on Medium, and while I still think tricolor scanning is the way to go, I now think some of my reasoning in that article is incorrect. I believe that the explanation in this document (not authored by me) is more accurate: Color Separation for Color Negatives using Digital Cameras (Google Doc)

Also, I want to reiterate a point I made above about how to properly assess the results of a tricolor scan of a color negative vs a scan made with high-CRI white light. The problem we encounter with white light scans is that in order to avoid color casts in the shadows and highlights, we need to apply a per-channel gamma correction. This is in effect what NLP does: it calculates the per-channel gamma correction required to correctly invert the negative. In Photoshop or Lightroom terms, each channel needs its own curves adjustment. The way color negative film is supposed to work is that you should only need a simple, linear intensity change to the green and blue channels. In essence, if you white balance on medium grey, the color channels should all align, and there should be no color casts in the shadows or highlights.

The way to know whether the process is working correctly is to scan a color negative image of a greyscale step progression and white balance on any neutral patch. Now, color negative films tend to not be 100% linear in the shadows and highlights, and this is in fact a lot of what defines the characteristic “look” of a given emulsion, so this will never be perfect, but it should be close. Here’s an example of what I’m talking about:

Here is a neutral step progression scanned with high CRI white light:


I did the following steps in Photoshop:

  1. Add an invert layer.
  2. Add a levels layer.
  3. White balance in the levels layer on the third square from the left.
  4. Add a curves layer with the adjustment mode set to luminosity to up the contrast.

Here’s what you get:


I used the eyedropper with a 51x51 sample (to average out the noise) and sampled the darkest and lightest squares. Keep in mind I white balanced on the third square from the left. The values are:

Darkest Square: R34 G27 B7
Lightest Square: R171 G173 B182
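For anyone scripting this instead of doing it in Photoshop, the averaged eyedropper is just a mean over a patch of pixels. A minimal sketch, assuming the inverted, white-balanced scan is loaded as an 8-bit RGB array; the file path and patch coordinates are placeholders you'd fill in yourself:

```python
import numpy as np
from PIL import Image

# Load the inverted, white-balanced scan (path is a placeholder).
img = np.asarray(Image.open("inverted_scan.tif").convert("RGB"))

def sample_patch(img, cx, cy, size=51):
    """Average an RGB patch centered at (cx, cy), like a 51x51 eyedropper."""
    half = size // 2
    patch = img[cy - half:cy + half + 1, cx - half:cx + half + 1]
    return patch.reshape(-1, 3).mean(axis=0)

# Placeholder coordinates for the darkest and lightest squares.
print("Darkest:", sample_patch(img, 120, 200).round())
print("Lightest:", sample_patch(img, 900, 200).round())
```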

To correct this, you would need to add a curve to the blue channel to add blue to the shadows and remove it from the highlights. There’s no linear adjustment that can correct this. Depending on the emulsion, you might have to do this for the blue channel, or for both the blue and green channels.
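A quick way to see why no linear adjustment works: fit a gain-plus-gamma curve, y = a·x^γ, through the two blue samples above, using the red channel as the neutral reference. A pure white-balance gain (γ = 1) can't hit both points; it would need a ~5x gain in the shadows and a ~0.94x gain in the highlights at the same time. A sketch using the sampled values:

```python
import numpy as np

def solve_gain_gamma(x_dark, x_light, y_dark, y_light):
    """Fit y = a * x**g through two (x, y) pairs. A pure white-balance
    gain (y = a * x) cannot satisfy both points at once, which is why
    the white-light scan needs a per-channel curve."""
    g = np.log(y_light / y_dark) / np.log(x_light / x_dark)
    a = y_dark / x_dark ** g
    return a, g

# Blue samples (7, 182) mapped onto red's values (34, 171):
a, g = solve_gain_gamma(x_dark=7, x_light=182, y_dark=34, y_light=171)
print(a, g)  # roughly 13.0 and 0.50: a strong gamma, not a linear gain
```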

Here’s an example with tricolor light:



And the RGB sample values:

Darkest Square: R13 G16 B10
Lightest Square: R181 G183 B186

As you can see, while the white light scan shows a per-channel spread of 11 in the lightest square and 27 in the darkest, the tricolor spread is under 10 in both, which is well within the range that is native to the emulsion. When you’re testing out your tricolor lights, I recommend using this as an assessment of how well your setup works. If you can correctly reproduce a greyscale step progression like this with nothing but an inversion and a white balance in the middle, you’ll know you’ve got it right. I would also note that if you get this working properly, you won’t need NLP at all. You can simply invert the colors on all of your negatives and apply a global white balance from a reference negative (assuming you did all of your scans with the same white balance).
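If you want to automate that assessment, the whole check is just: invert, apply a per-channel gain so a mid-grey patch comes out neutral, then measure the per-channel spread on the extreme patches. A rough sketch, with the patch locations as placeholder slices:

```python
import numpy as np

def invert_and_wb(raw, wb_patch):
    """Invert a float RGB scan and white balance so the reference patch
    comes out neutral -- a pure per-channel gain, no curves."""
    inverted = 1.0 - raw
    ref = inverted[wb_patch].reshape(-1, 3).mean(axis=0)
    gains = ref.mean() / ref
    return np.clip(inverted * gains, 0.0, 1.0)

def channel_spread(img, patch):
    """Max minus min channel value of a patch, in 8-bit units."""
    rgb = img[patch].reshape(-1, 3).mean(axis=0) * 255
    return rgb.max() - rgb.min()

# wb_patch / dark_patch / light_patch are placeholder slice pairs, e.g.
# (slice(180, 230), slice(400, 450)). A spread under ~10 on the darkest
# and lightest squares means the light is doing its job.
```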

Over the past few years since I wrote that article, I have been working on my own custom RGB light that emits narrow-band wavelengths aligned with the Status M densitometry standard (the one developed specifically for color negative film), which is what I used to conduct the test above. I designed an LED grid and found a company in China that would sell me 5050 3-channel LED emitters at the custom wavelengths I needed. Then, after a lot of trial and error, I built a board based on a Texas Instruments reference design for the LM3409 driver that can drive a 3-channel LED grid with high-frequency (30 kHz), high-resolution (10-bit, for values from 0-1023) PWM.
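For a sense of scale, here is roughly what generating that kind of PWM signal looks like from the ESP32 side in MicroPython. This is purely illustrative, not my actual firmware; the pin number is a placeholder, and MicroPython's duty_u16 is 16-bit, so 10-bit values get scaled up:

```python
from machine import Pin, PWM

# One PWM output per color channel; pin 4 is a placeholder.
red = PWM(Pin(4), freq=30_000)

def set_brightness(pwm, value_10bit):
    """Map a 0-1023 brightness value onto MicroPython's 16-bit duty."""
    pwm.duty_u16(min(value_10bit, 1023) * 64)

set_brightness(red, 512)  # ~50% duty at 30 kHz
```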

For anyone going down this path, I will warn you up-front: PCB design for use cases with high-frequency signals is not trivial. High-resolution PWM in the tens of kilohertz range can involve signal transition times in the single-digit nanoseconds. Combine a rapidly pulsing signal with a power output of a few hundred mW and you can accidentally build yourself a neat little radio jammer for certain frequencies. If you want to do something similar, I recommend using the analog dimming function that many LED drivers offer. I also went the route of having all of the LED channels in my grid share a cathode, which simplified the layout and reduced the number of connections between the driver and emitter boards (a single ground connection vs one ground per channel). If you go this route, though, you’ll find that most drivers utilize a feedback mechanism that precludes a shared cathode, so it will narrow your choice of driver.

Here’s the current state of my light: two recent revisions of my board, the LED grid, and my partially assembled 3D-printed enclosure. I’m using 3 rotary encoders to control the brightness of the individual channels, an ESP32 for all of the control logic, and a little OLED screen to display the brightness values. There’s also an interface for analog control of the brightness, and pins that allow each channel to be enabled and disabled independently of the controller, for future automation.
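The encoder side of the control logic is conceptually simple. A stripped-down MicroPython sketch of the idea, with placeholder pins, no debouncing, and reusing the hypothetical set_brightness from the sketch above:

```python
from machine import Pin

class Encoder:
    """Minimal quadrature decoder: one step per falling edge on A.
    Pin numbers are placeholders; real firmware would debounce."""
    def __init__(self, pin_a, pin_b, on_change):
        self.a = Pin(pin_a, Pin.IN, Pin.PULL_UP)
        self.b = Pin(pin_b, Pin.IN, Pin.PULL_UP)
        self.on_change = on_change
        self.a.irq(self._step, Pin.IRQ_FALLING)

    def _step(self, _pin):
        # Direction comes from B's level at the moment A falls.
        self.on_change(+1 if self.b.value() else -1)

# e.g. one encoder per channel, nudging a 10-bit duty value:
# Encoder(32, 33, lambda d: set_brightness(red, red_level + d * 8))
```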

One major issue I have run into, and one I am still struggling with, is that it is incredibly difficult to get an even field with an LED array. With my original grid, the LED emitters were all oriented the same way, which led to the layout of the emitters getting projected onto the diffusers of the light. This resulted in a backlight that was actually a diffuse representation of the emitter layout: basically a faint gradient from magenta to green which stuck around no matter how much diffusion I used. You can see an example of this here:


My current revision rotates every other emitter 180 degrees, but for some reason I now have color shifts in the corners of the frame, which persist through any level of diffusion and remain constant regardless of how I position the light (which suggests the problem is not in the diffusion). I’m currently in the process of figuring out whether this problem is lens related, or some property of the grid that I don’t understand. I have a sneaking suspicion that the lenses on top of the LED emitters cause a lot of these problems.


The other issue I have encountered, which also seems evident in the tricolor scans others have posted above, is that the red channel ends up oversaturated. I’m not sure why this happens, though the easiest solution I’ve found is to reduce the intensity of the red channel relative to green and blue. If I had to hazard a guess, I’d say it has something to do with the fact that the red channel in the positive is captured by the green channel in the negative, and there are two green photosites for every one of the red and blue sites in most Bayer sensors. Though that doesn’t answer why reducing the intensity of the red channel addresses the issue.

Still a lot to figure out here, but it’s great to see how many people are working on this, and I’m confident we can find solutions to these problems. I’m also doing some testing with integrating spheres at the suggestion of mightimatti, but I haven’t spent nearly as much time on that as I have on the LED array. If anyone has any questions about the design process for the LED light or my tricolor scanning tests, I’d be happy to answer them.