I’ve started scanning a collection of old photo negatives, and after some disappointing results using commercial scanners, I decided to go the DSLR route. With the help of this forum and the NLP plugin, I’ve been able to create a nice scanning setup and I got really good results. I wanted to report back what I did here and maybe start some interesting discussions. I’m an optical scientist by training and I approached this a little differently from photography professionals.
Integrating sphere backlight
I read the various discussions about setups, backlights and RGB light sources. A common recommendation is to use a high-CRI white light source as a backlight, but in my view the optimal light source is instead an RGB source with wavelengths selected for low crosstalk (i.e. optimal color separation, see link above) and independently controllable red, green and blue intensities. In a DSLR setup, I think this is the best way to recover as much color information as possible from the negatives. White light sources, by contrast, necessarily cause more mixing between color channels. This may or may not be a problem depending on the scene and the film used.
Besides that, the thing we all want out of a light source is uniformity. The most common way to achieve this around here is with large light panels and diffusers, but this approach can cause a slight intensity fall-off toward the borders due to the lighting geometry. I decided instead to 3D print an integrating sphere based on this publication. An integrating sphere is basically a sphere whose inside is painted white, with a small inlet for the light source and an outlet for the thing you want to illuminate (the film). When the light enters the sphere, it bounces around randomly several times. By the time it finds the exit, the beam is almost perfectly uniform. This pairs nicely with the use of separate red, green and blue LEDs, because you don’t need to fiddle around much to combine the three colors into a single uniform beam.
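For the curious, the textbook sphere-multiplier formula gives a feel for how much the light gets “recycled” by the walls before it exits. This is just a quick sketch, not my actual design calculation; the reflectance and port-fraction values below are typical assumptions, not measurements of my sphere:

```python
# Classic integrating-sphere multiplier: M = rho / (1 - rho * (1 - f)),
# where rho is the wall reflectance and f is the total port fraction.
# The values below are assumptions for a typical white-painted sphere.
def sphere_multiplier(rho, port_fraction):
    """Average radiance boost from the flux bouncing around the walls."""
    return rho / (1.0 - rho * (1.0 - port_fraction))

print(round(sphere_multiplier(0.95, 0.05), 1))  # roughly 9.7
```

The many bounces implied by this number are exactly what mixes the three LED colors into one uniform beam.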
Independent LED control
I put together an electronic board to control the LED intensities independently. One caveat I’ll mention here is to use LED driver electronics that either do not flicker (linear current driver), or that flicker much faster than the shutter speed of the camera (switched current driver with a high enough frequency), otherwise even the “perfect” light beam from the sphere will appear non-uniform to the camera due to rolling shutter artifacts. I used a switched driver with shutter speeds from 1/20s to 1/50s, and I obtained a backlight which is uniform over the entire field of view to within a few percent. With a non-flickering LED driver, I think it would probably be better than 1%.
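To put a number on the flicker caveat, here is my own back-of-envelope estimate (not a rigorous model): with a rolling shutter, each sensor row integrates a slightly different number of PWM cycles, so the worst-case row-to-row brightness ripple is on the order of one partial cycle out of the total cycles that fit in the exposure:

```python
# Rough estimate: relative banding ~ 1 / (number of PWM cycles per exposure).
def worst_case_ripple(pwm_hz, shutter_s):
    cycles_per_exposure = pwm_hz * shutter_s
    return 1.0 / cycles_per_exposure

print(worst_case_ripple(20_000, 1 / 20))  # 20 kHz driver: ~0.1%, invisible
print(worst_case_ripple(200, 1 / 50))     # 200 Hz driver: ~25%, visible bands
```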
With these independent RGB lights, I was able to do the white balance by adjusting the illumination, before even taking the picture. The borders of the negatives then appear white to the camera instead of orange. This way, each color channel is optimally sampled over the full dynamic range of the sensor (as opposed to the blue and green channel being “underexposed” for example). Here are examples without and with RGB pre-adjustment:
The wavelengths I used are 460nm, 530nm and 660nm, which is a compromise between LED availability and the recommendations found here about color separation.
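The illumination-based white balance described above can be sketched roughly like this; the function names, channel levels and target value are all mine for illustration, not part of my actual controller:

```python
# Hypothetical sketch: sample the clear film border in a test shot, then
# scale each LED channel so all three border means land on one target level.
def rebalance(border_means, led_levels, target=0.85):
    """border_means / led_levels: dicts keyed 'r', 'g', 'b'; levels in 0..1."""
    return {ch: min(1.0, led_levels[ch] * target / border_means[ch])
            for ch in border_means}

# Orange mask: the red border reads hot, green and blue read low.
new_levels = rebalance({'r': 0.90, 'g': 0.55, 'b': 0.40},
                       {'r': 0.60, 'g': 0.60, 'b': 0.60})
print(new_levels)  # red is turned down a bit, green and blue are turned up
```

In practice you would iterate this once or twice, since the LED output is not perfectly linear in the control level.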
I ordered a high-CRI white LED as well for comparison, but it still hasn’t shipped after several weeks. I’ll try to post a comparison between high-CRI white and RGB when I finally receive it.
A technical note for anybody who would like to reproduce this: I initially thought I would need a lot of light, so I ordered high-power LED bulbs and 1A drivers, but it turns out this is way more than necessary. Using an integrating sphere with a 10cm diameter and 5% port fraction, a 150mW red, 100mW green and 150mW blue LED are enough to scan typical negatives at 1/20 to 1/50 shutter speed, f/5.6 and ISO 100 on a Sony A7 (the milliwatts are optical output power, not electrical power). With 1500mW LEDs, I had difficulty fine-tuning the light output of each channel with sufficient precision at the range of shutter speeds I wanted.
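To make the fine-tuning problem concrete, here is how I would put numbers on it (my own framing; the 8-bit PWM resolution is an assumption, not necessarily what your driver uses): if you only ever need a small fraction of full power, most of the control range is wasted:

```python
# Usable dimming steps when you only need a fraction of the LED's full power.
def usable_steps(needed_mw, full_scale_mw, pwm_bits=8):
    return int((2 ** pwm_bits) * needed_mw / full_scale_mw)

print(usable_steps(150, 1500))  # oversized 1500 mW LED: only ~25 steps
print(usable_steps(150, 200))   # right-sized 200 mW LED: ~192 steps
```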
My camera is a Sony A7 with a Sony 50mm f/2.8 macro lens. I read many recommendations here to use macro extension rings together with a commonly available non-macro lens. Yes, this is cheap, but I tested it and immediately found that I could only focus the center of my negatives properly, while the edges consistently remained out of focus. I would urge you to use a “real” macro lens, because it makes it possible to get the full film in focus from center to edge.
Contrary to what I first assumed, it is not necessary to use a very wide aperture. By my calculations, at f/5.6 and a magnification of 1:1, the optical resolution is about 6µm, which matches the pixel size of my camera and is finer than the typical grain size of color film. In other words, opening up beyond f/5.6 would not have resolved more detail than either the film or the sensor could support, and the narrower depth of field would actually make it harder to get both the center and the edge of the film in focus (even the best lenses don’t have a perfectly flat focal field). Therefore, I used f/5.6 for all my scans. I could clearly distinguish the film grain this way. I note, though, that not everybody agrees that focusing on the film grain is the right thing to do.
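For reference, here is the kind of back-of-envelope diffraction estimate behind that number; note that the convention matters (I use the Airy-disk diameter 2.44·λ·N here and ignore the effective-aperture increase at 1:1 magnification, which would roughly double these figures):

```python
# Airy-disk diameter in micrometers: d = 2.44 * lambda * N.
def airy_diameter_um(wavelength_nm, f_number):
    return 2.44 * wavelength_nm * 1e-3 * f_number

for wl_nm in (460, 530, 660):  # the three LED wavelengths from above
    print(wl_nm, "nm ->", round(airy_diameter_um(wl_nm, 5.6), 1), "um")
```

These come out between roughly 6 and 9µm at f/5.6, i.e. comparable to the ~6µm pixel pitch, which is why opening up wider than f/5.6 would not have gained any usable detail here.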
Collimated versus diffuse lighting
I was very curious about the suggestion (here, here and here) to use collimated light to scan the negatives. Initially I thought this must be better, because in theory it captures the film at a higher resolution, whereas diffuse lighting introduces a slight blur. There is a compelling example of this effect on the Wikipedia page. My thinking was: yes, collimated light makes all the cracks and dust on the film visible, but surely you should be able to get rid of those with a blurring filter in post-processing if needed, right? Wrong. After looking into this further, I realized that diffuse lighting makes it possible to preferentially capture the film, to the exclusion of dust and scratches. In simple terms, if a scratch or dust particle sits on the film, there is usually another path the diffuse light can take to still pass through the film and reach the DSLR. With collimated lighting, that information would be lost. Of course, this holds true only if the defects are not too large. But the point remains: even though diffuse lighting introduces a bit of blur, this disadvantage is (to me) completely offset by the advantage of naturally suppressing dust and cracks.
I did encounter problems scanning some of my negatives. One frequent problem is that colors that should be deep red appear purple instead. Other color shifts I noticed are light brown to dark brown, pink to red, red to orange, etc. These negatives require color adjustment in Lightroom (aqua hue +30 or +60). I do not know if this is a byproduct of the RGB scanning process, or if the negatives degraded with time, or if perhaps different settings are necessary in NLP when using RGB instead of white light. I’m posting an example below (cropped) of an armrest and a carpet. This happens a lot more frequently on items of clothing, but I can’t post examples here for privacy reasons.
I’m happy with this scanning setup. It’s fast and it delivers great results. If I did a second iteration, I would probably improve a bunch of practical little things in the design of the sphere, the LED boards, the negative holder, etc. But I’m on a budget, so I think I will keep scanning my photo collection like this, and maybe other photo geeks can take it from here. I hope you enjoyed reading this, and I look forward to any comments/suggestions.