Improving scanning for low contrast negatives

When scanning old black-and-white negatives with a digital camera, I often see from the histogram that the image data has a significantly lower dynamic range than the camera and computer system are capable of. This can be due to an inherently low-dynamic-range scene or to underdeveloped negatives. For example, the image data inside the film border might have values from 68 to 212, which means we end up with only 144 captured gray levels out of the possible 256.

I’m pretty sure that the film, as an analog medium, contains many more gray levels than the 144 we end up with in the camera scan. NLP conversion attempts to map those 144 levels across the full dynamic range, with some buffer at the high and low ends. This works well, but I think you sacrifice smoothness in the image, since some of the 144 levels now get mapped to two or more final levels. An analogy from audio digitization would be sampling sound at a low sampling rate and then digitally converting up to a higher playback rate; the solution in audio is to oversample and then convert down.
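To make that concrete, here is a minimal sketch (assuming 8-bit data and a plain linear stretch, not NLP’s actual mapping) of how spreading the captured range across the full output range thins out the histogram:

```python
import numpy as np

# Every captured level from the 68..212 example, stretched to fill 0..255.
captured = np.arange(68, 213, dtype=np.float64)
stretched = np.round((captured - 68) / (212 - 68) * 255)

used = np.unique(stretched)
print(f"output levels actually used: {used.size} of 256")      # the rest stay empty
print(f"largest gap between used levels: {int(np.diff(used).max())}")
```

Well over a hundred of the 256 output levels are never used, which is where the loss of smoothness shows up as combing in the histogram.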

To my eye, images from low-contrast negatives that are intended for full-range printing look better when printed on analog photographic paper, because of the paper’s almost infinite tonal gradation.

With film scanning, it seems like it would be useful to have a capture system that could sample the image data, determine the actual dynamic range, and then rescan with oversampling to map the image data into the available 256 final gray values. Or maybe just oversample to begin with and let the user map the range. You could think of this as the opposite of HDR, where you combine multiple exposures to map high-dynamic-range inputs onto a lower-dynamic-range file or output device.
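The capture-side oversampling is the part that would need hardware support, but the “measure the actual range, then remap it” step is easy to sketch in software. A minimal illustration (a plain auto-levels on a simulated 14-bit capture; the helper name and values are made up):

```python
import numpy as np

def remap_to_full_range(scan: np.ndarray, out_levels: int = 256) -> np.ndarray:
    """Measure the range the capture actually uses and stretch it across the
    available output levels (a simple auto-levels, no highlight/shadow buffer)."""
    lo, hi = float(scan.min()), float(scan.max())
    stretched = (scan.astype(np.float64) - lo) / (hi - lo)
    return np.round(stretched * (out_levels - 1)).astype(np.uint8)

# Simulated 14-bit capture of a flat negative that uses only part of the sensor range:
rng = np.random.default_rng(0)
flat_scan = rng.integers(4400, 13600, size=(1200, 1800), dtype=np.uint16)

remapped = remap_to_full_range(flat_scan)
print(remapped.min(), remapped.max())   # now spans 0..255
```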

Does anyone know of an oversampling scanning system like I’ve just described?

Hi @lushomet

Great question!

In the past, I have searched for exactly what you are asking for and have not been able to find it, whether it is the “opposite of HDR merging” or a “low dynamic range camera sensor.” I’ve even looked into creating my own scripts/algorithms to do a “reverse HDR merge.” But so far, no luck.

There’s some more discussion on the thread here:

For now, some key takeaways from the above thread:

  1. Make sure you are using the highest bit depth you can during capture (usually 14-bit for digital cameras). This will go a long way toward keeping tonality smooth, even when it is stretched during processing.

  2. Remember to “expose to the right” (i.e. you want your histogram to sit toward the right, brighter side). This is because of the way the linear sensor in the camera works: you effectively use more of the available “levels” of information when you capture to the right (see the sketch below).
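Not specific to any camera, but a quick way to see why point 2 matters with a linear sensor is to count how many raw levels each stop holds:

```python
# Rough, camera-agnostic illustration: with a linear sensor, each stop below
# saturation holds half as many distinct raw levels as the stop above it, so
# data pushed toward the right edge of the histogram keeps far more tonal
# steps before any stretching in post.
full_scale = 2 ** 14  # raw levels in a 14-bit capture

for stop in range(1, 7):
    upper = full_scale >> (stop - 1)   # top of this stop
    lower = full_scale >> stop         # bottom of this stop
    print(f"stop {stop} below saturation: {upper - lower:>5d} raw levels")
# stop 1: 8192 levels, stop 2: 4096, ... stop 6: 256
```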

-Nate

If you are using an 8-bit-per-channel scanner, or a camera that captures JPEGs, that is your problem right there.

If you scan to 16-bit-per-channel TIFF files, or photograph your negatives to raw files for post-processing, you will capture the full range of what is on the film, with enough bit depth to post-process to the tonality you want. Even with a 12-bit raw file, you can do pretty serious “tone bending.”
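To put a rough number on that headroom (a hedged comparison, nothing scanner-specific), you can stretch the same narrow tonal band stored at 8 bits and at 16 bits and count how many distinct 8-bit output values remain:

```python
import numpy as np

def distinct_after_stretch(bit_depth: int, lo_frac: float = 68 / 255, hi_frac: float = 212 / 255) -> int:
    """Count the distinct 8-bit output values left after stretching a narrow
    tonal band (about 56% of full scale, like the 68..212 example above)."""
    full = 2 ** bit_depth - 1
    lo, hi = int(full * lo_frac), int(full * hi_frac)
    levels = np.arange(lo, hi + 1, dtype=np.float64)        # every captured level
    stretched = np.round((levels - lo) / (hi - lo) * 255)   # map to 8-bit output
    return int(np.unique(stretched).size)

print("8-bit source :", distinct_after_stretch(8), "of 256 output levels")   # leaves gaps
print("16-bit source:", distinct_after_stretch(16), "of 256 output levels")  # fills all 256
```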

When I went from a cheap 8-bit scanner to a photo scanner that captures 16-bit-per-channel TIFF files, I saw a world of difference. But when I started using a macro lens on my camera to photograph my film to raw files, suddenly I had a workflow that matched everything else I was doing digitally.

I hope that helps!

Good point, but I digitize with a Sony A7R2 in 14-bit raw mode, so that’s not the issue.