Improving scanning for low contrast negatives

When scanning old black and white negatives with a digital camera I often see from the histogram that the image data has a significantly lower dynamic range than the camera and computer system is capable of. This can be due to an inherently low dynamic range scene or to underdeveloped negatives. For example, the image data inside the film border might have values from 68 to 212. This means we end up with only 144 captured gray levels out of the possible 256 levels.

I’m pretty sure that the film, as an analog medium, has many more gray levels in the image than are represented by the 144 that we end up with in the camera scan. NLP conversion attempts to map the 144 levels across the full dynamic range with some buffer on the high and low ends. This works well, but I think you sacrifice smoothness in the image since some of the 144 levels now get mapped to 2 or more final levels. An analogy from audio digitization would be sampling sound at a low sampling rate and then digitally converting up to a higher playback rate. The solution in audio is to oversample and then convert down.
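
To make the smoothness point concrete, here is a minimal numpy sketch (my own illustration, not anything NLP actually does internally) of stretching a 68–212 capture to the full 8-bit range:

```python
import numpy as np

# Simulate an 8-bit scan whose data only spans levels 68..212
# (one sample per captured level).
scan = np.arange(68, 213, dtype=np.uint8)

# Stretch to the full 0..255 range, roughly as a conversion step would.
stretched = np.round((scan.astype(float) - 68) / (212 - 68) * 255).astype(np.uint8)

# Count how many of the 256 output levels actually get used.
print(scan.size, "input levels ->", np.unique(stretched).size, "of 256 output levels used")
# 145 input levels -> 145 of 256 output levels used; the rest are
# empty "comb" gaps in the histogram, which is where visible
# banding in smooth gradients can come from.
```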

To my eye, images from low contrast negatives that are intended for full-range printing look better when printed on analog printing paper because of the paper’s almost infinite tonal gradation.

With film scanning it seems like it would be useful to have a capture system that could sample the image data, determine the actual dynamic range, then rescan with oversampling to map the image data into the available 256 final gray values. Or maybe just oversample to begin with and allow the user to map the range. You could think of this as the opposite of HDR, where you combine multiple exposures to map high dynamic range inputs onto a lower dynamic range file or output device.
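
One way to approximate this “oversample first” idea with ordinary gear would be frame averaging: the sensor’s own noise dithers the quantizer, so the mean of many captures of the same frame resolves intermediate levels that a single capture cannot. A toy sketch with synthetic data (illustration only, not a real capture pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def averaged_capture(true_signal, n_frames=16, noise_sigma=1.5):
    """Average n_frames noisy 8-bit captures of the same frame.

    Sensor noise dithers the quantizer, so the mean recovers
    fractional levels, i.e. extra effective bit depth.
    """
    acc = np.zeros_like(true_signal, dtype=float)
    for _ in range(n_frames):
        noisy = true_signal + rng.normal(0.0, noise_sigma, true_signal.shape)
        acc += np.clip(np.round(noisy), 0, 255)
    return acc / n_frames

# A smooth ramp that sits between two adjacent integer levels.
truth = np.linspace(100.0, 101.0, 1000)
single = np.round(truth)          # one capture: only 2 distinct levels
multi = averaged_capture(truth)   # many intermediate values recovered
print(np.unique(single).size, "levels vs", np.unique(multi).size, "levels")
```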

Does anyone know of an oversampling scanning system like I’ve just described?

Hi @lushomet

Great question!

In the past, I have searched for exactly what you are asking for and have not been able to find it, whether it is the “opposite of HDR merging” or a “low dynamic range camera sensor.” I’ve even looked into creating my own scripts/algorithms to do a “reverse HDR merge.” But so far, no luck.

There’s some more discussion on the thread here:

For now, some key takeaways from the above thread:

  1. Make sure you are using the highest bit depth you can during capture (usually this is 14-bit for digital cameras). This will go a long way toward keeping tonality smooth, even when stretched during processing.

  2. Remember to “expose to the right” (i.e. you want your histogram to be on the right, brighter side). This is because of the way the linear sensor in the camera works: you will effectively use more of the available “levels” of information when you capture to the right (see the sketch below).
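
To put rough numbers on point 2, here is a quick back-of-the-envelope sketch, assuming an idealized linear 14-bit sensor:

```python
# Back-of-the-envelope for a linear 14-bit sensor: each stop down
# from clipping contains half as many raw levels as the one above it.
total_levels = 2 ** 14
for stop in range(1, 7):
    levels = total_levels // 2 ** stop
    print(f"{stop} stop(s) below clipping: {levels} raw levels in that stop")
# 1 stop: 8192 levels, 2 stops: 4096, ... 6 stops: 256
```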

-Nate


If you are using an 8-bit-per-channel scanner, or a camera that captures JPEGs, that is your problem right there.

If you scan to 16-bit-per-channel TIFF files, or photograph your negatives to raw data files for post-processing, you will capture the full range of what is on film, with enough bit depth to post process to the tonality you want. Even with a 12-bit raw file, you can do pretty serious “tone bending.”
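
As a rough numeric check of that headroom claim (my own sketch, independent of any particular scanner or camera), compare stretching the same 68–212 range in 8-bit versus 16-bit working space, with an 8-bit final output:

```python
import numpy as np

lo, hi = 68, 212  # same limited range as in the original post

# 8-bit workflow: stretch 8-bit data directly to full scale.
v8 = np.arange(lo, hi + 1, dtype=float)
out_from_8 = np.unique(np.round((v8 - lo) / (hi - lo) * 255))

# 16-bit workflow: the same range captured with 16-bit precision,
# stretched in 16-bit space, then quantized down to 8 bits at the end.
v16 = np.arange(lo * 256, (hi + 1) * 256, dtype=float)
stretched = (v16 - lo * 256) / ((hi - lo + 1) * 256) * 65535
out_from_16 = np.unique(np.round(stretched / 257))

print(out_from_8.size, "final 8-bit levels (comb gaps)")    # 145
print(out_from_16.size, "final 8-bit levels (all filled)")  # 256
```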

When I went from a cheap 8-bit scanner to a photo scanner that captures 16-bit-per-channel TIFF files, I saw a world of difference. But when I started using a macro lens on my camera to photograph my film to raw files, suddenly I had a workflow that matched everything else I was doing digitally.

I hope that helps!

Good point, but I digitize with a Sony A7R2 in raw 14-bit mode, so that’s not the issue.

With all the progress in machine learning, there might be a way to address this problem with a brute-force approach: instruct a system to analyze hundreds or thousands of carefully generated matched image pairs and determine how to consistently make low-contrast digitized film images look like full dynamic range images of the same scene.

I imagine something like the following:

  • Create a data set by photographing many scenes with a side-by-side film camera (A) and a high quality digital camera (B). Call the resulting film frame A1 and the digital frame B1.
  • Process the film and digitize it using commonly available, low-cost means (like digital cameras) yielding a new digital image A2 for each film frame.
  • Ingest the A2 and B1 image pairs into a machine learning system.
  • Instruct the system to evaluate or generate techniques that map each A2 image onto a resulting A3 image that matches the corresponding B1 image as closely as possible (a toy sketch of this step follows the list).
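
To make the mapping step a little more concrete, here is a deliberately simple stand-in for the learning part: instead of a neural network, fit a single global tone curve (a 256-entry LUT) from aligned A2/B1 pairs. All names and data below are synthetic and hypothetical; a real system would presumably train something like a CNN on spatially aligned pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_tone_lut(a2_images, b1_images):
    """For each A2 level 0..255, average the corresponding B1 values."""
    sums = np.zeros(256)
    counts = np.zeros(256)
    for a2, b1 in zip(a2_images, b1_images):
        np.add.at(sums, a2.ravel(), b1.ravel().astype(float))
        np.add.at(counts, a2.ravel(), 1)
    lut = np.where(counts > 0, sums / np.maximum(counts, 1), np.arange(256))
    return np.round(lut).astype(np.uint8)

# Synthetic stand-in "pairs": B1 is full range, A2 is a squashed,
# offset copy (simulating a flat, low-contrast film scan).
b1 = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(50)]
a2 = [np.clip(im * 0.56 + 68, 0, 255).astype(np.uint8) for im in b1]

lut = fit_tone_lut(a2, b1)
a3 = lut[a2[0]]  # map an A2 image through the learned curve
print("mean |A3 - B1| =", np.abs(a3.astype(int) - b1[0].astype(int)).mean())
```

A global LUT obviously can’t capture local contrast or colour interactions, which is exactly where a learned model should have the advantage.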

Has someone already done this? I don’t see anything similar described on the web. Does it seem likely to produce better results than conventional image-processing techniques and algorithms?

I’m mainly thinking about luminance values in black and white film scans, but on a grander scale, this might be a way to produce a supercharged AI version of NLP that addresses all the variables, including color.

Sometimes your digital camera won’t be able to help much. In such cases it may be worthwhile to have the film scanned with a dedicated film scanner. Hence this reply.

Some film scanners offer the option to adjust the exposure, sometimes in combination with 2-pass scanning at different exposure levels. Some also have a multi-sampling feature that either runs additional scan passes or re-samples each line in place while scanning (without moving the sensor). All of the above can help in this case.
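
As an illustration of how a 2-pass scan at two exposure levels could be combined (a hedged sketch of the general idea, not how any scanner software actually merges passes), assuming two aligned linear 16-bit arrays:

```python
import numpy as np

def merge_two_pass(normal, plus1ev, white=65535):
    """Blend a normal pass with a +1 EV pass (both aligned, linear 16-bit).

    Shadows come from the brighter pass (better signal-to-noise);
    highlights come from the normal pass (not clipped).
    """
    scaled = plus1ev.astype(float) / 2.0             # undo the +1 EV gain
    unclipped = plus1ev.astype(float) < white * 0.98 # trust bright pass here
    return np.where(unclipped, scaled, normal.astype(float))

# merged = merge_two_pass(scan_normal, scan_plus1ev)  # hypothetical arrays
```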

The Epson V600 flatbed scanner offers these features. In my workflow, I capture raw linear DNGs with VueScan and enable all of them. VueScan proposes a default exposure level in the preview; I always adjust it, either based on values from identical film emulsions I scanned earlier, or based on how over-exposed (go darker) or under-exposed (go lighter) the frames are.

I have a series of underexposed shots taken in low light under mixed sources like tungsten and energy-saving fluorescent bulbs. Straightforward conversions were so-so, and I wanted to test how Negative Lab Pro handles such takes depending on how I exposed the “scans” and what effect different border buffer widths (BBW) produce.

Here is a “contact sheet” showing the results. Please take them with a grain of salt, given the image content (candles) and the scarcity of different colours.

From top left, per row:
1. Conversions of captures with increasing exposure (roughly 0-2 EV in thirds) and BBW = 0
2. Conversions of captures with increasing exposure (roughly 0-2 EV in thirds) and BBW = 5
3. Conversions of captures with increasing exposure (roughly 0-2 EV in thirds) and BBW = 10
4. Conversions of captures with increasing exposure (roughly 0-2 EV in thirds) and BBW = 15
5. Conversions of captures with increasing exposure (roughly 0-2 EV in thirds) and BBW = 20
6. Conversions of captures with increasing exposure (roughly 0-2 EV in thirds) and BBW = 25

The widest border buffer produced bright results because the candles were cut off. Therefore, NLP adjusted the white point based on the furniture behind the person, while all other conversions targeted the flames for white points.
→ Lesson: (Cropping off) highlights can greatly influence results.

We can see that colour casts and shifts vary slightly depending on exposure and BBW. These casts/shifts follow a logic that I can currently only guess at … and your guess is as good as mine …

I find the first row to be closest to what I’d want from this take. Results are fairly independent of exposure (nice), even though some colour cast creeps in and out across the series.

All things considered, I recommend that you

  • systematically test the “difficult” negatives of subjects that are worthwhile
  • accept that conversions are starting points for post-processing, preferably on 16-bit TIFFs
  • consider cropping off parts with blown highlights before converting, then redo the crop afterwards.

I found this (VERY IN DEPTH) deep dive recently. I will link to the blending portion, but the whole piece is fantastic and utilizes NLP. It taught me much more than I expected and explained all the processes. For certain challenging images, blending + NLP seems to be an incredible tool. Could be useful here, though I know it’s extra steps.

https://photopxl.com/digitizing-negatives-with-a-camera-revisited/#:~:text=The%20sample%20I,35%20and%2036).

…and written by a contributor to this very forum, Mark D Segal.