Dynamic Range, Bit Depth and DSLR Scanning

(I’m pasting this from a post I made on DPReview, since I’m interested in getting the opinions of NLP users as well)

So I’ve been scanning film with a Nikon D750 for quite a while, B/W initially but recently colour.

Unlike B/W, I’ve noticed that the information from colour film only takes up a small proportion of the histogram, and therefore the dynamic range of the camera.

If I understand correctly, dynamic range measures the difference between the darkest and lightest values a sensor can record simultaneously, and the bit depth measures the number of recorded values between these points.

It therefore stands to reason that, when scanning a low contrast negative that doesn’t utilise the DSLR’s full dynamic range, one is losing a lot of potential information. This is because the high and low ends of the sensor’s dynamic range are recording nothing, whereas with a smaller, compressed dynamic range they could be recording data from the negative. This means that all the sensor’s ‘bits’ would be recording important information.

My question is whether it is possible, either practically or theoretically, to compress the dynamic range of a DSLR in this way, since I have never heard of something like this being done.

Yes, I think this is true, although I’m not sure I would say you are “losing” information, since there are no values that fall outside of the range of the sensor. But you are losing tonal smoothness, and creating the potential for posterization during conversion.

Let’s put some numbers to it…

Let’s say you are capturing your negative using a 12-bit RAW file. “12-bit” is just another way of saying that there are 4096 potential brightness values available to store the information of each color channel (with 0 being black and 4095 being max brightness).

Then let’s say that the exposed part of the negative only uses up 1/4th of the total histogram of your camera sensor. In that case, you are only capturing ~1,024 distinct levels of information (which is another way of saying you capture only 10 bits of information, since 2^10 = 1024). The problem gets even worse, since bit depth is per color channel, and typically the blue channel of a negative will carry less information than what you see in the total histogram.
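The arithmetic above can be sketched in a few lines of Python (the 1/4 figure is purely illustrative):

```python
import math

# Back-of-the-envelope sketch (illustrative numbers only): how many
# distinct levels a channel captures when the negative fills just a
# quarter of a 12-bit histogram.
total_levels = 2 ** 12                 # 4096 levels per channel at 12-bit
fraction_used = 0.25                   # negative spans 1/4 of the histogram
captured = int(total_levels * fraction_used)

print(captured)                        # 1024 distinct levels
print(math.log2(captured))             # 10.0 -> effectively a 10-bit capture
```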

During the conversion process, the information is stretched out to fill the histogram, and if there aren’t enough levels of information, you will create gaps in tonality, i.e. posterization. In theory, if you are targeting 8-bit output, you are basically guaranteed posterization if any one of your color channels uses less than ~6% of the total histogram on a 12-bit capture (8-bit has 256 values, and 6% of the 4096 values in 12-bit is just 245 values).
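You can see the gaps directly with a toy stretch in Python — this is a simplified linear stretch, not what any particular converter actually does, but the counting argument is the same:

```python
# Illustrative sketch: a channel spanning only 245 of 4096 input levels,
# stretched linearly to fill an 8-bit (0-255) output. With fewer input
# levels than output levels, some output values can never occur ->
# gaps in tonality, i.e. posterization.
input_levels = range(245)                       # 245 distinct captured values
stretched = {round(v / 244 * 255) for v in input_levels}

print(len(stretched))                           # 245 output values reachable
print(256 - len(stretched))                     # 11 output values never used
```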

Shooting RAW at 14-bit gives us a LOT more room. Since 14-bit slices up the same dynamic range into 16,384 potential value levels, a color channel would need to use less than ~1.5% of the total histogram before posterization is guaranteed in an 8-bit output.
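The exact break-even fractions are easy to compute (my own arithmetic, ignoring noise and nonlinearity): the channel has to occupy enough of the histogram to supply the 256 distinct levels an 8-bit output needs.

```python
# Break-even fraction of the histogram a channel must occupy to supply
# the 256 distinct levels that an 8-bit output needs (idealized: noise
# and tone curves ignored).
needed = 2 ** 8                        # 256 levels in 8-bit output
for raw_bits in (12, 14):
    total = 2 ** raw_bits
    print(raw_bits, needed / total)    # 12 -> 0.0625 (6.25%), 14 -> 0.015625 (~1.6%)
```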

(There are obviously a lot of other factors that go into this, but this shows the importance of bit depth, and gives some very general guidelines for when you would start running into real trouble).

As I understand it, the dynamic range of a DSLR is determined by the physical properties of the sensor itself. Perhaps in theory a sensor could be designed specifically to have a low dynamic range for this type of work, but I don’t believe there is a way to make an existing sensor have lower dynamic range.

Some other ideas to maximize tonal smoothness:

  • Make sure that your DSLR can capture at least 14-bit RAW files. This will make a significant difference over 12-bit.
  • Make sure your original film shot is exposed enough to capture shadow detail. With film, it is typically OK to overexpose by a few stops, but if you underexpose by a few stops, you will end up with less information and a higher likelihood of posterization or awful shadow grain during digitization.
  • Expose to the Right on your digitization - I typically bump up the exposure by about 1 stop over what the camera reads as the proper exposure during digitization. There’s some disagreement about why this works (some say there are more levels of information on the right side of the histogram, others say it just improves signal-to-noise ratio), but regardless, it’s a good habit.

I’ve also seen some users experiment with taking multiple exposures and using HDR Merge. I’ve tried this as well, but the algorithms used by Lightroom (and others) for merging multiple exposures are definitely NOT optimized for this use case, and in my experience will lead to inconsistent results.

Hope that helps!