(I’m pasting this from a post I made on DPReview, since I’m interested in getting the opinions of NLP users as well)
So I’ve been scanning film with a Nikon D750 for quite a while, B/W initially but recently colour.
Unlike B/W, I’ve noticed that the information from colour film only takes up a small proportion of the histogram, and therefore the dynamic range of the camera.
If I understand correctly, dynamic range measures the difference between the darkest and lightest values a sensor can record simultaneously, and the bit depth measures the number of recorded values between these points.
It therefore stands to reason that, when scanning a low contrast negative that doesn’t utilise the DSLR’s full dynamic range, one is losing a lot of potential information. This is because the high and low ends of the sensor’s dynamic range are recording nothing, whereas with a smaller, compressed dynamic range they could be recording data from the negative. This means that all the sensor’s ‘bits’ would be recording important information.
My question is whether it is possible, either practically or theoretically, to compress the dynamic range of a DSLR in this way, since I have never heard of something like this being done.
Yes, I think this is true, although I’m not sure I would say you are “losing” information, since there are no values that fall outside of the range of the sensor. But you are losing tonal smoothness, and creating the potential for posterization during conversion.
Let’s put some numbers to it…
Let’s say you are capturing your negative using a 12-bit RAW file. “12-bit” is just another way of saying that there are 4,096 potential brightness values available to store the information of each color channel (with 0 being black and 4,095 being maximum brightness).
Then let’s say that the exposed part of the negative only uses up 1/4th of the total histogram of your camera sensor. In that case, you are only capturing ~1,024 distinct levels of information (which is another way of saying you are only capturing 10 bits of information, since 2^10 = 1024). The problem gets even worse, since bit depth is per color channel, and typically the blue channel of a negative will carry less information than what you see in the total histogram.
During the conversion process, the information is stretched out to fill the histogram, and if there aren’t enough levels of information, you will create gaps in tonality, i.e. posterization. In theory, if you are targeting 8-bit output, you are basically guaranteed posterization if any one of your color channels uses up less than ~6% of the total histogram on a 12-bit capture (8-bit has 256 values, and 6% of the 4,096 values in 12-bit is just 245 values).
Shooting RAW at 14-bit gives us a LOT more room. Since 14-bit slices the same dynamic range into 16,384 potential value levels, a color channel would need to use less than about 1.6% of the total histogram before posterization is guaranteed in an 8-bit output.
(There are obviously a lot of other factors that go into this, but this shows the importance of bit depth, and gives some very general guidelines for when you would start running into real trouble.)
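The arithmetic above can be sketched in a few lines of Python (the function names are my own, just for illustration, and this is the simple linear approximation, ignoring noise and all the other factors mentioned):

```python
def usable_levels(bit_depth, histogram_fraction):
    """Distinct RAW levels captured when a channel spans only a
    fraction of the sensor's histogram (linear approximation)."""
    return int((2 ** bit_depth) * histogram_fraction)

def posterizes_at_8bit(bit_depth, histogram_fraction):
    """True if stretching to 8-bit output (256 levels) is guaranteed
    to leave gaps, i.e. fewer than 256 source levels exist."""
    return usable_levels(bit_depth, histogram_fraction) < 256

# A negative filling 1/4 of a 12-bit histogram: ~1024 levels (10 bits)
print(usable_levels(12, 0.25))        # 1024
# 6% of a 12-bit histogram: only ~245 levels -> posterization in 8-bit
print(posterizes_at_8bit(12, 0.06))   # True
# The same 6% of a 14-bit histogram: ~983 levels, still safe
print(posterizes_at_8bit(14, 0.06))   # False
```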
As I understand, the dynamic range of a DSLR is based on the physical properties of the sensor itself. Perhaps in theory there would be a way to create a sensor specifically with the purpose of having a low dynamic range for this type of work, but I don’t believe there is a way to make an existing sensor have lower dynamic range.
Some other ideas to maximize tonal smoothness:
Make sure that your DSLR can capture at least 14-bit RAW files. This will make a significant difference over 12-bit.
Make sure your original film shot is exposed well enough to capture shadow detail. With film, it is typically OK to overexpose by a few stops, but if you underexpose by a few stops, you will end up with less information and a higher likelihood of posterization or ugly shadow grain during digitization.
Expose to the Right on your digitization - I typically bump up the exposure by about 1 stop over what the camera reads as the proper exposure during digitization. There’s some disagreement about why this works (some say there are more levels of information on the right side of the histogram, others say it just improves signal-to-noise ratio), but regardless, it’s a good habit.
I’ve also seen some users experiment with taking multiple exposures and using HDR Merge. I’ve tried this as well, but the algorithms used by Lightroom (and others) for merging multiple exposures are definitely NOT optimized for this use case and, in my experience, will lead to sporadic results.
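On the ETTR point above, the “more levels on the right” argument comes from the linear encoding of RAW files: each stop down from clipping contains half as many distinct levels as the stop above it. A quick sketch for a 14-bit file (this ignores noise, which is the other argument for ETTR):

```python
# In a linear 14-bit RAW file there are 16384 possible levels, and
# each one-stop interval below clipping spans half the values of the
# interval above it.
max_value = 2 ** 14  # 16384

for stop in range(1, 6):
    top = max_value // (2 ** (stop - 1))
    bottom = max_value // (2 ** stop)
    print(f"Stop {stop} below clipping: {top - bottom} levels")
```

So the brightest stop alone holds 8,192 of the 16,384 levels, which is why pushing the exposure right keeps the negative’s data in the level-rich part of the range.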
Reading the above triggers a question I had with NLP. I digitize with a Nikon Df in 14 bit raw using aperture priority, and keep an eye on the histogram while doing so. I try to make the histogram as “full” as possible using exposure correction without clipping on the high or low end. Indeed I noticed that this mostly requires around +1 stop to shift it a little to the right, since otherwise the histogram has nothing in the last 10% or so close to highlight max. I believe that is the same as the exposing for the highlights that you mention.
However, once I activate NLP on the scans, and before I press the conversion button, my carefully filled histogram seems suddenly compressed, with sections in the dark and light areas where no info exists. Why is the shown histogram compressed at that point? Is it because NLP calculates in 16 bit? Otherwise, compressing the histogram seems throwing away information, no? Just wondering.
The usual histogram goes from 0 to 255: the left half covers 0 to 127 and the right half covers 128 to 255, so both halves can show the same amount of information. To show 256 distinct values along the horizontal axis, a histogram needs to be 256 screen pixels wide. Again, both sides of the histogram contain the same number of values, and each of these histogram values represents a range of values in the image file:
1 value for 8-bit files
16 values for 12-bit files
64 values for 14-bit files
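Those per-column widths fall straight out of the division (a small sketch, assuming the 256-column histogram described above):

```python
# Each column of a 256-wide histogram covers (total levels / 256)
# raw values, so higher bit depths pack more values per column.
for bits in (8, 12, 14):
    per_column = (2 ** bits) // 256
    print(f"{bits}-bit file: {per_column} value(s) per histogram column")
```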
Imagine talking with a friend near a noisy street. You need to talk louder in order to make yourself heard and understood. Exposing to the right gets the recorded image out of the noise.
The difference between having 1,536 levels of usable information in a RAW file (vs. having only 192) is very significant, especially considering how we will be stretching these values during conversion. The benefit of these additional levels of information is greater tonal smoothness and less likelihood of posterization during editing.
I’m not disputing that signal-to-noise ratio is also significant - just want to point out that at the RAW level, you’re able to capture more discrete levels of information when exposing to the right, and that this can have a significant and unique impact for camera scanning negatives.
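To make the stretching concrete, here is a sketch (my own toy model, not NLP’s actual conversion math) of how many distinct 8-bit output values you can reach when N captured levels are stretched across the full 0–255 range:

```python
def output_levels_used(source_levels, out_levels=256):
    """How many distinct output values a simple linear stretch of
    `source_levels` input levels can actually produce."""
    used = {round(i * (out_levels - 1) / (source_levels - 1))
            for i in range(source_levels)}
    return len(used)

print(output_levels_used(1536))  # 256 -> every 8-bit value reachable
print(output_levels_used(192))   # 192 -> 64 output values never occur
```

With only 192 source levels, a quarter of the 8-bit output values can never appear, and those missing values are the gaps you see as posterization.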
When digital cameras show you a histogram preview, the histogram is based on the “processed” image - not the RAW file itself. For instance, if you change your picture mode settings (like “Standard” vs “Landscape”), you will see your histogram change in response. So this is just good to be aware of… the histogram you see on your camera screen is definitely inflated, because brightness is being added to the linear RAW file in all of the picture modes.
And that’s fine! You don’t want to clip any data, so it isn’t necessarily a bad thing that the histogram at this stage is a bit exaggerated in its brightness.
(Side note: you should try shooting “manual” or be sure to lock the exposure settings… you want to keep exposure settings on your camera constant while shooting your roll)
When you load a RAW file into Lightroom, it is assigning a profile (usually “Adobe Color”) to interpret the RAW file. So the histogram you see is now based on the Adobe Color profile - which is adding all types of adjustments (tone curves, black point compensation, highlight rolloff) that are all well and fine for digital images, but throw off the accuracy we need to properly convert a negative.
When you open Negative Lab Pro, it is automatically assigning the Negative Lab Pro RAW Profile, which is designed specifically for negative conversions. It uses a linear tone curve, turns off black point compensation, turns off highlight rolloff, and much more. So you are now seeing a truer representation of the underlying data of the image.
So you are seeing exactly what you should be seeing!
Thanks for the inputs. Though I’m not referring to the back-of-camera histogram: I’m shooting tethered and looking at the histogram in Lightroom for each shot. Indeed my thought about ETTR was that the highlights in the scan will become shadows when converted, so to retain as much shadow detail as possible I would need to ETTR.
To obtain as much dynamic range from my scan as possible, I need to “fill” the histogram as much as possible. As Nate points out, however, the histogram I am looking at is a processed interpretation of the RAW data. Would it make sense to tell Lightroom to assign the NLP profile before showing the histogram (provided that this is possible), so I can optimize the histogram for negative conversion while shooting?
Same concept… the default histogram in Lightroom during tether will be with whatever default camera profile you are using (typically “Adobe Color”).
Yes, you can do that if you’d like. When you have tethering set up, there should be an area called “Develop Settings.” Here you can have it set to “Same As Previous,” so after setting the camera profile to “Negative Lab v2.3” on the first capture, it should then automatically be applied to future captures. I’m not sure if this will impact the live preview, but at least as soon as each frame is captured, you can get a better sense of the actual histogram.
Just be careful not to clip! Also, I would advise that you use the same exposure on your digitizations throughout the entire roll – there’s no need to try to adjust each shot to push it as far right as possible. Keeping the same exposure throughout the roll will make certain features (like “Sync Scene”) much more consistent between shots.