Using a blue filter when photographing negatives?

The guide contains the following statement:

"the orange mask makes the blue channel darker. So this channel in particular is prone to posterization"

In other discussions I have found the suggestion to use a blue filter in front of the DSLR's lens, the idea being that it shifts the color balance in favor of the blue channel. However, when I look in Lightroom at the histograms of negatives captured with my DSLR without a blue filter, I cannot detect a significant shift in any of the three channels. Therefore I'm not sure whether a blue filter would be helpful or rather a bad idea.

Has anybody used such a filter, or does anybody have an opinion about using one for "scanning" film negatives with a DSLR?

I've used a color head from an enlarger and tried to compensate for at least some of the orange mask by setting the C/M/Y filters accordingly. While the film base was as neutral as I could make it, the converted images did not look much different from the ones taken without corrections. NLP does a pretty good job at getting you a decent starting point, no matter what kind of (good) light you use. My conclusion was that it was not worth the effort.

I’ve also tried with an inverted shot of the film base as backlight on an iPad. Same conclusion.

As for the idea of getting the blue channel out of the noisy corner, a bit of overexposure will do that too.
Expose to the right, keeping highlights a bit away from the right edge of the histogram. Colour negatives tend to have low contrast and therefore allow for some overexposure.
You can find some background info/hints on related techniques by searching for “ETTR” and “UniWB”.
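
If you want to check the actual raw channels (rather than the camera's JPEG-based histogram), a small script can do it. This is just a minimal sketch, assuming the Python package rawpy and a Bayer-pattern raw file; the file name is a placeholder to adapt:

```python
# Minimal sketch: per-channel raw statistics to judge ETTR headroom.
# Assumes a Bayer-pattern raw file readable by rawpy; file name is a placeholder.
import numpy as np
import rawpy

with rawpy.imread("negative_0001.ARW") as raw:
    data = raw.raw_image_visible.astype(np.float64)
    colors = raw.raw_colors_visible                       # per-pixel CFA color index (0..3)
    black = float(np.mean(raw.black_level_per_channel))   # simplification: mean black level
    white = float(raw.white_level)

    for idx, name in enumerate(raw.color_desc.decode()):  # e.g. "RGBG"
        signal = (data[colors == idx] - black) / (white - black)
        print(f"{name}{idx}: median={np.median(signal):.3f}  "
              f"p99.9={np.percentile(signal, 99.9):.3f}  "
              f"clipped={np.mean(signal >= 1.0) * 100:.2f}%")
```

The closer the brightest channel's 99.9th percentile gets to 1.0 without the clipped fraction rising, the more of the available headroom the exposure is using; the median of the blue channel tells you how far it sits from the noisy left end.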

A lot of ideas and tests are documented in this part of the forum.

That's interesting, because my attempts to improve the result by using a blue filter didn't show any significant differences. And as I mentioned before, the histogram also didn't indicate any need to shift the colors: without the filter all channels were exposed quite evenly, but with the blue filter the balance was significantly off.

The other plugin I have used so far requires converting the RAW files to linear TIFFs using a dedicated executable. The result is a quite big file that looks very greenish. I have experimented with UniWB in the past and created a white balance preset for it, which is the only reliable way to assess the exposure of the individual color channels. However, I wouldn't want to use that WB setting for actually taking the shots. My approach would be to take a shot of an unexposed (black) part of the film, which basically shows the pure orange mask, and use the UniWB white balance setting in the camera to set the exposure correctly. This would be the ETTR approach based on the maximum brightness possible. Of course, individual photos can be darker and lack these brightest possible values. Ideally you would use UniWB again for each photo to figure out the optimal exposure based on the histogram.

Thanks for the link to the page (which I have already studied of course… :wink:).

I use the UniWB setting while taking the shots. It makes the previews look green instead of orange but the sensor gets exactly the same amount of light (at equal exposure)…
After manual WB, both shots look the same. :grin:

Sure, it will look the same once the preview is updated in LR. However, my latest idea was to use the camera's digital "invert color" filter and then store the images in RAW+ mode, which gives one JPG with already inverted colors and a RAW file alongside. I haven't yet tested the "invert color" filter with negatives, though. I could imagine that it doesn't automatically remove the orange mask, but I'll have to test that. For the RAW file I would actually go with UniWB, as this provides the only safe way to assess the exposure without downloading the image to the computer. Interesting to see that others are using UniWB as well. :wink:
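
For what it's worth, my expectation would be that a plain per-channel inversion keeps the complement of the orange mask as a blue/cyan cast; the mask only disappears if the frame is first normalized against the film base. A tiny numpy sketch with made-up linear RGB values, just to illustrate the arithmetic (not what NLP does internally):

```python
import numpy as np

# Made-up linear RGB values (0..1). The film base carries the orange mask:
# strong red/green transmission, weak blue.
film_base = np.array([0.80, 0.55, 0.30])
# A patch that should end up as neutral mid-gray (here: half the base transmission).
patch = film_base * 0.5

naive = 1.0 - patch                    # plain "invert color": complement of the mask remains
corrected = 1.0 - patch / film_base    # divide by the film base first, then invert

print("naive inversion:", naive)       # [0.6   0.725 0.85 ] -> blue/cyan cast
print("base-corrected :", corrected)   # [0.5   0.5   0.5  ] -> neutral
```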

I have read an article by someone who said that he got definitely better color separation when using a blue filter. But he wasn't using NLP, and I don't know what he was using as a light source. To me this makes sense, though: without a filter, the blue pixels in the camera get underexposed.

I think it depends on the dynamic range of the camera whether this is a problem or not.

The best way to find out if the blue filter works for you is to simply test it…

There might be a catch though.

Adding humps to the filter curve changes the balance not only between the R, G and B primaries, but also between all the hues in between, which can impair the desired colour rendering (as in lowering CRI and TLCI values). Digital cameras are often thought of as separating R, G and B cleanly, but "red" pixels also record some green and blue light (and the hues in between), etc.
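
To make that concrete with made-up numbers: per-channel white balance gains can undo a filter's effect at the pure R, G and B bands, but not for the hues in between, because the sensor channels overlap. A rough numpy sketch, with purely illustrative response values rather than measured data:

```python
import numpy as np

# Illustrative only. Columns = five narrow-band stimuli (red, orange, green, cyan, blue),
# rows = sensor channels R, G, B. Off-diagonal entries are the crosstalk mentioned above.
sensor = np.array([
    [0.90, 0.60, 0.15, 0.05, 0.03],
    [0.10, 0.40, 0.85, 0.45, 0.10],
    [0.02, 0.05, 0.15, 0.55, 0.90],
])

# Hypothetical blue filter: per-band transmission rising towards blue.
blue_filter = np.array([0.35, 0.45, 0.60, 0.85, 1.00])

plain = sensor                       # response without the filter
filtered = sensor * blue_filter      # response with the filter (columns attenuated)

# White-balance gains chosen so each channel matches again at its own primary band.
primaries = [0, 2, 4]
gains = np.array([plain[c, b] / filtered[c, b] for c, b in enumerate(primaries)])
rebalanced = gains[:, None] * filtered

# Zero where each channel meets its own primary band; the in-between bands
# (orange, cyan) and the crosstalk terms no longer match.
print(np.round(plain - rebalanced, 3))
```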

Adding a blue filter might help in some setups; whether yours is one of them is easy to find out: try it.

One interesting detail about my RAW files from negatives: I'm using the software "Fast Raw Viewer", which is one of the rare programs that lets you view the actual RAW data instead of the JPG interpretation, and it can show the histogram of the RAW file rather than only that of the embedded JPG preview. I have inspected some of my shots with it, and in fact the blue channel is a bit "behind" the other two, but not by far. I also saw that my attempts with the blue filter were way over the top in terms of compensation for blue. Of course you could try to find a blue filter that is just strong enough to give you the most uniform channel levels, but then again: there doesn't seem to be much indication from my tests or from @Digitizer that this would improve your conversion results in a significant way. Considering the potential problems a color filter could add, I'm rather convinced and determined to continue without one for the remaining negatives.
It's as always: if you really see a problem with the quality of your pictures, go for this potential solution, but not just for the sake of a more evenly looking histogram that might not have any visible impact on your results. Using the RAW format when shooting your negatives seems to provide enough headroom for your captures. Keeping it simple is often key, and a filter can also degrade the optical performance of your lens. I prefer the cleanest signal path and think I'm getting very good conversions this way.

…there is one source of light with raised levels of blue: The sky…

So your proposal is to use the sky as your light source? :wink:

You are not totally wrong here.

The Pakon 135 scanner is known for being great at scanning film negatives, but it's not good at all when it comes to slides. The reason? It uses a blue light source, precisely to compensate for the negative's warm tone. So, although I haven't tried this yet, I am thinking of setting my video light to a colder white balance when scanning negatives to see how it impacts the results.

Before using NLP I was using a double CTB gel on my light source (flash). This was only to prevent clipping, because the dynamic range of the negative with its orange mask was too large (a very bright red channel and a dark blue channel).
Later I did comparisons with NLP (gel vs. no gel) and saw very little difference.

Hope it helps :wink:

The main problem is not that you cannot get the right balance without the blue filter. You can, but the huge orange shift requires a huge white balance correction. For example, my Sony a99 reports a white balance error if I try to set the WB manually on the unexposed part of Kodak Vision film. If you go to Lightroom to set the WB properly, it runs up against its extreme positions; only C1 can shift the WB far enough. But the result is still not that good: as we all know, the RAW format has its limitations, and all extreme corrections cause unwanted results. My Sony a99 allows me to push or pull my RAW files only about ±1 stop without unwanted colors or grain.
If we look at the histogram while changing the color balance of the negative, we observe the same effect: we are pushing or pulling individual channels. So what happens when green and red are fine, but the blue channel needs to be lifted? We push it up and pull unwanted noise out along with it.
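
To put rough numbers on that (made-up values and a simplified photon-noise model, just to illustrate): multiplying a dim blue channel in post raises the noise together with the signal, whereas actually giving that channel more light improves the signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up example: the same blue channel recorded dim (mean 200 counts) and
# two stops brighter (mean 800 counts); shot noise grows only with sqrt(signal).
dim = rng.normal(200, 15, 100_000)
bright = rng.normal(800, 30, 100_000)

# Pulling the dim channel up 4x in post (a big WB shift) scales noise along with signal.
pushed = 4.0 * dim

print("pushed-in-post blue SNR:", np.mean(pushed) / np.std(pushed))    # ~13
print("exposed-brighter blue SNR:", np.mean(bright) / np.std(bright))  # ~27
```
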
Color separation should also be better with blue-filtered light: the cool-tinted parts (blue and cyan) of the negative become more saturated and luminous if we use cool light rather than, say, a cool lens filter. The film has a color-masking mechanism built into its emulsion, and it is better to use that mechanism than to block the unwanted tint in post or with a lens filter. If you look at a blue light source through an orange filter, the filter blocks that light and you see a dark region; that is color masking. What kind of masking do you get by just dragging your WB to its extreme positions?
The last argument for blue light is that modern non-flatbed film scanners have an RGB LED that recalibrates each time you put a strip of film in. It is much better to calibrate your light than to pull things out in post. So for different types of film you will actually need different filters.
The Negative Lab team made a great tool to bring our negatives to life, because they had tons of good-quality references, I guess.
These are just my thoughts about it, and I am now working on my home DSLR-scanning setup. I want to improve it and have just ordered some cyan-tinted transparent polycarbonate to run some tests.