Digitizing a B&W Negative While Preserving Its Subtlest Textures

I was very kindly welcomed to this forum and was advised to create a “separate topic”. I would never have dared to do it on my own. So here it is.

Trained in professional silver printing in one of the best Parisian B&W laboratories in the 60s, I sought to achieve the best possible results when digitizing 135 negative films. However, I was disappointed by the results I obtained with the various processes I tested. Digitization with a DSLR seemed the most promising method, yet something wasn’t right…

My research on the internet helped me understand that the cause of my disappointment was the diffuse light, which is nonetheless used, more or less discreetly, by all processes. Collimated point light thus became the only possible recourse, even though it is well established that its use is not without drawbacks. Would I be able to master it?

Spoiler alert: yes! But it’s anything but a simple and quick solution. On the other hand, what a pleasure to obtain A3+ prints that I am not ashamed of :wink:

I set up an ugly monster for a very small budget: less than 250 USD with second-hand equipment (excluding the DSLR, which is also second-hand).

I have just put online the complete description of my battle:

(Also available in PDF: [https://bw-film-scanning.oguse.fr/document/bw-film-scanning-excerpt-en.pdf])

I hope this can help or give ideas to some.
And my apologies for my poor English.


Hi @Alain_Oguse

Q: Do you actually use Negative Lab Pro to convert your negative scans? I see that you used darktable and would be interested in reading about reasons to use one over the other, apart from licensing cost.

Using monochromatic (green) light for scanning greyscale negatives seems like a smart move to me, even though the influence of the red and blue photosites on apparent quality of the grain and image is unclear to me at the moment. They might amplify or attenuate the grittiness, depending on how the file is de-mosaiced. Do you have any information about that? (Changing algorithms in darktable might reveal differences)

Bonjour :wink:

I chose Darktable because it’s Open Source. Thanks to this, we know exactly what it’s doing. It’s not a black box that develops our images as it sees fit without telling us what it’s doing.

The developers and the community have always answered my questions. Without them, I wouldn’t have been able to improve my results.

This is a very important question, which I was able to answer only with their help.

In fact, the photosites never gave me any problems in my tests once I switched from a 12 MPX to a 36 MPX sensor. For 135 film at 400 ISO in B&W, such a resolution is more than sufficient and glosses over the problem. If I chose green light, as I explain in detail in my documents, it’s essentially to get rid of chromatic aberrations, which proved very penalizing due to my condensers and my APO RODAGON lens, neither of which is coated :frowning:

And on this subject, Aurélien PIERRE, one of Ansel/Darktable’s developers, came to my aid. Here’s the configuration we ended up with.

  • In Preferences/Processing/Pixel interpolation, select: bicubic.
  • Optimize the use of green light only:
    – In the demosaic module, change the PPG method (default) to VNG4.
    – Download the IdentityRGB-elle-V2-g10.icc2 profile and place it in:
      Windows: C:\Users\[YourName]\AppData\Local\Ansel\color\in\
      Linux: ~/.config/ansel/color/in/ and /out/
  • In the Input Color Profile module, select this new profile as input color profile AND working profile.
  • In the Color Calibration module, CAT tab, select Adaptation: None; in the Gray tab, set the Green channel to 1 and the other two (R and B) to 0.

And to conclude: “[Configured this way] if you scan under quasi-monochromatic green light, the sharpness of your scan will be maximum because only the green photosites of the camera sensor will be used. In practice, this is equivalent to completely removing trichromy from the graphic pipeline.”
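To make the effect of this configuration concrete, here is a small numpy sketch (my own illustration, not the Ansel/darktable code; it assumes an RGGB Bayer layout) of what “using only the green photosites” amounts to:

```python
import numpy as np

def green_only(bayer):
    """Extract only the green photosites from an RGGB Bayer mosaic.

    In RGGB, each 2x2 cell holds greens at (row 0, col 1) and
    (row 1, col 0). We average the two greens of each cell into a
    half-resolution greyscale image; red and blue sites are ignored.
    """
    g1 = bayer[0::2, 1::2].astype(float)  # greens on even rows
    g2 = bayer[1::2, 0::2].astype(float)  # greens on odd rows
    return (g1 + g2) / 2.0

# Tiny synthetic mosaic: green sites = 100, red/blue sites = 0.
mosaic = np.zeros((4, 4))
mosaic[0::2, 1::2] = 100
mosaic[1::2, 0::2] = 100

out = green_only(mosaic)
print(out.shape)   # (2, 2): half the resolution in each direction
print(out.mean())  # 100.0: red/blue sites contribute nothing
```

Under green light the red and blue sites record almost nothing anyway, so discarding them loses little and avoids interpolating colour information that isn’t there.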

I apologize for my poor English.

I am fascinated by all this, though much of it is far outside my knowledge zone. I want to read and understand more as I have a truly massive tranche of B&W negs to get through. Mostly Tri-x, some Pan X, old 3200 and HP5. Anything I can glean to help me complete the job better is a bonus, especially for the selects.

ADDED EDIT: I am reading the PDF, too! Just trying to grasp concepts and ask related questions that came to mind as I started… back to the original post…

At face value, it’s clearly got some advantages, though it requires more specialized components for software and light/filtration, and more time per image.

To anyone wanting to discuss the practicalities, theory, or real life experience …

  1. What are the downsides to the process? Lack of automation? Other things?
  2. Are there any benefits to doing this without a green light (i.e. white light that most people have) but still using only the green photosites of the color filter array? Or perhaps using a traditional green filter on the lens?
  3. How does this compare to using a monochrome sensor with white light? (Options there are, of course, extremely limited.)
  4. Or using RGB band blends or green on a monochrome sensor?
  5. Or using pixel shifting? In theory this accomplishes some of the same effects as a monochrome sensor, but at a great time penalty even on the fastest models with built-in processing.
  6. What is the effect on this using a higher MP camera, as many pro models today use roughly 45mp? Or even larger?

And your english is great, @Alain_Oguse … truly. Far better than any french I feign remembering.

As far as I understand @Alain_Oguse, the goal is to bring forth, in converted images, the grain of B&W film.

Generic approach: In analog/chemical photography, this used to be done by using directed light, whilst diffuse light was recommended for colour.

Specific add-on: In order to improve the effect, green light of suitable quality leads to fewer recorded photons on the red and blue sensor pixels. This can (and seems to) reduce destructive interference from the red and blue pixels and from interpolation in the de-mosaicking process. Using coloured light also makes chromatic aberrations less visible, or makes them go away altogether.

In reference to your questions @SSelvidge, we need to do two things that can be approximated by a) using a band-pass green filter like the Hoya X1 and b) moving the backlight away from the negative so that light reaches the negative only from narrow angles rather than wide ones.

Trying to approximate the effect - e.g. if you have no suitable B/W enlarger to modify - is best done with a short tele macro lens. See this post for a simple test with white light.

I also tried R, G and B light from an iPad to construct a colour image from B&W captures, also approximated as documented here.


Digitizer, as luck would have it, I do have a spare Beseler 23C XL :sweat_smile:, which I just posted. Thanks for the reminder. I use one as my main scanning rig for critical items.

I am not yet sure about adding another to my space, nor about modifying the one I use, before I really grasp the ease and end value of the process described.

It sounds like I should try a green filter and, if I can, a green light. I am not quite ready to buy a monochrome camera, but I imagine that would be most phenomenal with this process, based on my primitive understanding thus far.

But wow! is this all very interesting, because my main goal is creating great copies to print, not just show digitally. Darkroom access and large prints are challenging these days, after all… and costly. And as always with me, I am scanning for a large project much more than I am scanning for myself.

“Monochromatic” cameras usually have sensors that record colours like e.g. panchromatic films. If they were truly monochromatic, they’d only record e.g. green and anything red or blue would simply not be recorded and appear as black blobs. In order to eliminate effects like chromatic aberrations, monochromatic light (or light with limited spectral width) could be used.

Now, if you put a Hoya X1 onto your light or lens, you should be able to see reduced chromatic aberrations. In order to simulate directed light, you could use a long black cardboard tube with the light at its far end. Nevertheless, the light should be seen by the complete sensor. If it weren’t, the scan would show dark corners.
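A bit of geometry shows how effective such a tube is. This sketch is purely illustrative (the 500 mm length and 50 mm opening are arbitrary numbers of mine), computing the largest off-axis angle at which light leaving the tube can still reach the negative:

```python
import math

def half_angle_deg(tube_length_mm, opening_diameter_mm):
    """Largest angle from the optical axis at which light exiting a
    tube of the given length and opening diameter can hit the film."""
    return math.degrees(math.atan(opening_diameter_mm / (2 * tube_length_mm)))

# A 500 mm tube with a 50 mm opening keeps all light within ~3 degrees
# of the axis -- already a rough approximation of directed light.
print(round(half_angle_deg(500, 50), 1))  # 2.9
```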

I perhaps also do not understand what the monochrome cameras see. I assume it was all light without any color taken into account. Just the brightness values.

But that seems tangential to the main topics and possibilities of @Alain_Oguse’s research/experience. I was just theorizing that removing the color filter array would make full CRI 100 light, such as from a halogen bulb, very useful. So maybe monochrome with green light or filtration would be even better is what you’re saying?

I can see a green filter in my future either way.

As for light coverage, the long lens + some light enhancement sheets should help quite a bit. And I wager controlling/testing the distance of that light and its relative diffusion.

“Monochrome” cameras see, as you guessed, brightness only, no matter what colour or mix of colours the light is.

Whether you use a camera that can or cannot record colours does not really matter. The image will be recorded as the camera sees it. If one only processes B&W (greyscale) material, not recording colours will provide higher resolution.

@Alain_Oguse is taking the long road to get what he wants. Whether that is the right road for you is something you can find out by trying to replicate the gear modifications. Try a green LED lamp in your other Beseler and point your camera upwards like so: Let's see your DSLR film scanning setup! - #35 by Digitizer
(Please note that the picture only shows the general idea. In reality, the lamp house is much closer to the camera and the enlarger’s bellows surrounds the lens.)

Quick Test: Scan taken with green backlight from an iPad as seen in Lr at 300%

Thank you very much. But I don’t dare let you think that this is really MY skill. I’ll pass on your compliments to my favorite AI. I know her well, she’s very sensitive, this will please her :wink:

I’d like to stress that grain is not my ultimate goal. It is merely a “whistle-blower”. My real aim is to render the finest textures: skin, fabrics, frosted surfaces… The restitution of these textures is harder to assess than the appearance of the grain, so that is what alerted me at first. Then I got back to the main goal.

I asked myself this question a lot in the past. But today, with the solution Aurélien PIERRE gave me (described above), I am convinced that a monochrome camera would bring nothing more.

You’re right on target. That’s exactly what I’ve been looking for. And it’s true that it’s not trivial. So much effort can only be compensated by the pleasure of getting good prints. It’s much more gratifying than scrolling through dozens of images on a smartphone. Much, much more!

I don’t think I quite understand what you mean (perhaps because of the language). But I’m afraid you’re asking the wrong questions. Can you give me some details?

I understand. And if we think of the grain, its size and its absence, as analog pixels, we need the means to reproduce them in order to reproduce the texture. Therefore, a clean capture of the grain with as many pixels as possible provides the best starting point for textures, unless the gear or software used eliminates it.

Recording B&W images taken under one colour of light is a good basis for eliminating chromatic aberrations, both lateral and longitudinal. Using green light for sensors with Bayer pattern CFA helps preserve resolution, although it just provides something like 9 M pixels from a 36 M pixel sensor. Sensors recording all colours per pixel should deliver much higher resolution.

From a physics point of view, blue light should be better than green light (shorter wavelength), but the blue response of camera sensors tends to be relatively poor. Hence, green is the optimal compromise between the resolution achievable due to wavelength and the sensor’s response.

Printing at 200 pixels/inch requires 4000x6000 (=24 M) pixels for a 20x30 in print. If we want bigger prints at that pixel density, we need bigger sensors: 100 M pixels should be good for 40x60 inches. For even bigger prints, use Topaz and/or get Phase One or similar gear… or stick to analog.
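This print-size arithmetic is easy to check with a small helper (a generic sketch, not tied to any particular gear):

```python
def pixels_needed(width_in, height_in, ppi):
    """Pixel dimensions required to print at a given density."""
    return (round(width_in * ppi), round(height_in * ppi))

def megapixels(dims):
    return dims[0] * dims[1] / 1e6

print(pixels_needed(20, 30, 200))              # (4000, 6000)
print(megapixels(pixels_needed(20, 30, 200)))  # 24.0 MP at 200 ppi
print(megapixels(pixels_needed(40, 60, 200)))  # 96.0 MP, i.e. roughly 100 M
print(megapixels(pixels_needed(20, 30, 300)))  # 54.0 MP if you insist on 300 ppi
```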

Thank you for opening my eyes to this issue that I had totally overlooked. But now I’m faced with new questions. How is it possible that I’m seeing such an improvement in sharpness with a green LED while the definition is dropping by 75%, from 36 to 9 MPX? Is it possible for chromatic aberrations to be so violent?

Confused, I searched the net and found this information I’d already had in mind, but which I’d never linked directly to definition: for a Bayer matrix, 50% of photosites capture green, 25% red and 25% blue. So how do you get to only 9 MPX? Why not 18 MPX?

The important point for me is that with my old Nikon D810 and a green LED I get more than enough sharpness. And so much so that in a significant proportion of my scans I have to add “grain softening” (i.e. blurring) in post-production to give the image the “velvety” quality you’d expect. Otherwise, the grain becomes too present, too rough and “falls off the other side of the horse”, damaging the textures.

50% green pixels, yes, but they are separated by an “empty” pixel. This cuts vertical and horizontal resolution in half, and 1/2 * 1/2 = 1/4. This holds whether we use green light or white light; interpolating the missing pixels can improve the apparent resolution, but it’s apparent resolution only, not true resolution, alas.
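In numbers, for the 36 MPX Nikon D810 mentioned above (7360x4912 photosites; the exact dimensions are my addition, taken from Nikon’s published specifications):

```python
w, h = 7360, 4912                    # Nikon D810 raw sensor dimensions
total = w * h                        # all photosites
greens = total // 2                  # half of a Bayer mosaic is green
regular_grid = (w // 2) * (h // 2)   # one green per 2x2 cell, regular grid

print(total / 1e6)         # 36.15232 -> ~36 MPX of photosites
print(greens / 1e6)        # 18.07616 -> ~18 M green photosites exist...
print(regular_grid / 1e6)  # 9.03808  -> ...but only ~9 MPX on a regular grid
```

The 18 M greens sit on a diagonal (quincunx) lattice rather than a regular grid, so their effective resolution lies somewhere between the 9 MPX regular-grid figure and 18 M, depending on how charitably one counts the diagonal sampling.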

For best resolution, a sensor without a mosaic filter would be best, e.g. in a monochrome-type camera, or the stacked pixels of Foveon sensors.

RAW image as recorded in green light:
G-G-G-G-G-G-
-G-G-G-G-G-G
G-G-G-G-G-G-


A full frame 24 Mpixel panchromatic sensor can resolve about 80 lp/mm. An APS-C Pentax Monochrome resolves about 130 lp/mm. Whether these theoretical limits can be reached remains to be seen - but only with the best lenses and software.
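These limits follow from the Nyquist criterion: one line pair needs at least two pixels. A quick check, assuming roughly 6000 px across the 36 mm of a 24 M full-frame sensor and 6192 px across the ~23.5 mm of the APS-C Pentax (approximate published figures, my assumption):

```python
def nyquist_lp_per_mm(pixels_across, sensor_width_mm):
    """Nyquist resolution limit: one line pair requires two pixels."""
    return pixels_across / sensor_width_mm / 2

print(round(nyquist_lp_per_mm(6000, 36.0)))  # 83  lp/mm, 24 M full frame
print(round(nyquist_lp_per_mm(6192, 23.5)))  # 132 lp/mm, APS-C monochrome
```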

Schneider Kreuznach publishes MTF charts with curves of up to 80 lp/mm, e.g. for this lens that looks good for scanning. On eBay I’ve also seen the Macro Varon, a lens that rates high in this test.

And then there’s Fuji X-Trans, a repeating pattern of 6x6 pixels, so a 36 pixel matrix with 8 blue, 8 red and 20 green. Heavier on the green than the Bayer sensor and no anti-alias filter. DxO have the best explanation of how this works that I’ve seen, and it’s only fairly recently that they’ve been prepared to handle Fuji X-Trans files.

"Note that with X-Trans, any given row or column is capable of “seeing” all three colors. By contrast, an individual row or column on a Bayer filter is always missing either a red or blue pixel: "

Not saying it’s better, but it’s different.
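For reference, the commonly published 6x6 X-Trans tile can be checked directly; this little script (using the layout as it is usually documented, so treat it as illustrative) confirms the 20/8/8 split and the row/column property from the quote:

```python
import numpy as np

# One 6x6 tile of the X-Trans colour filter array, as usually documented.
xtrans = np.array([
    list("GBGGRG"),
    list("RGRBGB"),
    list("GBGGRG"),
    list("GRGGBG"),
    list("BGBRGR"),
    list("GRGGBG"),
])

print((xtrans == "G").sum())  # 20 green
print((xtrans == "R").sum())  # 8 red
print((xtrans == "B").sum())  # 8 blue

# Every row AND every column of the tile sees all three colours,
# which a Bayer row/column never does.
for line in list(xtrans) + list(xtrans.T):
    assert {"R", "G", "B"} <= set(line)
```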

Thanks for adding that info. X-Trans is used in Fuji’s lighter cameras.
GFX cameras use Bayer CFA sensors.

Yes, Fuji have gone for a headline maximum of 40MP with their newest X-Trans models and with that comes the smallest pixel pitch of 3.0 µm for any APS-C camera. That’s the same as the 25MP Micro 4/3 Panasonic GH6 but otherwise only 1" sensor cameras have a pixel pitch less than that.
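Pixel pitch is just sensor width divided by pixel count. Assuming ~7728 px across a 23.5 mm APS-C sensor for the 40MP X-Trans models and 7360 px across the D810’s 35.9 mm (approximate published figures, my assumption):

```python
def pixel_pitch_um(sensor_width_mm, pixels_across):
    """Pixel pitch in micrometres."""
    return sensor_width_mm / pixels_across * 1000

print(round(pixel_pitch_um(23.5, 7728), 1))  # 3.0 um, 40MP APS-C X-Trans
print(round(pixel_pitch_um(35.9, 7360), 1))  # 4.9 um, 36MP Nikon D810
```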

I don’t own one but I’m not convinced that in normal resolution tests that would be a match for the 36MP Nikon D810 though, and it wouldn’t match the dynamic range. Might be relevant when discussing this green lit monochrome stuff though.

Fascinating proposition, @Harry

This conversation has been helpful and I want to test some ideas next week. Need to find a cheap green filter though at minimum. A green light in the correct wavelength will be harder to acquire.

@Alain_Oguse @Digitizer thanks for going so deep on this topic. A lot for me to chew on before I can respond in any manner that has worth!