Scanning Colour Negatives With a Monochrome Camera

This is just a proof of concept.

Owners of monochrome cameras like a Phase One or Leica might benefit from it. The process takes a bit more time than simply using Negative Lab Pro, which is not needed for it anyway. To add some extra hurdles, I tried the process on my worst-lit (mixed low light, underexposed) negative.

Here is what I used and what I did:

  • iPad as a backlight, displaying three images: one red, one green, one blue (a small script to generate these follows this list)
  • Camera scanning setup
  • Photoshop (and Monochrome2dng)
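If you'd rather script the three backlight images than make them by hand, here is a minimal sketch (Pillow; the 2048x1536 resolution is just an assumption, match it to your iPad's screen):

```python
# Generate three solid-colour full-screen images to display as backlights.
# The resolution is an assumption; use your iPad's native screen resolution.
from PIL import Image

for name, rgb in [("backlight_red.png", (255, 0, 0)),
                  ("backlight_green.png", (0, 255, 0)),
                  ("backlight_blue.png", (0, 0, 255))]:
    Image.new("RGB", (2048, 1536), rgb).save(name)
```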
  1. Scan the negative, one shot per red, green and blue backlight
  2. Simulate monochrome camera: Convert the shots to monochrome with Monochrome2dng
    note: this step is not necessary for people with monochrome cameras
  3. Invert and convert the 3 monochrome DNGs to 16-bit TIFF and add them to Photoshop as layers
  4. Premix the image: use lower opacity percentages for the R and G layers
  5. Add curve and colour balance adjustment layers and work them until the result is close to what you want
  6. Collapse the layers and export (a scripted sketch of steps 3, 4 and 6 follows this list)
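For anyone who prefers scripting to Photoshop layers, steps 3, 4 and 6 can be approximated like this. This is a minimal sketch, not my exact workflow: the file names and premix weights are illustrative placeholders, and it assumes the scans were exported as 16-bit single-channel TIFFs (numpy and tifffile):

```python
import numpy as np
import tifffile

# (file name, premix weight): lower weights for the R and G layers (step 4);
# both the names and the weights are placeholders, tune them to taste.
shots = [("scan_red.tif", 0.8), ("scan_green.tif", 0.9), ("scan_blue.tif", 1.0)]

channels = []
for name, weight in shots:
    img = tifffile.imread(name).astype(np.float64)
    inverted = 65535.0 - img            # step 3: invert the negative
    channels.append(inverted * weight)  # step 4: premix

# Stack the three weighted channels into one RGB image and export (step 6).
rgb = np.clip(np.stack(channels, axis=-1), 0, 65535).astype(np.uint16)
tifffile.imwrite("positive.tif", rgb)
```

Step 5 (curves and colour balance) is still easier to judge by eye in Photoshop.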

Caveats

  • Keep your setup untouched while taking the shots to prevent misaligned frames
    Note: the result I got looks like a Polaroid autochrome when pixel-peeped, probably due to step 2
  • I’m not that good with Photoshop; others might get better-balanced results
  • I also shot CMY instead of RGB, but have not yet processed that lot

Update: The CMY road led nowhere near something I liked.

Thanks for the update, @Digitizer

I’ve gone down a similar path before, and also did not find that it improved the results.

The results may be better with a truly monochromatic sensor, but with typical color sensors, which use a mosaic of filtered photosites (like a Bayer Array or X-Trans), the profiling of the sensor is hugely important.

For instance, you can imagine a test where you shoot a color target with a digital camera with a typical Bayer Array, and then create a RAW profile based on that target. It’s possible to get nearly perfect color with this technique, due to the Forward Matrices used to define the right relationship between the R, G and B sensors on the array.

Meanwhile, you could also take that same shot of the color target three times, each time with a filter over the camera lens that only lets in R, G, or B light. Then follow your process to make each shot monochromatic and then merge. My expectation is that this will result in very inaccurate and poorly distinguished colors compared to the first shot, because you need the Forward Matrices to define the spillover between the R, G, and B channels.
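To put rough numbers on that spillover idea, here is a toy sketch; every value is invented for illustration, not measured from any camera:

```python
import numpy as np

# Hypothetical channel-overlap matrix: each row is one sensor channel's
# response to pure scene R, G and B (values invented for illustration).
M = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.75, 0.15],
              [0.05, 0.20, 0.75]])

scene = np.array([0.9, 0.2, 0.1])        # a saturated reddish patch

raw = M @ scene                          # what the sensor actually records
profiled = np.linalg.inv(M) @ raw        # a profile's matrix undoes the mixing

print("scene:       ", scene)                 # [0.9  0.2  0.1 ]
print("naive merge: ", raw.round(3))          # [0.755 0.255 0.16] - desaturated
print("profiled:    ", profiled.round(3))     # back to [0.9 0.2 0.1]
```

The naive merge leaves the spillover baked in, which is exactly the desaturated, poorly separated color I'd expect from the three-filtered-shots approach.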

At least that’s my theory as to why it doesn’t work as well in practice as it seems it should!

-Nate