Each shot is converted differently

I’m scanning a roll that was shot with the same camera settings, under the same lighting conditions, with only seconds to a couple of minutes between frames.

In Lightroom, I use the White Balance Selector tool outside of the frame to set the white balance. Then I crop the photo so it doesn’t show any border, and convert it using NLP:

  • Source: Digital Camera
  • Color Model: None
  • Pre-Saturation: 3
  • Border Buffer: 0%

After conversion, each shot looks totally different. One has a magenta tint, another is too cold, one is more contrasty than the next, and so on. I’m spending a lot of time adjusting each shot to look similar to the others, but I’m wondering why NLP changes them so much when they should be pretty much identical.

I’d like to understand how this works, and whether there’s an option to somehow turn off these automatic adjustments, if that’s what is happening.

Thanks.

Hi,

What you are looking for is the Sync Scene feature.

If you still see variances after using Sync Scene, then the variances are being caused by other issues (for instance, differences in exposure during original capture or digitization).

Hope that helps!
-Nate

Checklist:

  1. Were all your film images exposed in camera using the exact same ISO, f/stop, and shutter speed?
  2. Were all your digital raw files of the negatives exposed in camera using the exact same ISO, f/stop, and shutter speed?
  3. Have you tried Nate’s advice for your situation?

  1. All are exactly the same: ISO 200, f/5.6, 1/60s. I also used the same white balance for all shots.
  2. Not sure how this is different from point 1? I shoot all film negatives using those camera settings.
  3. Yes, but it didn’t work. I got varying results, sometimes better, sometimes even worse.

By FILM IMAGES, I meant the original film negative exposures.
By DIGITAL RAW FILES, I meant the digital camera exposures of the film negatives.

I used to work in the school portrait industry, where in the early 2000s I ran our scanning lab. We trained our photographers to lock down their lighting, lock down their exposure, and submit 100’ rolls of film with every frame exposed exactly the same way: the same subject-to-film-plane distance and the same subject-to-lighting-instrument distances. That ensured consistent negatives throughout one camera’s output.

Once we got a general color balance off a gray card, we locked in the density/brightness and RGB values that worked. Then we scanned the whole roll of film at those same settings.

NLP can do essentially the same thing, but you may have to do it after processing: open the “match frame” in NLP first, then select all the files you want to match to that frame, invoke NLP, click Sync Settings or Sync Scene (mouse over each button to learn what it does), and finally click Apply.

Oh, gotcha. Then:

  1. No, I was adjusting shutter speed according to the camera’s light meter.
  2. Yes.

Could this be the problem that causes the color shift and increased contrast in some shots? Some colors are so far off that when NLP syncs them, they turn completely blue, as if I had cranked the white balance all the way to the cool end.

Exposure affects color balance on film. There is a phenomenon known as slope. Basically, underexposure might go dark and red, while overexposure goes light and cyan. This is because the three different emulsion layers respond to light with different sensitivity, depending upon exposure. If you plot the curves for red, green, and blue sensitivity, you find that they are not exactly parallel as exposure varies over a -2 to +3 stop range.
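
If it helps to see the idea in numbers, here is a toy Python sketch. The gamma values are invented for illustration (real layer slopes are measured, not guessed), but the mechanism is the same: each layer’s density is roughly gamma times log exposure, so if the gammas differ, a neutral gray drifts off neutral as exposure moves away from normal.

    # Toy model of "slope": each emulsion layer has a slightly different
    # gamma (slope of density vs. log exposure). The values are made up
    # for illustration, not real film data.
    GAMMAS = {"red": 0.62, "green": 0.65, "blue": 0.68}

    def density_shift(channel: str, stops_off: float) -> float:
        """Density change of one layer for an exposure error in stops.

        One stop doubles exposure, i.e. log10(2) ~= 0.301 in log-exposure
        units, and density moves by gamma * delta(log10 E).
        """
        return GAMMAS[channel] * stops_off * 0.301

    for stops in (-2, -1, 0, 1, 2, 3):
        r, g, b = (density_shift(c, stops) for c in ("red", "green", "blue"))
        # If the three gammas were identical, this skew would be 0 at
        # every exposure; because they differ, the cast grows with error.
        print(f"{stops:+d} stops: red-blue density skew = {r - b:+.3f}")

At 0 stops the skew is zero, and it grows in opposite directions for under- and overexposure, which is exactly the dark/red versus light/cyan drift described above.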

When I ran a scan lab, we had nine Kodak Bremson HR500 scanners. Those were $50,000 scanners that used the Kodak DP2 database as a render engine.

The scanners had to be calibrated for slope, using negatives exposed under controlled conditions in a studio at -2, -1.5, -1, -0.5, normal (0), +0.5, +1, +1.5, +2, +2.5, and +3 stops. Every film stock had to have its own set of “film terms.” We would print tests and adjust color and brightness for each test negative until all the images matched as closely as possible. The results would be used to create lookup tables for the scanner, which used the values for each “over” and “under” test to adjust color balance based on exposure.
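
In code terms, those “film terms” amount to a small lookup table with interpolation between the measured points. A minimal sketch, with invented correction numbers rather than actual HR500 calibration data:

    # "Film terms" as a lookup table: measured color corrections at each
    # test exposure, interpolated for exposures in between. All numbers
    # here are invented examples, not real calibration data.
    import numpy as np

    test_stops = np.array([-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2, 2.5, 3])
    red_correction = np.array([9, 7, 5, 3, 0, -2, -4, -5, -6, -7, -8])

    def red_term(stops_off: float) -> float:
        """Interpolated red correction for an arbitrary exposure error."""
        return float(np.interp(stops_off, test_stops, red_correction))

    print(red_term(-0.75))  # 4.0, midway between the -1 and -0.5 entries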

When we set up to scan a roll of film, we would evaluate the photographer’s gray balance test negative at the lead end of the film. Once we dialed in the color on a calibrated monitor (and by the BRGB numbers), the entire roll would be scanned at that set of values, since the photographer had exposed each frame the same way.
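
That gray-balance step is simple per-channel arithmetic: sample the gray card, compute gains that make it neutral, and apply the same gains to the whole roll. A rough sketch with hypothetical patch readings, not real scan data:

    # Lock in gray balance from the test negative: compute per-channel
    # gains that neutralize the sampled gray card patch, then apply those
    # same gains to every frame on the roll. Patch values are hypothetical.
    gray_patch = {"r": 118.0, "g": 132.0, "b": 141.0}  # sampled gray card

    target = sum(gray_patch.values()) / 3          # aim point: channel mean
    gains = {ch: target / v for ch, v in gray_patch.items()}

    def balance(pixel):
        """Apply the locked-in roll gains to one (r, g, b) pixel."""
        return tuple(v * g for v, g in zip(pixel, gains.values()))

    print(gains)                           # r boosted a little, b pulled down
    print(balance((118.0, 132.0, 141.0)))  # the gray patch comes out neutral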

Unfortunately, if you vary the FILM exposure, then when you digitize it, you are going to run into this phenomenon of slope, at least to some degree. NLP seems to attempt to compensate for it, but the compensation is content-dependent to some degree.

I haven’t found it possible NOT to adjust each frame at least a little when the film camera was allowed to auto-adjust ISO, shutter speed, or aperture, or when auto flash was involved. I HAVE found that I can sync similar film exposures to each other with a high degree of reliability. But they usually look different at first if each frame is evaluated automatically.