Kodak gold 200 unfixable yellow cast

When posting about Image Conversion Issues, please include the following information:

  1. Which version of Negative Lab Pro are you using?
    3.0.1

  2. If using DSLR scanning, please include: 1) camera make/model, 2) lens make/model, 3) light source make/model
    Canon EOS R5, Sigma 70mm ART MACRO, Cinestill CS-LITE

  3. Please add the conversion you are having difficulty with, along with a short description of what you are seeing wrong with it.
    I have a negative of Kodak Gold, and I'm unable to get rid of the yellow colour cast - I'm not sure if this is just how the negative is, or if I'm missing something. If I push the white balance too far, everything goes purple / magenta / lilac and all the faces look really weird.

  4. It’s not required, but it’s very helpful if you can provide a link to the original RAW or TIFF negative before conversion. If you don’t want to share this file publicly, you can also email it to me at nate@natephotographic.com

Maybe post a copy of your own results so others can see and compare when they have a play with your file (I can't at this very moment).

This is as far as I can push it with NLP:

Using Photoshop on the positive copy I can push it a bit further:

(Note the weird colours in the faces. The rest of the image is looking better, though it still has a bit of a strange look, as if the image is almost yellow-monochrome?)

… and this is what makes NLP struggle with the conversion. Images that don’t have full swings of red, green and blue tend to turn out strange. Check out this thread.
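
Just to illustrate the "full swings" idea, here's a rough numpy sketch (not NLP's actual code - the file name is made up) that measures how much of the range each channel actually uses. On a frame like this, one or two channels will show a very narrow spread:

```python
import numpy as np
from PIL import Image

# Load the scan (hypothetical file name) and normalise to 0..1
img = np.asarray(Image.open("scan.tif").convert("RGB"), dtype=np.float64) / 255.0

# For each channel, measure the spread between a near-black and near-white percentile
for name, channel in zip("RGB", np.moveaxis(img, -1, 0)):
    lo, hi = np.percentile(channel, [0.5, 99.5])
    print(f"{name}: swing = {hi - lo:.2f} (from {lo:.2f} to {hi:.2f})")
```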

Moreover, the scene looks as if it was taken in household lighting, which tends to be less than ideal.

UPDATE:
Tried a few things and found that the issue can be eased somewhat by lowering the pre-saturation value and by manually white balancing on the background door. It's probably best to save such images as 16-bit TIFFs and correct the tints with local adjustments in Lightroom Classic, because NLP's internal colour controls are not selective enough.
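
In case it helps anyone, here's roughly what the manual white balance on the door amounts to, as a numpy sketch (the file name and patch coordinates are made up, and this isn't how NLP does it internally):

```python
import numpy as np
from PIL import Image

# Load the converted positive (hypothetical file name)
img = np.asarray(Image.open("converted.tif").convert("RGB"), dtype=np.float64)

# Sample a patch that should be neutral - here the background door (invented coordinates)
patch = img[820:880, 40:120]
means = patch.reshape(-1, 3).mean(axis=0)   # average R, G, B of the patch

# Scale red and blue so the patch comes out as neutral as the green channel
gains = means[1] / means
balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
Image.fromarray(balanced).save("balanced.tif")
```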

@Sciencentistguy, IMHO your expectations are a bit too high. After all, film is not digital :wink: - if there is not enough light, there is nothing you can really do about it. First of all, the original film shot is underexposed by at least a couple of stops. Look, the details in the darks are completely lost - the film camera in the picture has no detail whatsoever. The light is apparently mixed from different sources, and the picture is tinted by light reflected from the wall behind the stairs. So the picture will never look properly balanced no matter what you try. Here is my attempt at a conversion in Photoshop - there is nothing here for NLP to work with, and NLP cannot turn water into wine. If the picture looks like it came from an analog photo lab, then you have basically achieved your goal.

Thanks, @VladS. I was pretty sure this negative had nothing to go on, tbh. It was taken relatively early in my film photography journey, before I really knew what I was doing.

Tbh, I mainly posted it here in case someone could work their magic and extract colour information I didn’t know was there.

Well, I think I pretty much got rid of it with a bunch of custom fixes in NLP and Lr. First I neutralized the orange mask and cropped out all the borders, as instructed in NLP. Then:

In NLP:
WB: Auto-AVG
HSL: Lab
Sat: 5
Tone Profile: Lab Standard.

In LR:
Temp: 3900
Tint: -30

I hope my result comes through here. OK, I now see it did - except in the big file the people at the bottom right are a bit brighter and the whole thing has a tad more contrast.

Yes, this is the best of all the posts above, and better than I could do (NLP noob) in LR. Still better than I could do with the settings above. What is HSL Lab?
Anyway, I had to move to Photoshop. I will post my image, only to compare - who knows, unless they are displayed through the same medium.
One question: there is no "orange mask" in the CR3 file provided, or sprocket holes or borders, so how relevant is the orange mask? Here the staircase, the door, and the brunette's teeth offer good color balance samples. Still, I had to do a crazy amount of color correction. This is likely incandescent lighting (3200-3400K), but I found it strange how much light falloff there is from the camera guy to the brunette. Along with that, a color shift is odd unless there is another light source reflecting… I'm glad to see this is recoverable without needing to "paint" the front row.

I think it has a reddish-pink bias. Shifting the Tint slider in NLP toward green would likely fix it.

HSL Lab: Lab is one of the choices you can make in the NLP edit menu beside "HSL":
HSL - Lab
I usually leave mine set to Natural, but one should use whatever setting best achieves what is needed for the specific photo.

Agreed…and it involves a lot of trial-and-error testing due to the mass of settings and presets that can be combined.

I agree with Vlad, plus it would seem the photo was taken of a mirror reflection of the group, which adds to the difficulty and introduces colour-cast reflections.

Also the photo displays with a slightly different colour when posted?

Right - not only within NLP, but also in Lr, as some settings in Lr can combine with other settings in NLP to improve a photo. Part of a good editing strategy is learning what works best in each application and then combining them.

David, I missed answering one of the questions you posed about the orange mask in the provided CR3 file. Actually, every colour negative has this dye additive, and it's totally relevant to the processing steps required in making a conversion from negative to positive. It's just that the file provided has an extremely thin border from which to tap Lr's white balance tool in order to neutralize it, but there is just enough there that I was able to use it. That is an essential first step and vastly reduces the amount of WB correction needed in the post-conversion editing stage.
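
For anyone curious, the border-sampling step is essentially a per-channel division, something like this rough sketch (assuming the raw negative is already loaded as a linear float array; the loader and patch coordinates are invented):

```python
import numpy as np

def neutralise_mask(linear_negative: np.ndarray, border_patch: np.ndarray) -> np.ndarray:
    """Divide the frame by the average colour of an unexposed border patch,
    so the orange mask becomes neutral before the negative is inverted."""
    mask_colour = border_patch.reshape(-1, 3).mean(axis=0)
    return linear_negative / mask_colour

# Example: a thin strip along the left edge serving as the border sample
# neg = load_linear_raw("frame.cr3")              # hypothetical loader
# mask_free = neutralise_mask(neg, neg[:, :20])
```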

NLP reacts to image content, which brings in another degree of confusion. Knowing the tools is one thing; knowing when to apply them can be challenging under these conditions.

Digitizer, could you please explain what you mean by “NLP reacts to image content”?

NLP analyses the image to find how each of the R, G and B tone curves must be positioned and bent in order to produce a presentable result. NLP needs enough data, and when there isn't enough, the results can be less than presentable.

NLP does not (yet/afaik) analyse images for faces, skin tones or objects, but given the progress in Lightroom, this could happen some day. Until then, the analysis remains mostly “statistical”.
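
To make the "statistical" part concrete, here is my rough mental model as a numpy sketch (definitely not NLP's actual code): each channel gets its own black and white point from the histogram, is stretched to full range, then inverted. If one channel has almost no spread, as in this yellow-heavy frame, the stretch amplifies noise and casts:

```python
import numpy as np

def rough_conversion(negative: np.ndarray, clip: float = 0.1) -> np.ndarray:
    """negative: float array in [0, 1], shape (H, W, 3). Returns a crude positive."""
    positive = np.empty_like(negative)
    for c in range(3):
        # Per-channel black/white points from the histogram (percentiles)
        lo, hi = np.percentile(negative[..., c], [clip, 100.0 - clip])
        stretched = (negative[..., c] - lo) / max(hi - lo, 1e-6)
        # Invert after stretching: dense negative areas become bright positive areas
        positive[..., c] = 1.0 - np.clip(stretched, 0.0, 1.0)
    return positive
```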

Ah, OK - yes, from observing the individual R, G, B curves in Lr's Curve tool interface, that explanation looks correct. I think that would also explain why correct WB is so important, and how, by providing additional data, "roll analysis" in version 3.0 can often produce more satisfactory colour than the data available from a single capture.
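
That would also match my (speculative) picture of what roll analysis does: pool the pixel data from several frames of the same roll, so the per-channel points come from more than one awkward frame. Roughly:

```python
import numpy as np

def roll_points(frames: list[np.ndarray], clip: float = 0.1) -> np.ndarray:
    """frames: list of (H, W, 3) float arrays from one roll.
    Returns per-channel [low, high] points computed from the pooled data."""
    pooled = np.concatenate([f.reshape(-1, 3) for f in frames], axis=0)
    lows = np.percentile(pooled, clip, axis=0)
    highs = np.percentile(pooled, 100.0 - clip, axis=0)
    return np.stack([lows, highs], axis=1)    # shape (3, 2): [low, high] per channel
```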

To my eyes, the original looks best. It looks graded, which is a good thing.

I just fixed the white point, but IMO the image is unfixable with such a strong yellow cast, if you wanted a neutral image.