How does NLP handle different film emulations?

Really elementary question (I hope I didn’t miss this somewhere in the how-tos): how do I tell NLP what film stock I’m using? Or does this happen automagically within the brains of NLP? How do I get a true representation of the colors for a particular film stock?

For example, are my scans of Fuji 400H going to look like Fuji 400H, and my scans of Kodak Gold going to look like Kodak Gold? There are "Fuji" and "Kodak" options in the color dropdown in the NLP window. Is that the only granularity we get?

As far as I know, the drop-down box gives "styles" for different types of film: Fuji, Kodak, etc., by manufacturer, or by look (warm, cool, etc.). You can see what they do as they move the sliders. If these don't suit you, either digitise with a setting that looks good and adjust in Lightroom, or use the NLP sliders to get the look you want.

It sounds like you are looking for something similar to what SilverFast offers in its scanner software, where you can dig deeper and pick sample film types (though not all of them), Kodak Portra etc. NLP doesn't go that far, but the scan is better out of the box, so you probably don't need to dig down that deep. It's all personal taste anyway; what looks good to me might look awful to you.

You actually raise a good point in that it would be nice if you could emulate a film stock's colours, but I would imagine that would be a shed load of work for the developer when there may be better things to develop. What would be interesting is if, at some point, NLP allowed styles or plugins from third parties. In this case, just knowing the slider positions for each film stock would be a great starting point. Has anyone done this who would be willing to share? I'd assume it would just be a list of settings?

I’m going to say the controversial thing… But what you want doesn’t exist in the film world. There is no ‘true correct way’ to invert a type of film.

In the old days the result would also be affected by the method of correcting the color balance and the type of paper it was printed on.
The 'inversion' has a big effect on the result, and that is no different in the digital world.

This is also where the difference between scanning labs comes in. One just applies a preset and calls it done without judging the output; another does half of a photo edit to give you a result.

A lot of the result is determined by artistic choices made during the inversion and editing. A good lab will create profiles for its customers, so they know what your tastes are and what result you expect, and they will work towards that.

As far as I know - but I'm no expert at all!! - most C-41 inversion software still needs to set a black point and a white point. The software tries to determine these, or you can set them yourself, but either way these values have to be chosen during inversion. The black point can be picked from an unexposed piece of film, but you might set it higher for more contrast, if you overexposed, or if you want a different color cast in your shadows.
Same for the white point, but this can only be guessed or picked manually.
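NLP's actual algorithm is proprietary, but the black-point/white-point step described above can be sketched generically. This is a minimal, hypothetical illustration (not NLP's implementation): on a negative, the unexposed film base is the brightest region of the scan and becomes the black of the positive, while the densest area becomes the white. Normalizing each channel between those two points also crudely cancels the orange mask.

```python
import numpy as np

def invert_negative(scan, base_rgb=None, dense_rgb=None):
    """Invert a linear RGB scan of a color negative (values in [0, 1]).

    base_rgb:  per-channel values sampled from unexposed film base
               (the orange mask) -- becomes the black point of the positive.
    dense_rgb: per-channel values from the densest part of the negative
               -- becomes the white point of the positive.
    Defaults pick the extremes of the frame automatically.
    """
    scan = scan.astype(np.float64)
    flat = scan.reshape(-1, 3)
    if base_rgb is None:
        base_rgb = flat.max(axis=0)   # brightest = unexposed base
    if dense_rgb is None:
        dense_rgb = flat.min(axis=0)  # darkest = densest highlight
    # Map base_rgb -> 0 and dense_rgb -> 1, per channel.
    positive = (base_rgb - scan) / (base_rgb - dense_rgb)
    return np.clip(positive, 0.0, 1.0)
```

As the post says, raising the chosen black point above the true film base trades shadow detail for contrast, which is exactly the kind of artistic choice no software can make "correctly" for you.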

What makes the difference between film stocks is how the color and exposure are distributed between those two points.

Imagine a film has been designed with a certain sensitivity to reds around the exposure level of Caucasian skin. But what happens to the reds around that exposure point? Do they fall into shadow gradually or abruptly (contrast)? Do they keep their saturation, lose it, or gain it? Etc. This is what makes the films look different.
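That "distribution between the points" can be thought of as a per-channel tone curve. As a purely illustrative sketch (these curves are invented, not measured from any real film), two hypothetical stocks could map the same normalized positive very differently:

```python
import numpy as np

def soft_curve(x):
    """Gentle S-curve: tones roll off gradually into shadows/highlights."""
    return 0.5 + 0.5 * np.tanh(2.0 * (x - 0.5)) / np.tanh(1.0)

def contrasty_curve(x):
    """Steeper linear curve: mid-tones spread out, extremes clip abruptly."""
    return np.clip(1.6 * (x - 0.5) + 0.5, 0.0, 1.0)

# Applying different curves to just the red channel of the same image
# changes how "skin-tone reds" transition into shadow -- the kind of
# behavior that distinguishes one stock from another.
```

Both curves pass through the same black and white points; everything in between is what gives a stock its character.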

But imagine two people shoot the same film stock (down to the very same batch number of modern Portra 400 in 135, for example).

Those two people have their film developed differently, and use different scanners with different exposure settings (or different DSLRs with different light sources and lenses). No software could produce 'the one perfect inversion' for this film. There are far too many variables… and far more artistic choices to make.

That's why you scan your film yourself. The inversion is already 50% of your image edit, and I want to do it myself (or rather, the professionals I would trust to do it are too expensive for my hobby :)).

And don't forget, films with the same name can differ hugely. Portra 400 in 135 is different from the 120 version… It's different from the one sold 15 years ago. It might differ between the US and Europe… It might be fresh or it might have been stored in the sun. It might be expired. It might have sat in a camera for a year, or it might have been shot and developed on the same day…

There is no 'correct' way for a film to look. That's not how film works (even though people are led to believe it is).


Totally agree with this. We need to start looking at film with a more modern approach. Tools like NLP can help us extract more quality and give us more flexibility than ever. There is a romanticised idea of the "film look" which I'm not sure is a good thing, and many people don't understand what's really behind it. Different films will look different, but it's all up to interpretation, really.


There may not be a "correct" way to scan film, but there are expected norms for a given film stock when we pay a pro lab to scan it. If I shoot a scene with a roll of Portra and a roll of Ektar - same scene, same light, everything - and send both off to any respectable professional lab for develop and scan, my Portra is going to come back "looking" like Portra, and my Ektar is going to come back "looking" like Ektar. That's all I'm asking in my original question: if I do the above experiment with NLP, do I get two different-looking images, as I would from the pro lab?

Yes, if you do that experiment with Negative Lab Pro, you should see the characteristics of each film stock coming through. NLP works very similarly to the algorithms used by professional lab scanners, and preserves the natural look of the film itself. Ironically, it is possible to lose a film's defining characteristics by overcorrecting everything toward a predefined 'standard' look, which is why I don't include film-stock-specific corrections (beyond the color balance levels for manufacturers).

Hope that helps!
-Nate

Thanks for the response, Nate - and with the answer I was hoping to hear, too! :smiley:
