Standardisation in NLP

Hi, dear group! :)

I’m not new to scanning, but I am new to NLP.

I have a question for you all: has anyone built a database of film stocks and their orange masks?

I come from cinematography, and scanning motion picture film is a bit different from what I’m experiencing now with still photography.

In cinema, the scanners have saved setups (made by the operator) for each film stock, for standardisation. You achieve that by analysing an empty piece of emulsion to get a clean reading of the orange mask, as well as the black level and the white level for a good logarithmic output. But that’s a different story :)
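To make the idea of a saved per-stock setup concrete, here is a toy sketch in Python/NumPy. The stock name and all the numbers are made up for illustration, and this is not how any real scanner firmware (or NLP) works; it just shows the principle of storing a mask colour plus black and white levels, and applying them to a scan.

```python
import numpy as np

# Hypothetical per-stock preset, like a setup an operator might save.
# The stock name and values are invented for this example:
PRESETS = {
    "Kodak 5219": {
        "base_rgb": np.array([0.80, 0.55, 0.35]),  # colour of the clear orange mask
        "black": 0.02,                             # scanner black level
        "white": 0.90,                             # scanner white level
    }
}

def normalise(neg, stock):
    """Neutralise the mask and rescale between the saved black/white levels.

    neg: linear RGB scan of the negative, floats in [0, 1], shape (H, W, 3).
    """
    p = PRESETS[stock]
    lin = neg / p["base_rgb"]                        # divide out the orange mask
    lin = (lin - p["black"]) / (p["white"] - p["black"])
    return np.clip(lin, 0.0, 1.0)
```

With a preset like this, a frame of clear (unexposed) film always normalises to the same value, which is exactly why a saved setup lets you judge whether a chemical bath drifted.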

With such settings, you always know how the negative and the picture it contains should look, so you can judge whether a chemical bath was right or wrong.

Now, my question: we crop off the empty edges of the frame so NLP can do its magic. But how does the plugin know how orange the mask is? Fuji has a different orange mask than Kodak. And what if I put an orange filter in front of the lens to get an 80s-style sunset?

How does NLP know what I did, and how I creatively manipulated the negative, if there is no standard setting for each film stock?

I find NLP very powerful, and I’m happy to replace FlexColor with it, but I’d still like to know :)


Before buying NLP, I did many tests with manual conversion of negatives. While b/w negatives are straightforward, color negatives are a bit more complicated. After a while, I found that inverting the r, g and b tone curves separately gave me a better head start than a single inverted rgb tone curve. I also found that taking a white balance from a medium grey area (if I remembered how things looked) worked well too. All I had to do then was bring the top and bottom edges of the r, g and b curves in, so that they added up to white and black. Usually, images were left with a color cast that could be cured by dragging a point or two in the steep part of the r, g or b tone curve.
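The steps above (per-channel inversion, then bringing each channel’s edges in to set black and white) can be sketched in Python/NumPy. The percentile cut-offs are my own assumption, standing in for the manual dragging of the curve endpoints; this is a simplified illustration, not NLP’s actual algorithm.

```python
import numpy as np

def invert_and_level(neg, low_pct=0.1, high_pct=99.9):
    """Invert a negative per channel, then stretch each channel's levels.

    neg: linear RGB negative, floats in [0, 1], shape (H, W, 3).
    low_pct/high_pct: assumed percentiles that mimic bringing the
    bottom and top edges of each tone curve in by hand.
    """
    pos = 1.0 - neg                     # per-channel tone-curve inversion
    out = np.empty_like(pos)
    for c in range(3):
        lo = np.percentile(pos[..., c], low_pct)    # channel black point
        hi = np.percentile(pos[..., c], high_pct)   # channel white point
        out[..., c] = np.clip((pos[..., c] - lo) / (hi - lo), 0.0, 1.0)
    return out
```

Because each channel is stretched independently, a uniform cast from the mask largely cancels out, which matches my experience that the mask rarely needed special treatment.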

Cutting off the film mask parts around the image made it easier to find the white and black points. I also tried to compensate for the orange mask by feeding its negative into an image on my iPad, which I used as a backlight. In most cases, though, the difference was not worth the effort. The mask never really mattered, because the other steps compensated for it without much extra work.
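For completeness, here is the simplest software equivalent of that backlight trick: sample the clear film border and divide its average colour out of each channel. This is a toy sketch under my own assumptions, not what NLP does internally.

```python
import numpy as np

def remove_mask(neg, border):
    """Divide out the orange mask using a patch of unexposed film base.

    neg:    linear RGB negative, floats in (0, 1], shape (H, W, 3).
    border: pixels sampled from the clear film edge of the same scan.
    Dividing each channel by the average base colour neutralises the
    mask before inversion (one simple approach, not NLP's algorithm).
    """
    base = border.reshape(-1, 3).mean(axis=0)   # average mask colour
    return np.clip(neg / base, 0.0, 1.0)
```

After this step, clear film comes out neutral, so the later per-channel levelling has less work to do.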

When I tried NLP after my tests, I found that it follows the principles I had used, adds a lot of convenience with correction sliders that don’t work in reverse - and a lot more.