Hi, dear group! :)
I'm not new to scanning, but I am new to NLP.
I have a question for you all: did you build a database of film stocks and their orange masks?
I come from cinematography, and scanning motion-picture film is a bit different from what I'm experiencing now with still photography.
In cinema, the scanners have saved setups (made by the operator) for each film stock, for standardization. You achieve that by analyzing an empty (unexposed) piece of emulsion to get a clean reading of the orange mask, as well as the black level and the white level for a good logarithmic output. But that's a different story :)
With such settings you can always judge whether a chemical bath was right or wrong, because you know how the negative, and the picture it contains, should look.
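For anyone curious what that base-reference approach looks like in practice, here is a minimal sketch of the idea. It assumes you sample the color of a clear (unexposed) piece of the same stock and normalize each channel against it, so the mask becomes neutral before inversion; the function name and the sample values are purely illustrative, not how any particular scanner or plugin actually implements it.

```python
import numpy as np

def remove_orange_mask(negative_rgb, base_rgb):
    """Hypothetical mask removal: divide each channel by the measured
    film-base color so the orange mask maps to neutral white (~1.0),
    then invert to get a rough positive."""
    negative_rgb = np.asarray(negative_rgb, dtype=np.float64)
    base_rgb = np.asarray(base_rgb, dtype=np.float64)
    normalized = negative_rgb / base_rgb          # film base -> ~[1, 1, 1]
    positive = 1.0 - np.clip(normalized, 0.0, 1.0)  # invert negative to positive
    return positive

# Illustrative (not measured) Kodak-like orange base color:
base = [0.85, 0.55, 0.25]
patches = [
    [0.85, 0.55, 0.25],     # pure film base -> inverts to black
    [0.425, 0.275, 0.125],  # denser area -> inverts to mid gray
]
print(remove_orange_mask(patches, base))
```

The point of the division step is exactly why a saved per-stock setup works: once the base is neutralized, the same inversion pipeline behaves predictably for every frame of that stock.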
Now, my question: we cut off the empty edges of the frame so NLP can do its magic. But how does the plugin know how orange the mask is? Fuji has a different orange mask than Kodak. And what if I put an orange filter in front of the lens to get an '80s-style sunset?
How does NLP know what I did, and how I creatively manipulated the negative, if there is no standard setup for each film stock?
I find NLP very powerful and I'm happy to replace FlexColor with it, but I'd still like to know :)
Thanks