I understand that the developers might be reluctant to answer all these questions; however, this knowledge seems quite fundamental if we want to bring DSLR scanning to the level of professional film scanners.
Foremost, I am interested in the somewhat mysterious mechanisms of the “convert” process. As I wish to waste as little of your time as possible, I will list a few assumptions I have made; would you be so kind as to either confirm or correct them?
- Digital camera - Analyses the negative scene, tries to balance the negative channels to achieve a flawless negative distribution, and then inverts (or vice versa).
- VueScan/SF - Uses preset colour curves, applies only an exposure correction, and inverts (or vice versa).
- TIFF Scan - Uses preset colour curves, applies only an exposure correction, and inverts (or vice versa).
- Scanner - Applies a secondary curve correction on top of the conversion curve, trying to bring it closer to the selected device.
- None - Leaves the image in the state produced by the conversion curve.
- Basic - Applies a positive-scene correction.
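To make my assumption about the “analyse the scene” engines concrete, here is a minimal Python sketch of what I imagine per-scene channel balancing to be. This is purely my guess at the idea, not NLP's actual math, and the percentile clipping value is one I invented for illustration:

```python
import numpy as np

def auto_balance_invert(neg, clip=0.001):
    """My guess at a per-scene 'analyse and balance' engine:
    stretch each channel of the negative between its own
    percentiles, then invert. NOT NLP's actual math.
    neg: float RGB array in [0, 1]."""
    out = np.empty_like(neg)
    for c in range(3):
        ch = neg[..., c]
        # lo/hi depend on this frame's content, so every scene
        # gets its own "flawless negative distribution"
        lo, hi = np.quantile(ch, [clip, 1.0 - clip])
        out[..., c] = np.clip((ch - lo) / (hi - lo), 0.0, 1.0)
    return 1.0 - out  # invert
```

The point is that `lo` and `hi` are derived from the scene content itself, so two frames of the same subject can come out with different colour relationships.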
This is how I imagine it. What puzzles me are the tertiary correction and now a fourth one with the newly introduced LUTs, where we can again select a device emulation.
Now, this is where I start wondering. I can select a Noritsu conversion and later apply a Pakon LUT, yet I am given choices at the wrong end of the process. All the secondary and tertiary manipulation can be done in Lightroom or Photoshop; what I need is control over the primary conversion.
In my long experience with film scanners from various manufacturers, I have seen a much simpler and therefore much more consistent process, which is necessary for professional work. It makes me wish someone would break down the NikonScan or Dimage Scan software and simply try to “steal” their conversion math; perhaps those companies would even open up such legacy code. The scanner software appears to correct the exposure at scan time but leaves the colours fixed. The consistency of colour across different shots of the same scene is never an issue because there is no math going on to screw it up.
With NLP, however, I usually have the feeling that each shot was made on a different film and with a different developer, which makes me uneasy. Of course, that legacy software knew the hardware it ran on and was calibrated to it, while your software has to work with an uncontrollable variety of light sources etc. Still, this “correction layer” should be optional, as some of us have very reliable and even adjustable light sources such as enlarger colour heads.
Anyway, I am sure you get my drift. I am now trying the DNG Scan setting even though I am scanning with a DSLR, which is why I am so curious whether this solves my problem. I do not want the colours analysed per scene; I simply want a fixed invert with exposure or gamma correction. Everything else should be optional in later stages of editing.
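For contrast, the fixed conversion I am asking for could be as simple as the sketch below: the same curve for every frame, only exposure and gamma as user parameters, and no per-scene colour analysis at all. Again this is just an illustration of the idea; the parameter names are mine:

```python
import numpy as np

def fixed_invert(neg, exposure=1.0, gamma=1.0):
    """A fixed invert: one global curve for every frame, so colour
    relationships across shots of the same scene stay consistent.
    neg: linear RGB floats in (0, 1]. Illustrative sketch only."""
    # reciprocal of transmittance, scaled by a global exposure factor
    pos = 1.0 / np.clip(neg * exposure, 1e-6, None)
    # single scalar normalisation: changes brightness, never colour ratios
    pos = pos / pos.max()
    return pos ** (1.0 / gamma)
```

Because the only scene-dependent quantity is a single scalar, the relationship between the colour channels is never touched, which is exactly the consistency I am missing.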
Please do not read this post as a rant; I simply wish I finally had a workflow that would make me feel comfortable ditching my old 8000.
Ah, also: I know that machine is a beast and costs a fortune, but did you ever have a chance to look into how they convert the negs? https://www.youtube.com/watch?v=9lb8L7nFC2c
It seems their archival software must have a negative conversion module.
All righty, that was a longish one. Thank you for your answer, and all the best.