Yeah. I’m still not being clear enough I guess. That’s on me.
So… if this were all as simple as “invert in PS and you’re done” then there wouldn’t be NLP. There wouldn’t be people buying higher-CRI lamps and so forth. You clearly have a dim view of my output, but that’s just taste. Even cinema movies, which will be colour graded all to hell, are generally shot to high technical standards with tight process control before the “artistic” stage.
I went for a walk yesterday and satisfied myself that my conclusion from long ago still holds: the unprofiled TIFF is pretty close to a “RAW” in essence, in the information it carries. It should put the green channel info in G, the red channel in R and so forth. In my own previous words, it should be doing an OK job of the “colour separation”.
If you’ve been following some of the colour science threads on here that I have, you’ll know a lot of talk has gone on around white light vs R G B for the camera-scanning crowd. Part of that got into the problem of cross-talk between colour channels: in short, imperfect filtering. As you know, the R, G and B passbands aren’t equally spaced spectrally, so these effects don’t act in a simple, uniform way; they distort colorimetrically. Again, if it were all that simple none of us would be here.
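To make the cross-talk point concrete, here’s a toy sketch in plain Python. The matrix values are invented purely for illustration (not measured from any real sensor or filter set); the point is only that each recorded channel is a mix of all three “true” channels.

```python
# Toy cross-talk model: recorded = M applied to the true RGB, where the
# off-diagonal terms are light "leaking" through imperfect colour filters.
# These numbers are made up for illustration only.
M = [
    [0.90, 0.08, 0.02],  # recorded R sees mostly true R, plus some G and B
    [0.05, 0.90, 0.05],  # recorded G
    [0.02, 0.08, 0.90],  # recorded B
]

def record(true_rgb):
    """Apply the cross-talk matrix to a 'true' RGB triple."""
    return [sum(M[i][j] * true_rgb[j] for j in range(3)) for i in range(3)]

pure_red = [1.0, 0.0, 0.0]
print(record(pure_red))  # [0.9, 0.05, 0.02]: red has leaked into G and B
```

Per-channel gains (white balance, curves on each channel) only scale each channel independently, so the leakage into G and B never goes away; undoing the mix needs the full 3×3 inverse, and even that is only valid if the whole chain is linear, which is exactly where real film dyes and filters let you down.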
So for the DSLR crowd you have, on the one hand, the problem of an often white light backlight and often fairly permissive colour filters on the sensor. But on the other hand, all of this is pretty precisely profiled by the manufacturer (or Adobe) and when you shoot RAW this can all be accounted for. The CS 9000 has a lot of theoretical advantages with the R G and B separate and monochromatic exposures but what’s missing is the profiling.
Why do I care? I work methodically. I like to have a deterministic process. I find that when I know what all aspects of the process and equipment will do I know how to get what I’m after. And, again, when colours come out “off” I often miss the “memory colour” I was after in the first place. And the illusion of reality suffers. And I’m taking the film as my starting point. I’m not trying to precisely reproduce reality, I’m trying to faithfully reproduce the film’s palette. There’s a certain rendering built into each film that I’m after. With slide film this part is automatic. With negative film it should still be achievable if all the variables are accounted for and nailed down.
That’s the best I can do at explaining myself. But I suspect it is the same for many on this site. And given the articles you linked to you can’t be far off yourself. Even if your tastes for final output differ.
So, ultimately, when one inverts the unprofiled TIFF of a negative scan from the CS9000 there is no simple matrix profile which will be faithful. I’ve tried that. There are ways of getting closer with LUT profiles, but these are by nature finicky and very hard for the home hobbyist to get right. Again, I’ve actually made one that is close in a lot of cases. And, as I mentioned before, there are some LUT profiles that Nikon installs as part of Nikon Scan which are frustratingly close, but obviously they will only work if used with the same inversion logic as the original software. What I was sharing (in case anyone else had made any progress) was that I’ve had the following basic experiences:
- Reasonably good with ColorPerfect and my own LUT (obviously in this case I’m making the LUT based on an inversion logic I’ll use again on subsequent images). But not a great fit for the Nikon LUT
- Reasonably good with NLP and my own LUT. Sometimes not too bad with a matrix profile, but again with some obvious misses, as noted in my first post
- Actually very good with manual PS inversion and the Nikon LUT. But then the process becomes a lot more manual and fiddly
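For anyone curious what the manual route boils down to, here’s a minimal sketch of one common hand-inversion approach (divide out the film base, then take the reciprocal in linear light). The function name and numbers are mine for illustration; this is not what Nikon Scan, NLP or ColorPerfect actually compute internally.

```python
def invert_negative(pixel, base):
    """Invert one linear-light RGB pixel of a negative scan.

    pixel: (r, g, b) transmission values from the scan, 0..1
    base:  (r, g, b) sampled from unexposed film base (the orange mask)
    """
    out = []
    for t, b in zip(pixel, base):
        rel = min(t / b, 1.0)             # divide out the mask; base maps to 1.0
        out.append(1.0 / max(rel, 1e-4))  # reciprocal: higher density, brighter positive
    return out

base = (0.80, 0.50, 0.30)  # hypothetical orange-mask reading
print(invert_negative(base, base))                # film base comes out neutral
print(invert_negative((0.40, 0.25, 0.15), base))  # half transmission in each channel
```

Everything after this step, per-channel black and white points, a tone curve, and then whatever LUT you layer on top, is the fiddly part, and it’s also why a LUT built against one inversion logic won’t transfer to another.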
Anyway, was just seeing if anyone else had been playing and pondering in similar directions.
Sam