Nikon Scan with Negative Lab Pro

Why do cerebral pull-ups for accuracy if the goal is something highly inaccurate relative to the captured object?

Who knows whether the negative is accurate before it is scanned? It’s simply a product of its design, of how it’s made, and of physical and chemical processes, all of which can, maybe, provide results as accurate as 6-8 bits of digital. We can agree to call this accurate, though.

Yeah. I’m still not being clear enough I guess. That’s on me.

So… If this were all as simple as “invert in PS and you’re done” then there wouldn’t be NLP. There wouldn’t be people buying higher-CRI lamps and so forth. You clearly have a dim view of my output, but that’s just taste. Even cinema movies, which will be colour graded all to hell, are generally shot to high technical standards with tight process control before the “artistic” stage.

I went for a walk yesterday and I guess I mentally satisfied myself that my conclusion from long ago still holds: the unprofiled TIFF is pretty close to a “RAW” in essence, in the information it carries. It should pretty much put the green channel info in G, the red channel in R and so forth. In my own previous words, it should be doing an OK job of the “colour separation”.

If you’ve been following some of the colour science threads on here that I have, you’ll know that a lot of talk has gone on around white light vs R, G and B for the camera scanning crowd. Part of that got into the problem of cross-talk between colour channels; in short, imperfect filtering. As you know, R, G and B aren’t equally spaced spectrally, so these effects don’t just add a simple cast, they distort colorimetrically. Again, if it were all that simple none of us would be here.
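
To make the cross-talk point concrete, here’s a toy sketch in Python/numpy. The numbers are entirely made up for illustration, not measured from any real sensor or scanner; the only point is that off-diagonal leakage mixes the channels in a way a simple per-channel gain can’t undo, whereas a full 3x3 matrix can, as long as the mixing stays linear (which film, of course, doesn’t guarantee).

```python
# Toy illustration of channel cross-talk (all numbers are made up).
import numpy as np

# Rows: what the R, G, B channels actually record for light that "should"
# land purely in R, G or B. Off-diagonal terms are the cross-talk.
crosstalk = np.array([
    [0.85, 0.12, 0.03],   # R channel also picks up some green and blue
    [0.10, 0.80, 0.10],   # G channel leaks both ways
    [0.02, 0.15, 0.83],   # B channel picks up a fair bit of green
])

pure_red = np.array([1.0, 0.0, 0.0])
recorded = crosstalk @ pure_red
print("recorded for pure red:", recorded)   # -> [0.85, 0.10, 0.02]

# A per-channel gain (simple white balance or inversion) cannot separate the
# channels again, but a full 3x3 matrix can -- if the mixing is linear.
unmix = np.linalg.inv(crosstalk)
print("unmixed:", unmix @ recorded)         # -> back to ~[1, 0, 0]
```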

So for the DSLR crowd you have, on the one hand, the problem of an often white backlight and often fairly permissive colour filters on the sensor. But on the other hand, all of this is pretty precisely profiled by the manufacturer (or Adobe), and when you shoot RAW it can all be accounted for. The CS 9000 has a lot of theoretical advantages with its separate, monochromatic R, G and B exposures, but what’s missing is the profiling.

Why do I care? I work methodically. I like to have a deterministic process. I find that when I know what all aspects of the process and equipment will do I know how to get what I’m after. And, again, when colours come out “off” I often miss the “memory colour” I was after in the first place. And the illusion of reality suffers. And I’m taking the film as my starting point. I’m not trying to precisely reproduce reality, I’m trying to faithfully reproduce the film’s palette. There’s a certain rendering built into each film that I’m after. With slide film this part is automatic. With negative film it should still be achievable if all the variables are accounted for and nailed down.

That’s the best I can do at explaining myself. But I suspect it is the same for many on this site. And given the articles you linked to you can’t be far off yourself. Even if your tastes for final output differ.

So, ultimately, when one inverts the unprofiled TIFF from the CS9000 of a negative scan there is no simple matrix profile which will be faithful. I’ve tried that. There are ways of getting closer with LUT profiles, but these are by nature finicky and very hard for the home hobbyist to get right. Again, I’ve actually made one that is close in a lot of cases. And, as I mentioned before, there are some LUT profiles that Nikon installs as part of Nikon Scan which are frustratingly close, but they obviously only work if used with the same inversion logic as the original software. What I was sharing (in case anyone else had made any progress) was that I’ve had the following basic experiences (there’s a rough sketch of the matrix vs LUT routes after this list):

  • Reasonably good with ColorPerfect and my own LUT – obviously in this case I’m making the LUT based on an inversion logic I’ll use again on subsequent images. But not a great fit for the Nikon LUT
  • Reasonably good with NLP and my own LUT. Sometimes not too bad with a matrix profile but again with some obvious misses as noted in my first post
  • Actually very good with manual PS inversion and the Nikon LUT. But then the process becomes a lot more manual and fiddly
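
In case it helps anyone picture the two routes above, here’s roughly what “apply a matrix profile” vs “apply a 3D LUT” boils down to in code. This is a hypothetical sketch (Python/numpy/scipy), not my actual profiles or anything from NLP, ColorPerfect or Nikon Scan; the matrix and the LUT are placeholders you would have to build yourself.

```python
# Sketch of the two correction routes: a 3x3 matrix profile vs a 3D LUT.
# The matrix values and the LUT are placeholders, not real profiles.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def apply_matrix(img, m):
    """img: float array (H, W, 3) in 0..1, already inverted. m: 3x3 matrix."""
    return np.clip(img @ m.T, 0.0, 1.0)

def apply_3d_lut(img, lut):
    """lut: (N, N, N, 3) table mapping input RGB to output RGB."""
    n = lut.shape[0]
    grid = np.linspace(0.0, 1.0, n)
    out = np.empty_like(img)
    for c in range(3):  # interpolate each output channel separately
        interp = RegularGridInterpolator((grid, grid, grid), lut[..., c])
        out[..., c] = interp(img.reshape(-1, 3)).reshape(img.shape[:2])
    return np.clip(out, 0.0, 1.0)

# Hypothetical usage (names are placeholders):
# inverted    = 1.0 - scan_tiff_as_float          # some inversion step
# corrected_m = apply_matrix(inverted, my_matrix)
# corrected_l = apply_3d_lut(inverted, my_cube_lut)
```

The fiddly part, of course, is building a LUT that’s actually faithful; applying it is the easy bit.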

Anyway, was just seeing if anyone else had been playing and pondering in similar directions.

Sam

I guessed that much and addressed it indirectly with the two paragraphs in my previous post.
Nothing against an industrial process for artistic output!

As for RAW & TIFF: both transport an image that provides the R, G and B numbers needed to display it with the respective technology built into software and hardware. These numbers are packed into tuples representing a picture element, either as a single pixel or a group of pixels (RGB/RGB/RGB/RGB for TIFF, an R/G/G/B mosaic for RAW), in a way that makes displaying the image relatively easy or storage efficient. Which one is used does not matter that much, as long as the processes have been established - and that includes lighting.
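
For what it’s worth, here is a toy illustration of the two packings (Python/numpy, made-up values): a TIFF carries a full RGB tuple per picture element, while a RAW mosaic stores one value per photosite and leaves the reconstruction of the missing colours to demosaicing later.

```python
# Toy illustration of the two packings (made-up 2x2 image).
import numpy as np

# TIFF-style: every picture element carries a full RGB tuple.
tiff_pixels = np.array([
    [[255, 10, 10], [10, 255, 10]],
    [[10, 10, 255], [128, 128, 128]],
], dtype=np.uint8)              # shape (rows, cols, 3)

# RAW-style: one value per photosite, laid out in an R/G/G/B mosaic; the two
# missing colours per site are reconstructed later (demosaicing).
raw_mosaic = np.array([
    [255, 10],                  # R  G
    [10, 128],                  # G  B
], dtype=np.uint16)             # shape (rows, cols), one channel per site
```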

None of it matters unless we can compare what we see to a reference. Without a reference, “accuracy” is just a bunch of characters.

Hey,

I was thinking about what the core question inside all your questions is :slight_smile: , and I guess it is the following:

Is what I see or capture with any electronic device correct, and what happens to the (correct) colours when they are captured through a negative? Is that right?

The first question is fairly simple, I guess. An object with a colour emits (or reflects) light with a specific spectral distribution, which your eyes and brain interpret as a colour. You now want your device to capture that colour in the same way, but you have some limitations such as colour space. To get it right you should make a profile for your camera with a target, because the target tells your camera exactly what it is seeing. If you then look at it on an sRGB device it may still not be the truth, but I guess that is only a problem for really extreme colours. All right… question 1 checked.
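
At its core, “profiling with a target” boils down to something like the sketch below: fit a matrix that maps the camera’s raw patch values onto the patches’ known reference values. Real profiling is more involved (device-independent spaces, non-linearity, more than a plain 3x3 matrix), and the data here is simulated rather than from any real target, so treat it only as an illustration of the idea.

```python
# Toy profiling sketch: fit a 3x3 matrix from target patches (simulated data).
import numpy as np

rng = np.random.default_rng(0)
num_patches = 24                                            # e.g. a 24-patch target
reference_rgb = rng.uniform(0.05, 0.95, (num_patches, 3))   # "known" patch values (made up)

# Simulate what the camera "recorded" via an invented linear mixing matrix.
true_mix = np.array([[0.90, 0.08, 0.02],
                     [0.05, 0.90, 0.05],
                     [0.02, 0.10, 0.88]])
camera_rgb = reference_rgb @ true_mix.T

# Least-squares fit of a profile matrix M so that camera_rgb @ M ~ reference_rgb.
M, *_ = np.linalg.lstsq(camera_rgb, reference_rgb, rcond=None)
print(np.allclose(camera_rgb @ M, reference_rgb))           # True for this linear toy case
```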

For the second question it is probably like this: now you take your film camera and capture the scene mentioned above. The film picks up the light, and it does so in a non-linear and certainly not truthful way, which is exactly what is intended: you want the specific colour reproduction of a particular film. How the film reacts to the spectrum can be found in the data sheet under Spectral Sensitivity Curves; for Kodak Gold, for instance, you find it here: Datasheet Kodak Gold. All right… that means every film captures and burns a given red into the negative differently.

What is important for scanning is probably the following: you have skewed colours on your negative, but they are still colours within the visible spectrum of your camera. And as long as they are within the capturable colour space, and the camera is profiled, you should get exactly what’s burned onto the negative. If you want to know whether your inversion matches the specifications of the film in question, you need to take some measurements. My guess is that the process should look like this: you measure the actual colour and use the spectral sensitivity to calculate where that colour is shifted to by the film in question. Then you invert your image and measure again where the colour ends up after the conversion. The problem is that the film is non-linear and the transfer functions probably look different for different lighting situations… and therefore the output is usually unpredictable. Kodak could tell you exactly what the calculations look like.
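
Here is a schematic version of that measurement idea, just to show the shape of the calculation. The sensitivity curves and the reflected spectrum are crude Gaussian placeholders, not Kodak’s data, and the non-linear characteristic curves are left out entirely, which is exactly the part that makes the real prediction hard.

```python
# Schematic prediction of how a film "sees" a colour: weight the light's
# spectrum with the film's spectral sensitivity curves. All curves are
# placeholders, not real datasheet values.
import numpy as np

wavelengths = np.arange(400, 701, 10)                    # nm, 10 nm steps

def gaussian(peak, width):
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

# Placeholder sensitivities for the blue-, green- and red-sensitive layers.
sens = np.stack([gaussian(450, 30), gaussian(550, 35), gaussian(640, 40)])

# Placeholder spectrum of the light reflected by the object (a reddish patch).
spectrum = 0.2 + 0.7 * gaussian(620, 50)

# Effective exposure of each layer: sum of spectrum x sensitivity over
# wavelength (a crude Riemann sum; the constant step cancels after normalising).
layer_exposure = (sens * spectrum).sum(axis=1)
layer_exposure /= layer_exposure.max()
print("predicted B/G/R layer response:", layer_exposure)

# In practice you would then push this through the film's (non-linear)
# characteristic curves, invert your scan, and check how far the measured
# colour lands from this prediction -- that distance is your inversion error.
```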

I don’t know. Maybe I’m wrong; that’s just how I think it works.
Greetings

I notice in that Kodak Gold specification that the film gives “High-quality results when scanned for digital output”. I wonder if the marketing department insisted that this should be included, or if the film was in any way optimised for digital output when scanned, and if so what changes were made to its formulation?

It is also optimised for printing on a select bunch of Kodak papers, digital and conventional, and I wonder how different the results would be when printed on Fuji Crystal Archive. Would they be different at all?

It’s always seemed to me that the manufacturers’ intended ‘look’ of a film relates to how it prints using the conventional processes, where the only intervention is the combination of cyan, magenta and yellow filtration plus the exposure; however, I’ve only ever printed in the darkroom with a colour enlarger.

Back in the day when companies like Fuji and Noritsu brought in automatic negative scanning in their processing machines they would have needed to ensure that the output compared favourably with prints made using the conventional method, particularly for professionals, and they clearly did because those machines took over. Once such a machine has been perfected you also enable a skilled operator to process the colour in a certain way, to change the ‘look’, or to make the ‘look’ synonymous with the machine itself.

I’ve never used one of those machines though, it would be great to have some input from someone who has.

Accurate and look are two words to chew on.

Manufacturers designed their films to deliver a brand look when processed according to their recipes and materials. It was about looks, and if someone needed accurate or true rendering, a reference card was included in the photo. Depending on operator skill, reproductions were more or less close to how the objects looked when the photo was taken.

With NLP, the process from snapping an object to a converted image includes a few steps that impart looks, plus the adaptive conversion. All of it is primarily meant for getting into the ballpark rather than delivering accurate, true-to-life results.

Again, all of the above does not stem from access to original documentation but from what I deduce from engineering knowledge and more. I used to chase accuracy too, but came to conclude that it isn’t relevant for what I do in my imaging activities.

As a systems engineer by trade I have one phrase I live by and it has served me well:

“In theory, there is no difference between theory and practice. In practice, there is.”

Long before I entered this rabbit hole I was simply a bit dissatisfied with the colours I was getting when inverting a positive scan of a negative. And I was doing that because I had already tried everything I could to take control of the Nikon Scan “negative” mode. When you scan film as a negative in Nikon Scan you actually get some really great colour. But it is immovably automatic in two senses that really threw off my post processing flow.

  • An auto-exposure that could not be overridden and that would clip both blacks and whites
  • An auto-white balance that was stubbornly frame-by-frame

Honestly, my first battle was trying everything possible to overcome those. But I couldn’t. In positive mode you can turn off everything and fix exposure. But now you have to do your own inversion.

Like many of the people commenting, I found the theory of this seemed pretty straightforward, but I soon discovered that it really wasn’t. It should just be a simple matter of getting the R, G and B from the scan, inverting, maybe some messing with gammas and an appropriate matrix profile, and you’re done. But it just isn’t.
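
For anyone who hasn’t tried it, this is roughly what that “simple theory” looks like written down. The base colour, gamma and matrix here are placeholders; the whole point of my post is that plugging plausible values into a sketch like this still doesn’t give faithful colour from an unprofiled scanner TIFF.

```python
# The "simple theory" inversion, as a minimal sketch. The white point, gamma
# and matrix are placeholders, not a recipe that actually works reliably.
import numpy as np

def naive_invert(scan, base_rgb, gamma=2.2, matrix=np.eye(3)):
    """scan: float array (H, W, 3) in 0..1 of the negative.
    base_rgb: RGB of the unexposed film base (orange mask), sampled from the scan."""
    masked = np.clip(scan / base_rgb, 1e-6, 1.0)    # divide out the orange mask
    positive = 1.0 / masked                         # invert the transmittance
    positive /= positive.max(axis=(0, 1))           # per-channel normalise to 0..1
    positive = positive ** (1.0 / gamma)            # rough gamma/tone adjustment
    return np.clip(positive @ matrix.T, 0.0, 1.0)   # then an "appropriate" matrix

# e.g. on a synthetic "scan":
demo = np.random.default_rng(1).uniform(0.1, 0.9, (4, 6, 3))
print(naive_invert(demo, base_rgb=np.array([0.9, 0.6, 0.4])).shape)   # (4, 6, 3)
```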

I soon discovered that depending upon which scanner I used (I also had a Minolta scanner and I had an Epson flatbed for 5x4 film) you just couldn’t get the same result. In my mind you really should be able to. But again, as we see with the camera scanning, this is not the case. There are more subtle things at play.

For me, my quest has been to try to nail this down to my established platform which is my Coolscan 9000, Nikon Scan and its unprofiled TIFF output. Just wondered if anyone else was on the same journey since I’m aware a lot of people camera scan and many many Coolscan users are on Vuescan.

Sam

The first time I recall them doing this was with the “new Portra” films, e.g. the reformulated Portra 400 and Portra 160. I shot a LOT of Portra 400 over the years. From what I could see, the focus on being digital friendly seemed to boil down to making the film easier for home scanners to work with. Noticeably, the film:

  • Is much better at sitting flat in a holder and not curling – strong film curl often ruins scan focus using typical flimsy scanner film holders
  • Has overall a much lower density than some older emulsions – again, some home scanners with weaker sensors struggled with high film density

I don’t know what else they did but those were the two noticeable ones to me. And Kodak Gold 200 seems to follow in those footsteps.

Sam

That’s interesting, in that case I think that the marketing team missed a trick, they should make more of it, particularly with respect to camera scanning. Thanks.

Camera scanning and converting negatives, mostly on Kodak material from 1975 and later, I found that a few films I had used at a certain time and geographical location were really easy to convert (manually), while others were tough nuts to crack (even with NLP). I regularly had my films processed in local stores in order to avoid damage by X-rays while traveling or from climatic conditions. All the films I checked lately were in good condition, but the slides from the 1950s/60s are a different story… yet NLP 3.1b7 was able to bring slides back from something that looked way beyond recoverable.

For me, the strong part of NLP is its capability to cope with almost anything I threw at it. Many times I got conversions that could be left alone; sometimes they needed a few edits, and only a few gave me colours so far off that I converted to B&W.

Just for a fun contrast, Kodak never reformulated Portra 800. It’s the same recipe as before. Now, Portra 800 is definitely one of my favourite emulsions ever for colour. And, full disclosure, I’ve only ever shot it in medium format. But that film is just about the definition of scanner unfriendly. It is on a particularly thin and flimsy base which loves to fight going onto development reels. Once you successfully develop it, the thing whips and curls like a snake whilst drying. Then, it not only exhibits a strong curl once developed, but it is the opposite curl of most other films. Most films curl inward on the emulsion side, meaning that if you use a top glass (and can exert enough pressure with the film holder) you can flatten them. But Portra 800 curls outward on the emulsion side, making it want to bow out of the bottom of the holder where typically there is no glass (unless you make a sandwich – and then you get to fight Newton rings)… I hate it when I’m developing it. I hate it when I’m scanning it. I hate it when I’m buying it (it is super expensive). But when I see the pictures I always love it. It really is beautiful.

Sam

Just out of interest what kind of tank do you use for 120, ‘Paterson’ in plastic with the ball bearings to jiggle the film in, or stainless steel, and if the latter do you use a loader accessory? I don’t shoot 120 enough now to feel comfortable loading it.