I’m currently checking out NLP, but I always run into the same problem:
As soon as I convert a B&W negative with NLP, I get blown-out highlights that are not recoverable (no matter which tool in NLP I try before or after conversion).
There is similar behavior when I use ColorPerfect, but with the tools in ColorPerfect it is at least possible to correct this to a certain extent.
The strange thing is, if I look at the unconverted scan of the negative, there are no blown-out areas at all, and you can see all the fine details in those areas. I get the best result by simply switching the tone curve in Lightroom and then tweaking the contrast, highlights, shadows and so on myself. But this is time-consuming.
Nikon LS 8000
scanned exactly as recommended by you (for the ColorNeg test I used a linear RAW TIF instead of the RAW DNG)
the film used is Ilford FP4
the source picture itself is correctly exposed
Alternatively, I tried the DSLR method, but with the same results.
So why do I always get these blown out highlights? Any ideas?
And a fundamental question: why are you using the Vuescan RAW DNG (which is not linear, as far as I know) and not the linear RAW TIF file in NLP?
If you’re willing to share an image, I can take a look. It’s possible that something has gone wrong with your installation, or you just need to tweak the settings a bit.
Firstly, for black and white, I’d recommend starting off with the “Linear + Gamma” tone profile (select it from the “Tones” dropdown). The default tone profile (standard) adds a fair bit of contrast to images (similar to a lab scanner), and may be why you see blown highlights.
After starting with “Linear + Gamma”, try slightly bumping down the “highs” slider.
If that isn’t enough, the next version (v2.1) will have a control specifically for setting the clipping threshold (which you will be able to set to a negative value to avoid any clipping you may be seeing). As you’ve noted, the original data is not clipped (and everything in Negative Lab Pro works non-destructively against the original RAW data), so you will be able to directly control that point.
Once you find the settings you like, you can hit the “save” button to make them the default, or you can sync these settings to other existing scans you’ve converted by using the “sync settings” feature in batch mode.
In terms of your question on linearity, the short answer is simply that Lightroom works better with 2.2 gamma. It comes down to having only 8 bits of control for some adjustments: in a linear TIFF, the shadow and midtone data is crammed into a narrow slice of those 256 levels, so there just aren’t enough levels left to work with…
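For anyone curious about the numbers behind that, here’s a quick back-of-the-envelope Python sketch (my own illustration, not anything from NLP’s internals): it counts how many of an 8-bit file’s 256 code values fall into each of the five brightest stops of scene light, for linear encoding versus gamma 2.2 encoding.

```python
# Rough illustration: how many of the 256 codes in an 8-bit file
# land in each of the five brightest stops of scene brightness,
# for linear encoding vs. gamma 2.2 encoding.

def codes_per_stop(gamma, stops=5):
    counts = []
    for stop in range(stops):
        hi = 0.5 ** stop        # upper bound of this stop (linear light)
        lo = 0.5 ** (stop + 1)  # lower bound, one stop darker
        # a code value v (0..255) represents linear light (v/255) ** gamma
        counts.append(sum(1 for v in range(256)
                          if lo <= (v / 255) ** gamma < hi))
    return counts

print("linear    :", codes_per_stop(1.0))
print("gamma 2.2 :", codes_per_stop(2.2))
```

With linear encoding, the brightest stop hogs roughly half of all the codes and each stop down gets half as many again, so the shadows are left with only a handful of levels; gamma 2.2 spreads the codes far more evenly across the stops, which is why editing tools behave better on gamma-encoded data.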
Hope that helps!
Thank you for your quick response!
Of course you can have some images:
This is the scan from Vuescan imported into Lightroom. Have a look especially at the headlamp: you can see the structure of the glass even in the dense areas.
This is the result of my manual conversion and tweaking in LR.
And this is a crop of the headlamp section.
This, in contrast, is the result after the NLP conversion (with “Linear + Gamma” already set after conversion, as recommended).
And this is the magnification of the headlamp area in the NLP conversion.
I hope this helps you better understand my problem.
Please look at the images at full magnification.
If you want, you can have the RAW file for your own experiments.