Confused about preparation for NLP processing

Hi,

I am a little confused about how to prepare my color and B/W negative scans for NLP processing.
Some say to scan at gamma 1.0 without an embedded ICC profile. Some say you need to use the TIFF prep utility. Some say there is no need to white-balance color negatives. The guides and forum discussions contradict each other.
What is the right way?

I am scanning 50-year-old color negatives as positives in SilverFast 9 Ai, saved as 48-bit TIFFs with iSRD, since I really need the cleaning for these old negatives. That means I can’t use DNG files.
Please don’t recommend VueScan to me; I find its dust and scratch removal weak.

B/W negatives should also be scanned at 48 bit but without iSRD cleaning; I had scanned mine at 16 bit, which I think was not correct. As I understand it, B/W doesn’t need white balancing before NLP.

I scan everything at gamma 2.2 with the Adobe RGB ICC profile embedded, which is SilverFast’s default.

Afterwards I open the TIFF in Lightroom, white-balance it (for color negatives), crop it, and apply NLP with soft saturation. Then I choose either the default LAB or the Linear Gamma profile inside NLP. When I choose Linear Gamma I get clipped blacks/whites, which I have to correct with the blacks/whites sliders inside NLP.

I turn off NLP sharpening, since I might need to process the resulting TIFF further in Lightroom or Photoshop. Some results are quite grainy compared to normal scans from SilverFast, so I wonder whether some aggressive processing happens during the conversion. I also compared NLP with the Grain2Pixel plugin, and there the grain is the same as in a normal scan.

If I switch to Photoshop for dust and scratch removal with the SRDx plugin, I am leaving the non-destructive workflow, right? How do others handle their dust and scratches? Using the spot-healing tool is quite time-consuming.

In Lightroom I see that external editing in Photoshop is set to ProPhoto RGB and the resolution to 240 ppi. Shouldn’t this be changed to a matching color space and resolution?

I am scanning with a Nikon Coolscan V at 4000 dpi and an Epson V850 at 2400 dpi, both with SilverFast 9.
I noticed during negative-to-positive conversion that the software used (Nikon Scan or SilverFast) can affect the resulting colors, so I decided to use the more up-to-date SilverFast.
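To put numbers on those two resolutions, here is a rough sketch of the pixel arithmetic, assuming a nominal 36 × 24 mm 35mm frame (the actual scan crop varies slightly per scanner):

```python
# Sketch only: pixel dimensions and megapixels for a nominal 36 x 24 mm frame.
MM_PER_INCH = 25.4
FRAME_W_MM, FRAME_H_MM = 36.0, 24.0

def scan_pixels(dpi):
    """Return (width_px, height_px, megapixels) at a given scan resolution."""
    w = round(FRAME_W_MM / MM_PER_INCH * dpi)
    h = round(FRAME_H_MM / MM_PER_INCH * dpi)
    return w, h, round(w * h / 1e6, 1)

print(scan_pixels(4000))  # Coolscan V:  (5669, 3780, 21.4)
print(scan_pixels(2400))  # Epson V850:  (3402, 2268, 7.7)
```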

Lightroom works internally in ProPhoto RGB; shouldn’t this be set to match the Adobe RGB of the scanned input when I start processing with NLP?

I hope someone can shed more light on this, so I don’t waste time redoing my scans. Much appreciated; thanks in advance.

Have you seen the scanning guides? They can be found here and may help with future scans.

As for general processing, I stay within NLP and Lightroom as far as they will take me, sticking to a raw workflow and the initial colour space.

  1. Adjust white balance, colours and tonality in NLP to get as close as possible to what my image should look like
  2. Further adjustments in Lightroom
  3. If necessary, export to 16 bit TIFF positive for further processing in Lr or Ps
  4. Output: sRGB for screen viewing, sharing or printing services; AdobeRGB for “serious” printing (see the sketch below).

I try to avoid step 3. Some negatives need it though.
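For step 4, if I ever need to batch the sRGB output outside Lightroom, something like the following Pillow sketch would do it. This assumes an 8-bit RGB TIFF (Pillow’s 16-bit-per-channel RGB support is limited) and an Adobe RGB .icc file on disk; all file names here are hypothetical:

```python
# Sketch only: convert an Adobe RGB TIFF to an sRGB JPEG for sharing.
from PIL import Image, ImageCms

im = Image.open("final_positive.tif").convert("RGB")  # hypothetical file
src = ImageCms.getOpenProfile("AdobeRGB1998.icc")     # your Adobe RGB profile
dst = ImageCms.createProfile("sRGB")                  # Pillow's built-in sRGB
im_srgb = ImageCms.profileToProfile(im, src, dst, outputMode="RGB")
im_srgb.save("shared_version.jpg", quality=90)
```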

Thanks. I had already been checking the guide’s scanning section for TIFF scanning with SilverFast. That’s what confuses me when compared with some discussions here.

The guide says SilverFast can scan 48 bit to DNG, but this only applies to 48-bit HDR RAW (SilverFast’s own raw format), not when you want to use iSRD. So the guide should say normal 48 bit, not HDR, and TIFF is my only option in SilverFast.

In another discussion it was said to turn ICC embedding off, but this is not mentioned in the guide, or at least I have not seen it in the section I was checking.

As I understand it, TIFF prep is not necessary when I scan at gamma 2.2, and my input color space is still Adobe RGB. White balancing is supposedly not needed for the TIFF, but I do get better colors when I white-balance… so that confuses me again.
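On the gamma point, my understanding of what the prep step’s re-encoding amounts to numerically, as a sketch (assuming the numpy and tifffile packages; file names are hypothetical):

```python
# Sketch only: re-encode a linear (gamma 1.0) 16-bit TIFF to gamma 2.2.
import numpy as np
import tifffile

linear = tifffile.imread("scan_gamma10.tif").astype(np.float64) / 65535.0
encoded = np.power(linear, 1.0 / 2.2)  # apply the gamma 2.2 encoding curve
tifffile.imwrite("scan_gamma22.tif",
                 (encoded * 65535.0 + 0.5).astype(np.uint16))
```

If the scanner software already writes gamma 2.2, that curve has effectively been applied at scan time, which would be why the prep step is redundant in that case.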

After converting with NLP and doing some post-processing in Lightroom, if you go back to the first step, right before NLP, are the changes you made in Lightroom after the previous NLP conversion discarded? I think the history is still kept in Lightroom.

Since I am scanning family negatives, I don’t think I actually need the non-destructive workflow after NLP, and I will do the post-processing in Photoshop. The chance that I keep modifying an image after a final export is low. I will still keep the original positive that I feed into NLP, though.

I did some more experiments over the last few days and am finally getting great results. Super! Way better than me trying to get the colors right manually every time. Just scan iSRD TIFFs, no white balance, gamma 2.2, embedded ICC; then in NLP choose LAB Standard, Neutral, and fix some brightness/contrast/blacks/whites or adjust the mids. :)

I think Lightroom may be showing a low-quality preview; maybe that’s why I keep seeing so much grain? When I export a test at the full 4000 dpi it looks good. For sharing I will lower it to 150/300 dpi JPEGs and use sRGB, I think.
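Worth noting: the dpi tag in an exported file is only metadata unless you also resample; for sharing, what matters is the pixel dimensions. Print-size arithmetic, using the nominal 4000 dpi pixel counts from above:

```python
# Sketch only: maximum print size without upsampling, for given pixel counts.
WIDTH_PX, HEIGHT_PX = 5669, 3780  # nominal 36 x 24 mm frame at 4000 dpi

def print_size_inches(ppi):
    """Largest print (width, height) in inches at the given ppi."""
    return round(WIDTH_PX / ppi, 1), round(HEIGHT_PX / ppi, 1)

print(print_size_inches(300))  # (18.9, 12.6) - close-viewing print quality
print(print_size_inches(150))  # (37.8, 25.2) - fine when viewed from a distance
```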

Now I hope to quickly finish rescanning my whole collection and start thinking about how to organize it in Lightroom and how to back it up. A pity I sold my Nikon Coolscan 5000 earlier; now I have to do it all with the Coolscan V… darn.

Since the Lightroom catalog is just a database, it will presumably get slow as the volume grows. What is the best strategy? Is it advisable to make a catalog per film roll?

From a backup/restore perspective, should I keep the catalog in the same directory as my working images? I hope Lightroom is not storing absolute drive paths… I must be able to restore it under another drive letter in the future if I ever get a new PC. For now I am copying everything to my NAS.
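For the NAS copy, a simple mirrored folder copy covers both the catalog and the images, as long as Lightroom is closed while it runs. A minimal sketch in Python, with hypothetical paths and the catalog assumed to sit next to the image folders:

```python
# Sketch only: mirror a catalog + images folder to a NAS share.
import shutil
from pathlib import Path

SRC = Path(r"D:\Scans")             # hypothetical: catalog + image folders
DST = Path(r"\\NAS\backup\Scans")   # hypothetical NAS target

# dirs_exist_ok=True (Python 3.8+) allows re-running into an existing target;
# skip Lightroom's lock/journal sidecar files if any are present.
shutil.copytree(
    SRC, DST, dirs_exist_ok=True,
    ignore=shutil.ignore_patterns("*.lock", "*.lrcat-wal", "*.lrcat-shm"),
)
```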

You don’t need to worry much about your Lightroom catalog size. And one catalog per roll is overkill to the millionth degree! My catalog has over 35,000 images and I don’t have any slowdown. Other people have hundreds of thousands of images in one catalog. The bottom line is that catalog size isn’t a big issue.
