Hi guys, I’m trying out scanning negatives at home with NLP.
Here’s my setup:
- Camera: Sony A6400 with Takumar 50 f/1.4 + macro tubes
- Light source: Nokia 8.1 (LCD screen) with Screen Flashlight app
- Film: Fujicolor Superia X-Tra 400
The problem is, no matter how much I tweak the settings in NLP, I still can’t match the look of the scan I got from the lab (which used a Noritsu scanner), especially the contrast. Are there any features like Negafix from Epson where I can choose different film profiles?
Here’s my DSLR scan (lab scan in the comments):
Is it normal for the converted NLP scan to look like that? Can I tweak the settings to match the lab scan and save it for other photos?
And here’s the scan I got from the lab (I can’t post 2 images at a time)
Crop away the film border in Lightroom before converting, and then remove the crop once you get what you like. NLP analyzes the content of the image for calculating its adjustments and your film border and sprocket holes are throwing it off. Note how those sprocket holes are pure black, so NLP thinks the image already has a great black point, not knowing that in reality the actual photo is still really washed out.
NLP has a setting to automatically ignore a certain % of all 4 borders so you don’t have to crop, but given how much of your digital capture is non-image data along the top and bottom, I’d recommend just applying a custom crop first and then removing it after NLP has done its job.
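As a rough illustration of why the border throws off the analysis (a pure-Python sketch with made-up numbers, not NLP’s actual algorithm): a pure-black border drags the detected black point down to 0 even when the photo itself contains no true blacks.

```python
import random

random.seed(0)
# Hypothetical washed-out capture: the photo's pixels span ~60-199,
# so its shadows still need a big push toward true black.
photo = [random.randint(60, 199) for _ in range(15000)]

# The film border and sprocket holes digitize as pure black (0).
framed = photo + [0] * 4000

# A content-based converter that picks its black point from the darkest
# pixels it sees gets fooled by the border:
black_with_border = min(framed)  # 0 -> "black point already looks fine"
black_cropped = min(photo)       # ~60 -> shadows are actually washed out
print(black_with_border, black_cropped)
```

Cropping first means the converter only ever sees the ~60–199 range and stretches it properly.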
Also make sure you use the Lightroom white balance picker to white balance off an unexposed section of film before cropping and converting.
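For intuition on what that white-balance step does (my own simplified sketch with hypothetical values, not Lightroom’s internal math): sampling the unexposed film base tells the converter the color of the orange mask, which can then be scaled out so the base reads as neutral grey.

```python
# Hypothetical RGB of an unexposed patch of the orange film base.
base = (230, 160, 110)

def neutralize(pixel, base):
    """Scale each channel so the sampled film base becomes neutral grey."""
    return tuple(min(255, round(c * max(base) / b)) for c, b in zip(pixel, base))

# Once balanced, the base itself maps to a neutral tone.
print(neutralize(base, base))  # (230, 230, 230)
```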
Oh, I actually did use the WB picker and cropped the border before converting, and it still returns the same result. This is a different photo.
Cool, sounds like you’re on the right track. But your screenshot shows you using NLP without cropping the border, so it’s hard to tell what your workflow is right now. Can you describe your steps in more detail?
What do you mean by “a different photo”? You posted two examples: 1) a screenshot of NLP producing a washed-out result, and 2) an example lab scan of the same photo that you’re trying to emulate. It’s hard to comment on a different photo that you haven’t shared here.
Thanks for the advice! I edited my post and replaced the screenshot with the actual photo in my workflow.
Great. Can you post a screenshot of NLP’s convert tab?
And have you tried dragging NLP’s darks and/or blacks sliders to the left, or adjusting NLP’s contrast slider?
Your updated screenshot with LR’s histogram shows there are tones pushed all the way to the left in the image, and also decent highlights. The lab scan just added a lot of contrast. You may need to do the same manually, by crushing the blacks and maybe lowering the brightness a little.
I don’t personally try to emulate a scan in my development, so I may be at my limit here. It’ll be interesting to see what others say.
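If it helps to see what those slider moves do conceptually, here’s a rough levels-style sketch of “crushing the blacks” (my own illustration, not NLP’s actual tone math): raise the black point so washed-out shadows map to true black, and the remaining tonal range stretches, which adds contrast.

```python
def crush_blacks(value, black_point, white_point=255):
    """Remap a tone: values at or below black_point go to 0,
    and the remaining range is stretched back up to 255."""
    scaled = (value - black_point) / (white_point - black_point)
    return round(max(0.0, min(1.0, scaled)) * 255)

# A washed-out shadow tone (60) becomes a true black once the
# black point is raised to 60; a midtone gets stretched up.
print(crush_blacks(60, 60))    # 0
print(crush_blacks(150, 60))   # 118
```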
Selecting other settings from the “Tones” dropdown might do the trick.
Left image: Standard, right image: Linear + Gamma
BTW: I don’t try to emulate, I try to make an image that I like.
Thank you guys for your comments!
After a whole weekend of messing around with NLP’s convert tab, I finally got an image that I like, using the lab scan as a reference plus some editing.
I personally enjoy the out-of-the-box experience, which is the whole reason I love shooting film - no editing needed, the result is just the film’s characteristics.
Maybe I haven’t shot enough film to be uncomfortable with the lab scan? I like DIY, and I’m learning to develop and scan my film at home, which is why I tried NLP in the first place.
Hi! You’re not that far off… I’d try “linear + gamma” tone profile, and “autocolor - warming”. Then from there, adjust the brightness to match, contrast to match, and tweak the strength and hue of the autocolor - warming. You may also need to tweak the colors in the “highs” and “shadows” tabs slightly.
If you want to email or PM me the raw, I can show you settings that will match.
This is only partly true… yes, it is true that different films have different inherent properties… but the result you get from the lab = film characteristics + scanner characteristics + human operator choices.
That said, I’m always working on NLP to improve the baseline conversions to be closer to the “no editing needed” experience!
Creator of Negative Lab Pro