Need Some Tips to Match My Reference Scan

Hi everyone! I’ve been experimenting with lots of scans and using a “reference” lab scan that I’m trying to match. I feel it’s one of the best scans I’ve ever gotten from a lab, and an image I really like.

I’d really like to figure this out before I embark on my entire backlog of film for some serious projects and festival/show submissions. I want to feel confident in my technique and not second guess or feel like I could’ve gotten a better scan from the lab. So I could really use some tips from those of you that know the software and settings inside out, maybe @nate and others?

I recently upgraded to NLP 2.1.2 and LR 9.2 - and the results now are much closer than my best ones previously with NLP1.1/LR 7.5, but it’s still not quite there.

If anyone can provide some tips on things/settings I can try to do differently in NLP and in Lightroom menus to get closer to my reference scan, that would really help me out.

Here’s my best so far, I need some tips to match #2 to #1 :

1. Original lab scan with Noritsu QSS-3411

2. My Scan with NLP: Nikon D750 + Micro Nikkor 60mm, Digitaliza 120 holder, LED light table. I made the scan 1250px on the long side just so it’s the same size as the lab scan I want to copy.

I’m listing all my NLP settings below. What should I try to get it closer to the lab scan? The lab scan looks a bit sharper and clearer, for lack of a better word, and the exposure seems a bit different …

NLP Settings:
Source [Digital Camera]
Color Model [Noritsu]
Pre-Saturation [3 - Default]
Border Buffer [5%]
Tones [Shadow Hard]
Brightness [-3]
Contrast [+6]
Lights [+0]
Darks [+8]
Whites [+2]
Blacks [+6]
[Soft Lows checked]
Film Color [Kodak] - my negative is Portra 400 and “Kodak” looked closest from the defaults
Mids Color Sliders [C+7 M-3 Y-5]
Sharpen [Lab]

LR Settings:
Lightroom Sharpening [Amount 100 | Radius 1.0 | Detail 0 | Masking 0]
Lightroom Export Menu Sharpening [NONE]
Lightroom Export Resizing [Long Edge 1255px]

Especially for sharpening, maybe I need to do something differently in LR as well to get the same type of output as my reference scan?

I should also mention I’ve pored over both the text guide and the video guide on the NLP site, and also Nate’s older YouTube video “DSLR Film Scanning Guide + RAW Lightroom Editing”, so I feel like I’m not missing any advice from those resources. The YouTube tip to flip the Digitaliza for better light distribution improved my scans a bit.

Thanks in advance for any tips!!!

Andriy Mishchenko
Montreal, Canada

Hi Andriy, welcome to the forum.

I gave it a quick try, keeping in mind that working on a JPEG is different from working from the converted RAW. You could use a 16-bit TIFF after conversion for better wiggle room. Here’s what I get and how:

  1. Pick white balance from concrete post at the right edge
  2. Reduce blue saturation (you could also change the blue tone)
  3. Reduce dynamics
  4. Change Clarity and Structure
  5. Adjust WB just a bit, you could go stronger with tint

My goal here is not to faithfully reproduce your lab scan, but to show how you can manage the colour of your image. I’ve also changed the tone curves a bit, but this would not work easily with a converted image. You could also try the “Calibration” sliders. As for sharpening: hold the Option key (I’m on a Mac) while you move the sharpening sliders. Lightroom shows a grayscale preview while Option is pressed, which makes it easier to see how contours and details are balanced.


Thanks, @Digitizer ! That’s really detailed and really helpful, I will try to re-create that ASAP.

Re: sharpness, is there a better way to sharpen for downsizing than the LR settings I’ve posted above? I think that’s the key difference between my first two images. My scan looks good in the original size, but once downsized to 1255px wide (from 4200px wide) it’s not as sharp as the lab’s version. It looks like whatever the lab’s doing at the smaller size is better, and I’d like to know especially for posting online and to Instagram…

I almost never sharpen an image until right before the end. This means that I’d downsize first and sharpen the result if necessary. It’s an additional step - and full control by me.

That makes sense. What do you downsize with before that final sharpen? Lightroom/Photoshop and what algorithm if PS (bilinear etc)? Something I never got 100% the hang of …

And for that final sharpen after downsize, what’s your tool or plugin (USM, Smart Sharpen, Lightroom, etc)? Just trying to get to a similar level the lab scan had - which wasn’t anything crazy over sharpened, but makes my version look soft …

I do it all in Lightroom. And I’m not overly fond of digital sharpening of images originating from film. Not for “purity” reasons, but in order to preserve the film look.

The gray balance of your scan is definitely off, in the blue/green direction. In the sample images, look at the utility pole at the right: in the Noritsu scan it looks fairly neutral, while in your scan it has a color cast. It might be interesting for you to measure the RGB values in Photoshop and see how they differ.
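If you'd rather do that measurement numerically than with the Photoshop eyedropper, here's a minimal sketch using Pillow. The idea is the same: average a patch that should be neutral (like the utility pole) and see how far each channel sits from the mean. The file, crop coordinates, and function name are all placeholders; the demo uses a synthetic bluish-gray patch instead of a real scan.

```python
from PIL import Image

def patch_cast(img, box):
    """Average R, G, B over a crop box and report each channel's offset
    from the patch's overall mean. Positive = channel is too strong."""
    # Resizing the crop to 1x1 averages all its pixels in one step.
    r, g, b = img.crop(box).convert("RGB").resize((1, 1)).getpixel((0, 0))
    mean = (r + g + b) / 3
    return {"R": r - mean, "G": g - mean, "B": b - mean}

# Demo: a synthetic patch with a blue cast, standing in for the pole.
demo = Image.new("RGB", (10, 10), (120, 128, 140))
print(patch_cast(demo, (0, 0, 10, 10)))  # B comes out positive, R negative
```

A truly neutral patch would report all three offsets near zero; a positive B and negative R like the demo matches the blue/green cast described above.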

If I were doing this, I would try to get the gray area as neutral as possible using the controls in NLP, for the reason Digitizer mentioned: you have less latitude for correction after the image has been converted.

You might even want to try making a standard negative: photograph something with a known neutral (a hand holding a photographic “gray card” is useful and easy to set up) on your favorite film, then work with that to find the settings that give the most neutral result. This procedure (calibrating on a standard negative) is probably how your lab sets up its Noritsu scanner. Good news: once you have these settings, you’ll be able to use them as a starting point for other negatives made on the same film under similar lighting conditions.

(Obscure lore: you’ll find it’s impossible to get perfect settings for both the gray card and the skin tone, because of an annoying film quirk called “curve crossover.” It has been driving labs crazy since the Dawn of Kodacolor! You’ll have to decide on a picture-by-picture basis whether the skin tones or the neutrals are more important, and correct accordingly.)

PS — I think the Noritsu scan looks over-sharpened; yours is better…


Hi @andriy,

If you can provide the original RAW file from your D750 scan, I can show the NLP settings to match the reference scan.



Hi @nate , sorry for the late reply. And thanks once again for your amazing software. I think I speak for everyone when I say it’s been really life-changing for my analog photography.

Anyway, here’s the original NEF format RAW file from the D750:

Just FYI, I followed your older guide on YouTube where you flip over the Digitaliza holder, so the image is backwards; you’ll need to flip it horizontally for the right orientation.

The easiest way to match the reference scan is to have all images converted by the reference scanner :wink:

Other than that, here’s the usual approach I take when I convert a film stock I haven’t converted before. It helps me find a starting point.

Create physical copies, import them into Lightroom, create virtual copies, and treat them with NLP using different settings. You can make out the procedure by reading the file names: the number after “Kopie” corresponds to the saturation setting used for conversion. The bottom line shows a manual conversion and your reference scan. Out-of-the-box conversions don’t have the greenish cast of your reference scan and would therefore need some additional post-conversion tweaks using the sliders on the second tab of NLP.

Testing the second-tab settings systematically goes beyond what I’m ready to do here, but you might want to see what you can get. Take the image you like best as the starting point for the next iteration. It’s best to devise a systematic approach for that step so you end up with a conversion preset you can store and test on different images. That preset will probably work with images from the same film, but might not work with images on other films, for whatever reason.

Systematic testing doesn’t take a lot of effort, even if it’s fairly boring, and it helps you understand what NLP can do and how. That said, I wouldn’t test only on ho-hum images and risk that the preset fails on my wow images. Play and test with images that are worth showing, publishing, or printing…

Note: I took the WB from the tree in the background. The film base is too bright for Lightroom to white-balance from.

Here’s my 30-second match attempt…

A few notes:

  • This is using v2.2, which will be out soon, so the settings would be different for v2.1…
  • In LR, after conversion, I set “texture” to +23… this can help get the micro-contrast closer to the scan
  • The conversion settings were “Color Model: Basic” and “PreSat: 5”

Hope that helps!

Thanks!! That really helps. What did you use the dropper on for custom WB - the orange film mask? Or something in the photo?

And when is 2.2 out? Any cool new features? Eager to try this out in exactly the same way with the same NEF once it’s ready for release.

Also @nate any input on the sharpening questions above? It’s a little off topic but I’m sure other people wonder that when comparing to lab scans (especially lower res ones like I did)

In this case, I just used “Auto-Warm” for the White Balance setting, and then adjusted slightly to match.

Stay tuned :slight_smile:

It just depends on your tastes and the final medium. For posting at lower resolution (like Instagram, for instance), you just need to be careful not to over-sharpen the original, and then use some form of output sharpening at the final file resolution… Lightroom has built-in output sharpening in the export module, but I prefer to use something called an “unsharp mask” on the final size… you can do this in Photoshop, or you can get a plugin called “LR/Mogrify 2” that includes an unsharp mask you can set up in Lightroom’s export module.

Thanks! Which one do you think would get me closest to the reference lab scan at the top? So far I haven’t been able to get there, especially with downsizing in the mix. Granted, it may be a bit over-sharpened, but I’d still like to know how to get there, and then tone it down from there …

Export to the final size, then open in Photoshop. (You can get the same results with LR/Mogrify, but Photoshop makes it easier to learn the right settings because you can see and adjust the changes in real time.)

In Photoshop, go to Filter > Sharpen > Unsharp Mask

These are the settings that appear to me to match the sharpening on the reference:

Thank you, Nate! That really, really helps. I used to be something of a micro-sharpening pro about a decade ago, but my whole scanning workflow and equipment have changed (no more drum scanner), and the algorithms have probably changed a bit too.