Where are the presets?

I just installed Lightroom Classic and Negative Lab Pro and I have two beginner’s questions.

  1. I am unsure how to use the camera profiles that have been installed. All I can see in the Develop module is a Negative Lab Profile. Are there supposed to be profiles of the camera, lens or film with which the negatives were created?

  2. Reading the metadata database. In the Library module, I have pulled down the NLP metadata and it seems that, except for a few pull-down menus (film format, etc.), most of the data is to be entered by the user. Is this correct?

Overall, it seems that the tutorials and videos are aimed at folks who are already up to speed on this stuff. I’ve been an amateur photographer for 40 years, but am quite puzzled about some of these basics.

  1. Negative Lab Pro applies profiles as needed, based on the settings on the “Convert” tab of the tool.
    When you are in Lightroom’s “Develop” module, invoke Negative Lab Pro, and change settings such as the Color Model, you can see the color profile change, although no camera-body-specific details are shown. Whether camera-specific settings are used is a question best answered by @nate.

  2. NLP metadata does indeed have to be entered by the user. Entries other than the pull-downs can be critical: depending on what you enter, NLP may or may not export images (to TIFF, for example). I therefore decided not to fill in these fields and to use Lightroom’s native fields instead, until I can get more information about which entries don’t throw off NLP’s export function (on NLP’s “Edit” tab).

  3. Changing settings on the “Convert” tab will produce slightly different results. From top to bottom: Color Models (Basic/Frontier/Noritsu); from left to right: the negative (white-balanced on the film border and cropped) and selected saturation levels, from low to high.

Thanks, Digitizer!

Since I am about to embark on a large project (9,000 negatives from 35 years of film shooting), it’s important to know before starting whether there are some hidden settings I don’t know about; e.g., would NLP do a better job if it knew that a particular roll was shot on a Minolta XD11 using Kodacolor-II? I don’t want to discover, many rolls in, that I could have gotten better results with a simple (but unknown to me) click.

I assume the conversions you show differ because of the specifics of the different scanners being emulated. It would be interesting to find out whether they would still differ if the same scanner emulation were used but the frame’s metadata indicated a different camera.

@Ami, a scan of a silver halide negative doesn’t know what camera and lens it was taken with, nor any specifics about how the film was developed.

The negatives I work with now date from anywhere between the 1970s and the early 2000s. I find negatives that are easy to convert, even manually, and I find negatives that I spend a lot of time adjusting.

The series I presented in my earlier post was meant to illustrate how results depend on NLP’s settings (converting a “Kodak Safety Film 9014”) and to find out which setting I liked most on this film from 1978. Whether these settings will work on later films remains to be seen, though.

I found that NLP is fairly tolerant of the exposure of camera-scanned negatives, provided the exposure is not pushed too far. ETTR (expose to the right) can help with noisy camera sensors, but again, don’t overdo it.

Check these out