I have tried NLP numerous times but the default colour I get is so bad I might as well do it manually in Photoshop.
I’m scanning with a Canon 6D raised above an iPad or iPhone. WB from film border, crop, then convert. I’ve tried all the pre-saturation settings. This is the best I can get and to my eye it looks terrible.
No worries, and thanks for the file. It is difficult. The settings I normally use, and all variations of white balance etc., produced images similar to the one you posted. After a round of trial-and-error attempts, I ended up with the following:
Comments: The histogram I got from your file is wider than the ones I usually get from my old negatives. I also tried to invert manually and got a decent copy, but why DIY when we have Negative Lab Pro! The screenshots show the settings I used, starting with a low-saturation basic conversion followed by some changes. The last image shows further tweaks I made to an exported TIFF. I like the skin tones, but the overall image is too red. I also tried to upload the sidecar, but the site does not accept .xmp files, so I’ve simply added the file’s content below this section.
You have enough dust on there to actually throw off the conversion. NLP will tolerate a little bit of dust during conversion, but if there is enough, it will think that the dust is a part of the film and throw off the analysis. You should use an air blower or anti-static brush across the film prior to “scanning.” You can also crop in to an area that does not have prominent dust, and is more or less representative of the tones in the pictures, which is what I have done in my conversion.
Use a dedicated macro lens. I’m not sure what lens you are using, but in the metadata of the file, it does not show a model, and Lightroom does not automatically find a lens correction profile. You will get much better results if you use a dedicated macro lens.
If the contrast is too high, try starting with the “linear” tone profile.
Also, images with a lot of specular highlights are more difficult to convert, as by default NLP will try to retain detail in the highlights. You can crop these out as well before the conversion.
Note that I cropped in to an area that did not have dust or specular highlights in it prior to converting. My pre-conversion settings were the “standard” color model with pre-saturation: 3. Also note that in this case I left the white balance at daylight in Lightroom (5500K) prior to conversion; in this case, it appears to do a bit better without white balance correction beforehand. I’d be curious to know which iPhone or iPad you were using for illumination, as well as what film you were scanning.
@nate, can you give us a few hints on what amount of dust will throw off NLP and what we have to watch out for when we crop highlights off? Should we try to get all colors within the crop borders? I suppose a very tight crop that eliminates some colors will give us a result that is off too. Maybe these hints could become part of the manual?
I did another test with your images and one of my negatives, comparing backlight from a Durst M605c enlarger, a Kaiser Plano light table, and my iPad. The resulting images look similar. The original shot was taken in 1978 on Kodak Safety Film 6014, which translates to Kodacolor II according to this page.
The colours I get from negatives from around that time correspond to the ones I see on the prints I had made back then. They don’t compare to what we’re used to today: images taken by digital cameras are most often far too saturated. Colour negatives did not provide the saturation we see today, except maybe the Fujifilm negatives, shot with a slight overexposure…
Now, what is great colour? It is mostly a matter of personal preference, and I often get what I want from Negative Lab Pro. Some of its results call for further processing, which I do on 16-bit TIFFs; in those cases NLP’s output is an intermediate result, something I can live with (not all of my shots are winners).
Do you have a print that was made from the original negative? Does it show the colours you like? Are they your reference?
Just did a quick convert and edit of your iPad scan example, using the latest beta of v2.2… there’s certainly a lot of ways to adjust the color balance / tones to your liking, but I think this is a pretty good starting point, and took about 10 seconds of adjustment, after which you could use “sync scene” to bring these assumptions to your other similar images.
From the metadata, it shows that your aperture was set to f/1.4 during capture. Not sure if this is a mistake in the metadata, but you should be at around f/8 during capture to maximize sharpness across the entire frame.
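As a rough illustration of why the aperture matters so much here (not from the thread itself, just standard close-up depth-of-field arithmetic with assumed numbers: roughly 1:1 magnification and a 0.030 mm full-frame circle of confusion), the in-focus zone at f/1.4 is thinner than the typical curl of a film strip:

```python
# Approximate total depth of field for close-up (macro) work:
#   DoF ≈ 2 * N * c * (1 + m) / m^2
# where N = f-number, c = circle of confusion (mm), m = magnification.
# Assumed values: full-frame c = 0.030 mm, 1:1 reproduction (m = 1).

def dof_mm(f_number: float, m: float = 1.0, c: float = 0.030) -> float:
    """Total depth of field in millimetres (thin-lens approximation)."""
    return 2 * f_number * c * (1 + m) / m**2

for n in (1.4, 8.0):
    print(f"f/{n}: {dof_mm(n):.2f} mm depth of field")
# f/1.4 gives about 0.17 mm; f/8 gives about 0.96 mm.
```

Even a slightly curled negative can easily bow by more than 0.17 mm, so at f/1.4 parts of the frame will inevitably fall out of focus; stopping down to around f/8 buys roughly a millimetre of latitude before diffraction becomes the limiting factor.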
There are still some weird patterns going on here that are particularly noticeable in the skin. I suspect they are coming from the iPad, and I’d recommend placing some type of diffuser between the iPad and the frame.
Finally, the subject himself has quite a few blemishes, which makes it difficult to use this as a good example of skin tone.