Hey everyone, I’ve looked through the help section of the website and the forum and couldn’t quite find anything that could help me, so here I am.
I’m DSLR-scanning a large number of “old” (1990s) color negatives. I obviously batch-convert them to save time, but some of them come out OK and some come out with very high contrast and a very strong purple or green color cast.
My setup: Sony A7R2 + Sigma 105mm f/2.8 macro lens + Cinestill CS-lite as a light source. Shooting RAW files.
It works well with black and white negatives but I’m going crazy with the color ones… I can’t edit them one by one, I really don’t have the time.
When I scan, I always look at the histogram and keep it as close to the bright side as possible without being overexposed.
I simply batch-convert once, but something weird happens: I batch-convert, say, a hundred scans; when it’s done I click “Apply” and nothing happens. I have to open NLP again and click “Apply” a second time to see the changes. Maybe that’s the source of the problem, but how do I make it work without launching NLP twice? It never did this with B&W negatives.
On my iMac with LrC 13.4, I tried a few things, but let’s first mention the following:
The negative has low contrast and the histogram is all cramped at the right side.
NLP’s conversions can pick up almost any cast or strange colours with thin negatives.
Colour variety is small: just green and pink (I exaggerate to make it more obvious).
NLP can deliver more balanced output with greater colour variety.
Thank you very much for looking into it!
In the meantime I also tried to lower the exposure when scanning, in order to have more color and contrast in the negative instead of having the histogram cramped toward the bright side. I’ve also tried using the “blue light” on the CS-Lite (it has 3 settings: “orange light”, mostly used for slide film; white light; and “blue light”, mostly used for negatives; until now I was only using the white light).
And it seems to work a LOT better; I only get a few shots with odd colors in large batches, so that’s fine by me!
But I’m still having the problem where, once the batch conversion is finished, nothing changes (the negatives still show up as unconverted). I’m wasting quite some time with this, as I don’t really understand what’s going on; it wasn’t doing that when I was batch-converting hundreds of B&W negatives at once. Do you have any idea?
I see that in Lightroom when I convert in the Develop module: neither the preview nor the thumbnails in the filmstrip change until I hit Apply. In the Library module, things behave differently, and I do my conversions there for that reason, and in most cases anyway.
Changing settings or pressing buttons in rapid succession seems to cause occasional issues too. When I have NLP convert bigger batches, I usually let it sit for a second or two before proceeding. This feels strange in the days of high-powered processors, multithreading, etc., but it helps to prevent hiccups. The conversion results aren’t affected by this, though.
I always convert in the Library module and I always wait a few seconds at each step to let my computer do its thing, but it really still doesn’t work the first time. With the last batch it didn’t work at all; I had to unconvert all the negatives (approx. 200) and reconvert them in smaller batches…
Sorry to insist, but I really need help with this new issue; I’ve just lost more than 30 minutes trying to convert a batch of color negatives and… it just doesn’t work. I give my computer ample time to calculate, I’ve tried deactivating GPU acceleration, and nothing works; my negatives stay negatives…
edit: well, I’ve tried again and again and NLP just doesn’t work anymore. I really don’t understand what’s going on here. I’ve done this with tens of thousands of B&W negatives and never had this issue.
Now, a few days/weeks/… later, colour negatives don’t convert.
From personal experience I can say that NLP 3.0.2 does NOT care whether it converts B&W or colour negatives…under my conditions, which are Lightroom Classic 13.4 and macOS 14.7.5.
Assuming that your NLP 3.0.2 (on whatever it is installed) worked then but not now, something must have changed (smartass assumption, I know)… e.g. the versions of Lr and/or the OS, something “killed” parts of NLP… etc.
Do you remember any change you or your “gear” made between then and now?
Please state the exact versions of your gear (LrC, Mac/Win, etc.).
I don’t remember changing or updating anything except Lightroom, maybe an hour ago, thinking that it would help with my NLP issue. Unfortunately, it didn’t change anything.
Something new, though: it seems like NLP works when I convert batches of 50 negatives instead of the whole thing (more than 200 at once).
For now I’ll keep doing that but it’s still a little annoying so I’m still interested in a potential solution!
The exercise Digitizer went through is useful and provided an improved result, and his recommendations make sense. I’ll just add some generic and specific suggestions.
To obtain correct colour, the required procedure before converting is to do two things: (1) In Lightroom, using the White Balance eyedropper in the Basic panel of Develop, click on an unexposed part of the same film the picture is on, say the bar between two negatives or the blank bit at the start of the roll, then sync the White Balance settings that click produced (the “Temp” and “Tint” numbers in the Basic panel of Develop) across all the negatives to be converted from the same roll. (2) Make sure that each photo is cropped so that nothing surrounding the image remains. Finally, forget about ETTR; it could induce you to over-expose the negatives. You need only make sure that the exposure is not clipping the brightest or darkest end of the histogram, which I assume you check in Sony Remote if that is what you use for managing the captures. The Sony Remote histogram is not as accurate as RawDigger’s because of how it is constructed, but it serves this purpose.
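For anyone curious what that white-balance click amounts to numerically, here is a minimal sketch of the idea outside Lightroom. It assumes a 16-bit RGB TIFF scan and the numpy/tifffile Python packages; the file names, the border-patch coordinates and the helper names are placeholders of mine, not anything NLP or Lightroom actually exposes.

```python
# Minimal sketch: sample the unexposed film border and scale R/G/B so
# the border comes out neutral, which is conceptually what the
# white-balance eyedropper click does. Assumes a 16-bit RGB TIFF scan.
import numpy as np
import tifffile

def border_wb_gains(path, box):
    """Return per-channel gains that neutralise the film-base colour.

    box = (top, bottom, left, right) of a patch on the unexposed
    border between two frames.
    """
    img = tifffile.imread(path).astype(np.float64)   # H x W x 3
    top, bottom, left, right = box
    patch = img[top:bottom, left:right, :]
    means = patch.reshape(-1, 3).mean(axis=0)        # mean R, G, B of the border
    # Scale each channel so the border patch ends up with R = G = B.
    return means[1] / means

def apply_gains(path, out_path, gains):
    img = tifffile.imread(path).astype(np.float64)
    balanced = np.clip(img * gains, 0, 65535).astype(np.uint16)
    tifffile.imwrite(out_path, balanced)

if __name__ == "__main__":
    # Placeholder file name and coordinates; adjust to a real scan.
    gains = border_wb_gains("scan_0001.tif", box=(100, 300, 0, 80))
    # Re-use the same gains for every scan from the same roll,
    # mirroring "sync the White Balance settings" in Lightroom.
    apply_gains("scan_0001.tif", "scan_0001_wb.tif", gains)
```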
Now for the light source used when capturing the negatives: you are correct to use the white-light setting, but it will help you judge the camera exposures from that histogram better if you know the colour temperature of your light source in degrees K and set a custom WB in the camera equal, or about equal, to that temperature. This doesn’t affect the raw data, but it could affect the histogram rendition and how you view the negative before capturing it.
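If you want a rough, camera-independent way to check how close a raw scan is to clipping instead of trusting the in-camera histogram, something like the sketch below works. It uses the rawpy and numpy Python packages; the margin, file name and function name are my own placeholders, and it is only an approximation of what RawDigger reports, not a substitute for it.

```python
# Rough sanity check of a raw scan's exposure: what fraction of raw
# pixels sits near the sensor's white level (blown) or black level?
import numpy as np
import rawpy

def clipping_stats(path, margin=0.01):
    """Return (fraction near white clip, fraction near black) for a raw file."""
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible.astype(np.float64)
        black = float(np.mean(raw.black_level_per_channel))
        white = float(raw.white_level)
        # Normalise the raw values to 0..1 after subtracting the black level.
        norm = (data - black) / (white - black)
        clipped_high = np.mean(norm >= 1.0 - margin)
        clipped_low = np.mean(norm <= margin)
        return clipped_high, clipped_low

if __name__ == "__main__":
    # Placeholder file name; point it at one of the A7R2 ARW captures.
    high, low = clipping_stats("scan_0001.ARW")
    print(f"near white clip: {high:.2%}, near black: {low:.2%}")
```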
Turning to this image: as Digitizer says, it is very weak. This means you either under-exposed the original photo when you made it back in the film days, or you over-exposed the negative in your DSLR capture. If you have a bit of time and interest in playing along with me, let us do a little experiment. Could you please repost access to this same photo, but this time include the unexposed bar between it and the next image on the strip; I would like to try WB and conversion here and see what happens. Also, if you don’t mind, please re-photograph this negative using a lower exposure so it comes out as a darker unconverted negative image, and post that too, including the unexposed bar separating this photo from the next one on the roll. Once I have those samples, I shall work on them here and report back.
I can’t help with the batch-conversion feature not converting. I’ve never batch-converted more than 4 negatives at a time, and it has never failed; this is something for Nate, I think. I can only offer the possibility that you may be sending too much at a time for conversion and your computer’s OS or the application is being overwhelmed somehow, but this is a guess. Perhaps try batch-converting a small number and see whether it works, to rule that thought in or out.
I completely agree with what esteemed members @Digitizer and @Mark_Segal said. I think the key to success is not to rush things. If batch-converting 50 images works and 200 does not, stick with 50 and be safe. It’s worth remembering something that is not advertised but well known in the community: behind the scenes NLP runs the ImageMagick utility, which apparently converts the raw files into linear 16-bit TIFFs. The computing resources and disk space required can very quickly overwhelm any system, regardless of the number of processors. Processing 200 TIFF images can easily make the drive and CPU run really hot. On top of that there is Lightroom’s own multithreaded processing, which still wants to reflect the underlying changes in the interface. I can easily imagine that there is resource starvation, and one can only wonder why the system isn’t crashing. Anyway, I guess the bottom line is to find out by trial and error the batch size NLP can handle on this particular computer system and stick with it. If you are doing this on a commercial basis, you may start thinking about scaling things horizontally by employing more computers, but that’s something we can leave out of this discussion. That’s my 2 cents.
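To make the “smaller batches” workaround concrete, here is a generic Python sketch of splitting a folder of scans into fixed-size groups with a pause between them. This is not NLP’s own code; convert_batch() is a hypothetical stand-in for whatever you trigger per chunk (the actual conversion still happens inside Lightroom/NLP), and the chunk size and pause are just the values discussed above.

```python
# Illustration of processing a large set of scans in small, fixed-size
# chunks with a pause in between, so the system is never asked to
# handle hundreds of conversions at once.
import time
from pathlib import Path

CHUNK_SIZE = 50        # the batch size that proved reliable above
PAUSE_SECONDS = 5      # let the disk/CPU settle between chunks

def chunks(items, size):
    """Yield successive fixed-size slices of a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def convert_batch(files):
    # Hypothetical placeholder: in practice you would select these
    # files in Lightroom and run NLP's batch conversion on them.
    print(f"Converting {len(files)} scans: {files[0].name} .. {files[-1].name}")

if __name__ == "__main__":
    scans = sorted(Path("scans").glob("*.ARW"))   # placeholder folder
    for batch in chunks(scans, CHUNK_SIZE):
        convert_batch(batch)
        time.sleep(PAUSE_SECONDS)   # breathing room, per the advice above
```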
I had tested for such limits and found them to be around 100 images on my 2019 5k iMac.
The limit does not seem to be hardwired (numbers around 127 or 255 would be giveaways) but is probably different on different hardware.
Frankly speaking, I wonder why one would do even 100 at a time. As far as I’m concerned, each roll of film has certain unique properties due to its development, and even more so due to stock differences (unless of course someone went out and shot 10 rolls of the same stock with the same emulsion # and then had them processed at the same lab as a batch). So unless all frames in the batch are of the same “origin”, with the same stock and development process, I would not process them as one batch; it just doesn’t make sense and defies the idea of batch processing.
scan 27 rolls of Kodak Gold 400 shot in 1978 and developed in the same lab
import these captures in Lightroom
convert these captures in one go
From a workflow point of view, doing this makes sense. Whether it makes sense, from a “technical” point of view, to use roll analysis on the whole lot remains to be seen or thought about: even though all the films are the same kind and had the same development, some of them were used under different climatic conditions, exposed at the beginning or at the end of the trip, etc.
I do have such a set of films and I’ll check out different procedures when ready.
Beware: Don’t hold your breath!
Interesting. For a case like this, where the emulsion and development are the same, it certainly might make sense to process all shots exactly the same way. So here is what I would do for the fun of it: select a perfect negative shot on a good sunny day, when you can expect the light’s colour temperature to be optimal. I also presume that all 27 rolls were scanned with the same lens/camera, with the same backing light, and with the same exposure or bracketed around the same base exposure. So, by using one reference shot for roll analysis and doing as many batches as NLP needs to work, I would get all 27 rolls inverted. The end result may be interesting, because all the shots that were taken under different conditions and different colour temperatures would have those features preserved, just as they are preserved when you shoot reversal film, which by virtue of the process is “white balanced” to one colour temperature only. Good luck with this endeavour!
Hi Mark, thank you very much for the insight, but the first issue was already solved: I was overexposing when scanning, and now that I don’t do that anymore, it works perfectly fine.