Been scanning with some variety of Coolscan film scanner, Vuescan, and LR/NLP for quite some time now. Sometimes I get great results, sometimes I get terrible results. For me it seemed part and parcel of the whole thing.
Anyway, I’m calling into question my workflow as it involves the Lock Exposure feature in Vuescan. Currently, my process is to use the crop box to select a bit of the film borders (edit: as in, I select only a portion of the film border) then press Lock Exposure. I assume this then sets ideal exposure based on the film base and the film base alone. I use that value to scan the entire roll. The values, in my experience, usually sit somewhere between 1.7 and 2.1.
I have noticed that the negatives I import into LR tend to be very, very washed out. Almost always I have to turn the “Brightness” slider down significantly, sometimes even all the way. Now, I can usually arrive at a decent output, but when I come upon a photo that is giving me grief, I begin to question the validity of my scanning routine.
Can someone please comment on scanning best practices as far as they relate to the Lock Exposure setting in Vuescan?
That is absolutely the wrong way of going about this. It has nothing to do with Vuescan, it’s your whole procedure regardless of which scanner software you use. Reading the film border is only useful to NLP for neutralizing the orange mask. Then you crop the border away, because to make successful conversions, NLP needs to evaluate the overall exposure of the whole photo absent any non-photographic material such as the borders. Once NLP can evaluate the whole image, you will get better results provided that you have set the scanner software to a gamma and brightness that allows this to happen correctly.
Nate Johnson gave this advice in a post on the FB ‘Negative Lab Pro Users’ forum (7th July 2023). There’s a danger that I might be quoting him out of context, but I think it’s fair to say that it was in answer to a similar issue of inconsistency when converting in NLP:
“I notice that you have different exposure settings in your digitizations (I’m guessing your camera was on Aperture Priority). This will hurt consistency between shots. What I do now for my setup is I set my exposure manually while only the light source is turned on, and I set it so that the entire light table is clipped to white. This should keep the film base just below clipping for all your shots.”
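As a sanity check on that rule of thumb, a little density arithmetic shows why clipping the bare light table leaves the film base below clipping (the base density figure here is an assumption for illustration, not something from the thread):

```python
def film_base_level(light_table_level=1.0, base_density=0.25):
    """Relative signal level of the film base when exposure is set so
    that the bare light table just reaches clipping (level 1.0).

    Film attenuates light by 10**(-density); a typical C-41 base plus
    orange mask has a density somewhere around 0.2-0.3 (assumed here).
    """
    return light_table_level * 10 ** (-base_density)

# With the light table at clipping, an assumed base density of 0.25
# puts the film base at roughly 56% of full scale, below clipping.
level = film_base_level()
```

So the clear film base ends up safely inside the recordable range for every frame, which is the consistency Nate is after.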
Maybe there is a mixup of scanning with a scanner and Vuescan vs. scanning with a digital camera?
The NLP user guide says something about Vuescan settings and you might want to check that text.
Nate proposes using the same exposure for all images on a roll of film, whether the scan is by scanner or digicam, which will probably give the best consistency of rendering … but on my films, negatives vary from thin to dense on a single roll, and I usually expose the “scanning” captures individually and to the right, using a UniWB setting on my camera.
When I come across difficult negatives, I usually bracket and use the most suitable file. I’ve tried with combined HDR files too, but found that the effort is only worth it in a small number of cases.
Not sure I follow. The process I’ve described is meant to get an “ideal” exposure value when scanning a roll. It is forcing VueScan to “expose to the right” using digital camera terms - setting ideal exposure to the brightest possible part of the negative and thus ensuring that all other values are within acceptable range and in the most significant bits. We are talking about finding a reference point from which the scanner’s ideal gain is derived.
I’m not trying to argue your point, just trying to clarify the purpose of doing this so that you might explain to me what you mean.
Reading the film border is only useful to NLP for neutralizing the orange mask.
This sounds like you’re talking about color inversion, and is why I am confused.
It would seem that the process I described (which I swear I read Nate advise someplace in the past) is similar in philosophy to what you’ve quoted. I’m just not sure whether it is ideal or not…
Thanks for the reply. I did return to the guide, mainly to see if that is where I got this idea from. Nate says to select the entire frame including some of the film border, and then lock exposure on that. I think I will try that for my next rolls and see if the results are more ideal.
(1) Before you open NLP, neutralize the orange mask in LR by clicking on the empty border.
(2) Crop out all the empty border in LR.
(3) Open NLP and click on Convert Negative.
Mark, I’m sorry if I sound like a jerk here, but you do realize we are talking about scanning right? That this is the Film Scanner subforum? The title of the post says VueScan. The question/discussion is concerning best practices before the negative images are even imported into LR. I appreciate the help nevertheless.
We seem to be talking past each other, so let’s revert to first principles. NLP is an LR plugin primarily designed for converting digitized negatives to positives in a digital workflow and then editing the converted media. Whether you digitize the film using a camera or a scanner plus Vuescan, SilverFast or whatever doesn’t matter, provided that in the case of a scanner you set the scanner software so that it does not convert from negative to positive (in which case there would be no need for NLP). So use your scanner with Vuescan set to positive, import the result into LR and follow the steps I listed. Results should be good if your exposure in the scanner were satisfactory.
The entire point of my post is to discuss this:
Results should be good if your exposure in the scanner were satisfactory.
The process of achieving satisfactory, or ideal, exposure in the scanner. In particular, utilizing the “Lock Exposure” feature within VueScan. I thought this was painfully obvious. Your suggestions regarding how to deal with the negatives once scanned and inside LR are outside the scope of the discussion.
I went back to your opening post in this thread and to the replies from others. I also reviewed the instructions in the Vuescan on-line manual. You say in your opening post that you are getting very poor results using the workflow you describe there, which indeed follows what is in the Vuescan guide. If that’s the case, forget that procedure. Vuescan has histograms. So find an exposure in the scanner that normalizes the histogram for each photo, scan it with those settings, import to LR and then follow the steps I mentioned. Normalizing the histogram just means keeping the image data (excluding the border) within the B&W clipping points, and not having all the values bunched up too far to the left or the right. Try something like that and let’s see what it does for you.
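To make “normalized” concrete, here is a hypothetical check in Python (the thresholds are illustrative assumptions, not anything VueScan or LR actually exposes):

```python
import numpy as np

def histogram_normalized(pixels, full_scale=65535, clip_frac=0.001):
    """Check that image data (border already cropped away) sits within
    the black/white clipping points and isn't bunched up at one end.

    pixels:    array of raw scan values for the photo area only
    clip_frac: tolerated fraction of clipped pixels at either extreme
    """
    p = np.asarray(pixels, dtype=np.float64)
    clipped_low = np.mean(p <= 0)
    clipped_high = np.mean(p >= full_scale)
    # The middle 98% of values should span a reasonable part of the
    # range rather than being crammed into one corner of the histogram.
    lo, hi = np.percentile(p, [1, 99])
    spread_ok = (hi - lo) > 0.2 * full_scale
    return bool(clipped_low <= clip_frac
                and clipped_high <= clip_frac
                and spread_ok)
```

A scan whose values all sit in the bottom few percent of the range would fail the spread check, which is exactly the “bunched up to the left” case described above.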
@Mark_Segal, if I read you correctly, you prefer to scan each negative with its own exposure settings, instead of using the same settings for all images on a roll as proposed in the user guide?
@radialMelt , following the user guide is certainly good practice, but if it doesn’t provide a sound basis for good conversions, I propose that you try to scan the same film strip a few times with different locked exposures as well as without exposure lock. You can then find the setting that seems best for the tested film strip. Testing takes some time, but I find that it helps to build confidence in the process and tools used, independently of whether a scanner or a camera is used to capture the negatives.
The OP said that the procedure in the guide was not working for him, so I suggested an alternative approach which should work better for him because it assures a normalized histogram for each photo. I don’t use scanners any longer. I use a Sony a7R4 and the Sony remote software for communication between the camera and the computer. It has a histogram and I use it as guidance in setting the exposure. Quite often I am using multiple exposures at the capture stage and HDR blends of the negatives in LR to assure a satisfactorily digitized negative before opening the blended result in NLP for conversion and further processing.
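Conceptually, an HDR blend of bracketed captures of one negative works something like the sketch below (illustrative only; LR’s actual merge algorithm is not documented here, and the weighting scheme is an assumption):

```python
import numpy as np

def merge_brackets(frames, exposures):
    """Conceptual HDR merge of bracketed captures of one negative.

    frames:    list of arrays with values normalized to [0, 1]
    exposures: relative exposure factor used for each frame

    Each frame is scaled back to a common radiometric scale and
    averaged with weights that favour well-exposed pixels.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for f, e in zip(frames, exposures):
        f = np.asarray(f, dtype=np.float64)
        # Hat-shaped weight: trust mid-tones, distrust values near
        # black or near clipping.
        w = 1.0 - np.abs(2.0 * f - 1.0)
        acc += w * (f / e)
        wsum += w
    return acc / np.maximum(wsum, 1e-9)
```

The payoff is that thin (shadow) areas come from the longer exposure and dense areas from the shorter one, so the blended negative has usable detail everywhere before NLP ever sees it.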
@Mark_Segal , I understand that you capture with individual exposures as a result of your experience. I do that too.
@radialMelt , locking exposure while capturing the negatives on a roll of film will probably provide more consistent results, e.g. for studio work with controlled lighting and subject. But on my films, the negatives vary by a degree that makes a locked exposure look like a silly idea. Nevertheless, I found NLP to be fairly tolerant to exposure and to provide good starting points for all but the most demanding (thin negatives taken in low, mixed light) negatives.
@nate , can you say a few words about locking exposure if negatives vary a lot on a film roll and how v3’s roll analysis will do in such a case?
Similar in philosophy, but the advice from Nate that I quoted means locking the exposure off the light source so that it is (just) clipping. If you put in a negative and set the exposure from the film rebate, that will arrive at a different exposure, since in that case it is trying to make the rebate an average grey. Admittedly that advice was to someone shooting RAW on a camera, but I would think the same should apply if scanning to RAW, or maybe to 16-bit TIFF, with a scanner using Vuescan. It is certainly worth trying a range of different exposures to see what gives the best results for an individual negative, and then basing your lock exposure on that. Nate does seem pretty clear that he suggests keeping the exposure consistent across a roll, but quite clearly others get better results by changing the exposure for individual frames.
Hopefully Nate will clarify on this thread as he will know what kind of file NLP works best with.
I decided to go back and run some comparisons myself to see what, if any, differences existed in final output between the two approaches (locking exposure to the film base, or locking exposure to an entire frame including some film base).
Ironically enough, the end result is basically the same!
Just out of interest, are you scanning in RAW mode? I don’t think you’ve said.
Yes, I am. VueScan RAW DNG.
Thanks. I don’t have any explanation, I’m afraid, but I just wanted to be sure of the method.