I am bracketing my negative captures in 1/3-stop increments from 0 to +1, trying to establish a personal baseline for better consistency. Some say +0.7 is ideal, while others mention that +1 is ideal.
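To be concrete about the bracket, that works out to four frames per negative. A quick sketch of the shutter speeds involved (the 1/60 s base speed is purely an assumed example, not my actual metering):

```python
# Illustrative sketch of a 1/3-stop bracket from 0 to +1.
# The 1/60 s base shutter speed is an assumed example value.
base = 1 / 60  # metered base exposure, in seconds

for third in range(4):  # 0, +1/3, +2/3, +1 stops
    stops = third / 3
    shutter = base * 2 ** stops  # each full stop doubles the exposure time
    print(f"{stops:+.2f} stops -> {shutter * 1000:.1f} ms")
```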
No matter how I bracket/expose my negative, when I convert in NLP they all convert to the same exposure, ignoring my bracketing and making it impossible to establish a baseline for comparison. I know the bracketing is there because I can see it in the negs (see pic).
Is there an “auto exposure” button (or equivalent) that I’m missing?
On the face of it that seems to be a good thing: it shows that your camera sensor has a good dynamic range and so can capture the full range of tones from colour negative at a range of different exposures (the same is unlikely to be true for colour transparencies). It might actually have been useful to show the histogram for each of these; I’d guess that slight differences in the processed images are likely to show up first in the highlights and shadows of the final inverted result (so the shadows and highlights of your digital capture). It’s a very good test to do. You should choose an exposure in the middle of the acceptable range and then use that exposure for the rest of that roll (Nate recommends using the same exposure if possible, though there will be exceptions to every rule).
Whatever exposure you arrive at, you could compare it with the exposure needed for the unexposed part of the film in the rebate (not the sprocket holes) to start clipping. This then gives you a rule of thumb for any film; for example, it might be 2 stops back from your ‘clipping’ exposure.
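The arithmetic of that rule of thumb can be sketched like this (the 1/15 s clipping speed is an assumed example value, not a recommendation):

```python
# Sketch of the "stops back from clipping" rule of thumb, with assumed numbers:
# find the shutter speed at which the clear film rebate just starts to clip,
# then shoot the actual negatives a fixed number of stops back from it.
clipping_speed = 1 / 15   # s; hypothetical speed where the rebate begins to clip
stops_back = 2            # the "2 stops back" example from the post

capture_speed = clipping_speed / 2 ** stops_back  # less exposure = shorter time
print(f"capture at {capture_speed:.4f} s (about 1/{round(1 / capture_speed)})")
```

So if the rebate clips at 1/15 s, two stops back puts the capture at 1/60 s.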
Thank you. If NLP automatically determines the best exposure, then the goal is simply to get digital exposures that are “in the ballpark” and let NLP decide the exposure. But if we, the users, would like to have that control, is there any way to do so manually? In other words, is there a way to turn off the “auto-exposure” feature of NLP?
At the end of the day you can use this to your advantage: in effect, NLP is trying its best with every image you put through it. I’d recommend copying a very thin, under-exposed negative and one that has been over-exposed, bracketing fairly widely for your digital exposures. You should then see from the results NLP delivers where your digital exposures begin to show up deficiencies for each type of ‘problem’ negative.