NEF, DNG & HDR - workflow?

I sometimes make 3 camera scans with different exposures to include a very wide tone range & make an HDR. What are people’s opinions on the order of work? Convert NEFs with NLP first & then HDR? Convert NEFs to a single HDR DNG & then to NLP? Or some other workflow?

Thanks
David Hoffman

I occasionally bracket exposures and combine the RAW files in Lightroom BEFORE converting with NLP.

  • Combining converted virtual copies (VCs) results in an unconverted image
  • Combining exported TIFFs worked, but the results were not what I wanted.

I do the same basic process as Digitizer. Worked well when I needed it. I don’t do it very often.

If I were to bracket negatives, it would only be to see if any of them converted better than the others. I don’t believe color negatives need or benefit from HDR at all, unless the film stock is something exotic. HDR can certainly be helpful for slides, which have a much higher dynamic range than negatives. But in that case NLP is not needed, at least not in its classic mode.

Same here. Negatives tend to have a low dynamic range when camera scanned. I also bracketed scans to find the ETTR sweet spot between reducing noise and compressing shadows. Results have not been conclusive though: some negatives convert closer to what I like when overexposed, others don’t. There are too many variables in (old) analog photography and processing. BUT: NLP is fairly tolerant of exposure, as far as my tests showed.

As far as my tests go, there is no need to create HDRs from camera-scanned negatives. Nevertheless, some HDRs show different colours and tonalities, and one might prefer the results of plain conversions over the ones done from HDRs. But again, NLP can handle both cases and provide useful starting points.

All of the above is based on conversions made with the “Basic” colour model. Using “Frontier” or “Noritsu” models simply creates different starting points … and whether we prefer one over the other is a matter of taste imo.

As others say, col neg doesn’t have the dynamic range to need HDR. My work has been journalistic & not infrequently I’ve been working in very high brightness range scenes using only the existing light. Some transparencies need HDR but that’s not relevant to NLP workflows.

BW negs can have detail in the shadows (thin areas) that is barely visible and was never printable in the darkroom, but it can be recorded in a scan, and for best results the exposure will put those tones a little way in from the right end of the histogram. Other areas, a sunlit white shirt or an exterior, may be well overexposed in the same neg. Film that’s been pushed may also have extremely dense highlight areas. Unless the scan is given a long exposure, so little light is transmitted through these areas that noise is increased & detail is lost. HDR is the only way I know to make a file that captures the full range.
My question was whether to convert the out-of-camera files in NLP & then combine those positives into a single HDR file, or to combine them into a single HDR negative & then convert that into a positive with NLP.

David Hoffman
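
An aside on the merge mechanics, whichever side of NLP it ends up on: an HDR merge is conceptually a weighted average of the brackets after they have been scaled back to a common exposure in linear space. A rough Python sketch of that textbook idea follows - it is not what ACR’s Merge to HDR actually runs internally, and the tent weighting and the clipping threshold are illustrative assumptions.

```python
import numpy as np

def merge_brackets(frames, ev_offsets, sat=0.98):
    """Merge bracketed linear captures into one HDR frame.

    frames     : list of float arrays scaled to [0, 1], linear sensor data
    ev_offsets : exposure offsets in EV relative to the base frame,
                 e.g. [-2, 0, +2]
    sat        : level above which a pixel is treated as clipped
    """
    acc = np.zeros_like(frames[0])
    wsum = np.zeros_like(frames[0])
    for frame, ev in zip(frames, ev_offsets):
        # Scale each bracket back to the base exposure so that all
        # frames line up radiometrically.
        radiance = frame / (2.0 ** ev)
        # Tent weight: trust mid-tones, down-weight values near the
        # clipping point or buried in shadow noise.
        w = np.clip(1.0 - np.abs(frame * 2.0 - 1.0), 1e-4, None)
        w = np.where(frame >= sat, 1e-4, w)  # effectively drop clipped pixels
        acc += w * radiance
        wsum += w
    return acc / wsum
```

The detail that matters for the ordering question: a merge like this assumes its inputs differ only by exposure, which holds for the raw brackets but not for files that have already been tone-mapped individually.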

Given how NLP works - by employing heuristics to scale luminosities appropriately for any given image - it does not make sense to make an HDR out of NLP-processed images. Basically, if you pay attention, NLP makes all the images in a bracketed sequence look almost the same (if they are not processed as a batch), so there is nothing left to HDR. That leaves you with the option to create the HDR before NLP - if you insist. For BW pictures, if you must put so much effort into recovering all the detail, you are probably better off manually inverting the HDR image. Inverting BW is not as involved a process as color, so you can carefully address each range of tones you are interested in without NLP at all - and probably with better results, provided you know what “look” you want to achieve.
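
For anyone tempted by that manual route, here is a minimal numpy sketch of such a BW inversion, assuming a linear grayscale scan; the percentile black/white points and the gamma are hypothetical taste parameters, not anything NLP does:

```python
import numpy as np

def invert_bw(scan, black_pct=0.5, white_pct=99.5, gamma=1.8):
    """Manually invert a linear grayscale camera scan of a BW negative.

    scan : float array of linear sensor values. Dense film areas
           (scene highlights) block the backlight, so they arrive
           here as the darkest pixels.
    """
    pos = scan.max() - scan                          # flip: dense film -> bright tone
    lo = np.percentile(pos, black_pct)               # choose a black point
    hi = np.percentile(pos, white_pct)               # choose a white point
    pos = np.clip((pos - lo) / (hi - lo), 0.0, 1.0)  # stretch to full range
    return pos ** (1.0 / gamma)                      # lift mid-tones for display
```

Each of the three parameters addresses one range of tones, which is exactly the kind of control being described.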

Thanks VladS (I think)
The ‘effort’ of making the HDR is making 3 exposures instead of 1 - about 15 seconds - then selecting the 3 files in ACR & pressing Alt+M. I have very many files with far better highlight & shadow detail as a result. Your work is likely better exposed than my hit & run negs.

David Hoffman

…sounds like you’ve already found best practices yourself.

For an “industrial” way of converting, the HDR approach seems quite efficient.

Here’s my question: How do you set Lightroom’s HDR regarding exposure?

Whether that box is checked or not has some effect on the converted HDR images, e.g. if pre-saturation is set to 5. See the example below.


Upper row: pre-saturation = 1; std. exposure image, HDR (box unchecked), HDR (box checked)
Lower row: pre-saturation = 5; rest as above

The negative has one of the widest histograms in my test collection. It is more than 50 years old and was developed by a small local shop that was very happy to process 27 rolls of 120. From a “natural look” point of view, I prefer the upper row. YMMV.

I currently scan to NEF & process with ACR in P’shop, merging to HDR at that point if needed. I hate LR (probably unfair but I have a long memory) & only use it to import files that I want to use NLP on before exporting them to storage as 16 bpc TIFs.

In ACR I will generally leave all settings at zero/default. Auto would try to even up the histogram, but I want the extremes to be part of the HDR merge. I’ve never done methodical tests on these workflow options - that’s why I was asking here.

And I agree, the top version looks better. Centre pic for me, but these settings aren’t very important to me. I’m looking to end up with a file that contains all the data from the film in a form I can easily work on. I rarely expect to get a file I can send out straight from conversion.

David Hoffman

well, the images are the result of a simple series of virtual copies, and whatever they show is, in many cases, not transferable to other images. Nevertheless, if an image is worth it, systematic conversions can provide a starting point that is probably a lot closer to target. And Lightroom makes such an approach really easy. Time spent on the series is mostly spent waiting for NLP to finish.

Because my camera-scanning set-up uses a Sony a7r4, I use Sony’s Imaging Edge software for tethering the camera and operating the exposures. The app has a histogram that shows the distribution of tone values across the range from black to white, remembering of course that because these are negatives, the highlights in Imaging Edge (right end) are in the final analysis the shadows, and the shadows (left end) the highlights. It uses the color space set in the camera, which for the a7r4 is Adobe RGB (1998). With some photos, tones climb the walls of either the shadows or the highlights, indicating that there may be clipping at my default exposure setting. When this happens, the clipping can be eliminated by making two different exposures of the same negative, so that it is absent on at least one side in each, and then blending them in Lightroom using HDR. The resulting HDR blend shows no clipping in the Lightroom histogram, which is the preferred result.

I know the argument that these histograms are not accurate - one should use Raw Digger or some such to see the true extent of clipping - however, the guidance provided by the processing application’s histograms is good enough to end up with fine results from the HDR in these cases.

When I do this, I blend the negative exposures in Lr, and then open the resulting HDR negative image in NLP for conversion to positive. This works well.
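
As a footnote to the “tones climbing the walls” check described above: it can be formalised by counting how much of the image lands in the extreme histogram bins. A small numpy sketch - the 0.1% tolerance is an arbitrary assumption, and since it reads rendered values rather than raw data it carries the same caveat as the Imaging Edge histogram:

```python
import numpy as np

def clipping_report(img, bins=256, tol=0.001):
    """Flag clipping at either wall of an image histogram.

    img : float array scaled to [0, 1] (a rendered preview, not raw data)
    tol : fraction of pixels allowed in an extreme bin before we call
          that wall clipped (0.1% here, an arbitrary choice)
    """
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    frac = hist / img.size
    return {
        "left_wall":  frac[0] > tol,    # scan shadows = scene highlights
        "right_wall": frac[-1] > tol,   # scan highlights = scene shadows
    }
```

If both walls come back flagged from a single capture, that is precisely the case where two exposures and an HDR blend are called for.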

…setting UniWB will provide a histogram that is as close to raw data as your camera will allow.

Read about UniWB in various threads here or other sites out there.

The WB setting I use in my Sony a7r4 is 5100K, which matches the measured value of the light from the Kaiser Slimlite Plano that I use for illuminating the negatives. It won’t get any better than that for purposes of capturing the colours of the negative and inverting them in NLP.

We’re talking about different things here. Setting whatever white balance in camera does not matter as long as we shoot RAW and follow Nate’s advice to pick up white balance from an unexposed part of the film.

UniWB can be set to make the camera display a histogram that represents sensor data instead of data taken from a processed preview or thumbnail. This helps with exposing to the right without getting into Lightroom compression - or something that looks like it, as far as Lr’s histogram reveals.
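
A toy illustration of the difference: the preview histogram is built after per-channel WB gains (and a tone curve) are applied, so a channel can look clipped even though the sensor data is fine. The gain values below are made up, merely representative of a daylight-ish white balance:

```python
import numpy as np

# Stand-in for linear sensor data that tops out safely below full scale.
raw = np.random.default_rng(0).uniform(0.0, 0.85, size=(100_000, 3))

wb_gains  = np.array([2.0, 1.0, 1.5])   # hypothetical R/G/B multipliers
uni_gains = np.array([1.0, 1.0, 1.0])   # UniWB: sensor data left untouched

def clipped_fraction(data, gains):
    """Fraction of values pushed to or past full scale by the gains."""
    return np.mean(data * gains >= 1.0)

print("normal WB:", clipped_fraction(raw, wb_gains))   # reports false clipping
print("UniWB    :", clipped_fraction(raw, uni_gains))  # matches the raw data: 0.0
```

With unity gains the displayed histogram tracks the sensor values, which is the whole point of the exercise.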

Ya, I know WB doesn’t affect the output of raw files. It does, however, affect the appearance of the histogram that Sony Imaging Edge produces from photographing the negative with the tethered camera, and that in turn influences what one thinks about the need for more than one exposure. From that point on, correct: the WB depends on what one picks up from the unexposed film with the WB tool in Lr before proceeding to NLP for the conversion.

Using a UniWB is technically sensible. If anything, it may reduce the number of occasions on which I make an HDR set, because generally speaking I’ve found that a true raw histogram shows less clipping than a rendered one (comparing a “Raw Digger” histogram with the usual kind in Ps or Lr for the same photo). In that sense, not using UniWB is a kind of insurance policy: the rendered histogram “over-warns” me about clipping risk. Not a big deal.