Raspberry instead of red

Both my conversions and those of other people on the internet prominently render red as a ‘raspberry’ color, very far from reality. I’m attaching an example from Fuji Pro 400H film (first image). I use a Nikon D750 with an iPad Pro as backlight (masked with a black paper insert), NLP 2.1.2, LR Classic 9.1. I know the scene contained red, not this false color. I also know the raw file internally contains the ‘correct’ red because:

  1. other software (Silverfast HDR) obtains the correct red from the same source file (however, SF has its own share of quirks, so I would prefer to use NLP),
  2. when I export the NLP image to a TIFF positive and then edit the red Hue/Saturation in the HSL panel, I’m able to get it right (see the second image). But this is far from optimal, since the TIFF is 3x the size of the original raw file.

I use the Basic NLP profile and Standard mapping with Pre-Saturation 3 (other values make it worse…). All NLP sliders are set to render the whole image as realistically as possible. There is no combination of NLP settings alone that corrects this tint; I can correct it only via TIFF export.

Is it possible to address this somehow? I bought NLP because it looked promising, but this issue is really troublesome and ruins some photos. The same thing happens with orange autumn leaves.

Really hoping it can be fixed, thanks!

Pure NLP: (screenshot attached)

NLP + TIFF Export edit: (screenshot attached)

I could be off base, but my first suspicion would be the iPad used as a light source. I know the newer OLED model has been cited as a recommended scanning light source, but with a specific color shift like that I’d be curious whether others around here who also scan with an iPad see this as well.

Try a more robust light source. I’ve been using a new Raleno unit (model PLV-S192) and have nothing bad to say about it. And if it doesn’t work for you, you can return it (though honestly, since getting it I’ve been using the Raleno for many other uses, photo and otherwise). Win-win.

Have a look at this long thread if you haven’t already:

I’m waiting for the recommended Nanguang video light to arrive and will try it, but my point is that other software can get the right red with my current light, so it sounds more like an NLP limitation.

Hi @iliks - thanks for sharing the sample. I’m aware of this issue: in some circumstances, bright saturated reds can be rendered with a raspberry hue.

If you’d be willing to email or private message me a link to the raw file, I can take a closer look at what is going on here. This is one of the more extreme examples I’ve seen to date, so I’d be interested to see if I can fix it.

A few things that may help this render better:

  • Try reconverting, but with a less extreme white balance. For instance, if the white balance that Lightroom set using the picker off the border was 2200K, try a few less extreme settings (like 3000K or 4000K), reconvert, and see if it’s improved. Part of the issue is the way that Lightroom handles interpolating between white balance settings, using something called a chromatic adaptation transform (CAT); see the sketch after this list. I’ve found a fix for this with raw scans from Vuescan and Silverfast (which is what the v2.1 profile is for), but I’m still working on improving this for camera scans.
  • If the above doesn’t help, you can also try reconverting at a LOWER pre-saturation amount and making sure the tone profile is one of the “linear” options.
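
(For anyone curious what that CAT step actually does, here’s a minimal sketch of the standard Bradford chromatic adaptation transform. This is the textbook formula, not Lightroom’s or NLP’s actual implementation, and the white point and color values below are rough illustrative numbers, not measurements.)

```python
import numpy as np

# Bradford chromatic adaptation matrix (standard published values)
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def bradford_adapt(xyz, white_src, white_dst):
    """Adapt an XYZ color from one illuminant white point to another."""
    m, m_inv = M_BRADFORD, np.linalg.inv(M_BRADFORD)
    # Convert both white points into the Bradford "cone" response space
    cone_src = m @ np.asarray(white_src, dtype=float)
    cone_dst = m @ np.asarray(white_dst, dtype=float)
    # Scale each cone response by the ratio of destination to source white
    scale = np.diag(cone_dst / cone_src)
    return m_inv @ scale @ m @ np.asarray(xyz, dtype=float)

# Illustrative example: adapt a saturated red from a very warm white point
# (roughly 2200K) to a more neutral one (roughly 4000K). Values are approximate.
warm_white    = np.array([1.22, 1.00, 0.19])
neutral_white = np.array([1.01, 1.00, 0.64])
red_sample    = np.array([0.45, 0.25, 0.05])
print(bradford_adapt(red_sample, warm_white, neutral_white))
```

The adaptation is a per-channel scaling in that cone space, so it moves saturated colors much harder than near-neutral ones, which may be part of why a less extreme Kelvin pick changes how that red lands.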

Thanks!
-Nate

Creator of Negative Lab Pro


Hi Nate! Big thanks for your attention!
Sure, here is the raw file:


I’ll try your suggestions tomorrow, but feel free to play with the raw however you want; hopefully you can find a way to fix it!
Being a software developer myself, I can imagine the problems you face trying to adapt the LR API to the very atypical task of negative conversion! So good luck!

Hi Nate! I’ve also tried your other recommendations about WB and pre-saturation/linear, and unfortunately they don’t make it any better, only worse: the red stays wrong and the other colors become less realistic.

For reference, here’s my XMP file that gives me the best overall colors, just with the raspberry tint on the jacket:

I would like to keep the same colors with just the red fixed.

OK, this is with the Basic color model, with “1” for pre-saturation (white balanced off the film border). Editing settings as shown.

Based on the context of the scene, I think the more magenta rendering of the jacket is probably the most accurate… my wife has a jacket just like that… also you can see reds and oranges in the scene that are rendered properly (like the car tail lights, red shoes, traffic signs, and skin tones).

So I think the biggest issue in the example image is that the colors are going out of gamut and clipping, which is why they are so intense and losing detail and texture. The fix for that is reconverting with a lower pre-saturation.
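
(If you want to see that clipping numerically rather than by eye, a quick sketch like the one below works. The file name is hypothetical, and it assumes a 16-bit RGB TIFF export.)

```python
import numpy as np
import tifffile  # third-party: pip install tifffile

# Hypothetical exported positive; adjust the path and max value for your file.
img = tifffile.imread("positive_export.tif").astype(np.float64)
max_value = 65535.0  # ceiling for a 16-bit TIFF (use 255 for 8-bit)

for name, channel in zip(("R", "G", "B"), np.moveaxis(img[..., :3], -1, 0)):
    clipped = np.mean(channel >= max_value)  # fraction of pixels pinned at the ceiling
    print(f"{name}: {clipped:.2%} of pixels clipped")
```

A red channel sitting at the ceiling over the jacket would confirm what you’re seeing visually: once a channel clips, the texture in that area is gone.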


(The “linear deep” tonal curve used here has become my best friend in the past few scan batches I’ve done.)


Thanks Nate! Indeed, it looks a bit better and the clipping is gone, but I can’t say I like the colors; they’ve lost the atmosphere of a sunny summer evening.
For comparison, here is the Silverfast HDR conversion, which looks more natural to me:


Both the jacket (more red) and the ‘lime’ padding around the pedestrian-crossing signs look correct only in the SF version; in NLP the padding comes out more like orange.

Anyway, I understand you are working within the limits of the controls that LR gives you, while SF works with true precision, so it may not be something you can fix.

It’s easy enough to make it warmer in NLP if that’s what you want… just set to “Auto Color - Warming” and adjust to taste.

Another trick you may find helpful is using the HSL panel in Lightroom against the original RAW file (instead of on the positive copy). To do this, first get your conversion in Negative Lab Pro to something you like, then hit “apply”, and then in Lightroom, click the little circle so you can adjust hue/sat/lum by clicking and dragging in the photo (by using the tool to drag in the photo, you don’t have to worry about what the underlying color in the original negative is). Just be sure to try both saturation and hue adjustments!
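
(To make it a bit more concrete why dragging in the photo saves you from guessing the underlying color: the tool samples the already-converted pixel under the cursor and adjusts the hue band that pixel falls into. Here’s a rough illustrative sketch of that idea - not Lightroom’s actual HSL math, and the color values are made up.)

```python
import colorsys

def targeted_hue_shift(sampled_rgb, pixels, hue_shift_deg, band_width_deg=30):
    """Shift the hue of every pixel whose hue is close to the sampled pixel's hue.

    sampled_rgb: the (r, g, b) under the cursor, values in 0..1.
    pixels: list of (r, g, b) tuples to adjust.
    """
    target_h, _, _ = colorsys.rgb_to_hsv(*sampled_rgb)
    band = band_width_deg / 360.0
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        dist = min(abs(h - target_h), 1.0 - abs(h - target_h))  # distance on the hue circle
        if dist < band:
            h = (h + hue_shift_deg / 360.0) % 1.0  # nudge only pixels in the sampled band
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out

# Sample the raspberry jacket, then nudge that band of hues back toward red.
jacket = (0.85, 0.15, 0.45)                       # made-up raspberry value
scene  = [(0.85, 0.15, 0.45), (0.20, 0.60, 0.30)]
print(targeted_hue_shift(jacket, scene, hue_shift_deg=25))
```

In Lightroom the bands are the fixed HSL channels (red, orange, yellow, and so on) rather than a band derived per pixel, but the principle is the same: the sampled color decides which sliders move.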

Here’s what that looks like spending 5 seconds adjusting the hue/saturation of the jacket.

I would argue that Lightroom offers more power, control and precision than SilverFast (though the current version of Negative Lab Pro still has opportunity for improvement to tap into all that power). True, the biggest limit of Lightroom is that the points I can define on a tone curve are only on a 256x256 grid (8-bit tone curve precision), but with the DNG architecture of Lightroom, there’s an incredible amount of control and precision in the RAW Profiles themselves - down to adjusting a single degree of hue and how it responds in 3 dimensions to changes to the white balance. Future versions of NLP should only get better in this regard. Also with LR Classic, there is the ability to utilize 3D LUTs now in the profiles, so that will be another major boost in control when it gets implemented!
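
(As a quick illustration of that 8-bit limit: the curve’s control points effectively snap to a 256x256 grid before interpolation. A toy sketch below - not Lightroom’s internals, and the control points are made up.)

```python
import numpy as np

def tone_curve_lut(points, grid=256):
    """Build a 1D lookup table from (input, output) control points in 0..1.

    With grid=256 both coordinates are quantized to 8-bit steps, which is the
    precision limit mentioned above; a finer grid would allow subtler curves.
    """
    pts = np.array(points, dtype=float)
    snapped = np.round(pts * (grid - 1)) / (grid - 1)      # snap points to the grid
    x = np.linspace(0.0, 1.0, grid)
    return np.interp(x, snapped[:, 0], snapped[:, 1])      # interpolate between points

# A gentle curve with a lifted toe and protected highlights (values are illustrative).
lut = tone_curve_lut([(0.0, 0.02), (0.25, 0.22), (0.75, 0.80), (1.0, 0.98)])
print(lut[:5])
```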


Thanks for the tips, Nate. That ‘circle’ in the HSL view was new to me; indeed, it sometimes lets you skip exporting to a positive. I think it would be worth mentioning in the guide.

As for NLP vs SF, my experience so far is that yes, NLP allows a lot of tweaks, but SF somehow gets it more correct on the first try, so fewer tweaks are actually needed (note, by the way, that it doesn’t require sampling the film base or setting WB).

But I certainly hope NLP keeps evolving, as it’s still very young!


SF can produce excellent results. I’ve used it for many years. But its interface (though I’ve become incredibly accustomed to it) is admittedly archaic and convoluted, even with post ver.8 improvements. It’s a standalone piece of software with an entirely different workflow from Lightroom, and nearly 5 times the size. That workflow, for my part at any rate, is fine for use with a dedicated scanner; but for digital-camera-scanning, introducing it into the equation is counter-intuitive to me. I’ve moved to NLP for the ratio of results to expediency, and even more for not necessarily needing to create gigantic TIFFs for each and every image I’d like to work with.
Syncing NLP dev settings across an entire roll is far, far, far quicker and more accurate in a folder-global sense for me thus far, and remarkably so. If I need more granularity and control, exporting to positive is still an option, and I can flatten the scan and work within Lightroom natively from there, which is something I’d always finish with nonetheless. I really think it’s all a matter of becoming comfortable within the workflow change and finding the possibilities therein.


I’ve tried reconverting the same file with the new 2.3 profile and indeed it helps; additional editing of the positive is no longer needed. Thank you Nate!

Wonderful! Thank you for your help in all this!

-Nate

@iliks - how did you end up correcting it? Was it just a matter of the newer v2.3 profile being improved, or did it involve some HSL tweaking? I am currently scanning with Silverfast software, and when comparing the DNG file with the TIFF, I notice that the TIFF reds are somewhat pink compared to the reds in the DNG scans. It’s my only gripe with converting from the TIFF format, so I was wondering if I could manually tweak the conversion. I’m finding it hard to get the right red from just the NLP plugin options.

@barney Most of it improved just with the newer profile assignment. Then I improved it further by decreasing the new exposure slider in the NLP window. This let me get the same result I could previously achieve with the old NLP only via TIFF export and selective hue editing there - so no disk space gets eaten by oversized TIFFs.
That said, some frames of mine still show pinkish reds, but at least it’s better than it was. This definitely isn’t down to the light/camera setup, as other software was able to process them without the pink cast. But NLP is often faster at getting an ‘OK’ result, and now this ‘pink’ area has improved too.
