Optimize NLP for RGB light sources?

Digitizer, I don’t think that NLP should care about old negatives. At all. In my view those are two separate steps. NLP’s focus should be quality color inversion: being a perfect “electronic RA4 paper”.

I would argue that restoring faded or shifted color is better done before or after the NLP step. In fact, “auto-color” in Photoshop is probably all you really need at that point.

Actually if I were in your shoes I’d want this too, i.e. to have NLP provide a “reference look”, so you could evaluate if colors have shifted and by how much.

The purpose of a calibration is to get the RAW image into a well-known state. Currently, NLP’s algorithms can make very few assumptions about color because everyone’s light source is different, and camera manufacturers tinker with RAW data trying to make non-inverted colors more “pleasing”. The Adobe Color camera profile tries to normalize across camera models, but it too is guilty of optimizing for “pleasing”, and it does not account for variations in light sources.

In fact, I believe that NLP is actually trying too hard. Having quality camera profiles would allow the algorithm to be dramatically simplified: no need to analyze picture data, no channel clipping, no border cropping; just adjust gamma and gain for each channel to match the paper response. That’s all I need.
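
To make that concrete, here’s a rough Python sketch of the kind of simplified pipeline I have in mind (this is just my sketch, not NLP’s actual code; the film-base and gamma values are placeholders you would get from a calibration shot, not real numbers):

```python
import numpy as np

def invert_negative(linear_rgb, film_base, gammas):
    """Minimal per-channel inversion of a linear RGB capture of a colour negative.

    linear_rgb : float array (H, W, 3), linear sensor values scaled to 0..1
    film_base  : per-channel reading of the unexposed film rebate (orange mask)
    gammas     : per-channel exponents standing in for the paper response
    """
    base = np.asarray(film_base, dtype=float)
    g = np.asarray(gammas, dtype=float)

    # Dividing by the film base removes the orange mask and normalises exposure;
    # the reciprocal (base / value) inverts the negative.
    ratio = base / np.clip(linear_rgb, 1e-6, None)

    # Per-channel gamma stands in for the different contrast of each paper layer.
    positive = ratio ** g

    # Scale into 0..1 for output; no channel clipping, no border analysis.
    return np.clip(positive / positive.max(), 0.0, 1.0)

# Hypothetical usage -- film_base would come from a shot of the empty film rebate
# under the same light, gammas from matching a grey-step target:
#   invert_negative(raw, film_base=[0.78, 0.55, 0.35], gammas=[2.2, 2.3, 2.4])
```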

Well, looking around this forum, I got the impression that many posters do exactly that: care about their old negatives. NLP, in its current state, provides a starting point. Future versions of NLP will do that too. And no matter where these starting points are now or will be, they shorten the path between the scan of the negative and the finished picture, finished in the sense of “what I want” rather than of “what is”.

I’d be perfectly happy with a scientifically precise conversion too.
It would still be a starting point - or the end of the road, whichever serves best.


I did a proof of concept and found that converting my difficult (underexposed, mixed, low-light) negative returned a slightly better result.

This is how I made the backlight image

  • In Photoshop, created a grey image
  • Added noise and changed the tone curve so that the histogram stretched from min to max
  • Exported the image to DxO PhotoLab
  • Desaturated all colours except R, G and B with the HSL tool
  • Exported the image to the iPad

Exposure time was fairly long because of the relative darkness of the iPad and the extra diffusor.
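
For anyone who wants to reproduce the idea without Photoshop and PhotoLab, a rough Python equivalent of the backlight image could look like this. It is not the exact workflow above, just the same principle of pushing noisy grey pixels to pure R, G and B; the size and file name are arbitrary:

```python
import numpy as np
from PIL import Image

# Assumed size; anything that fills the tablet screen will do.
H, W = 1536, 2048
rng = np.random.default_rng(0)

# Start from grey plus noise so the histogram stretches from min to max ...
noisy = np.clip(rng.normal(loc=128, scale=80, size=(H, W)), 0, 255)

# ... then, instead of desaturating everything except R, G and B in an HSL tool,
# assign every pixel to exactly one of the three primaries at that intensity.
channel = rng.integers(0, 3, size=(H, W))            # 0 = R, 1 = G, 2 = B
backlight = np.zeros((H, W, 3), dtype=np.uint8)
for c in range(3):
    mask = channel == c
    backlight[mask, c] = noisy[mask].astype(np.uint8)

# Display this full screen behind the diffusor.
Image.fromarray(backlight).save("rgb_backlight.png")
```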

I’m confused… what is the purpose of all these steps rather than simply using a pure white background image? With a pure “white” image, the iPad simply has every red, green and blue light fully illuminated within each pixel…

In the sample images I posted above, I just opened up Safari and typed in about:blank, which takes you to a blank white page. I took a screenshot of the page, opened it up in Photos, and zoomed in a bit so there was nothing other than pure white.

I understand your confusion and will try to shed some light on my proof of concept. But first, let’s step back to the first few posts in this thread.

@davidS and @DrBormental discuss possibilities to improve NLP by adding a “light source” dimension.

Also, the Frontier SP3000’s light is given as an example in the original post. To me, it looks like this scanner uses three light sources and a sensor without color filters, like the three-pass scanners that seem to have vanished from consumer scanner offerings.

Now, instead of scanning with three passes in the time domain, my approach was to simulate this in the space domain. The noise was used to add shades of reds, greens and blues as well as yellows and cyans; the latter were then removed from the mix by the filters in PhotoLab, which can be adjusted in width and wavelength, as you can see in the screenshot:
[Screenshot: PhotoLab HSL filter settings, 2021-09-21]
The noise and filtering make R, G and B wider than pure primaries would be.

My PoC is therefore positioned in between the “scanning with monochrome” setup and a “classic” high-CRI setup. This test setup is far from ideal and not very practical, but it is one of several ways to try RGB lighting:

  • Get an RGB light source
    (the real thing)
  • Use lighting as described in this thread
    (space domain simulation of an RGB light source)
  • Play a “rainbow movie” (with cut out yellows and cyans) while exposing
    (time domain simulation of an RGB light source)

But having a “pure white” image displayed on the iPad’s RGB screen is already accomplishing this in the space domain.

If you were to put a microscope on the iPad’s screen while displaying a “white” image, you would see separate red, green and blue components illuminated within each pixel, like this…

[Image: close-up of an iPad screen showing the separate red, green and blue sub-pixels]

You can’t adjust the primaries or the wavelength of these Red, Green, Blue components… you can only mix them by adjusting the intensity of each component.

So when you adjust the HSL, you are really just creating metamers using these three fixed RGB components. (For instance, there are no yellow or cyan wavelengths emitted from the iPad’s RGB screen… there is just the perception of yellow and cyan by mixing the existing Red, Green and Blue wavelengths.)

Now, you could play with the pattern of how these red, green and blue components are distributed, and perhaps there is some reason that some pre-defined or random pattern will be more ideal than having all the components on at once (as they are in a “pure white” image). But the tradeoff will be brightness, which is already pretty low on a typical iPad.

TL/DR: iPad screens are just collections of red, green and blue illuminants. A “pure white” image just has each of these red, green and blue illuminants turned to 100%. You cannot change the wavelength of these RGB illuminants. You can only vary their placement and intensity. Based on this, I believe that a pure white image is the ideal way to test the RGB light from an iPad.
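
Here is a small numerical sketch of that argument, with made-up Gaussian curves standing in for the iPad’s real emission spectra: whatever color you request, the emitted light is only ever a weighted sum of the same three fixed R, G and B curves, so a “yellow” patch adds no yellow wavelengths:

```python
import numpy as np

wl = np.arange(380, 731)                      # wavelength axis in nm

def peak(center, width):
    """Crude Gaussian stand-in for one sub-pixel's emission spectrum (assumed)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Assumed primaries; real display spectra are broader and less symmetric.
R, G, B = peak(615, 15), peak(530, 20), peak(460, 12)

def displayed_spectrum(r, g, b):
    """Emitted spectrum for sub-pixel drive levels r, g, b in 0..1."""
    return r * R + g * G + b * B

white  = displayed_spectrum(1.0, 1.0, 1.0)    # "pure white" image: every sub-pixel at 100 %
yellow = displayed_spectrum(1.0, 1.0, 0.0)    # looks yellow to us, but...

for name, spec in [("white", white), ("yellow", yellow)]:
    peaks_nm = wl[np.flatnonzero((spec > np.roll(spec, 1)) & (spec > np.roll(spec, -1)))]
    print(name, peaks_nm)   # white -> [460 530 615], yellow -> [530 615]; nothing near 580 nm
```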

…true, but I can adjust the filter center frequency and width in PhotoLab’s HSL tool. This means that I can make the greens extend towards blue and red to a certain degree, thinking not at the pixel level but at the level of a group of pixels that can, together, appear as yellow, cyan, orange, etc. The diffusor then mixes (de-pixelates) the image to give wider reds, greens and blues than the unaltered r, g and b pixels would produce. They are not monochromatic anyway, but I nevertheless wanted to see what comes of even wider, yet non-overlapping, colours.

I still stand by my statement (one I’ve repeated in many threads) that NLP is fairly tolerant of the kind of lighting used when the negatives are captured. I find that results depend more on the characteristics of the negative and the film type than on the characteristics of (not too crappy) lighting.

@Digitizer, wouldn’t a spectrometer show the same spectral distribution with and without the diffuser?

The diffusor serves to average out the blotchy appearance of the source. Measuring a narrow sample with a spectrometer would not give the same results for the same sample with and without the diffusor, see image below.

@Digitizer I’m only arguing from intuition; I really need to build the spectrometer and see for myself. But I believe that in your example, a spectroscope would show variations in intensity but not in color as you drag a slit over your sample. You would measure varying amounts of red, green and blue, and not find other colors. The other colors are made by our brains; they are not actually there.
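
For what it’s worth, the intuition can be sketched numerically without building the spectrometer (again with made-up Gaussian primaries, not measured iPad spectra): dragging a slit across a repeating R-G-B sub-pixel pattern only changes the mixing weights, never the peak wavelengths:

```python
import numpy as np

wl = np.arange(380, 731)

def peak(center, width):
    # Gaussian stand-in for a sub-pixel emission spectrum (assumed, not measured)
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

primaries = [peak(615, 15), peak(530, 20), peak(460, 12)]       # R, G, B

# A 1-D strip of the screen: repeating R, G, B sub-pixels, all driven to 100 %.
subpixels = np.tile([0, 1, 2], 100)                             # 0 = R, 1 = G, 2 = B

slit = 4                                          # slit width in sub-pixels, deliberately "misaligned"
for start in range(9):                            # drag the slit one sub-pixel at a time
    window = subpixels[start:start + slit]
    weights = np.bincount(window, minlength=3)    # how much of each primary the slit sees
    spectrum = sum(w * p for w, p in zip(weights, primaries))
    peaks_nm = wl[np.flatnonzero((spectrum > np.roll(spectrum, 1)) &
                                 (spectrum > np.roll(spectrum, -1)))]
    print(start, weights, peaks_nm)   # weights vary with position; peaks stay at 460, 530, 615 nm
```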

In nature, light comes in all colours and most camera sensors react to red, green and blue, as do our retinas, which can also detect very faint light with separate receptors.

The question remains whether a spectrometer would only see RGB or also other colours. My best guess is that it would measure peaks at different wavelengths, depending on where the slit sits over the pixel salad, including orange-reds or greenish blues.

Never mind; the experiment was nothing more than a proof of concept that showed, once more, the resilience of NLP against odd lighting.

To add a bit of closure to this discussion, I believe that the latest NLP version is so freaking good, any of these improvement suggestions seem to make very little sense anymore (my own included) :slight_smile:

The latest version is so good, I rarely can beat it with manual inversions.


That’s wonderful to hear! Thank you so much for sharing :hugs:

-Nate

i’m glad to see this is still being discussed in 2021, but wish more progress had been made. i stopped working on it as i started shooting more, but this doc details the progress i’d made.

the light i used was the luxli viola. i had rented a sekonic color meter to take readings of the wavelengths for the R, G and B LEDs; they weren’t perfect but close. bluetooth app control was handy to change colors w/o touching the light box. i could also save presets, since i needed different intensities of each color so that they balanced to make the orange film a neutral gray.

i shot 1 exposure for each color. i used raw photo processor to extract raw linear data, and with a recorded photoshop action grabbed the R channel from the R exposure, the G channel from the G exposure, and the B channel from the B exposure, and combined them.
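
in python, that combination step could look roughly like this (file names are placeholders, and i’m assuming the linear RGB TIFFs that raw photo processor writes out):

```python
import numpy as np
import tifffile   # reads the 16-bit linear TIFFs that Raw Photo Processor writes

# Placeholder file names for the three exposures made under the R, G and B LEDs.
red_exp   = tifffile.imread("frame_red.tif").astype(np.float64)
green_exp = tifffile.imread("frame_green.tif").astype(np.float64)
blue_exp  = tifffile.imread("frame_blue.tif").astype(np.float64)

# Keep only the channel that matches each exposure's light source, then stack.
trichrome = np.dstack([red_exp[..., 0],      # R channel from the R-lit exposure
                       green_exp[..., 1],    # G channel from the G-lit exposure
                       blue_exp[..., 2]])    # B channel from the B-lit exposure

# Write out the combined negative; inversion and grading come afterwards
# (Photoshop in my workflow, but anything that edits linear data would do).
tifffile.imwrite("trichrome_negative.tif", np.clip(trichrome, 0, 65535).astype(np.uint16))
```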

after being combined and inverted in PS, it had to be edited in PS and not Lr / ACR, because ACR’s raw engine has limitations. C1 could’ve possibly worked.

the next step i was going to take was to use an expodisc and flash to make a series of bracketed over- and under-exposed gray exposures w/ the film i use most, portra 800, make scans of each frame, and use them to calibrate a default baseline color adjustment that makes those gray exposures gray.
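
the calibration i had in mind boils down to something like this sketch (my assumptions, not a finished tool): average each grey frame and solve for per-channel gains that make the result neutral, then store those gains as the default adjustment for that film stock:

```python
import numpy as np

def grey_balance_gains(grey_frames):
    """Per-channel gains derived from scans of bracketed grey (ExpoDisc) frames.

    grey_frames : list of float arrays (H, W, 3), linear scans of frames that
                  *should* come out neutral for this film stock (e.g. Portra 800).
    Returns gains (3,) such that gains * mean_rgb is equal in all three channels.
    """
    means = []
    for frame in grey_frames:
        h, w, _ = frame.shape
        centre = frame[h // 4:3 * h // 4, w // 4:3 * w // 4]   # ignore edges / vignetting
        means.append(centre.reshape(-1, 3).mean(axis=0))
    mean_rgb = np.mean(means, axis=0)
    return mean_rgb[1] / mean_rgb          # pull red and blue to the green level

# The resulting gains would be baked into a default baseline adjustment for the
# stock, e.g. applied right after inversion:  corrected = inverted * gains
```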

i wish i had finished it, but i’ve moved on; scanning everything i shoot takes too much time. but i truly hope someone can build on what i’ve done, or take what everyone has done, and make a solid solution for scanning color neg w/ appropriate color separation.

btw nate, maybe you can unblock me from the NLP fb group. i won’t join since i don’t use NLP, but i never felt it was appropriate to be blocked for bringing attention to trichromatic scanning and color separation. thanks.

The primaries of an iPad are specifically designed for a colorimetric purpose, as you’re aware. What produces grey for a standard observer defined by the CIE color matching functions does not produce the same grey for a color negative intended to be represented in printing density. Neither high-CRI white light sources nor iPad displays are ideal for film scanning, because they’re optimised for colorimetry, not color-negative densitometry.
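
To make that concrete, here is a toy numerical example (all curves are crude Gaussian stand-ins, not real CIE data or film sensitivities): an RGB source balanced to match a broadband white in XYZ, i.e. the same grey to the standard observer, still exposes the hypothetical color-negative layers quite differently:

```python
import numpy as np

wl = np.arange(380, 731, 1.0)

def band(center, width, height=1.0):
    # Gaussian stand-in for a spectral curve (assumed shape, not measured data)
    return height * np.exp(-0.5 * ((wl - center) / width) ** 2)

# Crude single-lobe stand-ins for the CIE 1931 colour matching functions.
cmf = np.stack([band(600, 40, 1.06), band(555, 45, 1.0), band(450, 22, 1.78)])

# Crude stand-ins for the spectral sensitivities of a colour negative's layers
# (the red-sensitive layer peaks much deeper in the red than the eye's response).
film = np.stack([band(650, 30), band(545, 30), band(470, 25)])

white = np.ones_like(wl)                                           # broadband "high-CRI" white
peaks = np.stack([band(615, 15), band(530, 20), band(460, 12)])    # narrow display primaries

# Balance the primaries so the RGB source has the SAME XYZ as the broadband white,
# i.e. the two sources are metameric greys for the standard observer.
M = cmf @ peaks.T                          # 3x3: response of each CMF to each primary
w = np.linalg.solve(M, cmf @ white)
rgb_source = w @ peaks

print(np.allclose(cmf @ rgb_source, cmf @ white))    # True: identical to the observer
print((film @ rgb_source) / (film @ white))          # not [1 1 1]: the film layers see a different balance
```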