Optimize NLP for RGB light sources?

Digitizer, I don’t think that NLP should care about old negatives. At all. In my view, those are two separate steps. NLP’s focus is quality color inversion: being a perfect “electronic RA-4 paper”.

I would argue that restoring faded or shifted color is better done before or after the NLP step. In fact, “auto-color” in Photoshop is probably all you really need at that point.

Actually, if I were in your shoes I’d want this too, i.e. to have NLP provide a “reference look”, so you could evaluate whether colors have shifted and by how much.

The purpose of a calibration is to get the RAW image into a well-known state. Currently, NLP’s algorithms can make very few assumptions about color because everyone’s light source is different, and camera manufacturers tinker with RAW data trying to make non-inverted colors more “pleasing”. The Adobe Color camera profile tries to normalize across camera models, but it, too, is guilty of optimizing for “pleasing”, and it does not account for variations in light sources.

In fact, I believe that NLP is trying too hard. Quality camera profiles would allow the algorithm to be dramatically simplified: no need to analyze picture data, no channel clipping, no border cropping; just adjust gamma and gain for each channel to match the paper response. That’s all I need.
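A conversion of that kind really is tiny. Here is a minimal sketch of a per-channel gain-plus-gamma inversion, assuming calibrated linear RAW values; the gain and gamma numbers are placeholders of my own, not NLP’s actual parameters:

```python
import numpy as np

def invert_negative(raw, gains, gammas):
    """Toy negative inversion: per-channel gain + gamma only.

    raw    -- float array (H, W, 3), linear RAW values in [0, 1]
    gains  -- per-channel gain, e.g. to neutralize the orange mask
    gammas -- per-channel exponent approximating paper response
    """
    raw = np.clip(raw, 1e-6, 1.0)           # avoid division by zero
    positive = 1.0 / raw                    # invert transmitted density
    positive = positive * np.asarray(gains)
    positive = positive ** np.asarray(gammas)
    return positive / positive.max()        # normalize to [0, 1] for display

# placeholder numbers -- a real calibration would supply these
frame = np.random.rand(4, 4, 3) * 0.8 + 0.1
out = invert_negative(frame, gains=[1.0, 1.6, 2.2], gammas=[0.6, 0.6, 0.6])
```

No image analysis, no clipping, no border detection; everything specific to the setup lives in the calibrated gains and gammas.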

Well, looking around this forum, I got the impression that many posters do exactly that: care about their old negatives. NLP, in its current state, provides a starting point. Future versions of NLP will do that too. And no matter where these starting points are now or will be, they shorten the path between the scan of the negative and the finished picture, “finished” in the sense of “what I want” rather than of “what is”.

I’d be perfectly happy with a scientifically precise conversion too.
It would still be a starting point - or the end of the road, whichever serves best.


Did a proof of concept and found that converting my difficult (underexposed, mixed, low light) negative returned a slightly better result.

This is how I made the backlight image

  • In Photoshop, created a grey image
  • Added noise and changed the tone curve so that the histogram stretched from min to max
  • Exported the image to DxO PhotoLab
  • Desaturated all colours except R, G and B with the HSL tool
  • Exported the image to the iPad

Exposure time was fairly long because of the relative darkness of the iPad and the extra diffusor.
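The steps above could be approximated in code. This is my own sketch, not the poster’s actual workflow: generate full-range colour noise, then stand in for PhotoLab’s HSL filtering by desaturating every pixel whose hue is not close to a pure primary (the tolerance is a placeholder for the HSL slider width):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))            # random colour noise, histogram min..max

# standard HSV hue, computed with numpy
r, g, b = img[..., 0], img[..., 1], img[..., 2]
mx = img.max(axis=-1)
delta = mx - img.min(axis=-1)
hue = np.zeros_like(mx)
m = delta > 0
rmax = m & (mx == r)
gmax = m & (mx == g)
bmax = m & (mx == b)
hue[rmax] = ((g - b)[rmax] / delta[rmax]) % 6
hue[gmax] = (b - r)[gmax] / delta[gmax] + 2
hue[bmax] = (r - g)[bmax] / delta[bmax] + 4
hue /= 6.0                               # hue in [0, 1)

# keep only hues near the primaries (0 = red, 1/3 = green, 2/3 = blue)
tol = 0.06                               # placeholder for the HSL filter width
dist = np.minimum.reduce([np.abs(hue - h) for h in (0.0, 1/3, 2/3, 1.0)])
keep = dist < tol
gray = img.mean(axis=-1, keepdims=True)  # desaturate the rest to gray
filtered = np.where(keep[..., None], img, gray)
```

The surviving pixels form the reds, greens and blues; the diffusor then averages them spatially, as described below.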

I’m confused… what is the purpose of all these steps rather than simply a pure white background image? with a pure “white” image, the iPad simply has every red, green and blue light fully illuminated within each pixel…

In the sample images I posted above, I just opened Safari and typed in about:blank, which takes you to a blank white page. I took a screenshot of the page, opened it in Photos, and zoomed in a bit so there is nothing other than the pure white.

I understand your confusion and will try to shed some light on my proof of concept. But first, let’s step back to the first few posts in this thread.

@davidS and @DrBormental discuss possibilities to improve NLP by adding a “light source” dimension.

Also, the Frontier SP3000’s light is given as an example in the original post. To me, it looks like this scanner uses three light sources and a sensor without color filters, like three-pass scanners (which seem to have vanished from consumer scanner offerings).

Now, instead of scanning with three passes in the time domain, my approach was to simulate this in the space domain. The noise was used to add shades of reds, greens and blues as well as yellows and cyans, the latter being removed from the mix by the filters in PhotoLab, filters that can be adjusted in width and wavelength, as you might derive from the screenshot:


The noise and filtering make R, G and B wider than pure primaries would be.

My PoC is therefore positioned between the “scanning with monochrome” setup and a “classic” high-CRI setup. This test setup is far from ideal and not very practical, but it is one of several ways to try RGB lighting:

  • Get an RGB light source
    (the real thing)
  • Use lighting as described in this thread
    (space domain simulation of an RGB light source)
  • Play a “rainbow movie” (with cut out yellows and cyans) while exposing
    (time domain simulation of an RGB light source)
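The time-domain variant (the “rainbow movie”) amounts to three separate exposures, one per primary, integrated by the sensor. A toy sketch of the idea, assuming a linear sensor, perfect registration, and ideal primaries:

```python
import numpy as np

rng = np.random.default_rng(1)
negative = rng.random((8, 8, 3))      # per-channel transmittance of the negative

# three "frames" of the rainbow movie: pure red, green, blue backlight
exposure = np.zeros((8, 8, 3))
for ch in range(3):
    backlight = np.zeros(3)
    backlight[ch] = 1.0               # only one primary lit per frame
    exposure += negative * backlight  # a linear sensor integrates each frame

# for comparison: a single exposure under all three primaries at once
white = negative * np.ones(3)
```

Under these idealized assumptions the summed three-pass exposure equals the single white-light exposure, which foreshadows the objection in the next post.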

But having a “pure white” image displayed on the iPad’s RGB screen is already accomplishing this in the space domain.

If you were to put a microscope on the iPad’s screen while displaying a “white” image, you would see separate red, green and blue components illuminated within each pixel, like this…

ipad-rgb-closeup

You can’t adjust the primaries or the wavelength of these Red, Green, Blue components… you can only mix them by adjusting the intensity of each component.

So when you adjust the HSL, you are really just creating metamers using these three fixed RGB components. (For instance, there are no yellow or cyan wavelengths emitted from the iPad’s RGB screen; there is just the perception of yellow and cyan from mixing the existing red, green and blue wavelengths.)
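That mixing can be written as a small linear system: the screen’s three fixed primaries span a space of metamers, and “yellow” is just the drive triple whose tristimulus matches the target. A toy sketch, using the sRGB-to-XYZ matrix only as a plausible stand-in for the iPad’s actual primaries:

```python
import numpy as np

# columns = XYZ tristimulus of the red, green, blue primaries (sRGB/D65 values)
primaries = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

# target: "yellow" = equal red + green; its XYZ is what the viewer perceives
target_xyz = primaries @ np.array([1.0, 1.0, 0.0])

# solve for the RGB drive levels that reproduce that XYZ -- the metamer
drive = np.linalg.solve(primaries, target_xyz)
```

No yellow emitter is involved anywhere; the “yellow” exists only as the solved red and green intensities.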

Now, you could play with the pattern of how these red, green and blue components are distributed, and perhaps there is some reason that some pre-defined or random pattern will be more ideal than having all the components on at once (as they are in a “pure white” image). But the tradeoff will be brightness, which is already pretty low on a typical iPad.

TL;DR: iPad screens are just collections of red, green and blue illuminants. A “pure white” image just has each of these red, green and blue illuminants turned to 100%. You cannot change the wavelength of these RGB illuminants. You can only vary their placement and intensity. Based on this, I believe that a pure white image is the ideal way to test the RGB light from an iPad.

…true, but I can adjust the filter centre frequency and width in PhotoLab’s HSL tool. This means I can make the greens extend towards blue and red to a certain degree, thinking not on the level of single pixels but on the level of groups of pixels that can, together, appear yellow, cyan, orange, etc. The diffusor then mixes (de-pixelates) the image to give wider reds, greens and blues than the unaltered r, g and b pixels would produce. They are not monochromatic, but I nevertheless wanted to see what even wider, yet non-overlapping, colours would yield.

I still stand with my statement (that I’ve repeated in many threads) that NLP is fairly tolerant to the kind of lighting used when the negatives are captured. I find that results depend more on the characteristics of the negative and the film type than on characteristics of (not too crappy) lighting.

@Digitizer, wouldn’t a spectrometer show the same spectral distribution with and without the diffusor?

The diffusor serves to average out the blotchy appearance of the source. Measuring a narrow sample with a spectrometer would not give the same results for the same sample with and without the diffusor; see the image below.

@Digitizer I’m only arguing from intuition. I really need to build the spectrometer and see for myself. But I believe that in your example, a spectroscope would show variations in intensity, but not in color, as you drag a slit over your sample. You would measure varying amounts of red, green and blue, and not find other colors. The other colors are made by our brains; they are not actually there.
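That intuition can be checked numerically: model each primary as a fixed emission spectrum, model the “pixel salad” as per-slit RGB weights, and every measured spectrum is then a combination of the same three spectra, with no power at wavelengths the primaries don’t emit. A toy sketch; the Gaussian spectra are placeholders of my own for real panel emission curves:

```python
import numpy as np

wavelengths = np.arange(380, 701, 1)          # nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# placeholder primary emission spectra (real panels differ)
red   = gaussian(610, 20)
green = gaussian(540, 25)
blue  = gaussian(460, 18)
basis = np.stack([red, green, blue])           # (3, n_wavelengths)

rng = np.random.default_rng(2)
weights = rng.random((100, 3))                 # 100 slit positions, random RGB mix
samples = weights @ basis                      # each row: one measured spectrum

# any power near "yellow" (about 575 nm) comes only from the red/green tails,
# never from a separate yellow emitter
yellow_idx = np.argmin(np.abs(wavelengths - 575))
```

Every row of `samples` stays inside the span of the three basis spectra, however the slit lands; only the intensities vary, which is the point being argued.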

In nature, light comes in all colours and most camera sensors react to red, green and blue, as do our retinas, which can also detect very faint light with separate receptors.

The question remains whether a spectrometer would see only RGB or other colours as well. My best guess is that it would measure peaks at different wavelengths, depending on where the slit sits over the pixel salad, including orange-reds or greenish blues.

Never mind; the experiment was nothing more than a proof of concept that showed, once more, NLP’s resilience against odd lighting.