I think this statement introduced some confusion because as you say, scanners make passes but cameras make exposures and so you are in fact talking about 3 separate exposures albeit with different RGB filtrations. Yes, this is theoretically better but is hard to achieve and separate controllable LEDs and an integrating sphere would seem to be a better starting point than a dichroic enlarger head.
However a similar result must surely be obtained by using a Bayer sensor camera that uses pixel shift to take 4 separate exposures, where the sensor is shifted to remove the requirement to interpret colours from neighbouring pixels. Some of these cameras are starting to be ‘affordable’ on the secondhand market. I’ve yet to see the actual benefit demonstrated on here though; it would be very interesting to see.
Edit: No, not entirely similar because you are not forming the final image from 3 separately tunable RGB parts.
That’s true but mono DSLRs are not common & changing filters to make 3 times as many exposures would make digitising a lot slower as well as adding a risk of movement between exposures. Adding filters would introduce a slight but probably noticeable degradation of image quality.
It would produce a 3 times larger file but I don’t think it would contain significant additional data as I am already copying grain or individual dye clumps.
Passes, exposures … the terminology is irrelevant to the idea. You don’t need controllable LEDs – you can use glass filters placed above the film, though this will introduce two more surfaces to collect dirt …
Pixel shift is interesting, however people using it report underwhelming results. It is more a resolution tool than a colour-reproduction one. Also I am not sure how one would postprocess the channels to combine the three captures into one colour image …
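For what it’s worth, the channel-combination step itself is straightforward once you have three registered monochrome captures. A minimal numpy sketch (array names and sizes are placeholders, and random data stands in for the actual R-, G- and B-filtered exposures):

```python
import numpy as np

# Three monochrome captures of the same frame, taken through
# R, G and B filters. Random 16-bit data stands in for real scans.
h, w = 4, 6
red   = np.random.randint(0, 65536, (h, w), dtype=np.uint16)
green = np.random.randint(0, 65536, (h, w), dtype=np.uint16)
blue  = np.random.randint(0, 65536, (h, w), dtype=np.uint16)

# Stack the three exposures into one RGB image, one capture per channel.
rgb = np.dstack([red, green, blue])
print(rgb.shape)  # (4, 6, 3)
```

The hard part isn’t the stacking; it’s ensuring the three captures are perfectly registered (no movement between exposures) and that each channel’s exposure is balanced before merging.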
It is not intended for the average scan to post online etc. – the idea is how to approach the quality of unserviceable drum scanners for critical scans.
I should point out that it is not discussing your ‘3 exposure’ method, rather he describes his route from drum scanners to high-end camera scanning for museums.
Tried it with older versions of NLP, and it provided slightly “better” conversions.
With NLP 3, I get high quality conversions without the extra effort.
Roll Analysis can improve results, provided that the captures were taken according to recommendations which can be summarised as “same WB, aperture, exposure time and iso for all”.
Anyways, I consider NLP conversions to be a starting point and don’t expect them to be spot-on to what I want, even though it happens every now and then.