If you follow the online film photography community, there seems to be a shift occurring toward more interest in RGB scanning, along with the beginnings of commercial RGB light sources.
With the recent release of the Cinestill CS-Lite+ SpectraCOLOR, an RGB light source and mechanical film carrier in development from the creator of the Tone Carrier, and the recent release of the RGB Scanlight from Jack Whittaker, this seems to be becoming a more mainstream workflow.
I’ve taken an interest, and while shooting individual RGB frames and combining in Photoshop isn’t a workflow I’m particularly interested in, I do like the idea of using a combined narrowband light source, just as I would a white light source, even if it is technically less ideal.
With that in mind, I recently acquired Jack Whittaker’s “Big Scanlight”, which you can read about here:
Along with his write up on narrowband RGB light sources in general:
Jack is a proponent of doing manual negative inversions via tone curve, which is certainly doable, but in my preliminary scans I’ve found I’m getting what I consider very good results using the same workflow I would with a white light source and using Negative Lab Pro. Compared to some of my older white light source scans, I do find the colors are more vivid, though I’m sure similar results could be achieved with some post production and a white light source.
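For anyone curious what a manual inversion amounts to in code terms, here’s a rough sketch of the basic idea: set per-channel black and white points, normalise, and flip the curve. This is my own simplification, not Jack’s exact method, and the estimation of the end points from the scan itself is an assumption for illustration.

```python
import numpy as np

def invert_negative(img, black_point=None, white_point=None):
    """Invert a linear RGB scan of a colour negative, channel by channel.

    img: float array of shape (H, W, 3), values scaled 0..1.
    black_point / white_point: optional per-channel (3,) arrays; if omitted
    they are estimated from the scan itself (the film base is the least
    dense, i.e. brightest, part of the scanned frame).
    """
    flat = img.reshape(-1, 3)
    if white_point is None:
        white_point = flat.max(axis=0)  # brightest values: film base / rebate
    if black_point is None:
        black_point = flat.min(axis=0)  # darkest values: densest highlights
    # Normalise each channel to its own 0..1 range, then invert
    norm = (flat - black_point) / (white_point - black_point)
    norm = np.clip(norm, 0.0, 1.0)
    positive = 1.0 - norm
    return positive.reshape(img.shape)
```

In practice you’d follow this with a gamma/tone curve to taste; this only handles the linear per-channel inversion step.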
Anyway, I’m curious if there are any unintended consequences of using NLP with a combined narrowband RGB light source, or ways to optimize it for this type of scanning workflow.
Here are some examples of scans I’ve made using the “Big Scanlight” and NLP. These are straight out of NLP with no color model applied, just some sharpening added and some WB correction in one of the 35mm shots. The 6x6 shots were with a Rolleiflex 3.5A and Portra 400; the 35mm shots were with an Olympus OM4-Ti and Zuiko 28mm f/2 and Portra 800.
In case you hadn’t seen it, Nate did address this topic back in 2021 here:
At that time, and I guess at any point in the intervening period, there wasn’t a commercially available RGB light source, but possibly this is the first realistic contender. I like everything I read about this light source. For me, combined with the relatively simple inversion technique, it promises to get nearer to the simplicity and consistency of darkroom printing, without any invisible background colour processing going on, leaving you to work on the inverted image in your own way. That certainly wouldn’t suit everyone though, and NLP seems to have done a sterling job here. There’s still the ever-present problem with colour negative inversions of demonstrating reliably that one light source is better than another, given there are so many possible variables. It’s much easier with positive copies, where a colour profile can be determined and compared, though that requires specialised software and a target slide like an IT8.
Thanks for the link. I did come across that thread, and as you said, at the time there didn’t seem to be a commercially available narrowband RGB source. But that seems to be changing, and perhaps I’m wrong, but I suspect as the idea catches on, more people will gravitate toward it and potentially the market will shift further. I’m early on in my experiments with it, but I really think the results are great, whether doing a manual inversion or using NLP. Maybe we will see some further optimization of NLP in future versions as RGB catches on, assuming it is needed at all. I’m sure @nate could speak to that, but I cannot.
Does anyone know how “Sampling Efficiency” and MTF come into play when doing 4-shot (or more) pixel shift? It seems to me that 4-shot pixel shift (debayering for full RGB) is the ideal way to use a light source like this, but not if the benefits of the RGB color are lost to the downsides of the technology/software/firmware as it is currently implemented by various camera makers.
Aside from that, single shot still seems to have great value, but as you pointed out, there hasn’t been a product on the market, though it seems people are trying in earnest. I have been wanting to try RGB in single-capture format for quite some time. The color channel bleed with white light is something I have seen enough examples of to at least want to try other things.
Eventually, I would love to do multi-capture of each individual channel like some high end scanners but until someone fully automates it effectively for home use, my volumes are far too high for it to make sense.
No problem! My understanding is that the benefits of debayering using a combined RGB source would be the same as using a white light source, but I’m no expert and perhaps someone else can chime in.
For what it’s worth, the 6x6 shots above were done using 32-frame pixel shift, whereas the two 35mm were single capture.
The creator of the Tone Carrier is working on a fully automated RGB scanning solution with individual RGB frames captured and combined in software. Not sure what the timeline is but it’s pretty intriguing.
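Since that workflow captures each channel separately, the software side of combining the frames is conceptually simple: stack three monochrome captures into one RGB image, optionally balancing the channels against a film-base reading. A minimal sketch (my own illustration of the general idea, not the Tone Carrier creator’s actual software):

```python
import numpy as np

def combine_rgb_frames(r, g, b, base=None):
    """Stack three monochrome captures (one per LED channel) into one RGB image.

    r, g, b: 2-D float arrays shot from the same camera position under red,
    green, and blue illumination respectively.
    base: optional (3,) per-channel film-base reading used to balance the
    channels so that the film base comes out neutral.
    """
    assert r.shape == g.shape == b.shape, "frames must align pixel-for-pixel"
    rgb = np.stack([r, g, b], axis=-1).astype(np.float64)
    if base is not None:
        rgb = rgb / np.asarray(base, dtype=np.float64)  # base -> (1, 1, 1)
    return rgb
```

The hard part in practice is registration, since any movement between the three exposures shows up as colour fringing, which is presumably what the automation is meant to solve.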
Is a tighter RGB spectrum + debayering not more accurate (*) than white light + debayering? Maybe I missed something on that at some point.
*whatever “accurate” means since a lot of it is still interpretation at the end of the day on prints… but I do know I want the best possible data to start with for archival purposes.
Camera scanning is mostly done with cameras that do have some bleed between colours; check out dxomark.com for details. That being the case, the camera’s processing must remove parts of, e.g., what is seen by green because it’s actually red, etc. With narrowband RGB lights, we see that colours get cleaner, but maybe we’ll miss a few hues, because the respective mix of light can’t produce a hue which exists in reality and which is understood via our eyes through the somewhat elaborate chemical and electrical process of seeing colours (and brightness).
Whatever lighting we use, we get starting points for images that we can or can’t tweak to please us, to be “true” (for repro work), or to create our own style. If we can meet our own or someone else’s requirements, the whole process (including lighting, conversion, customising etc.) is “good”, no matter what we did or used along the way. If we can’t meet the requirements, the process is “bad” and we need to look at all parts of all steps. One of those parts is lighting, but it’s not the only one. And one important component is anything but “true”: analog film. So, will we be true to the look of the film we used, or should we be true to the scene we captured … under whatever our colour memory did between the capture and the print?
Long story short: perfecting one link in the chain simply makes the other links the weak ones. Experimentation is cool (I have a degree in engineering) but not necessarily necessary.
Use what you have and stick to it if it does what you need it to do.
BTW: for the cleanest separation of colours, use a monochrome sensor to scan with RGB light. Welcome to Pentax, Leica, Phase One etc.