Integrating sphere as a uniform backlight

Hello,

I’ve started scanning a collection of old photo negatives, and after some disappointing results using commercial scanners, I decided to go the DSLR route. With the help of this forum and the NLP plugin, I’ve been able to create a nice scanning setup and I got really good results. I wanted to report back what I did here and maybe start some interesting discussions. I’m an optical scientist by training and I approached this a little differently from photography professionals.

Integrating sphere backlight
I read the various discussions about setups, backlights and RGB light sources. A common recommendation is to use a high-CRI white light source as the backlight, but in my view the optimal light source is instead an RGB source with wavelengths selected for low crosstalk (i.e. optimal color separation, see link above) and independently controllable red, green and blue intensities. In a DSLR setup, I think this is the best way to recover as much color information as possible from the negatives. White light sources, by contrast, necessarily cause more mixing between color channels. This may or may not be a problem depending on the scene and the film used.

Besides that, the thing we all want out of a light source is uniformity. The most common way to achieve this around here is with large light panels and diffusers, but this approach may cause a slight decrease of intensity around the borders due to the lighting geometry. I decided instead to 3D print an integrating sphere based on this publication. An integrating sphere is basically a sphere whose inside is painted white, with a small inlet for the light source and an outlet for the thing you want to illuminate (the film). When the light enters the sphere, it bounces around randomly several times. By the time the light finds the exit, the beam is almost perfectly uniform. This pairs nicely with the usage of separate red, green and blue LEDs because you don’t need to fiddle around too much to combine the three colors into a single uniform beam.
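To put rough numbers on this, here is a minimal sketch using the textbook integrating-sphere relations (this is generic math, not something specific to my build; the wall reflectance is an assumed value and the numbers are only illustrative):

```python
# Minimal sketch of the standard integrating-sphere relations (textbook formulas;
# the wall reflectance rho = 0.95 is an assumption, not a measurement).
import math

def sphere_radiance(phi_in, diameter, port_fraction, rho=0.95):
    """Average wall radiance in W/(m^2 sr) for an input flux phi_in in watts."""
    area = math.pi * diameter ** 2                           # internal surface area, m^2
    multiplier = rho / (1.0 - rho * (1.0 - port_fraction))   # the "sphere multiplier"
    return phi_in * multiplier / (math.pi * area)

# Illustrative numbers: 100 mW of LED flux, 10 cm sphere, 5% total port fraction.
print(sphere_radiance(0.100, 0.10, 0.05))   # ~10 W/(m^2 sr)
```

The multiplier term is what the multiple bounces buy you: the light is recycled many times before it escapes, which is also why the exit beam ends up so uniform.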

Independent LED control
I put together an electronic board to control the LED intensities independently. One caveat I’ll mention here is to use LED driver electronics that either do not flicker (linear current driver), or that flicker much faster than the shutter speed of the camera (switched current driver with a high enough frequency), otherwise even the “perfect” light beam from the sphere will appear non-uniform to the camera due to rolling shutter artifacts. I used a switched driver with shutter speeds from 1/20s to 1/50s, and I obtained a backlight which is uniform over the entire field of view to within a few percent. With a non-flickering LED driver, I think it would probably be better than 1%.
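To get a feeling for how fast the driver needs to switch relative to the shutter, here is a rough sanity check. The switching frequencies are illustrative assumptions, and the "one partial cycle out of N" ripple estimate is only a crude worst-case bound:

```python
# Quick check: how many driver switching periods fit inside one exposure.
# Roughly, the worst-case row-to-row brightness difference with a rolling
# shutter is on the order of one partial cycle out of N full cycles.
for f_sw in (1_000, 20_000, 300_000):      # assumed example switching frequencies, Hz
    for denom in (20, 50):                 # shutter speeds 1/20 s and 1/50 s
        n_cycles = f_sw / denom
        print(f"{f_sw:>7} Hz at 1/{denom} s: {n_cycles:9.0f} cycles, "
              f"~{100 / n_cycles:.3f}% worst-case ripple")
```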


With these independent RGB lights, I was able to do the white balance by adjusting the illumination, before even taking the picture. The borders of the negatives then appear white to the camera instead of orange. This way, each color channel is optimally sampled over the full dynamic range of the sensor (as opposed to the blue and green channel being “underexposed” for example). Here are examples without and with RGB pre-adjustment:

The wavelengths I used are 460nm, 530nm and 660nm, which is a compromise between LED availability and the recommendations found here about color separation.
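For anyone who wants to automate the pre-balancing step above, here is a rough sketch of one possible approach (not exactly what I did; the file name is hypothetical and it ignores crosstalk between channels): take a RAW shot of the clear film base, compute per-channel means, and scale each LED so all three channels land at the same level.

```python
# Rough sketch of deriving per-channel LED scale factors from a RAW shot of the
# blank film base. Requires: pip install rawpy numpy. File name is hypothetical.
import numpy as np
import rawpy

with rawpy.imread("base_only.ARW") as raw:
    mosaic = raw.raw_image_visible.astype(np.float64)
    colors = raw.raw_colors_visible                  # 0=R, 1=G, 2=B, 3=G2 on a Bayer sensor
    black = np.mean(raw.black_level_per_channel)
    means = {c: (mosaic[colors == i] - black).mean()
             for c, i in (("R", 0), ("G", 1), ("B", 2))}

# Scale each LED so all three channels reach the same level through the orange base
# (in practice this boosts the green and blue LEDs relative to the red one).
target = max(means.values())
factors = {c: target / m for c, m in means.items()}
print(means, factors)
```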

I ordered a high CRI white LED as well for comparison, but it hasn’t shipped in weeks. I’ll try to post a comparison between high CRI white and RGB when I finally receive it.

A technical note for anybody who would like to reproduce this: I initially thought I would need a lot of light, so I ordered high-power LED bulbs and 1A drivers, but it turns out this is way more than necessary. Using an integrating sphere with 10cm diameter and 5% port fraction, a 150mW red, 100mW green and 150mW blue LED are enough to scan typical negatives at 1/20 to 1/50 shutter speed, f/5.6 and ISO 100 on a Sony A7 (the milliwatts are optical output power, not electrical power). With 1500mW LEDs, I had difficulty fine-tuning the light output of each channel with sufficient sensitivity at the shutter speeds I wanted.
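Plugging these numbers into the sphere relations sketched earlier gives an idea of the irradiance at the film plane (again, the wall reflectance is an assumption, not a measurement of the painted PLA):

```python
# Film-plane irradiance estimate for the setup described above.
import math

rho, f_port, d = 0.95, 0.05, 0.10          # assumed reflectance, 5% port fraction, 10 cm sphere
phi_in = 0.150 + 0.100 + 0.150             # W of optical output, all three LEDs on
area = math.pi * d ** 2
multiplier = rho / (1.0 - rho * (1.0 - f_port))
radiance = phi_in * multiplier / (math.pi * area)   # W/(m^2 sr) at the walls
irradiance = math.pi * radiance                     # W/m^2 at the exit port (Lambertian)
print(f"multiplier {multiplier:.1f}, ~{irradiance:.0f} W/m^2 at the film plane")
# With rho = 0.80 the multiplier drops to ~3.3, i.e. roughly a third of the light,
# which is one reason a matte white wall finish still matters even if BaSO4 is overkill.
```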

Resolution
My camera is a Sony A7 with a Sony 50mm f/2.8 macro lens. I read many people recommending macro extension rings together with a commonly available non-macro lens. Yes, this is cheap, but I tested it and immediately found that I could only focus the center of my negatives properly, while the edges consistently remained out of focus. I would urge you to use a “real” macro lens, because it makes it possible to get the full film in focus from center to edge.

Contrary to what I first assumed, it is not necessary to use a very wide aperture. By my calculations, at f/5.6 and a magnification of 1:1, the optical resolution is about 6µm, which is the pixel size of my camera and finer than the typical granularity of color film. In other words, opening up wider than f/5.6 was not going to resolve more detail than either the film or the sensor could support, and the narrower depth of field actually makes it harder to get both the center and the edge of the film in focus (even the best lenses don’t have a perfectly flat focus field). Therefore, I used f/5.6 for all my scans. I could clearly distinguish the film grain this way. I note, though, that not everybody agrees that focusing on the film grain is the right thing to do.
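For reference, here is the kind of back-of-the-envelope diffraction check I mean; the exact number depends on the wavelength and resolution criterion you pick, so treat it as order-of-magnitude only:

```python
# Rough diffraction check (Rayleigh criterion) at 1:1 macro magnification.
wavelength_um = 0.55          # mid-visible, assumed
f_number = 5.6
magnification = 1.0
n_eff = f_number * (1 + magnification)          # effective f-number at 1:1
rayleigh_um = 1.22 * wavelength_um * n_eff      # smallest resolvable separation at the sensor
print(f"N_eff = {n_eff:.1f}, Rayleigh spot ~ {rayleigh_um:.1f} um")   # ~7.5 um, same ballpark as the pixel pitch
```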

Collimated versus diffuse lighting

I was very curious about the suggestion (here, here and here) to use collimated light to scan the negatives. Initially I thought this must be better, because in theory it captures the film at a higher resolution, whereas diffuse lighting introduces a slight blur. There is a compelling example on the Wikipedia page about this effect. My thinking was: yes, collimated light makes all the cracks and dust on the film visible, but surely you should be able to get rid of those by applying a blurring filter in post-processing if needed, right? Wrong. After looking into this further, I realized that diffuse lighting makes it possible to preferentially capture the film, to the exclusion of dust and scratches. In simple terms, if a scratch or dust particle is present on the film, there is usually another path the diffuse light can take to still pass through the film and reach the DSLR. With collimated lighting, this information would be lost. Of course, this holds true only if the defects are not too large. But the point remains: even though diffuse lighting introduces a bit of blur, this disadvantage is (to me) completely offset by the advantage of naturally suppressing dust and cracks.

Purple problem
I did encounter problems scanning some of my negatives. One frequent problem is that colors that should be deep red appear purple instead. Other color shifts I noticed are light brown to dark brown, pink to red, red to orange, etc. These negatives require color adjustment in Lightroom (aqua hue +30 or +60). I do not know if this is a byproduct of the RGB scanning process, or if the negatives degraded with time, or if perhaps different settings are necessary in NLP when using RGB instead of white light. I’m posting an example below (cropped) of an armrest and a carpet. This happens a lot more frequently on items of clothing, but I can’t post examples here for privacy reasons.

Conclusion
I’m happy with this scanning setup. It’s fast and it delivers great results. If I were to do a second iteration, I would probably improve a bunch of practical little things in the design of the sphere, the LED boards, the negative holder, etc. But I’m on a budget, so I think I will keep scanning my photo collection like this, and maybe other photo geeks might take it from here :wink: I hope you enjoyed reading this, and I look forward to any comments/suggestions.


Ulbricht spheres were sometimes used in colour enlargers (roughly 50 years ago) and I enjoyed reading about your implementation.

Where did you find the LEDs and the controller? It would be nice if you could add links to the respective products and sources. Thanks.

Thanks so much for posting this! After much reading, trials, attempts, and time, I had very gradually reached similar conclusions. The sources that led me to conclude that distinct, well-separated, narrow-bandwidth R, G, and B sources are needed were “Exploring the color image”, a very interesting publication by Kodak, and “Investigation of Film Material–Scanner Interaction” by Barbara Flueckiger of the University of Zurich: color enlarger paper has distinct sensitivity for Y, C, and M, and the orange mask in color negatives is there to counter the imperfect color dyes in the layers of color film. I first started using 81-series blue filters in the light path of my white light source to let the camera see something closer to grey rather than orange, and to prevent the red channel from being overloaded while G and B still had very low signal (i.e. were noisy). I then started looking for separate R, G, and B LEDs, which I found for example here:

And LED lenses:

The idea was to combine the separate colored beams optically. A power supply board with independently tunable current for each LED channel I found at

That was easier than building adjustable current sources from scratch myself. In the end, I decided to first try a cheap “Ulanzi VL49 RGB video light”, which I got from Amazon for €33, in the hope that the R, G, and B LEDs in there were roughly at the wavelengths the OP mentions and had well-defined colors with a small wavelength spread. Using that, I suddenly got rather accurate colors from slides, which I had struggled with using a white light source, but I have not yet tried the Ulanzi on negatives. I also purchased a physics-classroom hand spectroscope from Astro Media through Astro Didaktik in Germany, with the intent of seeing which wavelengths the LEDs in the Ulanzi have. The spectroscope can obviously come in handy for studying other light sources as well.

In short, I was very slowly progressing towards what the OP was doing, but had not yet made the step to individual LEDs, since I hoped this had already been taken care of in an RGB video light with adjustable color. The color adjustment can be used to make the orange mask grey, and is similar to adjustable power for three individual LEDs. I get homogeneous light intensity with the Ulanzi except at the extreme borders, where indeed I lose about 8%, in line with what the OP described. I found this acceptable for now for the sake of the convenience of not having to custom-build a light source.

Nonetheless, I was very pleased to see an optical scientist come to similar conclusions and do all of this very, very properly. I think everyone on this forum would very much appreciate a parts list, and perhaps a source file for the 3D-printed integrating sphere ;->

Thanks again very much for sharing your knowledge and findings! Highly appreciated!


Thanks for your replies. Sure, I can share a parts list for the setup above. Just bear in mind this was the very first version, and I ran into several little problems with this component selection. I’ll share what I did, along with my suggestions for improvements, below.

Parts list for the setup above
DSLR

  • Camera: Sony A7 (ILCE-7)
  • Lens: Sony FE 50 mm f/2.8 Macro (SEL50M28)

LEDs

I ordered these LEDs already soldered on aluminium star boards as shown in the links above. You can also order just the LEDs themselves from the same distributor. The heat sinks are from Aliexpress.

Driver electronics

Obviously, you also need wires, soldering supplies and a prototype board, but these are supplies I already had at hand and I don’t have order codes for them.

3D printed parts

I designed the 3D parts in Autodesk Fusion (free for hobbyist use) and had them produced by CraftCloud3D at the cheapest available rate (white PLA material, 20% infill).

This and this paper were helpful for the sphere design. For the negative holder and lens cover, I started from this model by FruitieX and modified it to fit my arrangement. The top of the negative holder is open because I had negatives with paper strips attached to them. I uploaded all my files here. I had to smooth the inside of the negative holder with sandpaper, as some bits of plastic were sticking out from the 3D printing and could have scratched the negatives.

I painted the inside of the sphere with several layers of white primer and a custom barium sulfate paint, as described in the paper I mentioned in the original post. You can find the necessary materials on Amazon or eBay.

The two rails holding the parts together are 12mm copper tubes from the local hardware store. The parts are held together by M3 nuts and screws, tape, glue, and metal wires.

Improvements to do

If I were to do this again, I would certainly do these things differently:

  • The LED drivers turned out to be way too powerful, and adjusting them at the lower end of their power range is difficult due to insufficient sensitivity. Also, despite using low-flicker buck drivers, I still see rolling shutter artifacts. I suggest using lower-power (about 150mA) linear drivers with analog dimming instead.
  • Don’t solder the board yourself. I thought this was going to be straightforward, but there are something like 50 solder joints to make and it was tedious. Instead, I would draw the board in EasyEDA and order it with all the components already soldered from JLCPCB. It’s cheap and will save a lot of hassle.
  • You can order a single LED board with all 3 LEDs on it from this website. Then you can just make a single inlet in your sphere, which simplifies painting and assembly.
  • The custom BaSO4 paint for the inside of the sphere is overkill. The light from the sphere is already very uniform just with the (unpainted) white plastic surface from the 3D printer, and I did not see a very big improvement after painting. If you want to go the extra mile, order the 3D parts with a sanded finish and apply a layer of “normal” white spray paint from the hardware store.
  • There are a couple of flaws to fix in my 3D models:
    • Assembling the components using glued-in M3 nuts turned out to be trickier than I thought. The nuts come out easily from their sockets, so this would need to be improved in the 3D models.
    • The carriage for the camera body is not rigid enough. The weight of the lens makes it bend a little, and I had to tape on extra supports so that the lens points at the center of the negatives.
    • The negative holder could be made tighter, and it would need supports on each side so that the strips of film don’t fall out.
    • I had left the thumbscrew to move the negatives (from FruitieX’s model) in my 3D model, but it wasn’t delivered together with the other 3D printed parts. I think it’s because of the print-in-place concept he used, which must be confusing for the 3D printer operator. I would just leave this feature out. It’s totally fine to move the negatives by hand as well.

Brilliant! Thanks so much! I’m still not sure whether I’ll go all the way and reproduce something similar, since I did some further tests with my Ulanzi and I’m wondering whether I’m already close enough with that.

Using a toy spectroscope:

I analyzed the light coming out of the Ulanzi to find out which LEDs are used. The spectrum was a bit hard to photograph with my phone, which of course has its own distinct R, G, and B sensor channels. I was able to make a crummy video of the light cycling through a demo setting that roughly shows what I was looking for, but apparently I cannot upload that, so I made some screenshots:



The video is very crummy and the actual R, G, and B lines are better defined when looking directly. The Ulanzi uses LEDs with blue at 455 nm, green at 525 nm, and red at 625 nm (assuming that the scale of the toy spectroscope is positioned accurately…). I would have preferred the red to be at a somewhat longer wavelength, so as to lie further beyond orange (which makes it easier to negate the mask), but alas. It’s convenient and cheap.

My current setup can be seen here:

As shown there, I also tried a proper Solux bulb and a “white light” Rollei LED array, but I have now abandoned both in favor of the Ulanzi RGB light with an additional diffuser (borrowed from the Rollei light).

When placed close to the negative, this gives a quite homogeneous light intensity, apart from being about 8% less bright in the corners. I find that acceptable considering that many lenses have a stop or so of falloff in the corners anyway (a factor of two) and that is usually still not too disturbing; 8% is not very visible in actual photos.

Nevertheless your sphere is inspiring, and I am now playing with the thought of adding that. Perhaps some stainless steel half balls such as these:

https://www.amazon.com/dp/B096VGSCT9/ref=syn_sd_onsite_desktop_0?ie=UTF8&psc=1&pd_rd_plhdr=t

can be polished internally to obtain a spherical reflecting surface. Just brainstorming along… it’s a never-ending optimization, it seems, but for now I’m a happy camper. Perhaps when I find some time, I’ll go the extra mile and reproduce your highly inspiring setup!


Forgot, re: collimation: I had collimated light available in my previous setup, which was based on an old enlarger, but indeed ended up using diffused light and being perfectly happy with that as well. Nonetheless, the alternative to a sphere for getting perfectly homogeneous light is to start with collimated light and then diffuse it: I did get very homogeneous intensity from my enlarger, also thanks to the collimating lenses being 15-20 cm in diameter and a 10x10 cm opal glass diffuser, but my table-top setup is so much more convenient. Antistatic record brushes do wonders in removing any dust, BTW, as does carefully swiping negatives with a Swiffer dust magnet before insertion…

Thanks for sharing your results, and the suggestions concerning dust removal. I haven’t found a good solution yet. I’ll give those a try.

I am using a Nikor/Honeywell 6x7 colour enlarger head as the light source for my camera scanning setup. The enlarger head’s light source is an MR16 halogen bulb; it shines into a wedge-shaped box which is metal, painted white, and has what looks to be styrofoam lining the inside, and then down through some translucent white plastic material which further diffuses the light. So it does what your globe does.

My question is: how well does the halogen bulb work in terms of getting good colour information, versus either a setup like yours or a high CRI LED bulb? As far as I can tell from my research online, halogen bulbs have a CRI of 100, which should give very good colour accuracy and information. I am just curious to hear your opinion, as an optical scientist, on using a colour enlarger head as a light source.

Also, I haven’t gotten to experimenting with it yet, but since it is a colour enlarger it has built-in YMC filters which I believe could be used to control any colour casts and possibly the contrast of the scans. So far the enlarger head seems to work extremely well, and I have little to no colour issues in the final scan when converted with NLP, which was not the case when I attempted to use my iPad Pro 11" as a light source.

Great stuff!

One possible reason for this would be an after-effect of neutralizing the orange mask. The mask participates in the color formation in proportion to the exposure. If we remove the orange component equally throughout the image (which is what we are doing, either via the individual RGB channel powers or by subtracting cyan in Photoshop), we end up with the residual effect of the film dyes’ crosstalk, which the mask was supposed to compensate for. The color deviation should then be more noticeable in the highlight areas of the positive than in the shadows.

Did you try to calibrate?

I think the color enlarger does pretty much the same thing indeed! I didn’t realize you could just buy this kind of equipment :wink: As for the halogen lamp, the key difference is that it is a broadband light source (i.e. the lamp produces light of every color), whereas an RGB source is narrowband (it contains only three distinct colors).

On a spectrometer, a halogen lamp looks like this (from ResearchGate):

whereas an RGB light looks like this (from Wikipedia):

A high CRI LED is a little different still: it starts with a blue LED and a phosphor coating that partially converts the blue into other colors. How well it approximates a halogen light depends on how hard the manufacturer has worked to make it so. Most white LEDs on the market look quite different, though (from the Cree brochure):

You would think that the halogen lamp should be optimal as far as color rendering is concerned. And actually, you will get results pretty close to optimal if the whole process (from taking the analog photo to rendering the digital image) has been optimized to render colors accurately under this illumination (which NLP was designed to do). My argument for using RGB is that the color negatives themselves contain only three distinct layers of color, not a continuous set of colors. By using halogen light, these color layers will not each be recorded separately on the R, G and B channels of your camera, but they will mix a little (e.g. some of the G from your color negative might end up on the R channel of your DSLR), which is called cross-talk. By using RGB light, you counteract this and achieve a better separation of the colors during recording. In theory, it should be easier to correct colors with an RGB recording than with a halogen one. In practice, I think the NLP plugin is so good that it probably doesn’t matter so much :wink:

If I could I would! It would solve this color problem once and for all. But my use case is scanning old family pictures, and nobody thought of photographing a color chart back in those days!

The scanning pipeline itself introduces some color deviation, compounding the color shift of the film. Calibration is supposed to eliminate the former, improving the baseline quality of scans.

The point is, the pipeline itself should be characterized and therefore taken out of the equation, regardless of the quality of the material being scanned.

If you wanted to perfectly capture each of the RGB layers, wouldn’t you want to take three separate scans of a negative (one illuminating it with a red light, the second with a green light, and the third with a blue light) and then merge the three photos? Would that not give you the best possible colours?

And when you mix the RGB outputs in the sphere, are you properly exposing each of the camera’s colour channels, given that the R, G, and B light is mixed? Could that be why you are seeing the odd purples?

Thanks!
Seth

I see what you mean now. Indeed, characterizing the pipeline would make sense, but I need to think a bit more about which characterization process is appropriate in this particular case. First, we would need a transmission target (like the photo negative). Second, what I was trying to achieve is to map each color layer of the photo negative separately onto the R, G, B channels of the camera, not color reproduction per se (I thought that faithful colors should be achieved by digital processing, depending on the film characteristics). With this goal in mind, the characterization process should probably look a bit different.

Yes, good point! In theory, if the LED wavelengths are picked perfectly, taking the combined RGB scan is almost the same thing as taking three separate scans with a red, a green and a blue light in succession. For this to be the case, each LED needs to hit the camera’s spectral sensitivity curves in a place where there is no cross-talk with the other channels. In other words, taking separate red, green and blue light scans should result in photos with light only in the corresponding channel of the RAW file and complete darkness in the other ones. Then, taking a photo with all lights on would be the same as stacking the separate photos. This is unfortunately not quite the case in my setup: the red and blue channels are completely clean, but the green light leaks a little bit into both the red and blue channels of the camera. Luckily, this particular kind of leakage between channels is almost completely reversible in post-processing, provided it has been measured (by taking separate R, G and B photos at least once). For the sake of efficiency, I therefore chose to scan the negatives with the combined RGB light.
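To make the “measure once, undo in post” idea concrete, here is a sketch (an illustration rather than my actual processing chain; the matrix values are hypothetical, in the spirit of the green leakage described above):

```python
# Column i of M is the camera's linear RGB response to LED i alone, normalized.
# The values below are hypothetical: only the green LED leaks into R and B.
import numpy as np

M = np.array([[1.00, 0.08, 0.00],    # camera R: red LED plus a bit of green
              [0.00, 1.00, 0.00],    # camera G: green LED only
              [0.00, 0.10, 1.00]])   # camera B: blue LED plus a bit of green

M_inv = np.linalg.inv(M)

def unmix(rgb_linear):
    """Map combined-illumination camera values back to per-LED contributions."""
    return np.einsum("ij,...j->...i", M_inv, rgb_linear)

# Example: one pixel as captured with all three LEDs on at once.
print(unmix(np.array([0.40, 0.55, 0.20])))
```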

What is not easily reversible, though, is cross-talk between the color layers of the negative. It’s difficult to solve this problem with normal photographic equipment because the layers of the photo negative are not actually red, green and blue but cyan, magenta and yellow (and they’re correspondingly called red-forming, green-forming and blue-forming). The best you can hope for is that each dye absorbs light from only one of your R, G or B lights and leaves the other two alone as much as possible. But the spectral layout of these dyes is not easy to characterize. In general, it’s unlikely to match that of your DSLR or your LEDs exactly, so some cross-talk is inevitable in a hobbyist-grade setup such as mine. The exact amount of cross-talk depends on the particular brand / type of film negative, aging conditions, etc. You would need a spectroscopic measurement to figure this out precisely.

A completely regular positive characterization process would do fine. The task is not to characterize the whole scene - film - scanning sequence, but just the scanning. The idea is that the scanned negative image should look exactly like the real one, with as little color deviation as possible. The fact that it is a negative is not relevant. A positive slide should also look exactly the same as the original when scanned. The scanning pipeline itself does not have to be specific to negatives at all, does it? AFAIK no commercially made film negative targets exist anyway.

Regarding the positive transmissive targets and the procedure description please see the link above.

Then, with a scanned image that represents the original negative more precisely, the software interpretation should be expected to achieve a better result.

Hi @damien,
congratulations on this build. It looks like you made a lot of good decisions and the results look excellent. As I have been messing around with a similar system and have developed two different reproduction systems (collimated and diffuse) based on integrating spheres (albeit using a monochrome camera) over the past year, I wanted to share some ideas on the questions surrounding crosstalk, characterization, and 1-shot vs 3-shot.

While questions about calibration for photo negative film scanning arise frequently in these forums, I think it is important to remember that the medium was never intended to be calibrated end to end, and that its true potential lay in the consistency it delivered in conjunction with RA4 paper. There is an intrinsic, irreducible component of interpretation in the transfer from the negative to the positive (aesthetic decisions made in the colour darkroom or in Adobe Lightroom). In an effort not to state the obvious and to keep this post concise, I will avoid going into too much detail, but I have personally come to the conclusion that MY ideal system would transfer a neutral, linear representation of what is stored on my negatives into the digital realm. To me this means storing 16-bit linear data, with every single channel’s histogram preferably filled side to side with information stemming from a single dye in the negative. This makes sense to me in light of the negative’s purpose as an intermediate medium, a vessel for an image yet to be developed. Specifically, for clean data, this development would entail inverting the colors and applying gamma. This should give you a nice, neutral starting point from which to develop the image digitally.
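As a concrete (if crude) reading of that “invert, then gamma” step, here is a minimal sketch; the density-style inversion against the film base and the per-channel auto-scaling are just one possible choice, not a prescription, and the inputs are assumed to be already-separated linear 16-bit channel arrays:

```python
# One crude interpretation of "invert, then gamma" for a single linear channel.
import numpy as np

def develop_channel(neg, base, gamma=2.2):
    """Density-invert one linear negative channel against its film-base level, then gamma-encode."""
    t = neg.astype(np.float64) / float(base)   # transmittance relative to the clear film base
    t = np.clip(t, 1e-4, 1.0)
    pos = t.min() / t                          # densest negative area becomes the brightest positive
    return (pos ** (1.0 / gamma) * 65535).astype(np.uint16)

# positive = np.dstack([develop_channel(r, base_r),
#                       develop_channel(g, base_g),
#                       develop_channel(b, base_b)])
```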

While I understand the temptation of calibrating everything, including the light source, capture device and acquisition conditions, to represent the negative (as viewed on a light table) as truthfully as possible (in line with @nicnilov’s suggestion), I don’t see a benefit in transferring the information on the film into a complex representation with correlated primaries, presumably just to have it in line with human perception.

To me this is a bad idea, because it inevitably limits the radiometric resolution (dynamic range) you will be getting out of your system, because it is an ill-defined and probably frustrating process to begin with, and because from a data-processing point of view it just introduces unnecessary complexity. NLP is an excellent tool for analyzing such representations of complexly correlated colour information (taking advantage of sophisticated colour management) and extracting a gorgeous positive. However, if you can avoid needing such computational means of “unpacking” your image, because you went to great lengths to solve the problem in hardware, I don’t see why you should cycle back to relying on subsequent computational (or manual) conversion just to store a perceptually accurate representation of the intermediate medium.

Rather, I think that your approach of neutralising the film base and using the RGB channels of your camera for the CMY records in the film makes more sense. I will further add that I am convinced that most of the colour issues you observe will disappear if you switch to acquiring three shots with only one light source active for each. For instance, a purple tinge to red areas is plausibly explained by magenta (green’s complementary colour) being added on account of the green channel picking up some of that deep/photo red.

Here’s a graph of a Bayer filter’s typical spectral behaviour to corroborate this claim (this one specifically is for the IMX455, the sensor in the Sony Alpha 7R IV; I couldn’t find the one for the A7).


You can see that the green channel’s response at 455 nm (royal blue) and 660 nm (photo/hyper red) is low (~0.1) but not zero.

Acquiring a single channel at a time, you have effectively converted your camera to measuring densities at the discrete points where the negative’s colours were intended to offer optimal separation. If you don’t take my word for it, here’s a paper that preceded the publication on the multi-spectral film scanner (which I am sure you have seen) by the same author, Giorgio Trumpy. If you don’t take away the message that separate tri-color density measurements are the way to go, it will at least give you a practical guide to a possible calibration process you can use to numerically assess the crosstalk you seem to be quite concerned about.

Three separate shots are a simple, practical means of avoiding channels “bleeding” into each other, and have the added benefit that you can simply run your lights at full power and use the camera’s shutter speed to reliably, repeatably and accurately control the intensity of every single channel. I have been doing this with a monochrome camera, acquiring three images in rapid succession, and the inversion of the colours then becomes trivial. I’ll attach a video demonstrating one of last year’s crude prototypes, which allowed me to view the positive in real time, relying on the consistent separation of channels that the narrow-band LEDs deliver.

While I have only used monochrome sensors (for the resolution benefits they bring) in conjunction with my light source so far, I am currently waiting on some 98 CRI Nichia LEDs and a loan of a Sony Alpha camera from a friend to make some comparisons of white light vs 3-shot trichrome. I intend to adapt the script used in the video to synchronously control the lights and trigger acquisitions. If you’re interested, I can send it to you, though you will probably have to add an Arduino (or comparable microcontroller board) to your setup to use it.
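For the curious, a control loop of that kind could look roughly like the sketch below; the serial command protocol is entirely hypothetical (it depends on whatever firmware runs on the microcontroller), and gphoto2 is just one way to trigger a tethered capture:

```python
# Rough sketch of a 3-shot acquisition loop (hypothetical serial protocol and port;
# requires: pip install pyserial, plus gphoto2 installed on the system).
import subprocess
import time
import serial  # pyserial

PORT = "/dev/ttyACM0"        # hypothetical serial port of the LED controller

with serial.Serial(PORT, 115200, timeout=1) as leds:
    time.sleep(2)            # give the board time to reset after the port opens
    for channel in ("R", "G", "B"):
        leds.write(channel.encode() + b"\n")     # turn on only this channel (hypothetical command)
        time.sleep(0.2)                          # let the light settle
        subprocess.run(
            ["gphoto2", "--capture-image-and-download",
             "--filename", f"frame_{channel}.arw"],
            check=True,
        )
    leds.write(b"0\n")       # all channels off (hypothetical command)
```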

I’ll conclude this post by emphasizing that I don’t intend to discredit or criticize any other approach to film digitization (white light and single-shot RGB definitely have their benefits); I simply wished to point out some advantages of this approach, despite its procedural complexity. I apologize for the long-winded post.


The paper that you refer to comes from the same group at the University of Zurich as the more extensive write-up I referred to above:

Barbara Flueckiger apparently wrote an MSc or PhD thesis on the subject from which the Trumpy paper is derived, or the other way around (I didn’t look at the publication dates). This thesis is what initially drove me towards individual R, G and B scanning, and hence I was very pleased to see that Damien had moved in the same direction (but did a much better job than me…). From the extensive information in the thesis, it seems that high-end film scanners for old movies do indeed scan R, G, and B separately and then combine them in software, i.e., pretty much the same as you are doing. Overall, I concluded that going the extra mile was not worth the additional effort (for me, for what is just a hobby), as the convenience of a simple RGB video light with individual R, G, and B LEDs seems to work well enough, although I would not mind the red LED having a longer wavelength, perhaps 660nm or higher. I feel I have more or less reached a point of diminishing returns with a setup that is cheap, easy to assemble and convenient. I do have a Solux halogen with a diffuser and optional collimator as a continuous-spectrum fallback option, but so far the much more convenient and very simple Ulanzi video light is working very well (for me).

@Sethsg: your enlarger box is not spherical, so there is a finite risk that the output light is not perfectly homogeneous but instead has “hotspots”, depending on how the internal reflections combine towards the exit hole. If you can start with a wide collimated beam and throw that onto a diffuser (without bouncing it around in a box with internal reflections), then this might be better, perhaps, in theory…

Thank you all for sharing your thoughts. On the topic of separate R, G, B acquisitions, it must indeed produce a better “measurement” of the negative than a combined acquisition. I was reluctant to do it because I was not sure whether it would become a headache in post-processing (whereas a regular combined shot from a DSLR can be processed directly with Lightroom and NLP). I hadn’t actually seen the multispectral scanning paper, very interesting (link)! It feels like this should be the ultimate solution as far as digitization goes, but certainly not the most pragmatic one :wink:

Not quite, but nevertheless: