Integrating sphere as a uniform backlight

Ok, thanks for the explanations, they make sense.

I’m still trying to reconcile why I’ve never experienced the 10% fall-off at the edges in my own workflow.

And I know many other photographers with simple setups who don’t either.

Not to mention, if the main issue is light fall-off, I would expect the same thing to occur in traditional scanner setups (like the Epson V600), and I don’t see it there either.

It feels like there are still some pieces missing.

But even if the light source is radiating in all directions, not all of that light will reach the camera sensor.

I recognize this is probably quite complicated (based on the interaction of the dye layers of the film emulsion with the light, and the dispersion pattern that causes).

But looking at it very simply…

The light from points F and G is not going to have much impact in this setup because it falls outside the angle necessary to pass through both the film emulsion and the camera lens.

If this theory were true, I would expect longer focal length lenses to show less visible light fall-off in the corners of the negative than shorter lenses. And this is exactly what I’ve seen personally. I shoot with an 80mm macro on a cropped sensor and see no visible light fall-off. Meanwhile, many of the emails I’ve received showing extreme corner vignetting are from shorter lenses / larger sensors, like a 50mm on a full-frame sensor. This would make sense, as these lenses must be closer to the film, and the angle of light which passes through both the film emulsion and the lens will have increased substantially.
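A quick back-of-the-envelope check of this geometry, using the cos⁴ natural-vignetting approximation and treating the lens as a point at its working distance from the film. The distances below are assumptions for roughly 1:1-ish reproduction, and the lens’s own vignetting is ignored:

```python
import math

def corner_illumination(half_diag_mm, film_to_lens_mm):
    """Relative illumination at the film corner via the cos^4 law.

    Treats the lens as a point at film_to_lens_mm from the film and
    ignores the lens's own (aperture) vignetting.
    """
    theta = math.atan(half_diag_mm / film_to_lens_mm)
    return math.cos(theta) ** 4

half_diag = 21.6  # half-diagonal of a 36 x 24 mm frame, mm

# Film-to-lens distance is roughly f * (1 + 1/m); near 1:1 that is
# about 2f, so the longer lens sits about twice as far from the film.
for f_mm, dist_mm in [(50, 100), (80, 160)]:
    ri = corner_illumination(half_diag, dist_mm)
    print(f"{f_mm} mm lens at ~{dist_mm} mm: corner at {ri:.1%} of centre")
```

With these assumed distances the 50 mm setup loses roughly 9% in the corner versus under 4% for the 80 mm, consistent with the reports above.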

Totally agree that it shouldn’t be necessary to create an additional DNG file.

Yes, this will be much more flexible, and it is easy in Lightroom Classic to save that mask as a preset, or copy/paste onto other scans. But, of course, doing it by hand is potentially sacrificing on accuracy.

It would be great to be able to have a “flat field correction” that was implemented using a mask layer, so you would have perfect accuracy, but it would also be easy to use non-destructively.

That may be something I could add to Negative Lab Pro :wink:

Hi,
having read @ArnoG’s writeup I would like to add the following remarks:

1.) Creating an “infinitely large light source”, or something effectively equivalent for practical purposes, is actually not impossible: the diffusion boxes in diffusion enlargers or the diffusor used in the Fuji Frontier SP-3000 do just this, by using a rectangular arrangement of optical mirrors, the reflectance of which can be assumed to be >95% across the relevant spectrum. Here’s an image I lifted off the internet:
[image: Durst diffusion box]
What this does, coupled with a sufficiently opaque diffusor on top, is create an even illumination across the rectangle at the top. If this was good enough for professional enlargers (the pictured part is a Durst Bimabox) or the Fuji SP-3000, arguably one of the greatest area-scan scanners of all time, I don’t see why it shouldn’t be good enough for our purposes.
2.) I agree with Nate that what is FAR more important than the light source, and likely leads to more issues, is the lighting geometry: the distance between the various parts in the light path, the reflectance of the materials along the way, and so on. With an optical diffusor in the light path there is always a compromise between having the diffusor as close as possible to the film plane (like in old slide duplicators) while keeping it far enough away that dust/debris on said diffusor is not in focus. The closer the diffusor is, the smaller it can be, the more light can be used for imaging, and the more diffuse the quality of the light, but the cleaner the diffusor needs to be.
3.)

When light is collimated, it only enters the negative perpendicularly, and there would be no fall-off in the “ideal holder” case. The fall-off is due to points on the light source radiating in all directions. If this is focused, so as to create a perfect parallel beam, there will be no fall-off. Also, your hypothetical 50 mm “tunnel” holder will not cause shading in this way. Creating perfectly collimated light is, however, not trivial, as others have shown here (I did some attempts as well).

This is actually not correct. A collimated lighting arrangement, as used in condenser enlargers, actually uses light rays which are convergent at the taking lens’s aperture. The only scenario in which such illumination would be parallel and perpendicular to the film is if you were using a telecentric lens, but no one does that, so it’s a moot point. I found this out the hard way, as I once created such a lighting arrangement only to find my images had a horrible vignette. Here’s a diagram illustrating the lighting geometry (I took this from the Thorlabs website):
[diagram: condenser illumination geometry, from Thorlabs]
The air-spaced doublet would be the taking lens. The diffuser between the plano-convex condenser lenses could be removed.

Lastly, and this is a more general observation: there’s a time and place for hardware solutions and a place for software solutions. There are so many factors involved in the light path used during digitization that painstakingly trying to solve everything to the highest level of perfection in hardware is a fool’s errand IMHO. By all means, get the light to be as homogeneous as possible, but don’t rule out flat-field correction. It makes perfect sense to use it, and @ArnoG’s objection seems to be with the inconvenience of Adobe’s implementation rather than the principle. FFC addresses lens vignetting, veiling glare, sensor dust, and the inevitable dust on the optical diffusor, all of which even a perfectly homogeneous light source would leave unresolved.
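For anyone unsure what FFC actually computes, the core of it is just a per-pixel division by a blank-frame shot. A minimal sketch (this is not Adobe’s implementation, and the synthetic numbers below are made up):

```python
import numpy as np

def flat_field_correct(raw, flat, eps=1e-6):
    """Divide a scan by a blank-frame ('flat') shot taken with the same
    light, lens, and aperture, then rescale to keep overall exposure.

    Both inputs are linear-light float arrays (not gamma-encoded JPEGs).
    """
    raw = raw.astype(np.float64)
    flat = flat.astype(np.float64)
    gain = flat.mean() / np.maximum(flat, eps)  # >1 where the flat is dark
    return raw * gain

# Synthetic example: a ~10% corner fall-off baked into both frames
h, w = 400, 600
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) / (h / 2)) ** 2 + ((xx - w / 2) / (w / 2)) ** 2
falloff = 1.0 - 0.05 * r2   # ~10% darker in the extreme corner
flat = 0.8 * falloff        # blank-negative shot
scan = 0.5 * falloff        # scan with the same shading
corrected = flat_field_correct(scan, flat)
print(corrected.min(), corrected.max())  # constant (~0.48): shading removed
```

Anything that multiplies the whole light path the same way in both shots (vignetting, uneven light, diffusor dust) divides out.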

Further, it is worth mentioning that any dedicated film scanner I have ever used (Minolta, Epson, Imacon, Nikon, Fuji) performs some form of light-source calibration upon start-up.


@mightimatti Having experimented extensively with a collimated light source, I was going to chip in about “perpendicular” but chickened out. Glad you corrected that!

Hi Nate and Mightimatti,

Thanks for the inputs. First off, let’s ensure that my calculation is not taken beyond what it is intended to do: demonstrate how a flat, finite, but assumed homogeneous plate of light will light up a flat, finite film. Not more and not less. There’s no optics involved, only photons. What happens after the film is not at all considered here, nor what creates the theoretical perfectly homogeneous flat source of light.

Are you sure about that? A 10% fall-off won’t be visible just by looking at the negative; it will look perfectly fine. One has to take a shot of a blank negative and compare the RGB values across the frame in LRC with the picker tool. The values should be constant across the frame. If they are not, then the light is inhomogeneous.

A flatbed scanner is an entirely different configuration, where a film basically lies on top of the light panel. Not sure there is any light fall-off in that case.

As stated above, I do not calculate beyond what light is falling onto the negative from the light source.
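For reference, the standard form of that photons-only calculation, assuming the panel is a Lambertian emitter of uniform radiance $L$ parallel to the film at distance $d$, is

$$
E(x, y) \;=\; \int_{\text{panel}} \frac{L\,d^{2}}{r^{4}}\,\mathrm{d}A,
\qquad r^{2} = (x - x_s)^{2} + (y - y_s)^{2} + d^{2},
$$

since each source element contributes $L \cos\theta_s \cos\theta_r\,\mathrm{d}A / r^{2}$, and both cosines equal $d/r$ for parallel planes. The $1/r^{4}$ weighting is why the missing light beyond the panel’s edges is felt so strongly at the film’s edges when the panel is close.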

I have been wondering how I could transfer the calculated profile onto a mask in LRC, but I don’t know how to get from calculated values to a mask. Homogeneity of the brightness of a frame is affected by any of the following, IMO:

  1. Potential inhomogeneity of the light source
  2. Potential shading due to the negative holder
  3. Potential vignetting of the lens with which the picture is taken
  4. Optics in the light source to sensor path
  5. The use of a planar light source with finite dimensions

I only calculated number 5.
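For anyone who wants to reproduce number 5 without the spreadsheet, here is a small numerical sketch of that integral. The panel size and distances below are made-up stand-ins; plug in your own geometry:

```python
import numpy as np

def irradiance(x_f, y_f, panel_w, panel_h, d, n=400):
    """Relative irradiance at film point (x_f, y_f) from a uniform
    Lambertian panel of panel_w x panel_h at distance d (all in mm).

    Each panel element contributes cos(th_s) * cos(th_r) / r^2 dA;
    for parallel planes both cosines are d/r, i.e. d^2/r^4 per element.
    """
    xs = np.linspace(-panel_w / 2, panel_w / 2, n)
    ys = np.linspace(-panel_h / 2, panel_h / 2, n)
    X, Y = np.meshgrid(xs, ys)
    r2 = (x_f - X) ** 2 + (y_f - Y) ** 2 + d ** 2
    dA = (panel_w / n) * (panel_h / n)
    return float(np.sum(d ** 2 / r2 ** 2) * dA)

panel = 60.0           # hypothetical 60 x 60 mm panel
corner = (18.0, 12.0)  # extreme corner of a 36 x 24 mm frame, mm
for d in (20.0, 50.0, 205.0):
    ratio = (irradiance(*corner, panel, panel, d)
             / irradiance(0.0, 0.0, panel, panel, d))
    print(f"panel at {d:5.0f} mm: corner / centre = {ratio:.3f}")
```

With these made-up dimensions the corner deficit shrinks steadily as the panel moves away, which matches the counterintuitive result discussed further down.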

Actually, I used an internally reflective metal box for my trichromatic light source (A custom-built trichromatic light for DSLR film scanning - arnogodeke), which seems to be the same principle, but is obviously not as good as the integrating sphere at the beginning of this post.

Actually, moving the source as close as possible to the negative does improve things a little, but moving it (much) further away is far better as I have shown above and in the linked writeup on my blog pages.

If the light falling onto the red slab in the Thorlabs diagram is converging, it is no longer collimated, no? As far as I understand, the definition of a collimated beam is that the light is perfectly parallel (but I might be completely wrong here, so please correct me if that is the case).

If FFC does a lot more than only brightness correction, then sure, it will be useful, but I believe in hardware correction more than in a software fix (garbage in, garbage out, etc.), and I certainly don’t want an additional DNG file for every file I scan. I hope Adobe gets this and will allow us to create an FFC mask instead…

Yes, a purpose-built scanner that is designed by professionals does a lot more to “shape the light” than we do in our custom setups: See the Minolta 5400 light shaping happening here: Scanners and scanner lenses, middle of the page.

But alas, back to the original intent of my calculation and post: the issue of a finite light source and how it affects homogeneity is a standard problem in optics labs, and the calculations are done as I have shown. The ideal solution for a perfectly homogeneously lit negative is an integrating sphere, as in the first post of this thread, but given that not everyone has one, and not everyone has an infinite light source but rather a flat finite panel, physics dictates that there will be light fall-off at the edges, the amount of which depends purely on geometrical factors. The solution of moving the panel further away was counterintuitive (to me), but in doing so the issue will disappear. In practice, a 10% fall-off will be barely noticeable under normal circumstances: I discovered it when I was inverting some very dark B&W negatives and boosted the brightness, after which I got funny things towards the edges. Initially I suspected that my taking lens was vignetting, but decided to do the math to see what effect my finite light source would have.

@Nate: I can email you the Excel spreadsheet with the integral calculations so you can plug in your specific dimensions and see if you can confirm experimentally (using the picker tool in LRC) if indeed it shows up in your setup. In my scanning setup it does.

Small update: I managed to quite accurately create a brightness mask in LRC. The procedure is as follows:

  1. Take a shot of a blank B&W negative
  2. Import with Linear B&W profile (e.g., NegativeLab v2.3 B&W)
  3. Analyze the variation of brightness (if any) in LRC in the development module by hovering over the frame and looking at the R, G, and B values: They should be the same (since B&W) and constant across the frame
  4. Crop to see only the blank negative
  5. Apply a radial mask: My light panel is square, so I will need a circular mask. A rectangular panel will need an oval mask of similar aspect ratio as the panel. Stretch the radial mask across the frame.
  6. Invert the mask (to add brightness to the frame edges and corners)
  7. Apply exposure correction (add) while observing both the histogram and the Tone Curves panel. Observe the peak varying in width while dragging the exposure slider back and forth. The sharpest peak occurs when the brightness is constant across the frame, so drag the exposure slider until the peak is as sharp as possible. Done. Verify homogeneity by hovering across the frame again. (A small sketch after this list shows how to turn picker readings into a starting exposure value.)
  8. Assign a preset to this mask and copy that over the scanned frames.
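As promised in step 7, a minimal conversion from picker readings to exposure stops. This assumes the linear profile from step 2 so the readings scale with light; the example readings are hypothetical:

```python
import math

# With a linear profile (step 2), picker readings scale with light, so
# the EV to add at the edge is the log2 of the centre / edge ratio.
centre_reading = 78.0  # hypothetical picker value at frame centre, %
edge_reading = 70.2    # hypothetical picker value at the corner, %
stops = math.log2(centre_reading / edge_reading)
print(f"starting point: add ~{stops:.2f} EV at the frame edge")  # ~0.15 EV
```

From there, the histogram-peak method in step 7 refines the value.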

The sharpness of the peak can likely be tuned further by playing with the details of the mask, but I did not try that. The brightness is now homogeneous in my case. At some point when I find more time, I will move my light source further out to the 205 mm position, which should give only 1% fall-off in the extreme corner of the frame, but that involves a slight mod to my rig.