Yes, there is indeed overlap with any Bayer sensor. That’s why we don’t combine the photos as-is but combine the channels selectively. Taking the red (monochrome) channel from a photo that was illuminated with pure red backlight is as far as we can go. In that respect we are still playing by the rules of regular Bayer interpolation: we simply ask the Bayer sensor what it considers the overall red densities to be. We don’t care about camera calibration because we will calibrate the channels manually anyway. Having said that, a monochrome sensor would of course be best by all means, but that’s a different story.
The calibration profiles/forward matrices are invalidated by the spectrum employed during trichromatic scans. If you’re using narrow-band LEDs, the standard profiles (to my knowledge defined with respect to the continuous spectrum of a standard illuminant) don’t apply and your mileage may vary. It has been abundantly discussed in this thread, but the idea is to use the camera’s sensor (despite the CFA complicating this task, as @mw79 points out) to measure densities.
While it is likely that one could produce a new calibration profile that compensates for the crosstalk caused by the specific spectra of the trichromatic light source (and I actually shared a publication outlining how this would be done), in the absence of such a profile, isolating the channels by combining shots is a convenient way of mitigating the issue.
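As a concrete illustration, the combination step itself is simple once the three raws are demosaiced. Here is a minimal Python sketch (assuming the rawpy and numpy packages; file names are placeholders, and this is not anyone’s actual pipeline):

```python
# Minimal sketch: build an RGB scan from three single-channel exposures.
# Assumes rawpy + numpy are installed; file names are placeholders.
import numpy as np
import rawpy

def demosaic_linear(path):
    """Demosaic a raw file with no white balance, no tone curve, 16-bit output."""
    with rawpy.imread(path) as raw:
        return raw.postprocess(
            gamma=(1, 1),           # linear output
            no_auto_bright=True,    # keep the raw scaling
            use_camera_wb=False,
            user_wb=[1, 1, 1, 1],   # neutral channel multipliers
            output_bps=16,
        )

red_lit   = demosaic_linear("frame01_R.cr2")
green_lit = demosaic_linear("frame01_G.cr2")
blue_lit  = demosaic_linear("frame01_B.cr2")

# Keep only the channel that matches each backlight colour and stack them.
combined = np.dstack([
    red_lit[..., 0],    # R channel from the red-lit shot
    green_lit[..., 1],  # G channel from the green-lit shot
    blue_lit[..., 2],   # B channel from the blue-lit shot
])
```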
@mw79 I think some sort of Raw processor is inevitable and I would like to do everything in a single WASM call if I can. I suspect that will favour performance. Looking forward to those scans, thanks
Sure! I’m currently traveling, but I’ll send some shots to try combining once I get back and find time to re-scan. It seems I actually messed up my white balance in the current set (see @nate’s reply and fix). It also seems that @mw79 was going to send you some pics to play with.
Hi Nate,
So many thanks for offering help and for tackling this! Argh, it seems that I indeed messed up the in-camera WB setting during scanning. Too many things to keep track of at the same time, it seems. It reminds me of a quote:
“I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”
Quoted after Enrico Fermi by F. Dyson, Nature 427, 297 (2004).
Anyway, my apologies to all for wasting valuable time! I’ll re-scan and report back, but for now, simply setting the WB to daylight and using NLP as per Nate’s instructions with NLP Neutral already provides far superior results, despite the faulty scan. I tried it on some of the other shots of the roll as well and it produces consistently high quality. One would obviously expect purpose-written software to do a better job than a manual inversion, so I was really puzzled about whether my light source was introducing some sort of error. Glad to see that this seems not to be the case.
One more question for Nate: what settings would you recommend in the “Convert” box for the color model and pre-saturation to get as neutral a conversion as possible? Currently I’m using “Basic” and “3-default”.
Overall, the time was not all wasted (at least for me), since I learned a tremendous amount about LRC profiles and how the curves work. This has helped me get better conversions for my B&W shots, which I now actually do manually with (to me) the most pleasing results. One important lesson (for me) was the mess that standard Adobe profiles create for scanned pictures. Even the “Adobe Neutral” profiles seem to shift things around in an undefined way. Nate mentioned elsewhere to use only linear profiles for scanning, and indeed I am now using only my custom linear profiles, which I checked with RawDigger and FastRawViewer to verify that they are indeed linear. From what I understood, the NLP profiles are linear as well(?).
Anyway, having found the error (thanks to Nate) and getting very natural conversions with NLP despite the error gives me confidence in my current setup and the conversion. Next up will be a re-scan with the proper in-camera white balance setting, a scan with individual R, G, and B to compare, and then slides, which I never quite managed to scan accurately before (some light and setup generations back).
Cheers to all that provided inputs in my quest for good color, and especially to Nate for being so supportive!
Arno
That’s great to hear, @ArnoG!
Yep, that’s what I would recommend… “Basic” color model and “3-default” for pre-saturation. It’s possible with enough data that I could create custom settings here for RGB light sources in the future.
Also, side note, if you want to experiment with your own DCP profiles and/or primary calibrations, you can keep NLP from overwriting the profile currently set on your photo by setting the color model to “None” – this will simply go back to whatever DCP profile and primary calibrations you had set in Lightroom on the photo prior to opening it in NLP. It’s a good way to test out different custom profiles… just note that if you do this, any settings in NLP that rely on LUTs will not be available (the HSL LUTs and Saturation settings).
Yes, the NLP profiles are linear (technically, it’s a bit more complicated, but the end result is a true linear response in Lightroom all the way through highlights – something that I haven’t seen other “linear” profiles employ). They also have a number of other modifications made specifically with negative inversions in mind.
And yes, any profile that was created primarily with a positive digital image in mind will generally perform poorly for negative conversions… the most obvious reason is the embedded tone profile, but there are lots of other small details that go into a profile that can also cause issues that must be accounted for.
I’d agree that there is opportunity to optimize the existing profiles for narrow band LEDs. The biggest difficulty at the moment is simply that there isn’t a widely available narrow band RGB light table for me to profile.
Even just doing some quick adjustments to the existing profile, I can get much better results…
LEFT is existing NLP profile, RIGHT is a candidate profile for RGB light
So there is definitely opportunity for improvement, but it would be really nice if there were a commercially available narrow band RGB light made just for camera scanning that I could use for profiling so I would be confident that the profile would work well for lots of users (and not be specific to an individual bespoke setup).
Hi. Please find below the corresponding R,G,B triplet (RAW) from my previously uploaded image. Please note that in my automated workflow I account for backlight uniformity (LCC/FlatFieldCorrection), which is of course not reflected in the raw files themselves. You will notice that all three captures were made using the same f-stop and shutter speed. This is intentional: instead, I calibrate the individual backlight intensities per roll, which gives me a fairly neutral starting point after combining. As said earlier, this calibration also allows for improved consistency throughout the roll and, in consequence, more deterministic post-processing.
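For reference, the LCC/FlatFieldCorrection step conceptually amounts to dividing by a normalized capture of the blank backlight. A simplified Python sketch (not my actual pipeline; function and variable names are illustrative) would be:

```python
import numpy as np

def flat_field_correct(image, blank_frame, eps=1e-6):
    """Divide a linear scan by a normalized shot of the blank backlight.

    Both inputs are float arrays of the same shape; the blank frame is a
    capture of the bare light (or empty film gate) taken with the same lens,
    aperture, and light settings as the scan.
    """
    img = image.astype(np.float64)
    flat = blank_frame.astype(np.float64)
    # Per-channel gain that brings every pixel of the blank frame to its mean.
    gain = flat.mean(axis=(0, 1), keepdims=True) / np.maximum(flat, eps)
    return img * gain
```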
Best regards,
Michael
Hi Nate, all,
An update from my side as well. I rescanned the roll that was part of the discussion above, now taking care of what I learned:
- Camera WB was adjusted to be as close as possible to LRC daylight and then fixed. This turned out to be 5500 K temperature and +13 tint, according to LRC.
- The R, G, and B of the light were adjusted so that the peaks in the LRC histogram overlap as well as possible on a blank part of the negative. This created a nice gray, and I re-shot the same blank part after scanning to see whether the peaks still overlapped. There was no drift, so all okay.
- I did notice some light fall-off towards the corners, despite my light being homogeneous (possibly some vignetting of the lens? At f/8? Not sure…). I manually adjusted the lens vignetting correction to +30 to compensate, which resulted in homogeneous light on the blank part of the negative as measured within LRC. I copied this over to all frames after scanning. This is essentially similar to doing a flat-field correction, but I didn’t want LRC to create DNGs for all my raw files.
- I adjusted the manual exposure on the blank part of the negative to be maximal without clipping when viewed with the NLP 2.3 profile. I kept that exposure fixed for the entire roll, as per the instructions, to enable roll analysis.
- I scanned the roll with these settings, autofocusing on each frame and shooting with mirror up to prevent system shake.
- I took a white balance reading in LRC in the center of the first blank frame. This gave a WB temperature of 5450 K and tint +10. I copied these over to all frames.
- I inverted the first 20 frames, which were all shot in daylight, using roll analysis and the Basic and Neutral NLP settings, as per the discussion above and Nate’s advice. I did not include the remainder of the roll in this analysis, since those frames were shot under artificial light of various colors.
- I re-did the inversion for the frames discussed above frame by frame, to look at the difference between roll analysis and individual inversion.
Results: left is roll analysis, middle is individual inversion, right is as-scanned:
Overall, this seems to be working nicely now, thanks to all your inputs! I’m not sure I want or need to go to three separate channel scans and the additional workload that brings, but I’ll create some three-channel shots out of curiosity and for others to play with, and will share them here. I’ll also email Nate the now (apparently) proper raw scans with hardware orange-mask negation, in case he has a use for them to optimize NLP further.
For now, I’m very happy with the results and the convenience of using NLP for the inversions, as well as the excellent results it now creates without further tweaking of settings. Sure, I could tune the pictures further, but these are excellent starting points. What remains (for me) is to now try to scan slides, since I never got that to work properly in the past.
Cheers,
Arno
Hi @ArnoG ,
if you like, feel free to take just the 3 separate shots of the exact negatives you showed us above. Pass on the RAWs and I’ll convert them for you. You can then check whether the difference justifies the extra mile for you or not…
Best regards,
Michael
Thanks @mw79,
Much appreciated. Will do.
Another thing I just tried is to manually drag the sliders in the curves panel, using my own custom linear profile (which doesn’t seem to differ from the NLP 2.3 profile). Doing this globally doesn’t work well and gives funky color casts, but if I do it in the individual R, G, and B channels, only dragging the endpoints to where the data starts (keeping the inversion linear, i.e. no “gamma” change), I obtain the pictures to the right. The frames are now: left = NLP roll, middle = NLP individual, right = quick manual inversion on the individual R, G, and B channels:
It seems that now that the hardware is set up properly, the manual inversion works just fine as well, without even trying hard and done quickly. I might even like the colors in the manual linear inversion best(?).
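Numerically, what I am doing in the curves panel boils down to a per-channel linear rescale followed by a flip. A rough Python sketch of that idea (assuming a linear float RGB array; I do the real thing in LRC, and the percentile choice here is arbitrary):

```python
import numpy as np

def invert_linear(scan, low_pct=0.1, high_pct=99.9):
    """Per-channel linear inversion: map each channel's black/white points
    to 0..1, then flip. No gamma change, mirroring the curves-panel trick."""
    out = np.empty_like(scan, dtype=np.float64)
    for c in range(3):
        lo, hi = np.percentile(scan[..., c], [low_pct, high_pct])
        out[..., c] = 1.0 - np.clip((scan[..., c] - lo) / (hi - lo), 0, 1)
    return out
```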
So I’m curious to find out what a proper individual channel combination would do…
A little later…:
Hi @mw79,
The individual channel shots can be downloaded from:
I simply turned each individual control knob to full power to obtain R, then G, then B. I didn’t change the exposure per shot and focused on the first shot of three only. Not sure what to do with these myself, but at least they look cool…
Hi Nate,
Sorry for the spam, but I think I might have discovered a bug in NLP roll analysis(?). In my roll analysis, I set the conversion to NLP Neutral for the roll, but apparently only the first picture is converted with that setting; the second and later ones are converted with NLP Standard?
Arno
Disregard. It seems to work fine on the second roll I scanned and inverted. Perhaps I messed up something in the first roll.
Hi @ArnoG ,
Thanks for the R,G,B triplet. Your R and B captures are running very hot; I needed to lower exposure in post by 1.1 / 0.8 stops. Your R and B captures would therefore need roughly half the light power. Anyway, I continued with the less optimal starting point. This is after combining, white balancing, and screen optimization.
NOTE: regardless of scanning technique, your images seem to be off in terms of white balance. Therefore, to better see the difference between discrete R,G,B and white or single-shot R,G,B captures, you might want to adjust the white balance (in post) to roughly the same as I did, and then compare the results. This is what I used as a quick reference point in the image.
Hi @mw79 ,
Thanks for trying and demonstrating. Your colors do look realistic, if a tad dull. The ground is more brown than in the previous results, perhaps tending somewhat towards purple. For sure the wood chips should be brown/beige, which is also what Nate got with the original NLP profile above (but with the scan hardware not properly adjusted for color). Not sure what is closest to “the truth”, but your colors somewhat remind me of what happens if one sends an image in Adobe RGB color space to a printer that expects sRGB. Not sure. Skin colors look very good though. Overall it seems to miss the popping colors that Ektar is known for, but I’m impressed by what you got. My initial goal of doing everything as linearly as possible was to be able to see the tones a specific film stock is known for. For a Portra film (presumably bad for landscapes but great for skin tones), I do see a pinkish something in landscapes while skin tones look extremely nice. Perhaps it’s indeed working as intended now (…).
For sure I will have exposed too hot, since I had the camera on manual with a setting that was close to ETTR with the lights canceling the orange mask. From there, turning each channel up to full would indeed push them too hot.
Regarding WB, I left the camera at the WB setting that gives 5500K and +13 tint (daylight) in LRC, from where I adjusted the R, G, and B balance to obtain “perfect” grey through a blank part of the negative. That WB will obviously be off when using each channel at maximum.
Hence, I’m not entirely sure whether I followed an optimal procedure for your workflow. If I find some time, I might study combining the individual channels, but I inverted a roll of Portra with NLP using the settings from Nate’s instructions, and the speed and results are hard to beat manually without spending much more time. NLP missed on perhaps 4 or 5 frames of the roll, and those were easily corrected manually. To me the main strength of NLP is speed and convenience. For the few extraordinary shots (if any) I can spend more time and try harder to get an ultimate result.
BTW, and perhaps interesting for others who want to play with (quasi-)trichromatic scanning, or at least with negating the orange mask in hardware, I started with a “poor man’s” attempt: there’s an app called “color savy” that lets you create arbitrary RGB colors on an iPhone screen by adjusting RGB sliders. One can even take a picture of a blank section of negative on a white screen (RGB all at maximum) and measure the color to correct. Setting the RGB channels to 255 minus the measured color then negates the orange mask (or one can simply move the sliders until one sees grey), like so:
A better way would be to take a scan with the iPhone as the light source, negate the orange mask with the sliders, and measure how grey the grey really is.
iPhone screens are apparently also (quasi-)discrete, but in the few spectra one can find online there is also light in between the individual peaks, so it’s not as good as individual LEDs. It does, however, allow for negating the orange mask in hardware, simply with an iPhone and a free app.
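The arithmetic behind the slider trick is just the 8-bit complement of the measured mask color, e.g. (with made-up measurement values):

```python
# Measured colour of the orange mask through a blank negative (placeholder
# values, not an actual measurement).
measured = (230, 150, 90)
backlight = tuple(255 - v for v in measured)
print(backlight)  # (25, 105, 165) -> set these as the app's RGB sliders
```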
Hey all, just wanted to make a note here that I am the author of the Tricolor scanning article on Medium, and while I still think tricolor scanning is the way to go, I now think some of my reasoning in that article is incorrect. I believe that the explanation in this document (not authored by me) is more accurate: Color Separation for Color Negatives using Digital Cameras - Google Docs
Also, I want to reiterate a point I made above about how to properly assess the results of a tricolor scan of a color negative vs. a scan made with high-CRI white light. The problem we encounter with white light scans is that, in order to avoid color casts in the shadows and highlights, we need to apply a per-channel gamma correction. This is in effect what NLP does: it calculates what per-channel gamma correction is required to correctly invert the negative. In Photoshop or Lightroom terms, this means that each channel needs an individual curves adjustment. The way color negative film is supposed to work is that you should only need a simple, linear intensity change to the green and blue channels. In essence, if you white balance on medium grey, the color channels should all align, and there should be no color casts in the shadows or highlights. The way to know whether the process is working correctly is to scan a color negative image of a greyscale step progression and white-balance on any neutral patch. Now, color negative films tend not to be 100% linear in the shadows and highlights, and this is in fact a lot of what defines the characteristic “look” of a given emulsion, so this will never be perfect, but it should be close. Here’s an example of what I’m talking about:
Here is a neutral step progression scanned with high CRI white light:
I did the following steps in Photoshop:
- Add an invert layer.
- Add a levels layer.
- White balance in the levels layer on the third square from the left.
- Add a curves layer with the adjustment mode set to luminosity to up the contrast.
Here’s what you get:
I used the eyedropper with a 51x51 sample (to average out the noise) and sampled the darkest and lightest squares. Keep in mind I white-balanced on the third square from the left. The values are:
Darkest Square: R34 G27 B7
Lightest Square: R171 G173 B182
To correct this, you would need to add a curve to the blue channel that adds blue to the shadows and removes it from the highlights. There’s no linear adjustment that can correct this. Depending on the emulsion, you might have to do this for the blue channel only, or for both the blue and green channels.
Here’s an example with tricolor light:
And the RGB sample values:
Darkest Square: R13 G16 B10
Lightest Square: R181 G183 B186
As you can see, while the white light introduces a variation of 10-20, the tricolor variation is < 10, which is well within the range that is native to the emulsion. When you’re testing out your tricolor lights, I recommend using this as an assessment of how well your setup works. If you can invert a greyscale step progression like this by just inverting and white balancing in the middle, you’ll know you’ve got it right. I would also note that if you get this working properly, you won’t need NLP at all. You can simply invert the colors on all of your negatives and apply a global white balance from a reference negative (assuming you did all of your scans with the same white balance).
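If you would rather script this check than use the Photoshop eyedropper, here is a rough sketch of the same assessment (assuming linear float RGB arrays scaled 0-1 and hand-picked patch coordinates; this is not the exact procedure I used above):

```python
import numpy as np

def patch_mean(img, x, y, half=25):
    """Average a (2*half+1)^2 sample, like a 51x51 eyedropper."""
    return img[y - half:y + half + 1, x - half:x + half + 1].mean(axis=(0, 1))

def assess_neutrality(negative, wb_xy, dark_xy, light_xy):
    """Invert, white-balance on one neutral patch, then report how far the
    darkest and lightest patches drift from neutral per channel."""
    positive = 1.0 - negative                 # simple linear inversion, 0..1 input
    wb = patch_mean(positive, *wb_xy)
    balanced = positive * (wb.mean() / wb)    # channel gains from the WB patch
    for name, xy in (("dark", dark_xy), ("light", light_xy)):
        rgb = patch_mean(balanced, *xy)
        print(name, rgb, "spread:", rgb.max() - rgb.min())
```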
Over the past few years, since I wrote that article, I have been working on my own custom RGB light that emits narrow-band wavelengths aligned with the Status M densitometry standard (the one developed specifically for color negative film), which I used to conduct the test above. I designed an LED grid and found a company in China that would sell me 5050 3-channel LED emitters at the custom wavelengths I needed. Then, after a lot of trial and error, I used a reference design from Texas Instruments based on the LM3409 driver to build a board that can drive a 3-channel LED grid with high-frequency (30 kHz), high-resolution (10-bit, for values from 0 to 1023) PWM.
For anyone going down this path I will warn you up front: PCB design for use cases with high-frequency signals is not trivial. High-resolution PWM in the tens-of-kilohertz range can involve signal transition times of single nanoseconds. Combine a rapidly pulsing signal with a power output of a few hundred mW and you can accidentally build yourself a neat little radio jammer for certain frequencies. If you want to do something similar, I recommend using the analog dimming function that many LED drivers offer. I also went the route of having all of the LED channels in my grid share a cathode, which simplified the layout and reduced the number of connections between the driver and emitter boards (a single ground connection vs. one ground per channel). If you go this route, you’ll find that most drivers use a feedback mechanism that precludes a shared cathode.
Here’s the current state of my light: two recent revisions of my board, the LED grid, and my partially assembled 3D-printed enclosure. I’m using 3 rotary encoders to control the brightness of the individual channels, an ESP32 for all of the control logic, and a little OLED screen to display the brightness values. There’s also an interface for analog control of the brightness, and pins that allow each channel to be disabled and enabled independently of the controller, for future automation.
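For the curious, the dimming side of the controller is conceptually tiny. A rough MicroPython sketch of three 30 kHz PWM outputs (not my actual firmware; pin numbers and levels are placeholders):

```python
# Rough MicroPython sketch (assumes an ESP32 running MicroPython; my real
# firmware differs). Three PWM outputs at 30 kHz feed the dimming inputs of
# the LED drivers; pin numbers are placeholders.
from machine import Pin, PWM

FREQ = 30000  # Hz; at this frequency the ESP32 still gives roughly 10-11 bits

red   = PWM(Pin(25), freq=FREQ)
green = PWM(Pin(26), freq=FREQ)
blue  = PWM(Pin(27), freq=FREQ)

def set_levels(r, g, b, bits=10):
    """Set channel brightness from 10-bit values (0..1023)."""
    scale = 65535 // ((1 << bits) - 1)   # map 10-bit values onto duty_u16 (approx.)
    red.duty_u16(r * scale)
    green.duty_u16(g * scale)
    blue.duty_u16(b * scale)

set_levels(1023, 700, 650)   # example mix, not a calibrated setting
```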
One major issue I have run into, and one I am still struggling with, is that it is incredibly difficult to get an even field with an LED array. With my original grid, the LED emitters were all oriented the same way, which led to the layout of the emitters being projected onto the diffusers of the light. This resulted in a backlight that was actually a diffuse representation of the emitter layout, basically a faint gradient from magenta to green, which stuck around no matter how much diffusion I used. You can see an example of this here:
My current model rotates every other emitter 180 degrees, but for some reason I now have color shifts in the corners of the frame, which persist through any level of diffusion and remain constant regardless of how I position the light (which indicates it’s not related to the diffusion). I’m currently in the process of figuring out whether this problem is lens related, or some property of the grid that I don’t understand. I have a sneaking suspicion that the lenses on top of the LED emitters cause a lot of these problems.
The other issue I have encountered, which also seems evident in the tricolor scans others have posted above, is that the red channel ends up oversaturated. I’m not sure why this happens, though the easiest solution I’ve found is to reduce the intensity of the red channel relative to green and blue. If I had to hazard a guess, I’d say it has something to do with the fact that the red channel in the positive is captured by the green channel in the negative, and there are two green photosites for every red and blue site in most Bayer sensors. Though that doesn’t answer why reducing the intensity of the red light channel addresses the issue.
Still a lot to figure out here, but it’s great to see how many people are working on this, and I’m confident we can find solutions to the problems. I’m also doing some testing with integrating spheres at the suggestion of mightimatti, but I haven’t spent nearly as much time on that as I have on the LED array. If anyone has any questions about the design process for the LED light or my tricolor scanning tests, I’d be happy to answer them.
Good post Alexi,
glad to see some thorough testing methodology introduced here.
Hi @flimsy,
Thanks so very much for your excellent summary in the Medium write-up. It helped me a lot in understanding the bits and pieces I read in various places. I’ll also read the corrected version that you link to.
I really like the progress you’re making on creating an ideal light source. Perhaps you can consider marketing that eventually? I actually received exactly that question from someone out of nowhere, but I won’t have time to even remotely consider it myself. Perhaps it’s interesting for you?
Anyway, apart from having the slight impression that things are getting slightly out of control here (joke), I’m impressed that you managed to find a manufacturer willing to create LEDs with ideal RGB wavelengths. I don’t suppose that your array can be driven by the WS2812 protocol and that you happen to have a spare one? I would certainly be interested in trying to run such an LED array in my light.
On the point of doing a step wedge, I am doing exactly that, albeit so far only for B&W. I started by calibrating B&W film by doing the usual exposure steps to determine zone I and the actual film sensitivity, and zone IX to determine development time. I then dug out my 20+ year old zone calibration card, which enabled me to look at all zones in one frame, but the thing was not accurate (anymore?). I then created a digital one in PS in LAB space, using an 8-bit gray profile and making 11 patches with L increased by 10% on each one, which provides an idealized step wedge where each step corresponds to the linear visualization of one stop of exposure, i.e. a doubling of light intensity. I added a frame-size patch with zone V only, to set exposure on. The idea is that this is shown on a calibrated monitor and then photographed, so that the entire step wedge is in one frame. The thing looks like so:
Since I don’t know what happens to the carefully prepared jpg when sharing like this, it can also be downloaded from:
My thought was to shoot the patches as the first frame of every new roll, since I will have my laptop with its calibrated screen with me at all times anyway, so I would have a control frame at the beginning of each roll. My intent was initially for B&W only, but after reading your suggestions I will do this for color film as well from now on: set exposure on the large zone V patch, scroll down, and take a pic of the patches with that exposure.
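For anyone who would rather generate such a wedge programmatically than in Photoshop, a minimal Python sketch (using numpy and Pillow; this is not how I built mine, and a calibrated display is still required to photograph it) could look like this:

```python
# Minimal sketch of an 11-step L* wedge (L* = 0, 10, ..., 100) written as an
# 8-bit grayscale strip. The conversion goes L* -> relative luminance -> sRGB
# encoding; patch size and file name are arbitrary.
import numpy as np
from PIL import Image

def L_to_srgb8(L):
    """CIE L* -> relative luminance Y -> sRGB-encoded 8-bit value."""
    Y = ((L + 16) / 116) ** 3 if L > 8 else L / 903.3
    s = 12.92 * Y if Y <= 0.0031308 else 1.055 * Y ** (1 / 2.4) - 0.055
    return round(255 * s)

patch, height = 200, 200
strip = np.zeros((height, patch * 11), dtype=np.uint8)
for i, L in enumerate(range(0, 101, 10)):
    strip[:, i * patch:(i + 1) * patch] = L_to_srgb8(L)

Image.fromarray(strip).save("step_wedge.png")
```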
Having grown up with pre-digital electronics, I struggled a bit to get used to Arduino, and I have to admit that I don’t grasp all the details of why you are constructing a high-frequency board to drive the LEDs, but I’ll read some more and try to understand better.
Hi all,
As mentioned in the first post in this forum thread, a finite planar light source (such as an LED panel) will always result in an inhomogeneous light distribution on the film. A planar light panel only gives a homogeneous light distribution on the film if it is infinitely large compared to the film, which is not practical for table-top solutions. It took me a while to get this done, but I have now calculated the brightness inhomogeneity on the film that results from using a planar light source of finite size, like so:
The details of this calculation are provided elsewhere:
Typical results for my custom trichromatic light source (A custom-built trichromatic light for DSLR film scanning - arnogodeke), with a 68 x 68 mm^2 homogeneous and diffused light placed 20 mm in front of a 24 x 36 mm^2 negative, look like this:
This should be comparable to what most of us are doing: 13.1% light fall-off in the extreme corners, and significant fall-off around the edges of the frame. This will lead to all kinds of commonly observed issues, such as brightness variations towards the edges of the frame in B&W scanning, as well as color shifts in color scanning.
The tool obviously allows me to move the light around: when it is placed closer to the frame, things improve, but not by a lot. When it is placed 205 mm away from the negative, the fall-off can be as low as 1% in the last mm^2 in the corner of the frame, like so:
Hence, for all of you out there using LED panels for your scans, please be aware: this is quite significant, and the calculations seem to closely match what I actually see when I scan a blank frame.
Solutions are to create a brightness mask in your editor, create a hardware mask that counters the light fall-off, tune the LED panel to provide more brightness towards the edges (if at all possible), or move the panel further away, where things improve (at the cost of overall brightness for scanning, obviously).
I had to write some code in MS Excel to solve the rather nasty integrals needed to calculate this, and I might share that file if there is sufficient interest.
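For those who prefer code over a spreadsheet, the integral can also be approximated numerically. Here is a rough Python/numpy sketch under the assumption of an ideal diffuse (Lambertian) panel parallel to the film; it is an independent approximation, so the numbers need not match my Excel model exactly:

```python
# Relative irradiance at a film point (x, y) from a finite Lambertian panel
# parallel to the film at distance d. For a panel element at (u, v), both
# cosine factors equal d / r, so the contribution is proportional to d^2 / r^4.
import numpy as np

def irradiance(x, y, panel=68.0, d=20.0, n=400):
    """Relative irradiance at (x, y) mm on the film; panel edge and d in mm."""
    u = np.linspace(-panel / 2, panel / 2, n)
    v = np.linspace(-panel / 2, panel / 2, n)
    U, V = np.meshgrid(u, v)
    r2 = d**2 + (x - U)**2 + (y - V)**2
    return np.mean(d**2 / r2**2) * panel**2   # simple Riemann approximation

center = irradiance(0, 0)
corner = irradiance(18, 12)                   # extreme corner of a 24x36 mm frame
print("corner fall-off: %.1f %%" % (100 * (1 - corner / center)))
```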
Cheers,
Arno
Thank you so much for sharing, @ArnoG
A few observations and follow-up questions:
(1) How much of a role does the film holder itself play in this analysis?
From my own observations with different film holders, the amount by which they obscure diffused light from reaching the film emulsion is the primary factor in the noticeable light fall-off I’ve observed.
As a mental experiment, imagine two film emulsions each suspended 100mm above a light panel. In the first case, the film is within a bulky film carrier, where light must pass 50mm through a narrow passage before reaching the film. In the second case, the film is wet-mounted on museum glass.
In the first case, I would expect significant light fall-off (or perhaps, “vignetting” is more accurate in this context). In the second case, I would expect virtually no noticeable light fall off (given a sufficiently sized light panel, and good diffusion).
So I would argue that the fall-off caused by a film holder is the primary practical reason that film scanners experience light fall-off, and for most home-scanners, will be the thing to look at if they are having this issue.
(2) As another experiment, imagine you had collimated light.
I’d be interested to see if that noticeably impacts your calculations.
(3) As a solution, have you considered Flat Field correction?
This is something you can already do in Lightroom by creating a calibration frame and syncing that to the other photos in your set. Trying to manually create a brightness mask could lead to other inaccuracies.
Hi Nate,
(1) I can only calculate the film plane to light-source plane (in principle one could of course include a blocked light path, but that’s more work than I am willing to do…). I would disagree that the main issue folks see is the film holder, since the drop in brightness at the edges just from having a common finite-size panel is easily 10% already, independent of the holder. Any “shadow” from a holder only adds to that, so yes, the holder is important as well. Ideally, a holder would have a sharp edge and a steep chamfer on the side facing the light source, so as not to block any light coming from the side onto the negative. A 50 mm “tunnel” before the negative would indeed be very bad, since it would cause a lot of “shadow”, or vignetting, by blocking the panel light that needs to reach the negative from the side. Still, any holder “shadow” only adds to the “ideal holder” case that I calculate, which already easily gives some 10% fall-off at the edges for, say, a 100 x 100 mm panel a few cm in front of the negative.
(2) When the light is collimated, it enters the negative only perpendicularly, and there would be no fall-off in the “ideal holder” case. The fall-off is due to points on the light source radiating in all directions. If the light is focused so as to create a perfectly parallel beam, there will be no fall-off. Also, your hypothetical 50 mm “tunnel” holder will not cause shading in this way. Creating perfectly collimated light is, however, not trivial, as others have shown here (I did some attempts as well).
(3) In my view, “Flat Field” correction is a dumb way to introduce a brightness mask. I looked into it, but Adobe wants to create an additional DNG file for each RAW file, which will clog up my HDD. Why is that needed? I did play around with a simple round brightness mask that I put in a preset and tuned to provide homogeneous brightness across a frame on a blank negative. This is easy enough to do, it is effective, and it also corrects any potential shading from a holder. It is basically what happens in a “Flat Field” correction; I just prefer to do it manually since I don’t need the additional DNG files. If a correction mask is put in a preset, its strength can easily be varied with the “amount” slider of the preset. My calculations show that if the light source is square, I need a circular mask (ignoring holder shading). If the light source is rectangular, the aspect ratio of the now oval mask should match the aspect ratio of the light source (again ignoring holder shadow).
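To illustrate the kind of mask I mean, here is a rough Python sketch (not my actual preset; the quadratic profile is only an approximation of the real fall-off, and the parameter values are placeholders):

```python
# Elliptical gain ramp whose aspect follows the light panel and whose strength
# matches the fall-off measured on a blank frame.
import numpy as np

def brightness_mask(shape, falloff=0.13, panel_aspect=1.0):
    """Gain map (>= 1.0) that brightens towards the corners.

    shape        -- (height, width) of the scan in pixels
    falloff      -- measured relative fall-off in the extreme corner (0..1)
    panel_aspect -- width/height of the light panel; 1.0 gives a circular mask
    """
    h, w = shape
    y = np.linspace(-1, 1, h)[:, None]
    x = np.linspace(-1, 1, w)[None, :] / panel_aspect
    r2 = (x**2 + y**2) / (1 + 1 / panel_aspect**2)   # reaches 1.0 in the corner
    return 1.0 / (1.0 - falloff * r2)

gain = brightness_mask((4000, 6000), falloff=0.131)
# corrected = scan * gain[..., None]   # apply to a linear RGB scan
```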