How to Measure FADGI SFR10/SFR50 (MTF) for a Camera Film-Scanning Setup?

Hi everyone — first post here, though I’ve been reading the camera-scanning threads for a long time.

I’ve been refining my camera-scanning setup for the past couple of years and I’m now wondering whether it could meet (or at least be evaluated against) FADGI requirements, since I may want to scan film for other people or institutions. I know compliance isn’t everything, but it would be nice to have a scientific way to determine, for example, the MTF values of my lens/system.

Current setup (mainly 35mm negatives):

  • Fuji GFX 100S

  • Rodenstock Rodagon-D APO 1x f/4 (main)

  • Mamiya Macro A 120mm f/4 M / Rodenstock Rodagon-D APO 2x f/4.5

  • Kaiser repro stand + Manfrotto geared head

  • Kaiser FilmCopyVario holder with ANR glass on the bottom + 35mm mask on top

  • Cinestill CS-Lite (used for B&W and color positive)

While reading the FADGI guidelines, I saw several metrics I don’t fully understand how to measure in a camera-scanning workflow, especially:

  • Color Channel Misregistration: must be < 0.33 pixel (i.e., chromatic aberrations)

  • SFR10 (Sampling Efficiency): must be > 90%

  • SFR50 (50% SFR): should be between 45% and 65% of half the sampling frequency

From what I understand, SFR10/SFR50 are related to MTF/sharpness (how well the system reproduces contrast at different spatial frequencies), but I’m not sure how to test these values practically for a macro/copy setup.
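For intuition, here is a stripped-down numpy sketch of the slanted-edge math on a synthetic, already-oversampled edge profile: differentiate the edge spread function, FFT it, and read off where the response crosses 0.5 and 0.1. Real tools like MTF Mapper additionally project a slanted edge into an oversampled ESF and handle noise and windowing far more carefully, so treat this only as an illustration of what SFR50 and sampling efficiency mean, not as a FADGI-compliant measurement:

```python
import numpy as np
from math import erf, sqrt

def sfr_from_esf(esf):
    """SFR (MTF) from a 1-D edge spread function.
    Returns (freqs, sfr) with freqs in cycles/pixel (Nyquist = 0.5)."""
    lsf = np.diff(esf)                  # line spread function
    lsf = lsf * np.hanning(len(lsf))    # window against spectral leakage
    sfr = np.abs(np.fft.rfft(lsf))
    sfr /= sfr[0]                       # normalize so DC response = 1
    freqs = np.fft.rfftfreq(len(lsf), d=1.0)
    return freqs, sfr

def freq_at_response(freqs, sfr, level):
    """First frequency where the SFR falls to `level` (linear interp)."""
    i = np.where(sfr <= level)[0][0]
    f0, f1, s0, s1 = freqs[i - 1], freqs[i], sfr[i - 1], sfr[i]
    return f0 + (f1 - f0) * (s0 - level) / (s0 - s1)

# Synthetic ESF: an ideal edge blurred by a Gaussian of sigma = 1 px
x = np.arange(-64, 64)
esf = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in x])

freqs, sfr = sfr_from_esf(esf)
f50 = freq_at_response(freqs, sfr, 0.5)   # SFR50 in cycles/pixel
f10 = freq_at_response(freqs, sfr, 0.1)   # frequency at 10% response
eff = 100 * min(f10, 0.5) / 0.5           # FADGI-style sampling efficiency
print(f"SFR50 = {f50:.3f} cy/px = {100 * f50 / 0.5:.0f}% of Nyquist")
print(f"Sampling efficiency = {eff:.0f}%")
```

FADGI expresses both numbers relative to half the sampling frequency (Nyquist), which is why the ratios above are divided by 0.5.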

I know Imatest can do this, but it’s expensive (software + targets). I’ve also found MTF Mapper (open source), which seems like a more realistic option. Jim Kasson has also written a lot about testing similar setups (including razor-blade/slanted-edge type tests), which seems promising.
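On the misregistration number in particular: it can be estimated from the same slanted-edge capture by locating the edge independently in each color channel and comparing the sub-pixel positions. A minimal numpy sketch on synthetic data (the 0.25 px red shift and the logistic edge shape are invented purely for illustration):

```python
import numpy as np

def edge_centroid(profile):
    """Sub-pixel edge position of a 1-D edge profile, taken as the
    centroid of its derivative (the line spread function)."""
    lsf = np.abs(np.diff(profile.astype(float)))
    x = np.arange(len(lsf)) + 0.5   # derivative sits between samples
    return np.sum(x * lsf) / np.sum(lsf)

# Synthetic soft edge per channel; red is shifted by 0.25 px to
# mimic lateral chromatic aberration.
x = np.arange(64)
def soft_edge(center, width=1.5):
    return 1.0 / (1.0 + np.exp(-(x - center) / width))

g = soft_edge(32.0)
b = soft_edge(32.0)
r = soft_edge(32.25)

mis_r = edge_centroid(r) - edge_centroid(g)
mis_b = edge_centroid(b) - edge_centroid(g)
print(f"R-G misregistration: {mis_r:+.2f} px")  # about +0.25 px
print(f"B-G misregistration: {mis_b:+.2f} px")  # about 0.00 px
```

Against the FADGI limit you would check that the worst channel offset stays under 0.33 px, ideally at several positions across the field.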

My questions:

  1. What’s the most practical/affordable way to measure SFR10 and SFR50 for a film camera-scanning setup like mine?

  2. Can MTF Mapper (or other low-cost tools) produce results comparable to what FADGI expects, or is Imatest basically required?

  3. Is there an existing transparency-based test target suitable for macro film scanning (something like a slanted-edge/Siemens star target on film), or can this be done using something like Vlad’s test target (which I already have)?

  4. If you’ve tested your own setup against FADGI-like metrics, what workflow/materials did you use?

I’d love to hear how others are approaching this without spending a lot of money on proprietary software and targets. Thanks!

PS I have been working for quite a long time on some lens/film holder/backlight comparisons that might be handy for other people.

I forgot to add some images/links:

FADGI guidelines PDF (relevant info starts from page 54): https://www.digitizationguidelines.gov/guidelines/FADGI%20Technical%20Guidelines%20for%20Digitizing%20Cultural%20Heritage%20Materials_3rd%20Edition_05092023.pdf

Jim Kasson’s blog: https://blog.kasson.com/the-last-word/towards-a-reproducible-mtf-testing-protocol/

Welcome to the forum, @Thornado_007

Maybe your post is beyond my understanding of what this forum is for, but what is your question regarding NegativeLab Pro?

Hi, thanks. I know it’s a very specific question, but I thought I would give it a shot, since I have read a lot of different questions on this forum regarding lens sharpness, especially in the camera scanning category, so it seemed worth asking. It doesn’t directly have anything to do with NLP, although I am very excited for the upcoming standalone version! In general I think the FADGI guidelines contain a lot of relevant tips for getting good camera scanning results. But I completely understand it may be too specific a question. Thanks for the response!

Respectfully I’d like to think that there was a place for such questions on this forum. The two longest threads on here by a long way are of course “Let’s see your DSLR film scanning setup!” and “Suggested back light sources for scanning film”. Nate started the first one and did nothing to discourage the second as he came in with the second post.

Surely anything that brings high-end users of ‘camera scanning’ setups to this forum would be a good thing for sales of NLP?

Anyway, I hope you get an answer here, because otherwise it just means going to private FB forums. A lot of people, myself included, are not too keen on having to be part of FB, but there is some great information on them and I suspect you might get an answer there.

I don’t know anything about this company, or the relevance of their testing procedure, so I’ll just put it here for reference. On the face of it, they don’t like the Fuji GFX100 for FADGI compliance.

I didn’t know of this article, their testing seems quite detailed. Very handy information, and also in line with some things I have read before, thanks for the link!

Especially for color negative conversion, NLP is still one of the best options. Here is a quote taken from the FADGI PDF:

Given the many combinations of film processes and print media used over time, there is no way to reliably create a digital image from a negative that is a faithful reproduction of how the image may have looked if printed with the original materials and processes of the era. With this fundamental limitation, we are concerned that modern sensibilities of what an image should look like will lead to scans which lose the authenticity of the original in the quest for good looking images.

Photographic negatives may also suffer from degradation, especially color films. It is impossible to visualize the correct color of a color negative, as the orange or red appearance of the film is primarily a proportional dye mask, establishing the appropriate conditions for the color print media it would eventually be printed to. Both this mask and the color dye layers in the film itself fade over time and are influenced by a variety of factors including exposure to heat and improper processing. The current practice of scanning the color negative, inverting the image, and color correcting the positive image does not produce an accurate representation of the original image as it would have looked if printed to photographic paper. Additionally, color negative films produced prior to 1975 (C22 process) do not scan well. Films produced after 1995 were designed to be scanned on the sophisticated digital photographic systems used just prior to the digital camera era.

It is highly recommended to use color negative inversion and color correction programs that employ “scene balancing” algorithms, which look at the image and adjust for best color using complex calculations. There are several of these programs available in the market today.

One of the best scene-balancing programs is still NLP. The standalone app would be especially useful for archival setups.

Capture One — which is the preferred software for reproduction work — recently introduced its own negative conversion option. However, as far as I understand, it mainly performs a simple inversion and then adjusts the RGB levels. I’m not entirely sure what calculations NLP performs under the hood, but I imagine it goes beyond what Capture One is currently doing.

As for my question, I’ll indeed probably have to venture into the dark depths of Facebook forums. Thanks for the response.

Capture Integration is a technically competent company, but they are mainly suppliers of Phase One equipment.

Who says Capture One is “the preferred software” for reproduction work?

With all due respect to the GroupThink that wrote this sentence, and some others in that set of guidelines, I wonder how much thought they’ve given to the meaning of the term “authenticity of the original” in the context of analog film. What is “authentic” about those materials versus what alternatively is therefore “inauthentic”?

I guarantee you, because I’ve done it thousands of times by now, that I can scan negatives and convert them in NLP, adjust the conversions in NLP and Lightroom and print them on my Epson SC-P5000, and the photographic qualities of the resulting prints will be considerably better than anything that came out of all but the very small number of highest-end large-format professional labs of yesteryear.

A lot of this stuff I think is akin to “gilding the lily” relative to the wide degrees of variance and latitude that constituted the context and technical character of analog colour photography. So much of this is a matter of judgment and interpretation, unless you happen to have the original subject matter close at hand to assess tonal and colour accuracy of the photographic rendition - I’m thinking for example of paintings and sculptures, postage stamps, vintage postcards, vintage photographs and the like.

BTW, your GFX-100 with the Rodenstock lens is a good choice for high-end archiving. I know of several respectable photographic/museum institutions in France and Switzerland using that camera for digitizing their archives.

OK, if you are planning to compete in the high-end cultural heritage market, those guidelines probably need to be respected. As for the specific tools for the items you ask about, I’m sorry, but I can’t help with those as I haven’t worked with them.


I hadn’t read the article or heard of the company until the day I posted it here. The GF 80mm f/1.7 is a very strange choice of lens for this kind of demanding work; it’s primarily a fast-aperture portrait lens, not a macro lens at all. The tests were done on targets varying in size from about 40 x 30 cm down to 22.5 x 16.5 cm, so not at all relevant to most film copying applications. Could the exposure variation noted be down to mechanical control of the aperture blades on such a lens? Anyway, changing to the Apo-Rodagon D lenses would give very different results. That said, I have read another person using a GFX for copying from film say that these lenses aren’t properly corrected for the glass in the sensor stack of the GFX, so he changed to (I think) a Schneider Apo-Digitar, or it could have been a Makro-Symmar.

It may be controversial to say on an NLP forum, but I found that once I switched to an RGB light source, I didn’t need NLP anymore and get much better conversions and color with the simple inversion tool in Capture One.

Strictly speaking, NLP isn’t needed at all, but it makes converting quick and easy. Converting manually can be straightforward and, depending on the negative, a royal pain.

That seems to be a view shared by other practitioners using RGB light sources, indeed it could be said to be the main reason for using one. It removes the need for NLP to do so much under the bonnet processing or so the theory goes. I don’t have one so have no idea if it is true and as you suggest this forum is possibly not the place to explore it otherwise it would be interesting to share RAW files.


Very true. But my experience so far is that with an RGB light, the situation is reversed - manual conversion (or a very simple inversion like C1’s) is actually the easier method.
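To make the “very simple inversion” idea concrete, here is a hedged numpy sketch of the basic math. The function name, the reciprocal mapping, and the sample values are my own simplification, not what C1 or NLP actually do internally: white-balance the linear capture against a sample of the unexposed film base (which neutralizes the orange mask), then invert.

```python
import numpy as np

def invert_negative(linear_rgb, base_rgb):
    """Bare-bones color negative inversion on linear data.

    linear_rgb : (H, W, 3) float array, linear sensor values
    base_rgb   : (3,) sample of the unexposed film base (orange mask)

    Dividing by the base is equivalent to white-balancing on the film
    rebate; the reciprocal then maps transmittance to a positive.
    A tone curve / gamma still has to be applied afterwards.
    """
    balanced = linear_rgb / np.asarray(base_rgb, dtype=float)
    positive = 1.0 / np.clip(balanced, 1e-4, None)
    return positive / positive.max()   # normalize to 0..1

# Tiny synthetic frame: one thin (bright) and one dense (dark) patch
neg = np.array([[[0.80, 0.50, 0.30],    # near film base -> deep shadow
                 [0.20, 0.10, 0.05]]])  # dense negative  -> highlight
pos = invert_negative(neg, base_rgb=[0.90, 0.60, 0.35])
```

Bright areas of the negative (near the film base) come out dark in the positive, and dense areas come out bright, which is the behavior you would sanity-check first.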

Quite a while ago, I ran a proof of concept with three captures, taken with R, G and B backlighting, combined in Photoshop.

The concept basically replicates what commercial scanners (and Technicolor) have been doing for years.
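The combination step itself is simple to sketch. Assuming three already-aligned monochrome frames, one per backlight color (with a Bayer camera you would take the matching channel of each demosaiced capture), the stacking plus a per-channel film-base normalization looks roughly like this; the function name and the `base` parameter are mine, invented for illustration:

```python
import numpy as np

def combine_trichrome(shot_r, shot_g, shot_b, base=(1.0, 1.0, 1.0)):
    """Stack three captures taken under R, G and B backlight into one
    RGB image. Each input is a 2-D float array. `base` holds per-channel
    film-base readings, so dividing by it neutralizes the orange mask.
    Assumes the three frames are already registered."""
    rgb = np.stack([shot_r, shot_g, shot_b], axis=-1).astype(float)
    return rgb / np.asarray(base, dtype=float)

# Toy example on flat 2x2 frames
r = np.full((2, 2), 0.8)
g = np.full((2, 2), 0.5)
b = np.full((2, 2), 0.3)
rgb = combine_trichrome(r, g, b, base=(0.9, 0.6, 0.35))
```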

I tried separate exposures on a normal color sensor as well using a beta app someone on another forum made. The results were good, but the workflow was of course not that great. I think the single exposure with an RGB light provides just as good a result, but I admittedly only tried a few shots.


Fair point — “preferred” was probably the wrong word. It really depends on opinion and workflow. What I meant is that, anecdotally, I often see people in the repro/archiving world using Capture One, especially for tethered shooting. Do you have a preferred software?

That said, I also really like darktable for its open-source nature and the flexibility/customization it offers.

Those are some very important questions you ask there! Thanks for your input on this.
In the end, a lot of this is inevitably subjective and comes down to interpretation.

I think when people talk about a “faithful” result, they sometimes mean “how it would likely have looked when printed in that era.” I think the FADGI standards are reasoned this way. But that still raises a bunch of practical questions: what exactly are we trying to match, and how do we know when we’ve matched it?

It also suggests there may be trade-offs. Like you say modern capture/scanning methods can easily produce a technically “better” file (higher resolution, cleaner color, larger printable size) than what was possible historically — but that doesn’t automatically mean it matches the historical viewing/printing intent.

On the color side, I’m especially interested in the “RA-4 paper look” as a reference point, since RA-4 printing interacts with the spectral sensitivities of the negative’s dye layers. That’s one reason it makes sense to me that early scanners like the Fuji Frontier and Nikon Coolscan use RGB illumination, and why there’s been renewed interest in RGB light sources for camera scanning (Cinestill SpectraColor, Jack’s light, Cutenewdesign RGB, etc.).

Still, even with RGB light, what a “faithful” color-negative scan looks like is always going to involve interpretation. And workflow in that regard is still being “developed” (thinking, for example, of the tonecarries project).

Completely agree.

Thanks for that info, that’s good to know!

After reading more about that test, I agree — the GF 80/1.7 seems like an odd choice for this application. It may not have a sufficiently flat field or the kind of correction (e.g., APO-type performance) you’d ideally want. It honestly does make the setup feel a bit “set up to fail.”

Really good question. In my own experience (e.g., using an APO Rodagon and other manual lenses without electronic communication), I haven’t noticed exposure variation like that.

I will say: Fujifilm’s pixel shift implementation feels fairly limited, especially on the GFX100S — the fact that you can’t do a simple 4-shot mode just to improve color accuracy / reduce Bayer artifacts is a real downside for repro work.

That’s interesting — I haven’t come across that in my own reading about this combo. Do you happen to remember where you saw it? And what the practical impact was supposed to be (loss of corner sharpness, focus shift, increased aberrations, etc.) due to the thicker sensor stack glass?

I’ve had mixed results with Capture One’s newer negative conversion feature. For example, there’s no “buffer zone” for scans where the holder edge is visible, and the initial conversion can feel a bit hit-or-miss. I still prefer doing a straightforward manual inversion in C1 (inverting the channels).

Using an RGB light source like the SpectraColor definitely helps — inversions tend to come out cleaner and more consistent.

At the same time, I still sometimes get better results (especially if I’m aiming for something closer to an RA-4-type rendering) with more specialized tools like NLP or Negpy. That’s also why the standalone NLP app still seems promising to me in terms of workflow and consistency — especially if there is RGB capture compatibility.

Agreed. Trichrome capture has real upsides, but the workflow cost is significant: three separate captures, large files (especially on GFX), and the copy setup needs to be extremely solid to avoid sharpness loss or alignment issues when combining exposures.
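On the alignment point: for a rigid copy setup, a quick whole-pixel registration check between the three exposures can be done with FFT phase correlation before blending. A minimal sketch (integer-pixel only, my own simplification; real registration tools refine to sub-pixel):

```python
import numpy as np

def shift_between(a, b):
    """Integer-pixel translation between two same-sized 2-D images,
    estimated by FFT phase correlation. Returns (dy, dx) such that
    np.roll(b, (dy, dx), axis=(0, 1)) realigns b with a."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# Synthetic check: shift a random frame and recover the offset
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (5, -3), axis=(0, 1))
dy, dx = shift_between(a, b)
print(dy, dx)   # the shift that maps b back onto a
```

If the recovered shift between any two of the three captures is nonzero, the stand or holder moved between exposures and the frames need realigning before the channels are combined.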

Thanks again for all the responses. Does anyone have input on my original question: what’s a good way to test SFR/MTF performance specifically for film scanning with a camera?

Is there an affordable transparency test target with slanted edges that works well and is accepted by MTF Mapper?

There is a very simple fix for this: save an aggressive crop as a style and assign it a keyboard shortcut. Then you can crop - invert - uncrop. Super fast and easy. This is why I love C1 - there is a simple customization for nearly anything you want to do.

My results using an RGB light, crop shortcut, and inversion are generally as good or better than NLP, provided I got a good WB sample off the film edge. A second pass with the Pick Neutralize tool and it’s dead accurate. All this in less time than NLP. Also, if you use a lens that needs flat field correction, there is no comparison - C1 is far better than LR for this.