I Tried CineStill’s ‘Free’ Film Conversion Tools So You Don’t Have To

A hands-on look at CS Negative+ — and why true film inversion will still be done with Negative Lab Pro

This is an abridged version of my original article on Medium [1] - stripped of all the storytelling flourishes :wink: and focused purely on the core findings and takeaways.

On July Fourth, while riding the city bus, I caught a headline on the PetaPixel site: “CineStill’s New Film Scan Conversion Software is Fast, Accurate, and Free.” [2]

The first line read: “CineStill has announced CS Negative+ Convert Tools, a free set of tools that integrate into Adobe Creative Cloud apps that promise to deliver true-to-film color with fast results, straight out of camera.”

I quickly scrolled through the article, noticed a few pictures of film negatives and their converted positives side by side — which looked very good — and registered a few words like: “film conversion process …the characteristic spectral sensitometric curves of traditional RA-4 darkroom … motion picture cinema projection prints… CineStill says SpectraCOLOR enhances color separation and tonal accuracy …the way color film and digital sensors both respond to the color spectrum.”

Here is the sort of thing the CS folks apparently talk about: the CS-Lite’s spectral power distribution in its different modes (warm, white, cool), studied scientifically.

The article made me excited and sad simultaneously.

We all know firsthand that the negative inversion process is very complicated and far from obvious. So, all the scientific-sounding buzzwords in the press release made it seem like the issue had finally been solved — once and for all — even for those using film stocks other than those sold by CineStill (i.e., Kodak Vision3). That would, in fact, be a wonderful and important development for the film-shooting community.

I also immediately caught a not-so-subtle jab at Negative Lab Pro — the mention that after inversion, the Lightroom sliders continue to work as expected. (NLP users will immediately understand what that’s about.)

As for the sad part… I immediately felt bad for @nate . For years, he has been maligned by certain film photographers who believe that the very modest (in my view) $100 license is too steep and unwarranted for jumping on the automated negative inversion bandwagon. NLP is truly excellent software, the product of years of development and refinement by Nate, and it has rightfully earned its place as the leading film negative inversion tool.

Any decent tool that delivers anything close to what NLP does — and is available for the nice price of free — would certainly hit Nate hard. (Of course, I’m not aware of how much Nate relies on NLP sales to pay his bills, but I can only speculate it may be an important source of income for him.) Developing any decent software is not easy; being dependent on the quirks and whims of a giant like Adobe is challenging; providing customer support is even harder; and limiting yourself to releasing only fully and thoroughly tested features is the most difficult of all. So, having a major player in the film space release free software that completely pulls the rug out from under him feels like a major blow — especially after years of highly productive and widely respected work.

I can fully sympathize with him, as I experienced a similar situation myself when the sales of my own font-designing software, LaseRuss, for Hewlett-Packard laser printers were obliterated — literally in a matter of a couple of weeks — when Windows 3.1 came out with full support for Cyrillic fonts.

The book-publishing crowd almost overnight switched from Xerox Ventura Publisher on MS-DOS to Adobe PageMaker 3 on Windows 3.1, and my customer base went AWOL. But that was many years ago, and now I can only chuckle thinking about that chapter of my entrepreneurial life as I cut film and seal the cello bags by bunches, sending off another batch of Vlad’s Test Targets to the Amazon warehouse. Forgive the tangent — some scars just resurface when you least expect them :wink:

Anyway, I duly posted the news on a Facebook DFDC group and went about enjoying a good stroll through the Brooklyn Botanic Garden.

When I came home a few hours later, I saw people responding to my post — and I started laughing reading the responses: It was the presets, stupid!

Folks who downloaded and installed the software quickly discovered the catch: there was no software per se — just presets for Adobe RAW/Lightroom. That means no actual image analysis or adaptive processing. A preset is simply a fixed set of numbers injected into Adobe’s processing engine. These values do change the image’s appearance — but in a predetermined, one-size-fits-some way. They’re only useful if, by chance, your negative happens to resemble the one the preset was trained on (not to be confused with AI training).
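
To make the “fixed set of numbers” idea concrete, here is a minimal Python sketch. The numbers are invented, and a real Lightroom preset stores things like slider values and tone-curve points in an XMP file rather than code, but the structure is the whole point: constants chosen in advance and applied identically to every frame.

```python
import numpy as np

# Invented numbers for illustration only, not CineStill's actual preset values.
# A preset is nothing more than constants like these, decided ahead of time.
FIXED_CHANNEL_GAIN = np.array([1.00, 1.45, 2.10])  # one "average" orange-mask correction
FIXED_BLACK_POINT = 0.04                            # fixed output black point
FIXED_WHITE_POINT = 0.92                            # fixed output white point

def apply_preset(neg):
    """Apply the same numbers to every scan; nothing here looks at the image itself."""
    balanced = np.clip(neg * FIXED_CHANNEL_GAIN, 0.0, 1.0)   # canned mask correction
    inverted = 1.0 - balanced                                 # negative -> positive
    out = (inverted - FIXED_BLACK_POINT) / (FIXED_WHITE_POINT - FIXED_BLACK_POINT)
    return np.clip(out, 0.0, 1.0)
```

If your negative’s density and mask happen to land where those constants expect them, you get a decent positive; if not, you don’t.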

Presets like these are typically built from a handful of specific film samples developed to, say, a standard gamma — almost certainly without accounting for any push/pull processing. And if push/pull is a commonly used option for a given film stock, new presets would need to be tailored specifically for those cases too.

Yes, a preset can be sophisticated, with all the spectroscopy, densitometry, and LUT wizardry you can throw at it — but at the end of the day, it’s like a punch in a press: it stamps the same shape onto whatever you feed it, whether it’s sheet metal, plastic, or thin paper.

Of course, those presets would internally flip the tone curve to turn a negative into a positive — but we all know that at least a dozen other Adobe RAW/Lightroom sliders need to be adjusted to make an image visually pleasing. So, a preset will be of any use only if the negative image itself crosses all the T’s and dots all the I’s in terms of overall density, contrast, orange mask density, spectral qualities of the backlight, and so on. Even the developer choice — ECN-2 or C-41 — would affect how well the preset works.

To make my point, when actual software like NLP or G2P is employed, a real analysis of the specific image does occur. While in both cases the actual processing pipeline is a closely guarded trade secret, we can reasonably assume that complex image analysis is happening under the hood — with lots of heuristics involving histograms, channel scaling, collecting RGB values for the orange mask, and innovative ways of subtracting it from the image. That’s why you can present NLP with images taken at completely different exposures during scanning, and the end result will be practically identical — provided the scans don’t have clipped shadows or highlights.
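
For contrast, here is a toy sketch of what analysis-driven inversion could look like. To be clear, this is not NLP’s or G2P’s actual pipeline (those are trade secrets, as noted); it is only meant to show why measuring the orange mask and the black/white points from each frame makes the result largely insensitive to scanning exposure. The `border_mask` argument, marking a patch of unexposed film rebate, is my own hypothetical stand-in for however such software locates the mask.

```python
import numpy as np

def invert_with_analysis(neg, border_mask):
    """Toy inversion driven by per-image measurements (NOT the actual NLP algorithm).

    neg:         float RGB array in [0, 1], a linear scan of the negative
    border_mask: boolean array marking unexposed film rebate (pure orange mask)
    """
    # 1. Measure the orange mask from THIS frame and neutralize it.
    #    Dividing in linear space plays the role of subtracting it in density space.
    mask_rgb = np.array([np.median(neg[..., c][border_mask]) for c in range(3)])
    balanced = neg / mask_rgb

    # 2. Invert: thin (bright) negative areas become shadows, dense areas highlights.
    positive = 1.0 / np.clip(balanced, 1e-4, None)

    # 3. Stretch each channel between its own measured black and white points,
    #    so a brighter or darker scan of the same frame lands in the same place.
    flat = positive.reshape(-1, 3)
    lo = np.percentile(flat, 0.5, axis=0)
    hi = np.percentile(flat, 99.5, axis=0)
    return np.clip((positive - lo) / (hi - lo), 0.0, 1.0)
```

Because every quantity above is derived from the frame itself, doubling the scanning exposure changes nothing: the mask estimate and the black/white points simply move with it — which is exactly the behavior described for NLP, as long as nothing clips.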

In the case of presets, only an image that closely matches the preset’s training conditions will yield an acceptable result.

Still, upon hearing it was all just presets — and seeing that people were already complaining about how useless they were for their own negatives — I wanted to see for myself. Just the day before, I had scanned some film from around 2000 that featured the World Trade Center towers prominently. I could see the negatives were on the weak side, but the minilab prints produced at the time were perfectly fine, so I expected the scans to give me no trouble at all.

Indeed, NLP produced what I would consider a satisfactory result.
I installed the CS Negative+ presets into Lightroom. Strangely enough, they turned out to be even less useful than I expected.

The first-stage (START) preset turned out to be just flipped tonal curves — horizontally flipped R, G, and B tone curves given an S-like shape. All the color science effort had apparently gone into figuring out how to shape that S-curve for each RGB channel and where to place the bend points for a given film stock and light source.
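
Roughly speaking, one channel of such a curve could be sketched like this. The contrast and midpoint values below are made up; where the real bend points sit for a particular film stock and light source is precisely the color-science work the preset bakes in.

```python
import numpy as np

def flipped_s_curve(x, contrast=6.0, midpoint=0.5):
    """One channel of a fixed, horizontally flipped S-shaped tone curve.

    x is the scan value in [0, 1]. Thin (bright-on-scan) negative areas map to
    dark output and dense areas to bright output, with extra punch through the
    midtones, hence the "S". Contrast/midpoint here are placeholder bend points.
    """
    u = 1.0 - x                                              # the horizontal flip
    s = 1.0 / (1.0 + np.exp(-contrast * (u - midpoint)))     # logistic gives the S shape
    lo = 1.0 / (1.0 + np.exp(contrast * midpoint))           # rescale so the curve
    hi = 1.0 / (1.0 + np.exp(-contrast * (1.0 - midpoint)))  # still spans 0..1
    return (s - lo) / (hi - lo)

# A preset fixes one such curve per channel, e.g. slightly different bend points
# for R, G, and B to fold in an average orange-mask correction:
#   r = flipped_s_curve(scan[..., 0], contrast=6.0, midpoint=0.48)
#   g = flipped_s_curve(scan[..., 1], contrast=6.2, midpoint=0.52)
#   b = flipped_s_curve(scan[..., 2], contrast=6.5, midpoint=0.58)
```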

By the way, this is a standard trick — NLP does something very similar. It dynamically adjusts the individual RGB tone curves; in fact, you can observe the changes live when selecting the output color model in the NLP dialog window.

But what the CineStill folks are doing next is asking users to manually set white balance and exposure — if the negative doesn’t exactly match the preset’s expectations.

So, the most hated and time-consuming part of the inversion process is left to the photographer: adjusting Temp, Tint, and Exposure to taste.

And of course, despite what the press release claims, once the RGB tonal curves are flipped, the Temp and Tint sliders behave in exactly the same counter-intuitive way — just like in NLP.

I sigh with relief — Nate’s income stream looks safe. And I hope his sales go up, as people should begin to better appreciate what NLP is really doing for them, especially since it typically brings the picture incredibly close to its final state.

If CineStill had just been upfront and said, “Look folks, shoot our CineStill film, process it at a well-regarded lab or use our developer, stick to the development process to the letter, and use our CS-Lite backlight only with our magic sheets” — then in return, “we’ll give you a preset that helps you convert your negatives the easy way” — I’d be totally on board. I’d be hailing them as the saviors of the film universe.

Unfortunately, PetaPixel ran with a tall tale — probably fed by the CineStill press release — and that set me off, even though I have no personal stake in anything related to negative inversion.

To recap: there’s more than one way to skin the RAW negative cat

The simplest are the presets we’ve been discussing. They’re the easiest to apply, but also the least robust — and they require very consistent negative quality and tightly controlled equipment conditions.

Then there are plugins built on top of Adobe products, such as Negative Lab Pro for Lightroom, and Grain2Pixel and Negmaster-PS for Photoshop. All are solid options and allow you to save full 16-bit TIFFs that can be further massaged in any image editing software.

There are also standalone tools like Darktable, RawTherapee, and Filmomat SmartConvert, each with a different learning curve — some steeper than others.

Back to Square One

So now we know what CineStill is offering for free — and honestly, that’s a fair price. Sure, we lost a bit of time and got our hopes up, but we also learned something that might come in handy down the road.

I’d love to end the article with one more ironic jab at CineStill — but we’re not quite done. They didn’t just hand out presets; they also threw in a few DNG files for us to test. And sure enough, applying their presets to those files gives you excellent-looking inversions.

Just for fun, here is the same image inverted with NLP. Disregard the differences in tones; obviously, you can make the NLP inversion match the preset if desired.

So now the question becomes: what’s so special about those sample files, and why do they get inverted so easily?

That, my friends, leads to a much more interesting — and materially important — discovery. In the meantime, I suggest you take a look at those sample images and see if you can spot one slightly unusual thing about them.

We’ll dig into that in Part II of this series.

So long!

[1] Medium.com: I Tried CineStill’s ‘Free’ Film Conversion Tools So You Don’t Have To

[2] PetaPixel: CineStill’s New Film Scan Conversion Software is Fast, Accurate, and Free.


TL;DR … but a summary could be: forget the CS Conversion Tool.
(Essence as taken from the posted conversions.)