Convert mechanics

Now, I understand that the developers might be reluctant to answer all of these questions; however, this knowledge seems quite fundamental if we want to bring DSLR scanning to the level of professional film scanners.

Foremost, I am interested in the somewhat mysterious mechanics of the “convert” process. To waste as little of your time as possible, I will list a few assumptions I have and ask you to be so kind as to either confirm or correct them.

SOURCE:

  • Digital camera - Analyses the negative scene, tries to balance the negative channels to achieve a flawless negative distribution, and then inverts (or vice versa).
  • VueScan/SF - Uses preset colour curves, applying only an exposure correction, and inverts (or vice versa).
  • TIFF Scan - Uses preset colour curves, applying only an exposure correction, and inverts (or vice versa).

COLOR MODEL

  • Scanner - Applies a secondary curve correction to the conversion curve, trying to bring it closer to the selected device.
  • None - Leaves it at the state of the conversion curve.
  • Basic - Applies a positive scene correction.

This is how I imagine it. What puzzles me is the tertiary correction, and the fourth one with the newly introduced LUTs, where we can once again select a device emulation.

Now, this is where I start wondering. I can select a Noritsu conversion and later apply a Pakon LUT. I am given choices at the wrong end of the process. All the secondary and tertiary manipulation can be done in Lightroom or PS; what I need is control over the primary conversion.

In my long experience with film scanners from various manufacturers, I see a much simpler and thus much more consistent process, which is necessary for professional work. It makes me wish someone would break down the NikonScan or Dimage Scan software and simply try to “steal” their conversion math; perhaps those companies would even open up such legacy code. The scanner software appears to correct the exposure at scan time but leaves the colours fixed. The consistency of colour across different shots of the same scene is never an issue because there is no math going on to screw it up.

With NLP, however, most of the time I have the feeling that each shot was made on a different film with a different developer, which makes me uneasy. Of course, that scanner software knew the hardware it ran on and was tuned to it, while your software needs to work with an uncontrollable variety of light sources. Still, this “correction layer” should be optional, as some of us have very reliable and even adjustable light sources such as enlarger colour heads.

Anyway - I am sure you get my drift. I am now trying to select the DNG scan source even though I am scanning with a DSLR, which is why I am so curious whether this solves my problem. I do not want the colours analysed in each scene; I simply want a fixed inversion with an exposure or gamma correction. Everything else should be optional in later stages of editing.

Please do not read this post as a rant - I simply wish for a workflow that would finally make me feel comfortable ditching my old 8000. :grimacing:

Ah, also - I know it is a beast and costs a fortune. However, did you ever have a chance to look into how they convert the negs? https://www.youtube.com/watch?v=9lb8L7nFC2c
It seems their archival software must have a negative conversion module.

All righty, that was a longish one. Thank you for your answer and all the best :slight_smile:
Rene

In order to apply a fixed conversion to a series of negatives, you can convert one image with NLP and then sync settings in Lightroom. You can also do everything else outside of NLP; the choice is yours.

Try it and see how it works for you.

Phase One’s solution certainly looks nice, and at roughly 1000 times the cost of NLP, they had better do it right.

I camera-scan fairly old negatives and I usually get conversions that are a good starting point for interpretation or refinement. I mostly convert with “basic” and low pre-saturation settings.

Thank you for your answer. I know about this option. Alas, it is not what I am talking about.

The scene seems to be analysed whichever way I set up the conversion. To explain: two different crops of the same scene will result in different conversions because the software “over-analyses” the scene. Now, how do you decide which crop is your master scene? This is not the right path. I understand the exposure correction (which is also optional on some scanners).

Also - film photographers do not want their film characteristics annulled and then reconstructed artificially. It is better the other way around: let all the tweaks happen outside the conversion math, which should be a simple, neutral inversion.

I am not a developer, so maybe I have the wrong idea of it all. However, I am simply frustrated, because a scanner produces reliable inversions that never surprise me without analysing much. I can always make my artistic decisions later…

I think I know what you mean, but I will address one item specifically to illustrate my way of doing things.

NLP uses an algorithm that checks the image for blacks, whites and things in between. It then inverts the R, G and B curves and adds whatever it takes to make the conversion look right. It does not necessarily try to emulate a film brand/type or scanner brand/type, although the second-tab functions go in that direction.
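
To make that concrete, here is a minimal sketch of that kind of analysis-based inversion. This is my own illustration, not NLP’s actual code; the percentile threshold is an assumption:

```python
import numpy as np

def analyze_and_invert(neg, clip=0.001):
    """neg: linear negative data as a float array (H, W, 3) in [0, 1]."""
    pos = np.empty_like(neg)
    for c in range(3):
        channel = neg[..., c]
        # Per-channel percentiles stand in for the "blacks, whites and
        # things in between" found by the analysis step.
        lo = np.quantile(channel, clip)
        hi = np.quantile(channel, 1.0 - clip)
        scaled = np.clip((channel - lo) / (hi - lo), 0.0, 1.0)
        pos[..., c] = 1.0 - scaled  # invert the normalized curve
    return pos
```

Because lo and hi are measured from the analysed pixels, two different crops of the same frame yield different conversions, which is exactly the crop dependence Rene describes above.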

The beauty of NLP is (imo) that it copes with whatever I throw at it and delivers results that suit me, except in cases where the original negative is difficult, e.g. underexposed under low and mixed light.

V2 adds a few features that can be used independently and therefore make many new paths through Mirkwood…

Hi @Hansha,

As the developer of Negative Lab Pro, let me try to answer a few of your questions.

In terms of the process:

  1. The “SOURCE” is used to determine how to interpret the data in the negative file itself. For instance, if it was captured with a digital camera, I need to set the appropriate RAW Profile to interpret the data. Or if it is a TIFF file, no profile should be set at all.
  2. The Color Model adds a very broad type of calibration on the negative itself. You can learn more about each here: Getting Started with Negative Lab Pro | Negative Lab Pro . If in doubt, just use the “Basic” color model and it won’t attempt to calibrate.
  3. The LUTs offer more precise emulation abilities. They work against the positive image post-conversion, and look-up tables in general offer much more customization than is available via Lightroom controls. If you want the sort of standard “scan software” look, just use the “Natural” LUT. (A small sketch of how a 3D LUT remaps the positive follows this list.)
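
Here is a rough illustration (my own, with made-up structures, not NLP internals) of why a LUT operates on the already-positive image: it simply remaps positive RGB values through a 3D table. Nearest-neighbour lookup is used for brevity; real tools interpolate:

```python
import numpy as np

def apply_3d_lut(pos, lut):
    """pos: positive image, float array (H, W, 3) in [0, 1].
    lut: array (N, N, N, 3) mapping input RGB to output RGB."""
    n = lut.shape[0]
    idx = np.clip(np.rint(pos * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# An identity LUT as a stand-in for a neutral, "Natural"-style look:
n = 33
grid = np.linspace(0.0, 1.0, n)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
identity_lut = np.stack([r, g, b], axis=-1)
```

Since the table is applied after the inversion, swapping one LUT for another never changes the underlying conversion, only the rendering of the positive.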

I’m not sure what you mean by this? All of the editing controls in Negative Lab Pro are happening on the primary conversion itself. For instance, if you change the BlackClip and WhiteClip points, it’s happening against the negative itself. Same with color changes. That’s why I strongly encourage getting settings where you want in Negative Lab Pro instead of trying to do so in Lightroom or Photoshop.
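
As a hedged sketch of what “happening against the negative itself” could look like, consider clip points that move the endpoints of the negative curve before inversion rather than editing the positive afterwards (the parameter names black_clip and white_clip are illustrative, not NLP’s internals):

```python
import numpy as np

def invert_with_clips(neg, black_clip=0.0, white_clip=0.0):
    """neg: float array (H, W, 3) in [0, 1]."""
    # Per-channel endpoints of the negative, nudged by the clip settings,
    # set the normalization *before* the curves are inverted.
    lo = neg.min(axis=(0, 1)) + black_clip
    hi = neg.max(axis=(0, 1)) - white_clip
    scaled = np.clip((neg - lo) / (hi - lo), 0.0, 1.0)
    return 1.0 - scaled
```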

The existing defaults are set in such a way as to emulate a lab scanning process, which tries to optimize as much as possible to create a finished image with little input needed from the lab tech. So off the bat, you may want to change your default settings to be more neutral. For instance, you can change your settings to “tone profile: linear”, the WB setting to “Kodak” (which is a static profile and doesn’t use image analysis), and the LUT to “Natural” to get the look that you associate with traditional home scanning software.

Additionally, most home scanning software also evaluates whatever is in the crop for the image (including any unmasked light and border edges), which results in a lower-contrast initial result. You can do that in NLP too… just don’t crop ahead of time, and turn the border buffer to 0.
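
A tiny demonstration of that effect, with made-up numbers: including bright border pixels in the analysed region widens the measured range, so the subsequent normalization comes out flatter.

```python
import numpy as np

frame = np.random.uniform(0.25, 0.75, (100, 100))  # image content only
border = np.pad(frame, 10, constant_values=0.95)   # add unmasked bright edge

print(frame.min(), frame.max())    # tight range -> contrasty normalization
print(border.min(), border.max())  # wider range -> flatter initial result
```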

Also, when “Roll Analysis” is added in v3, this should also help provide more consistent results across a range of scenes, without “over optimizing” individual frames.

I’ve played around quite a bit with fixed inversions that don’t rely on any image analysis. There is just too much variability for this to work well across multiple setups. But as I continue to add new features (like presets) it may be possible for me to allow users to save the inversions themselves (in addition to editing settings) to apply to other images.

Hope that helps!
-Nate


Firstly, thank you for your reply, Nate! I will try to jot down my answers and thoughts here.

I’m not sure what you mean by this? All of the editing controls in Negative Lab Pro are happening on the primary conversion itself. For instance, if you change the BlackClip and WhiteClip points, it’s happening against the negative itself. Same with color changes. That’s why I strongly encourage getting settings where you want in Negative Lab Pro instead of trying to do so in Lightroom or Photoshop.

This is exactly the part of the process that I find redundant, as the tools for this kind of manipulation are far more precise and advanced in LR or PS - provided, of course, that no data is lost in the conversion.

The existing defaults are set in such a way as to emulate a lab scanning process, which tries to optimize as much as possible to create a finished image with little input needed from the lab tech.

However, in order to do this one would need to know the light source of the scan. A competitor in negative conversion is going exactly this way: one can choose the light source (D50, D65, Tungsten…), which makes a good starting point for the conversion. If one added the camera sensor profile to this, we should arrive at quite good hardware data, which would make the “fixed inversion” possible. After all, lab scanners seem to work with a fixed inversion and a secondary image correction; at least this is how motion film scanners work.
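
As a rough sketch of the idea (the gain values below are made-up placeholders, not calibrated data), a known light source could simply translate into fixed per-channel multipliers applied before any inversion:

```python
import numpy as np

ILLUMINANT_GAINS = {  # hypothetical white-balance multipliers per channel
    "D50":      (1.00, 1.00, 1.00),
    "D65":      (0.95, 1.00, 1.08),
    "Tungsten": (1.60, 1.00, 0.55),
}

def apply_light_source(neg, illuminant="D50"):
    gains = np.asarray(ILLUMINANT_GAINS[illuminant])
    return np.clip(neg * gains, 0.0, 1.0)
```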

The secondary emulation of the scanner can be packed into LR presets or whatever in the end.

For instance, you can change your settings to “tone profile: linear”, the WB setting to “Kodak” (which is a static profile and doesn’t use image analysis), and the LUT to “Natural” to get the look that you associate with traditional home scanning software.

Thank you – this is what I was looking for.

Also, when “Roll Analysis” is added in v3, this should also help provide more consistent results across a range of scenes, without “over optimizing” individual frames.

Congratulations – this sounds like a very good idea. Orange masks have also been quite consistent recently, especially given the fact that there are sadly fewer and fewer film stocks on the market.

I’ve played around quite a bit with fixed inversions that don’t rely on any image analysis. There is just too much variability for this to work well across multiple setups. But as I continue to add new features (like presets) it may be possible for me to allow users to save the inversions themselves (in addition to editing settings) to apply to other images.

I believe most scanners have three variables plus exposure: light source temperature, sensor profile and generic orange-mask removal. They do perform an exposure adjustment; however, I do not think they analyse and correct white balance. That would be against the philosophy of analogue printing, where colours were manipulated in the print process.
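
To illustrate what I mean (all constants here are placeholders of my own, not measured scanner values), such a fixed inversion would use the light-source gains, an orange-mask estimate and a global exposure/gamma, and never look at the scene; the sensor profile would already be handled when the RAW data is interpreted:

```python
import numpy as np

def fixed_invert(neg, base_rgb, light_gains=(1.0, 1.0, 1.0),
                 exposure=1.0, gamma=2.2):
    """neg: linear float array (H, W, 3); base_rgb: RGB of the unexposed
    film base (the orange mask), sampled once per roll."""
    balanced = neg * np.asarray(light_gains)   # 1) light source temperature
    masked = balanced / np.asarray(base_rgb)   # 2) orange-mask removal
    pos = np.clip(exposure * (1.0 - masked), 0.0, 1.0)  # 3) exposure + invert
    return pos ** (1.0 / gamma)                # display gamma
```

Identical settings then give identical conversions for every frame on the roll, with white balance left for later editing, just as in analogue printing.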

Anyway, thank you again for your helpful answers. I will try what you suggest! Looking forward to version 3! Good luck and all the best! :v:
Rene