Don't get me wrong, I think your methodology and intent are fantastic. The fact that you are getting what you want out of your system is proof enough. I am really only disagreeing on some terminology and discussing how I might approach it differently.
Let's be clear about some terms. A colourspace is a definition of how a numeric value relates to either a brightness value in the scene (scene-referred) or a display output colour (display-referred). More often than not this is a combination of a luminance lookup and a gamut conversion. In your case you are only concerned with the luminance mapping. You are making your own colourspace definition (exposure-stop steps below zone V map to certain values and contrast, and above zone V a different contrast), and the values are referred to a display device.
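Just to illustrate what I mean by "your own display-referred definition", here is a minimal sketch of a piecewise mapping from zones (stops relative to zone V) to display code values, with a different contrast on each side of zone V. The anchor value and the two slopes are placeholder assumptions of mine, not your actual numbers:

```python
# Minimal sketch of a display-referred "zone" mapping with different
# contrast above and below zone V. The anchor value and the two slopes
# are hypothetical placeholders, not the actual numbers from the post.

def zone_to_display(stops_from_zone_v: float,
                    mid: float = 0.46,          # assumed display value for zone V
                    slope_below: float = 0.09,  # display change per stop below zone V (assumed)
                    slope_above: float = 0.10   # display change per stop above zone V (assumed)
                    ) -> float:
    """Map stops relative to zone V to a 0..1 display code value."""
    if stops_from_zone_v < 0:
        value = mid + stops_from_zone_v * slope_below
    else:
        value = mid + stops_from_zone_v * slope_above
    return min(max(value, 0.0), 1.0)

# Print zones 0..X as stops relative to zone V
for zone in range(11):
    print(zone, round(zone_to_display(zone - 5), 3))
```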
You are "burning in the gamma response of the output medium" by directly relating code values to output luminance without any "colour science" being used to convert from working space to display space. Or at least you haven't described using any colour management system.
If you didn't have the gamma response of the monitor burnt into the image, the top "zone" would account for half the possible code values (a stop) and the image would look very dark and crushed. Generally the raw processing does this for you, as I'm sure you know.
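A quick bit of arithmetic makes the point. In linear light the top stop spans half the code range and middle grey sits very low; a gamma encode pulls it back up to roughly mid-range. The 2.2 exponent below is a generic assumption, not a profile of any particular monitor:

```python
# Why linear code values look dark without a gamma encode: in linear light
# the top stop is half the range, and middle grey sits near the bottom.
# Gamma 2.2 is a generic assumption, not a measured monitor response.

mid_grey_linear = 0.18

# Linear: 8-bit value for middle grey, and where the top stop begins
print("linear mid grey:", round(mid_grey_linear * 255))   # ~46 of 255
print("top stop starts at:", round(0.5 * 255))            # 128..255 is one whole stop

# Gamma-encoded (approx. 2.2): middle grey lands near the middle of the range
mid_grey_encoded = mid_grey_linear ** (1 / 2.2)
print("encoded mid grey:", round(mid_grey_encoded * 255)) # ~117 of 255
```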
Because your data values look "normal" on a computer screen without a LUT, they must already be in a colour space encoded for the display device. You did that through your process, by correlating numeric code values with display luminance.
From your process description, you are validating the screen display colours. I have not used "RawDigger" before. Reading the original 12/14-bit raw code values and mapping them through the camera response would make an excellent "million" spot meter.
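As a rough illustration of the spot-meter idea: with linear raw data you can read any pixel as stops relative to a metered grey-card patch. The black level and reference value below are placeholders I made up, not values from any real file:

```python
# Sketch of using linear raw code values as a per-pixel spot meter:
# exposure in stops relative to a metered middle-grey patch. Assumes the
# raw data is linear after black-level subtraction; the black level and
# reference value are placeholders, not from any real camera file.

import math

BLACK_LEVEL = 512          # assumed sensor black level
MID_GREY_RAW = 3200        # assumed raw value of a metered grey-card patch

def raw_to_stops(raw_value: int) -> float:
    """Return exposure in stops relative to the grey-card reference."""
    signal = max(raw_value - BLACK_LEVEL, 1)
    reference = MID_GREY_RAW - BLACK_LEVEL
    return math.log2(signal / reference)

print(raw_to_stops(MID_GREY_RAW))        # 0.0 stops (the reference patch)
print(round(raw_to_stops(5888), 2))      # about +1 stop above the reference
```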
Also, for clarity, a linear-to-light, scene-referred colourspace would be, well, linear, not arbitrarily changing contrast at a point. It is generally stored as float values (which Photoshop somewhat misleadingly labels "32 bit") or High Dynamic Range. 0 means no light, but 1 has no special meaning, as scene light can just keep increasing. We are not talking about reflectivity, but all possible scene light.
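To make that concrete, in scene-referred linear each stop simply doubles the value relative to 0.18 middle grey, and nothing clamps at 1.0. Treating zone numbers as whole stops from zone V purely for illustration:

```python
# Scene-referred linear values relative to 0.18 middle grey: each stop
# doubles the value and there is no ceiling at 1.0. Zones are treated as
# whole stops from zone V here, purely for illustration.

for zone in range(11):
    stops = zone - 5
    print(f"zone {zone:2d}: {0.18 * 2 ** stops:.4f}")
# zone VIII is 0.18 * 2**3 = 1.44 -- happily above 1.0 in scene-referred linear
```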
How I would differ in approach is to start with zone V at 0.18 and be linear around it. I understand you are using 0.5-stop steps so it contains the full "range", but it doesn't really; it's a 7-stop range with smaller steps. I would tend to replicate the top-end rolloff of photographic paper to account for a dynamic range of 10 stops. Switching contrast at zone V seems like an odd choice to me, but if it works for you, great. I would not assume even steps of reflectivity in the top few zones, because that is not how paper responds, or film either.
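Here is one way to sketch "linear around middle grey with a paper-like shoulder" (ignoring the display gamma discussed above): keep the curve linear up to a knee, then roll the highlights off smoothly so roughly 10 stops fit without clipping. The knee position and the exponential shoulder are my own placeholder choices, not your curve or any paper's measured response:

```python
# Linear up to a knee, then a smooth exponential shoulder that approaches
# 1.0 asymptotically -- a toy stand-in for a paper-like highlight rolloff.
# The knee position and shoulder shape are assumptions, not measured data.

import math

KNEE = 0.7  # assumed output level where the shoulder begins

def shoulder(linear_scene: float) -> float:
    """Map scene-linear (0.18 = middle grey) to a 0..1 output value."""
    x = linear_scene
    if x <= KNEE:
        return x
    # Rolloff is continuous in value and slope at the knee
    return 1.0 - (1.0 - KNEE) * math.exp(-(x - KNEE) / (1.0 - KNEE))

for stops in range(-5, 6):
    scene = 0.18 * 2 ** stops
    print(f"{stops:+d} stops: scene {scene:7.3f} -> output {shoulder(scene):.3f}")
```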
Your process is admirable for round-trip validation, and I am surprised you're getting a full 7 stops out of a calibrated monitor's output. I can't say I have ever tried it.
On a slightly different note, in your full write-up you talked a little about box speed, and about not feeling experienced enough to mess with it. You have all the pieces in your testing. ANSI/ISO speed is defined as 0.1 log density units above film base + fog, which determines where the emulsion starts to respond. Most film technical data sheets note this in the fine print, even when the box says 3200 or whatever. You noted that Foma 400 doesn't start to show detail until zone II. That suggests the true speed is closer to 200. Film manufacturers are liars…
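The arithmetic is just stops: if the first usable shadow density shows up one zone (one stop) later than expected, the effective exposure index is one stop slower than box speed. The one-stop figure below is taken from your observation, not from a data sheet:

```python
# Rough arithmetic behind "400 is really more like 200": shadow detail
# arriving one zone (one stop) late means the effective exposure index
# is one stop slower than box speed.

box_speed = 400
stops_late = 1                          # detail appears at zone II instead of zone I
effective_speed = box_speed / (2 ** stops_late)
print(effective_speed)                  # 200.0
```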
The lower contrast on the rest of the curve suggests that if you pushed the film you could get closer to the approximate densities. It doesn't make the film faster, just more contrasty. Hence "expose for the shadows, develop for the highlights" (and therefore contrast).
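A toy characteristic-curve model shows why that is: longer development mostly raises the slope (gamma) of the straight-line portion, while the toe (base + fog plus the 0.1 speed point) stays roughly where it is, so shadows barely move but mids and highlights gain density. All the numbers here are made up for illustration:

```python
# Toy characteristic curve: push development raises gamma (contrast) but
# leaves the toe, and therefore the speed point, roughly unchanged.
# Every number here is invented for illustration only.

BASE_PLUS_FOG = 0.25
SPEED_POINT_LOG_E = -2.0   # assumed log exposure where density reaches base+fog+0.1

def density(log_e: float, gamma: float) -> float:
    """Straight-line portion above the speed point; a simple decaying toe below it."""
    if log_e <= SPEED_POINT_LOG_E:
        return BASE_PLUS_FOG + 0.1 * 10 ** (log_e - SPEED_POINT_LOG_E)
    return BASE_PLUS_FOG + 0.1 + gamma * (log_e - SPEED_POINT_LOG_E)

for log_e in (-2.5, -2.0, -1.0, 0.0):
    normal = density(log_e, gamma=0.55)   # assumed normal development
    pushed = density(log_e, gamma=0.75)   # assumed push development
    print(f"log E {log_e:+.1f}: normal {normal:.2f}, pushed {pushed:.2f}")
```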
It sounds like you have the gear set up to assess and calibrate your whole black-and-white darkroom process.