Digitizer, I don’t think that NLP should care about old, faded negatives. At all. In my view those are two separate steps. NLP’s focus is high-quality color inversion, acting as a perfect “electronic RA-4 paper”.
I would argue that restoring faded or shifted color is better done before or after the NLP step. In fact, “auto-color” in Photoshop is probably all you really need at that point.
Actually, if I were in your shoes I’d want this too: have NLP provide a “reference look”, so you could evaluate whether colors have shifted and by how much.
The purpose of calibration is to get the RAW image into a well-known state. Currently, NLP’s algorithms can make very few assumptions about color because everyone’s light source is different, and camera manufacturers tinker with RAW data trying to make non-inverted colors more “pleasing”. The Adobe Color camera profile tries to normalize across all camera models, but it, too, is guilty of optimizing for “pleasing”, and it does not account for variations in light sources.
In fact, I believe that NLP is actually trying too hard. Having quality camera profiles would allow the algorithm to be dramatically simplified: no analyzing picture data, no clipping channels, no cropping the border; just adjust gamma + gain for each channel to match the paper response, and that’s all I need.
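To make concrete what I mean by “gamma + gain per channel”, here is a minimal sketch in Python/NumPy. The specific gain and gamma numbers are purely illustrative placeholders; in a real workflow they would come from a one-time calibration of your camera + light source against the target paper response.

```python
import numpy as np

# Hypothetical per-channel calibration values (R, G, B) -- placeholders,
# NOT real RA-4 paper constants. A calibration step would measure these.
GAIN = np.array([1.00, 0.85, 0.60])
GAMMA = np.array([2.0, 2.2, 2.4])

def invert_negative(linear_rgb):
    """Invert a linear-light RGB negative scan (float values in [0, 1]).

    No scene analysis, no border cropping: just a per-channel gain,
    a per-channel gamma curve, and an inversion.
    """
    # Apply gain; keep values strictly positive so the power is defined.
    scaled = np.clip(linear_rgb * GAIN, 1e-6, None)
    # Per-channel tone curve, then flip negative -> positive.
    positive = 1.0 - scaled ** (1.0 / GAMMA)
    return np.clip(positive, 0.0, 1.0)
```

The point is how little machinery is left once the input is in a known state: two constant vectors and three array operations, with no per-image heuristics.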