Imo, there is a potential issue with any technical variant of narrowband R, G and B light sources: if their peaks don’t match the sensitivities of your camera’s CFA/sensor, they create a different balance between the colour channels, which NLP then has to compensate for. Camera sensor sensitivities are not standardised and vary between camera brands and models, see dxomark.com.
Moreover, “normal” cameras are designed to take pictures in all kinds of natural and artificial light, without any control over the light source. They simply compensate for whatever light mix is present when we white balance the shots.
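To make that concrete, here is a minimal sketch (the Gaussian curves below are invented stand-ins for real CFA sensitivities and light source spectra, so the numbers don’t describe any actual camera): the raw channel balance is just the source spectrum weighted by the sensor sensitivities, and white balance then rescales each channel so a grey target reads neutral.

```python
import numpy as np

# Channel response = sum over wavelength of (light source spectrum x sensor sensitivity).
# All curves below are made up for illustration, not real CFA data for any camera.
wl = np.arange(400, 701, 5)  # wavelengths in nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical CFA sensitivities (peak positions and widths are assumptions)
cfa = {"R": gauss(600, 35), "G": gauss(540, 40), "B": gauss(460, 30)}

# Two hypothetical light sources
broadband = np.ones_like(wl, dtype=float)                       # flat "white" source
narrowband = gauss(630, 10) + gauss(525, 10) + gauss(450, 10)   # RGB-LED-like peaks

def channel_balance(source):
    resp = {ch: float(np.sum(source * sens)) for ch, sens in cfa.items()}
    g = resp["G"]
    return {ch: v / g for ch, v in resp.items()}  # normalise to the green channel

print("broadband :", channel_balance(broadband))
print("narrowband:", channel_balance(narrowband))

# White balance: per-channel gains that make a neutral (grey) patch come out neutral.
raw_grey = channel_balance(narrowband)           # what the sensor saw from grey
wb_gains = {ch: 1.0 / v for ch, v in raw_grey.items()}
balanced = {ch: raw_grey[ch] * wb_gains[ch] for ch in raw_grey}
print("after WB  :", balanced)                   # -> 1.0 in every channel
```

The narrowband source lands on different parts of the (hypothetical) sensitivity curves than the broadband one, so the raw R:G:B ratios differ, but a single set of per-channel gains flattens a grey patch either way. That is the compensation that happens at white balance and, for negatives, in NLP.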
Narrowband R, G and B light sources could make a lot of sense if we scanned colour film with monochrome cameras. Most of us don’t have such cameras, but I’ve run a PoC to see how things would work out in that case.
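Just to make the trichrome idea concrete, here is a sketch, not the actual PoC workflow: three exposures of the same frame with a monochrome camera, one under each narrowband source, combined into one RGB file. The file names are placeholders, and alignment, exposure matching and the negative inversion itself are left out.

```python
import numpy as np
import tifffile

# Three monochrome exposures of the same frame (hypothetical file names)
r = tifffile.imread("frame_red.tif")    # shot under the red source
g = tifffile.imread("frame_green.tif")  # shot under the green source
b = tifffile.imread("frame_blue.tif")   # shot under the blue source

# The monochrome captures simply become the three channels of one RGB image,
# which can then go through the usual inversion / colour workflow.
rgb = np.dstack([r, g, b])
tifffile.imwrite("frame_rgb.tif", rgb)
```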
Now for accuracy: think of it as an illusion. Variations in chemical processing, plus those introduced by time and storage conditions, make calibration a fool’s errand. Even one’s own ideas about how an image should look change over time and with viewing habits. Imo, it is best to judge the results subjectively rather than by measurement.