Welcome to the forum @35mm
NLP uses the information it finds in an image as shown on screen. The more info NLP can see regarding colours and tonality, the more likely it is that the conversion turns out in a way that can be used directly or as a good starting point.
Your “problem” image is no problem in itself, but it contains mostly a few hues of blue, low contrast and hardly any green or red. Therefore, NLP doesn’t have much to work with and still tries to give you something white and something black. Consequently, every little bit of vignetting is greatly amplified and colours can turn out to be anything but acceptable. All of it is caused by NLP’s adaptive conversion and can therefore be considered normal.
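To illustrate the effect (this is not NLP’s actual code, just a hypothetical min/max stretch of the “something white, something black” kind described above):

```python
import numpy as np

# A nearly uniform blue-sky channel (values 0..1) with a mild vignette:
# 0.52 in the centre falling off to 0.48 in the corners -- a spread of only 0.04.
channel = np.array([0.48, 0.50, 0.52, 0.50, 0.48])

# Stretch so the darkest pixel becomes black and the brightest white.
stretched = (channel - channel.min()) / (channel.max() - channel.min())

print(stretched)  # the tiny 0.04 vignette now spans the full 0..1 range
```

With a normal scene the min and max come from genuinely dark and bright subject matter, so the vignette stays negligible; here the vignette *is* the entire tonal range, so it gets blown up to full contrast.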
In situations like these, roll analysis can help you get more suitable results. You can also sync settings from other images, or include some of the space between the negatives; including sprocket holes can help too. All these measures give NLP greater tonal variety to work with.
Remember: NLP works with what it gets. The less it gets, the more surprises you’ll get.