If you can get a sharp lab scan, you should be able to get an equally sharp camera scan, provided both scans have similar pixel dimensions.
If both scans have roughly the same number of pixels, the difference comes down to how each one is sharpened, either by the lab or by you in NLP and/or Lightroom. NLP's primary function is converting negatives, and it does a great job at that; for sharpening, I'd use Lightroom's own tools or one of the specialized tools out there.
You could try different settings in NLP and see how they provide different bases for further sharpening.
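For anyone curious what a "sharpen" slider is actually doing under the hood: most sharpening tools (Lightroom's included) are variations on unsharp masking, where you blur the image and then push each pixel away from its blurred value to exaggerate local contrast. Here's a minimal pure-Python sketch on a toy grayscale image; the function names, image values, and parameters are all just illustrative, not anything from NLP or Lightroom:

```python
def box_blur(img, radius=1):
    """Mean filter over a (2*radius+1)^2 window, clamped at the borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [
                img[yy][xx]
                for yy in range(max(0, y - radius), min(h, y + radius + 1))
                for xx in range(max(0, x - radius), min(w, x + radius + 1))
            ]
            out[y][x] = sum(vals) / len(vals)
    return out

def unsharp_mask(img, amount=1.0, radius=1):
    """sharpened = original + amount * (original - blurred), clamped to 0..255."""
    blurred = box_blur(img, radius)
    return [
        [min(255, max(0, img[y][x] + amount * (img[y][x] - blurred[y][x])))
         for x in range(len(img[0]))]
        for y in range(len(img))
    ]

# A vertical step edge: dark (50) on the left, bright (200) on the right.
edge = [[50, 50, 200, 200, 200] for _ in range(5)]
sharp = unsharp_mask(edge, amount=1.0)
# The edge gains contrast: the bright side overshoots above 200 and the
# dark side undershoots below 50, while flat areas are left untouched.
```

The "amount" and "radius" here play roughly the same roles as Lightroom's Amount and Radius sliders, which is why a scan that already got one round of sharpening at the lab can look crunchy if you sharpen it again.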
As for overall color, I find that pre-saturation values above 3 tend to give me results that are far too green for my taste. The test series below was done with NLP 2.1.2:
Upper row: Basic color model; the number after “Kopie” is the pre-saturation value.
Lower row: Noritsu color model; the number after “Kopie” is the pre-saturation value.
Based on the results above, I most often convert with pre-saturation set to 2 or 3.