Nishika N8000 and the mysterious White Balance Problem!

Hi all,

After years of scanning with an Epson V850 I finally switched to camera scanning with a mirrorless setup (Sony a7RV + Sigma 105mm DG DN MACRO) after buying Negative Supply's scanning kit.

I recently shot some events with the Nishika N8000, a camera that exposes four photos simultaneously across two 35mm frames.
Negative Supply's kit doesn't let the mirrorless photograph both frames in a single capture, so I decided to take two separate photos of the two Nishika frames and combine them in post.

With the Epson V850 I never had any problems scanning the Nishika N8000 or converting with NLP. I think that's because the V850 always scanned the entire photo segment in one pass.

My current problem comes from dealing with two different photos: the two separate captures almost always differ from each other in white balance (sometimes a little more magenta, sometimes a little more green, warmer, colder, etc.).
To make the final Nishika video I obviously need all four shots to be IDENTICAL in terms of colour.
I don’t understand where the problem comes from.

Here I post the parameters and steps I follow when scanning and converting to NLP:

  • Sony a7RV, exposure metering (SPOT or average; the problem persists either way)
  • ambient lights off when shooting with the Sony a7RV
  • manual focus, and the Sony a7RV fully manual (ISO 100, f/9, approx. 1/40)
  • each shot taken with the same parameters.
    Once imported into Lightroom, I use the white-balance eyedropper on one photo, then the Sync button on the rest of the roll (or even just the second photo concerned) so the white balance parameters match.

Despite this, between one Nishika photo and the next there are variations in magenta/green and warmth.
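One way to check whether this shift already exists in the captures themselves (rather than being introduced by the conversion) is to compare the per-channel averages of the two exported frames before converting them. A minimal diagnostic sketch, using synthetic data in place of the two real captures; with real files you would load each TIFF into an array first:

```python
import numpy as np

def channel_means(img: np.ndarray) -> np.ndarray:
    """Average R, G, B of an H x W x 3 image array."""
    return img.reshape(-1, 3).mean(axis=0)

# Two synthetic "captures" of the same grey patch; the second is
# nudged +3 on the green channel to simulate the shift described above.
rng = np.random.default_rng(0)
left = np.full((100, 100, 3), 128.0) + rng.normal(0.0, 2.0, (100, 100, 3))
right = left + np.array([0.0, 3.0, 0.0])

diff = channel_means(right) - channel_means(left)
print("R/G/B shift between captures:", diff)
```

If the two real captures show a systematic green or magenta offset here, the difference predates NLP and points at the capture side (light, lens, or frame position) rather than the conversion.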

I have also tried a different mirrorless body (Sony a7RIII) to rule out the camera, but the problem persists.

Can anyone help me? Where am I going wrong? Where does the error come from?
Many thanks to anyone kind enough to help.




Questions:

  • Is it possible the lighting has some variation or flicker?
  • Is it always the same side of the 2 frames? Left or right?
  • Related to the previous question, do the taking lenses on the N8000 all have the same shutter speed and same aperture, exactly, after all these years?
  • Does Negative Supply have a panoramic mask you could use to do a single capture of the whole thing? A7R5 has a lot of pixels to work with (and pixel shift exists for selects with some extra effort but lots of recent debate on the utility of that :laughing:)

That's all I can think of for now.

thank you for answering me

  • No, it's impossible that the light has any variation or flicker: totally dark room and a dedicated scanning light, the Negative Supply Light Source Mini 97.

  • Yes, exactly: it's the same Nishika photo, left side and right side. They are, however, two different captures with the Sony a7RV.

  • I can't answer this question for sure (and I don't know how to test it!). But I never encountered this problem when scanning the Nishika with the Epson V850, so I think it's more related to the camera and/or NLP.

  • Negative Supply has a panoramic mask, but unfortunately the Nishika's 4 photos don't fit inside it! I think that would solve all my problems, since the real issue is the white-balance change between two different Sony a7RV captures :melting_face: :sob:

Negative Lab Pro's conversions are adaptive, which means that every little variation between captures will most probably lead to different results. For more consistent results, you might like to try the following:

  • roll analysis
  • stitch the separated captures to get one negative to convert
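The stitching idea in the second bullet can be done in any image editor, but it is also easy to script. A minimal sketch using Pillow, assuming both captures share the same height (file names are hypothetical examples):

```python
from PIL import Image

def stitch_pair(left: Image.Image, right: Image.Image) -> Image.Image:
    """Place the two frame captures side by side on one canvas."""
    canvas = Image.new("RGB", (left.width + right.width,
                               max(left.height, right.height)))
    canvas.paste(left, (0, 0))
    canvas.paste(right, (left.width, 0))
    return canvas

# Demo with solid-colour stand-ins for the two captures:
left = Image.new("RGB", (40, 60), (200, 100, 100))
right = Image.new("RGB", (40, 60), (100, 200, 100))
merged = stitch_pair(left, right)

# With real files, hypothetically:
# stitch_pair(Image.open("left.tif"), Image.open("right.tif")).save("merged.tif")
```

Because the merged file goes through a single analysis, both halves necessarily receive the same conversion.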

Thank you for the solid advice.

I really tried everything.

Roll analysis changes absolutely nothing in terms of results.

Instead, I tried combining the two photos; the intuition seemed right and it made sense.
The photos do all come out with the same edit. The problem now is that, for reasons I don't understand, the merged files (from Photoshop, in either TIFF or PSD format) are extremely difficult to process with NLP, and the differences between Edit - Camera and Edit - TIFF in NLP are baffling.

I therefore think it is impossible to make two identical conversions in NLP, which honestly I find absurd. Every single mirrorless shot gets a different conversion in terms of WB. To say that I am disappointed is an understatement.

Attached: on the left the conversion of the single photo, on the right the TIFF file with all four frames together.

Assuming that all parts of the film are equally exposed and unfiltered, development settings could be copied over from one reference image.

Also, every blown highlight can spoil the conversion. Cropping these parts can help. Many things could be tried, but it would be easier to test with an original file than to guess away. You seem to be concerned about privacy. Send your captures via private message if you like.

I'm thinking that if the two photos are not positioned identically within their frames, there could be a difference in the total number of non-image pixels between them, and that could shift the colour-balance values between them.
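This effect is easy to demonstrate: global colour statistics (the kind an adaptive analysis can draw on) change with the amount of non-image border, even when the image content is identical. A toy sketch with synthetic numbers; the orange film-base colour is an assumption for illustration:

```python
import numpy as np

BASE = np.array([200.0, 140.0, 90.0])                  # assumed film-base orange
IMAGE = np.full((100, 100, 3), [140.0, 120.0, 110.0])  # identical image content

def global_means(img: np.ndarray, border: int) -> np.ndarray:
    """Channel means of the image surrounded by `border` px of film base."""
    h, w, _ = img.shape
    canvas = np.tile(BASE, (h + 2 * border, w + 2 * border, 1))
    canvas[border:border + h, border:border + w] = img
    return canvas.reshape(-1, 3).mean(axis=0)

tight = global_means(IMAGE, 2)    # capture framed tightly
loose = global_means(IMAGE, 20)   # same image, more empty border
print(tight, loose)               # same content, different global statistics
```

The loosely framed capture's means are pulled toward the base colour by a different amount per channel, which is exactly the kind of variation that could nudge an adaptive conversion.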

Hi @CalancusBasilicatus

Hmm, haven’t seen this before…

To determine exactly what is going on here, the easiest way would be to share a catalog with me of one or two examples. To do that:

  1. Select a few pairs of photos
  2. Go to “file > export as catalog”
  3. Make sure “export negative files” is checked
  4. Zip up that catalog folder once it has been exported, and email it to me at nate@natephotographic.com (you can use Dropbox or WeTransfer to get the large file over)
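If it helps, the zipping in step 4 can also be scripted. A small sketch using only Python's standard library; the folder here is a throwaway stand-in for the exported catalog, and all names are hypothetical:

```python
import pathlib
import shutil
import tempfile

def zip_catalog(folder: str, archive_base: str) -> str:
    """Zip the exported catalog folder and return the path of the .zip."""
    return shutil.make_archive(archive_base, "zip", root_dir=folder)

# Demo with a temporary folder standing in for the exported catalog:
catalog = pathlib.Path(tempfile.mkdtemp())
(catalog / "example.lrcat").write_text("placeholder")
archive = zip_catalog(str(catalog), str(catalog.parent / "catalog_export"))
print(archive)
```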

With this, I will definitely be able to tell you exactly why this is happening.

Another thing you can try is using the sync feature in Lightroom to sync all the settings between the two photos after your conversion… if there still appears to be a difference, the difference is inherent in the negative or camera scan and is not a result of differences in settings.

It is possible, but maybe a little too complicated at the moment… in theory, Roll Analysis should fix this (just set the Roll Process to “Darkroom Paper” and there should be zero variance or adjustment to how the analysis is used between frames). Also check to make sure that if you are using Auto White Balance, that the values in terms of temp and tint are the same.

Previously, I had a “sync scene” button that just made them all the same… I have that back in the beta version, but I'm having a bit of trouble finding time to properly get it out at the moment (we are in St Pete and were impacted by Hurricanes Helene and Milton).

-Nate

Wow, what are the odds that there would be a post relevant to the issue I was just encountering from just a day ago… I also use Negative Lab Pro for making Nishika GIFs. I took a year-long hiatus and just sat down tonight to finally try out NLP v3, and was shocked to find the sync scene feature missing. That was exactly how I managed to keep colors consistent over the roll. I'm really hoping this gets added back in due time!

I vaguely remember I'd stopped using v2.4 last year because a bug wouldn't allow me to open the panel window on certain scans, and I was excited to upgrade, but it's looking like I'll have to downgrade temporarily and try my luck working around the bug again.

Really sorry to hear that you were impacted by Helene. Best of luck with the recovery!