Pixel Shift vs Regular Capture - quick comparison with S1R

Thanks Mark, I think I read this article at the time, but of course over time those parts of my brain have experienced too many CRUD events. ;-) The reason I started looking into this now was that I was trying to review how many pixels the tiniest bars on the test target occupied, and I became confused. What seemed to be well-resolved bars at 100% in LR became less certain at 200%, and even less certain at 400%.
I went as far as splitting the raw files into four TIFFs, one per RGGB channel, using RawDigger. Sure enough, the individual images as well as the reconstructed image showed much less certain resolution of the bars than LR did. I wondered whether my eyes and brain were seeing things ;-). Then I realized that what I saw in LR was not the raw image, but the image with default LR settings, including the default level of sharpening. I immediately pulled the sliders to the left, and then it was revealed to me that the bars were indeed not as well defined as I had thought. If one applied the USAF 1951 standard assessment procedure, the resolution would drop from 62 lp/mm to 50 lp/mm. I always resisted @Richard1Karash's recommendation to check resolution at 400%; now I think I understand his point - exclude LR adjustments from the evaluation.
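For anyone wanting to map USAF 1951 readings to lp/mm themselves, the standard target formula is 2^(group + (element - 1)/6). This is just a sketch of that well-known formula, not anything specific to my files:

```python
def usaf_lp_per_mm(group: int, element: int) -> float:
    """USAF 1951 target resolution in line pairs per mm for a given
    group and element (elements run 1-6): 2**(group + (element-1)/6)."""
    return 2.0 ** (group + (element - 1) / 6.0)

# e.g. group 5, element 6 -> ~57 lp/mm; group 6, element 1 -> 64 lp/mm
print(round(usaf_lp_per_mm(5, 6), 1), usaf_lp_per_mm(6, 1))
```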
One point with practical implications for us: once an image is loaded in LR, all settings that increase sharpness should be turned off before proceeding with inversion or NLP. Those artificial sharpening artifacts become unsightly after inversion. If anything, sharpening should be deferred to the very end of processing.

Is this what I was thinking about above? I'd never given it much thought before, but logically (to me anyway) once you go above 1:1 the software must be taking over. This could mean that other software would resolve differently at zoom levels above 100%, or am I misunderstanding? I don't use RawDigger, but presumably that is the only way to see the detail as the camera has captured it. The X-Trans files from my X-T2 complicate things again, as my version of Lightroom (6.14) was notorious for not handling those files well, which is why I keep Iridient X-Transformer in the toolbox.


@Harry Thanks for all that. The link makes me pine to shoot the last of my Velvia 50! And perhaps order more if I can find any properly stored. One can dream.

Also, I only JUST sold my 5D Mark II a few months ago. It had collected dust for a few too many years. I did keep my original 5D, though; nostalgia and its special magic won't allow me to sell it.


@Digitizer I agree. Based on viewing distance, PShift does not matter for most print sizes on 35mm scans; I have been clear on this. At least for my initial B&W example, it is useful for large printing and showing no color noise. The TMax 400 shows its grain easily even in smaller prints, and color noise is easily managed at typical sizes. But just because a slider manages the color noise doesn't mean a cleaner image won't be useful for big prints or for long-term preservation of selected, special images.

I think it's likely the 35mm TMax 400 grain maxes out above 47MP but well below 180MP, haha! Initially I thought it useful for images printed larger than 20in on the long edge, but now I think that number should be much higher. It'd require astonishing printing for anything under 35-40in to show a difference for this B&W image, and as you said, 50in+ is more likely where it really shows up compared to native 47MP resolution. But say I needed to crop - now we are talking :slight_smile: :upside_down_face: ha!


@VladS Yes to turning off all sharpening when doing any analysis. My original post showed slightly edited images (I used a linear profile but clipped in the curves), and the second group of images was more heavily edited, which affects the outcome. I personally see a difference, but clearly there is a debate on the utility of it all. :melting_face:


Cheers y'all!

Transitioning to next steps in my testing.

TLDR: for 35mm B&W, PShift is nice for truly huge prints, not necessary otherwise. I am glad I did this, but I also wish I'd paired it with a good color negative or, especially, a slide. I've only gotten part of the story here. My flat TMax 400 image was great for showing silver grains but didn't push the sensor's or the technique's limits at all, I fear. So let's set aside the idea that we can achieve large sizes with this; that part is clear to me. Now to test color and larger formats!

The details are one thing… but capturing the RGB and then properly interpreting it for final output is important. As is stitching-free medium and large format scanning.

Otherwise, why else do people keep old scanners going if not to get those full RGB files and that resolution? Right?! The way people still talk about drum scanners and Imacons as if they are the end-all be-all, I'd think there would be interest in fully testing PShift's limits for scanning. Those devices just record the analog as pixels, too. And we all know camera scanning is so much faster. So it comes down to repeatability, resolution, sharpness, and RGB interpretation.

I figure if big institutions are camera scanning large volumes nearly exclusively now, then so should I.

So for special images (not every image!), I want to push to the limits, which I would define as being beyond the practical, typical uses (and because we do not know what the future holds).

Additionally, Pixel Shift sees a theoretical dynamic range boost above the base-ISO standard, too, which I suspect would be most obvious in much denser, contrastier images like slides versus my flat TMax 400 example. There is a chart below, and it was linked above. @LABlueBike showed this quite readily in his example in Post 32. I don't think it was a fluke, because I've watched this a few times recently. I think it is illuminating to the discussion here.
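As a back-of-the-envelope sketch of where the theoretical DR boost comes from (my own simplification, assuming ideal averaging of equal exposures; real pixel-shift pipelines differ): averaging N frames reduces random noise by roughly sqrt(N), which raises usable dynamic range by about 0.5 * log2(N) stops.

```python
import math

def dr_gain_stops(frames: int) -> float:
    """Approximate dynamic-range gain (in stops) from averaging
    `frames` equal exposures: noise drops by sqrt(N), so the gain
    is log2(sqrt(N)) = 0.5 * log2(N)."""
    return 0.5 * math.log2(frames)

for n in (4, 8, 16):
    print(f"{n}-shot: ~{dr_gain_stops(n):.1f} stops")
```

So an 8-shot mode would, in this idealized model, buy about a stop and a half in the shadows, which lines up with why dense slides would benefit most.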

The video is linked straight to the examples (35mm through 6x9), but the explanation up front is nice for those who want the basics.

In my experience digitizing tens of thousands of slides, linear color profiles paired with wider dynamic range do wonders for transparencies, and any boost to that range is a useful one on the most extreme examples. For example, my jump from a 5D3 to the R5 was magical in this regard. Prior to the R5 I never found camera-scanned slides pleasing. So finding more DR can only help, and the less I move the sliders in post, the cleaner the final output, with or without Pixel Shift. So I want to test whether I can find more DR economically. A Phase One, or even a Hasselblad or Fuji, is a bigger investment.


Cheers y'all

@VladS I added test target images for you - more of the same, but a bit clearer to analyze. The center of the Sigma 70 Art is clearly stronger than the far corners, as expected. Looking forward to trying other lenses soon.

Cheers y'all

Thanks a lot Spencer! Downloaded for review. First impression: pixelshift 21 looks overexposed during scanning; is that a side effect? - V

They should be exactly the same! I thought I had sent the wrong file pairing, but when I just looked, they are virtually identical on my end with all adjustments and sharpening off, save for white balance.

EDIT: Checked again. At initial loading it appears a bit brighter, but once the preview loads they even out. Then I opened them in two different editors and measured some patches; there was never a difference of more than 1 point on the 0-255 scale in any of the R, G, B channels. Let me know what you see.
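If anyone wants to do this patch comparison numerically rather than by eyeballing picker readouts, here is a minimal sketch (my own illustration; `max_channel_diff` is a hypothetical helper, and you'd feed it pixel values sampled from the two exports):

```python
def max_channel_diff(patch_a, patch_b):
    """Maximum per-channel absolute difference between two equally
    sized patches, each given as a list of (r, g, b) tuples on the
    0-255 scale. Returns [max_R_diff, max_G_diff, max_B_diff]."""
    diffs = [0, 0, 0]
    for (r1, g1, b1), (r2, g2, b2) in zip(patch_a, patch_b):
        for i, d in enumerate((abs(r1 - r2), abs(g1 - g2), abs(b1 - b2))):
            diffs[i] = max(diffs[i], d)
    return diffs

# Two nearly identical patches: differences stay within 1 point
print(max_channel_diff([(10, 20, 30)], [(11, 20, 29)]))  # [1, 0, 1]
```

A result of `[1, 0, 1]` or smaller across several patches would match the "never more than 1 point" observation above.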

Happy to shoot it any way you want though so let me know. What would you like to see?

I'm set up for quick and easy repeats at this point; it's a scanning week.

An investigation across two articles on pixel shift, but for the Canon R5 rather than Panasonic. It seems that Canon have abandoned their in-camera 9-shot High Res mode for the R5 MkII and replaced it with in-camera AI upsizing. It includes a lot of examples that seem to demonstrate why this is a mistake.

@Harry I fully suspected the R5mk2 "AI" upscaling to be useless. Adobe and Gigapixel etc. do that job really well, mostly. (Though I do not think Gigapixel's algorithms get film grain / dye clouds right.) That article shows it's worse than useless; it's actively smearing fine details. Yikes. I can't imagine why they would include that feature when a simple test clearly shows its faults.

The R5 is great for camera scanning, but it could have been so much more if only the High Res mode kept the RAW instead of outputting an 8-bit JPEG. So, sadly, it is again a mostly useless feature for critical work. I wonder if it is simply a patent issue, and who owns what with regard to getting that done? The R5mk1 should have plenty of processing power for the job.

Anyways, thanks for the links! … and cheers y'all

@LABlueBike I've noticed similar (though less pronounced) differences between my Fujifilm XH-2's pixel-shifted DNGs and the normal RAW files. I have to imagine it has something to do with the way colors are repackaged in the Fuji software, and my guess is that NLP is optimized for "normal" RAW files from each individual camera, NOT the pixel-shift files that key in to the same NLP camera profile as a normal RAW. I've noticed that my pixel-shift images tend to clip faster in the deep blacks and bright whites than the normal RAWs unless I adjust the black clip or white clip. This is unfortunate at times, because I find the colors tend to look best at the software-determined 0 clip marks. Fortunately I haven't experienced any really weird color casts with the shifted images, though the colors DO convert just a hair differently than regular RAWs.
It would be interesting to compare pixel-shift files among popular scanning cameras like the S1R & S5II, XH-2, XT-5, GFX50S II, Z6 III, Z8, and so on, and see which exhibit these color differences. I wonder if I have any old pixel-shifted files left from my S1R I could test… I miss that it combined the image in-camera, but I ran into issues with vibrations back then and got weird artifacts in my images; that was with a totally different setup than I have now, though.

Hi Nate - thanks for adding to all the knowledge posted by members since my OP. I'm going to keep your thoughts in mind next time I have an image that's worth the time to pixel shift. (Especially your comments on clipping.) I'm bringing the 500c on a planned trip - just hoping that TSA will hand-check my film instead of putting it through the CT machine!

I keep a roll of long-expired ASA 3200 35mm film with me to aid my requests. I am never denied, but it's there for extra juice if needed. :slight_smile:


So I tend to have a good intuitive understanding of technical matters like these, but usually not the time or patience to do my own charts and graphs. Having read about what pixel-shift shooting does, and having done my own observational tests of B&W and color shots, I have concluded the following: pixel shift does not increase the absolute resolution of photos, but it does improve (pretty significantly) their color resolution. So detail won't increase, but color gradations and accuracy will improve. My B&W scans don't look any better with pixel shift, but my color photos do. What do you think of my conclusions? Do you agree? You've done a lot more work here than I have, so I'd like your reaction. Thanks!

PS - I suppose I should add that I am using 4-shot pixel shift on a Sony A7R3, which I have dedicated to my scanning setup, in case that matters to your response. Perhaps 8/16-shot pixel shift would change the answer?

Hi there, welcome to the forum, and thanks for your observations from using your Sony A7R3 - a new camera to this debate here, I think, and one I've considered buying myself.

Your observations seem to fit the science: if the sensor is moved in discrete steps equal to the pixel pitch, it can't in theory increase the resolution, only the colour information, as your 4-shot experiment has shown. This essentially ensures that each pixel site (red, blue, and two greens) on a Bayer sensor actually measures the light falling on it rather than interpolating it from its neighbours. To increase the resolution, the sensor also needs to move in increments equal to half the pixel pitch, which is what the 16- or 32-shot modes do.

The pixel pitch of your camera is 4.5 µm, or 4.5 thousandths of a millimetre, so for the science to meet reality the camera has to be very solidly mounted and stable during the exposures.
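To put rough numbers on that (my own back-of-the-envelope sketch, assuming an ideal lens and no motion blur): the Nyquist limit is one line pair per two pixels, so the pitch sets the maximum resolvable lp/mm, and half-pitch shifts double the sampling rate.

```python
def nyquist_lp_per_mm(pitch_um: float) -> float:
    """Nyquist limit in line pairs per mm: one line pair needs two
    pixels, so the limit is 1000 / (2 * pitch_um)."""
    return 1000.0 / (2.0 * pitch_um)

# 4.5 µm pitch (the A7R3 discussed above)
base = nyquist_lp_per_mm(4.5)          # ~111 lp/mm at the native pitch
shifted = nyquist_lp_per_mm(4.5 / 2)   # half-pitch sampling: ~222 lp/mm
print(round(base, 1), round(shifted, 1))
```

In practice the lens, vibration, and demosaicing keep real results well below these ceilings, which is why the solid mounting matters so much.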

What has made you see that the colour has improved so dramatically? Is there anything you can share?