How to properly scale from 32-bit to 16-bit?

I am looking for the most reasonable way to scale a 32-bit image (2 channels, 100+ z-slices; in my case obtained after deconvolution with Huygens) to an unsigned 16-bit image.

The reason for this request is that I wish to measure and compare intensities between channels, but not all downstream analysis tools I’d like to use (e.g. CellProfiler, JaCoP) are compatible with 32-bit images.

From the ImageJ documentation it is not entirely clear to me what the Image > Type > 16-bit command does, but if I understand it correctly it uses the current min and max display values (as opposed to the true min and max pixel values in the image).

Hence my current approach is to look for the maximum pixel value across the entire hyperstack and set the maximum display value to that before converting the image:

// make sure min--max is scaled to 0--65535 
run("Conversions...", "scale");

// get the number of channels in the hyperstack
Stack.getDimensions(width, height, channels, slices, frames);

// get the maximum pixel value in the hyperstack
Stack.getStatistics(voxelCount, mean, min, max, stdDev);

// reset the display values to 0 and max in each channel before converting
for (k = 0; k < channels; k++) {
    Stack.setChannel(k + 1);
    setMinAndMax(0, max);
}

// convert to 16-bit
run("16-bit");

Do you think this is the “best” approach to rescale or did I overlook something?


I am also sceptical about it. Any opinions on the IJ.run("16-bit") command for converting 32-bit images?
@Christian_Tischer @haesleinhuepf @imagejan

Well, don’t expect any magic alternative command. Reducing the bit depth of an image basically means information loss, regardless of whether it is 32->16 or 16->8 bit. How do you deal with the resulting shift and scaling? To be absolutely sure, read out two pixels of different intensity before and after reducing the bit depth; the minimum and maximum do the job equally well. These two values let you determine the scale and shift applied to the pixel intensities, and you can use them later to undo the scaling. But again, a disclaimer: the results might be a bit imprecise.
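
For illustration, here is a minimal macro sketch of that idea, assuming the conversion maps the current display range to 0--65535 (as in the macro above); the measured value at the end is just a made-up example:

// record the true intensity range of the 32-bit image
Stack.getStatistics(voxelCount, mean, min, max, stdDev);

// convert with scaling: the display range min..max is mapped to 0..65535
setMinAndMax(min, max);
run("Conversions...", "scale");
run("16-bit");

// ... measure on the 16-bit image ...

// undo the scaling on a measured value to get back (approximately)
// to the original 32-bit intensity scale
measured = 1234; // hypothetical 16-bit measurement
original = measured / 65535 * (max - min) + min;
print(original);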

I’m not sure if it’s helpful, but conversion using CLIJ on the GPU involves explicit intensity scaling and works like this:
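
(Robert’s original example is not reproduced in the quoted text above. As a rough sketch only, explicit intensity scaling followed by a GPU conversion could look like the following with the CLIJ2 macro extensions; the empty device string, the image variables, and the choice of scaling to the full 16-bit range are assumptions, not Robert’s actual code.)

// rough sketch: explicit scaling on the GPU before converting to 16-bit
run("CLIJ2 Macro Extensions", "cl_device=");
input = getTitle();
Ext.CLIJ2_push(input);

// determine the maximum intensity and stretch it to the full 16-bit range
scaled = "scaled";
Ext.CLIJ2_getMaximumOfAllPixels(input, maxValue);
Ext.CLIJ2_multiplyImageAndScalar(input, scaled, 65535.0 / maxValue);

// convert to unsigned 16-bit and pull the result back from the GPU
converted = "converted";
Ext.CLIJ2_convertUInt16(scaled, converted);
Ext.CLIJ2_pull(converted);
Ext.CLIJ2_clear();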

I also just updated the documentation to make that clear.


So thanks for asking! :wink:

Cheers,
Robert

Thanks @haesleinhuepf, I just found out how dangerous it can be. I was testing CARE reconstructions, as they looked better than the GT I was providing. I computed the normalized MSE (NMSE) and the structural similarity index (SSIM) between GT/Low (blue) and GT/Restored (green) for 76 XYZ movies of about 26 slices each, slice by slice, and then histogrammed the results. If I save my restored image as 32-bit, it shows that the reconstruction did a good job (the low SSIM counts come from the start of the Z stacks, where there is no structure in the GT),

but if I use IJ.run("16-bit") on the saved results right away and then compute NMSE and SSIM, it shows that the reconstruction made things worse!

Code for generating these measures: https://github.com/kapoorlab/PyImage/blob/master/ObjectClassInstance/SNRComparision.ipynb

For the specific case of 32-bit images that are the result of deconvolving images acquired at a bit depth of 16:

Since the cameras in many common microscope systems actually acquire at a lower bit depth (e.g. 12-bit) but then save as 16-bit, the intensities after #deconvolution usually cover a range larger than the original camera range but still smaller than the full 16-bit range.
If that's the case for your images, I would recommend converting them back to 16-bit without scaling, and not using the actual min and max display values (to ensure comparability between different images).

In ImageJ, you can do that by using the Set button in the B&C dialog to set the min to 0 and the max to 65535 (or by calling setMinAndMax(0, 65535); from a macro) before converting via Image > Type > 16-bit.
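
For a multi-channel hyperstack like the one in the original post, a minimal macro sketch of that approach could look like this (looping over the channels is my addition, to make sure every channel gets the same full-range display setting):

// make sure "Scale when converting" is enabled, so the display range
// min..max is mapped to 0..65535 during the conversion
run("Conversions...", "scale");

// set the display range of every channel to the full 16-bit range,
// so pixel values are carried over unscaled (values above 65535 are clipped)
Stack.getDimensions(width, height, channels, slices, frames);
for (k = 0; k < channels; k++) {
    Stack.setChannel(k + 1);
    setMinAndMax(0, 65535);
}

// convert to 16-bit
run("16-bit");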

When you’re using #imagej-ops in scripting, converting without scaling is the default:

#@ Img input
#@ OpService ops
#@output result

result = ops.run("convert.uint16", input)