How does converting an Image Sequence from float32 to 16-bit work internally in ImageJ/Fiji?

I implemented an illumination correction in python, which similarly exists in Fiji (ImageJ). Everything works fine, except the final conversion from float32 to 16-bit.

Loading an image sequence (a stack of 16-bit grayscale images) and dividing each image in the stack by the illumination correction image results in float32 images, some with values larger than 65535.
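To illustrate the step above with made-up numbers (not the actual data), dividing a uint16 stack by a flat-field image in the range [0.6, 1.2] promotes the result to float32 and can push values past the 16-bit maximum:

```python
import numpy as np

# Hypothetical data: a 16-bit stack and a flat-field correction image.
stack = np.full((3, 4, 4), 60000, dtype=np.uint16)   # (slices, height, width)
flat = np.full((4, 4), 0.8, dtype=np.float32)        # values within [0.6, 1.2]

corrected = stack.astype(np.float32) / flat          # broadcasts over slices
print(corrected.dtype, corrected.max())              # float32, 75000.0 > 65535
```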

At this point, the results from my Python code and ImageJ coincide completely.

The next step is to rescale the sequence of images to 16-bit. To do this, I compute

I_rescaled = (65535 / (max - min)) * (I - min)

where I is the image sequence and max/min are the maximum/minimum pixel intensities over all images. However, the conversion in ImageJ leads to a different result, which looks better in the sense of having more contrast.

I searched online and browsed the ImageJ source code on GitHub, but I did not find the procedure that leads to the desired better results.

I hope someone can help me find (or explain) how ImageJ converts a float32 image sequence to a 16-bit image sequence. (The StackConverter class does not seem to be the right place.)

Thanks in advance for any help.

I think the logic can be found here:


Thanks for your answer.

Actually, I had also found this code, but it describes the conversion for each image in a stack separately. Converting an image sequence, however, seems to work differently.

One would like to retain the same contrast across all images in the sequence. This is possible with the procedure described in my first post.

However, when converting the image sequence in Fiji/ImageJ, the contrast is much better compared to my procedure.

The question is: which procedure does Fiji/ImageJ call for the conversion of an “image sequence” (not just a stack)?

Do you get the same pixel values in both cases, and do you explicitly set the display range to actual min and max in your Python code?

I made a drawing to explain what I mean.

I start at the top left with an image sequence (many images) and divide each of them separately by an illumination correction image, which has values in the range [0.6, 1.2]. The result is an image sequence where the values exceed the 16-bit range. With the conversion in ImageJ/Fiji I obtain 16-bit images again.

I wrote down the min and max values for each image. If the code above (convertFloatToShort) were applied, each image should have a maximum value of 65535 and a minimum value of 0. However, you can see that the last image does not fill the whole range.

So ImageJ apparently does something else when you apply the type conversion to an image sequence.

I want to know/understand what the software is actually doing.

P.S. The image sequence comprises many more images; I just picked a few to show that the conversion cannot be image-wise.


Hi @DerJFP,

how did you do the conversion in ImageJ? Via Image › Type › 16-bit on the opened stack?

If so, the 32 => 16-bit conversion takes the current min and max values (as defined in the Brightness&Contrast dialog) and maps these to the 16-bit range for the entire stack. This way some images (i.e. stack slices) can end up covering the entire range 0–65535, and others not.
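A rough NumPy sketch of that mapping (my own illustration, not ImageJ's actual code): the display range is mapped to [0, 65535] for the whole stack, and everything outside it is clipped.

```python
import numpy as np

def convert_like_imagej(stack, display_min, display_max):
    """Map the display range [display_min, display_max] to [0, 65535]
    for the entire stack, clipping values outside that range."""
    scale = 65535.0 / (display_max - display_min)
    out = (stack - display_min) * scale
    return np.clip(out, 0, 65535).astype(np.uint16)
```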

Does that help?


Thank you for your answer!

If I load a stack / sequence, how are these values in the “Brightness&Contrast dialog” calculated?

They’re computed on the currently displayed slice, i.e. pressing Reset resets the range to the current slice min and max.


Since only the min/max of the first image of the stack are used, the transformation may still produce values larger than 65535 or smaller than zero? The next step is then just clipping the values outside the 16-bit range?
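A small numeric sketch of that clipping behavior (hypothetical values): the range is taken from the first slice only, so later slices that fall outside it get clipped.

```python
import numpy as np

stack = np.array([
    [[0.0, 50000.0]],      # first (displayed) slice -> sets the range [0, 50000]
    [[-100.0, 80000.0]],   # second slice exceeds that range
], dtype=np.float32)

lo, hi = stack[0].min(), stack[0].max()
out = np.clip((stack - lo) * (65535.0 / (hi - lo)), 0, 65535).astype(np.uint16)
print(out[1])  # values outside [0, 50000] are clipped to 0 and 65535
```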

If so, then I have understood the procedure :slight_smile: Thank you very much.
