I implemented an illumination correction in Python, similar to the one that exists in Fiji (ImageJ). Everything works fine except the final conversion from float32 to 16-bit.

Loading an image sequence (a stack of gray-valued images with 16-bit depth) and dividing each image in the stack by the illumination correction image yields float32-valued images, which contain values larger than 65535.
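For reference, the correction step can be sketched in NumPy like this (the array names and toy data are placeholders, not my actual variables):

```python
import numpy as np

# Toy stack of two 2x2 16-bit frames (placeholder data).
stack = np.array([[[1000, 40000], [65535, 500]],
                  [[2000, 30000], [60000, 100]]], dtype=np.uint16)

# Hypothetical illumination / flat-field image, normalized so that
# values below 1 brighten the corrected pixels.
illum = np.array([[0.5, 1.0], [0.8, 0.25]], dtype=np.float32)

# Division promotes the result to float32 and broadcasts the
# correction image over every slice of the stack.
corrected = stack.astype(np.float32) / illum
print(corrected.dtype)         # float32
print(float(corrected.max()))  # 81918.75 -> exceeds the 16-bit range
```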

At this point, the results from my Python code and from ImageJ coincide exactly.

The next step is to rescale the sequence of images to 16-bit. To do that, I compute

I_rescaled = (65535 / (max - min)) * (I - min)

where I is the image sequence and max/min are the maximum/minimum pixel intensities over all images. However, the conversion in ImageJ leads to a different result, which looks better in the sense of having more contrast.
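My rescaling step, sketched with a placeholder float32 stack (this is my formula, not ImageJ's conversion):

```python
import numpy as np

# Placeholder float32 stack with values beyond the 16-bit range.
corrected = np.array([[[0.0, 81918.75], [40000.0, 2000.0]]], dtype=np.float32)

lo = corrected.min()  # global minimum over all slices
hi = corrected.max()  # global maximum over all slices

# Linear rescale of the whole stack to [0, 65535], then cast to uint16.
rescaled = ((65535.0 / (hi - lo)) * (corrected - lo)).astype(np.uint16)
print(rescaled.min(), rescaled.max())  # 0 65535
```

Note that because lo/hi are the global extrema, a single outlier pixel in one slice stretches the mapping for every slice, which reduces the contrast of the remaining data.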

I searched online and read through the ImageJ source code on GitHub, but I could not find the procedure that produces the better result.

I hope someone can help me find (or explain) how ImageJ converts a float32 image sequence to a 16-bit image sequence. (The StackConverter class does not seem to be the right place.)

Thanks in advance for any help.