8-bit mask vs 16-bit vs 32-bit... not equivalent actions

Perhaps it's a bug: threshold masking on 8-bit images faithfully creates masks or selections that contain all of the red (thresholded) pixels, but on 16-bit and especially 32-bit images only some of those pixels end up in the selection or mask. I have to create 8-bit conversions to accurately generate background masks, for example, to be used on stack SUM projections (32-bit); a sketch of that workaround follows the demo macro. You can see the effect with this demo macro code using a sample image (a mild example of the issue):

run("M51 Galaxy (177K, 16-bits)");
run("Copy");
run("Add Slice");
run("Paste");
run("Add Slice");
run("Paste");
run("Z Project...", "projection=[Sum Slices]");
selectWindow("SUM_m51.tif");
run("Duplicate...", " ");
run("8-bit");
setAutoThreshold("Percentile dark");
run("Create Selection");        //8bit isGOOD!
selectWindow("SUM_m51.tif");
setAutoThreshold("Percentile dark");
run("Create Selection");        //32bit ispartial
selectWindow("m51-1.tif");
setAutoThreshold("Percentile dark");
run("Create Selection");    //16bit ispartial
//note the accurate 8bit selection vs partial selections for 16 or 32bit sources
// createMASK likewise shows partial use of red thresholded pixels
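For context, here is a minimal sketch of the 8-bit workaround mentioned above. It assumes the same M51 sample and uses hypothetical window titles (sum32, sum8): threshold an 8-bit duplicate, create the selection there, and transfer it back to the 32-bit image with Restore Selection.

run("M51 Galaxy (177K, 16-bits)");
run("32-bit");                          // stand-in for a 32-bit SUM projection
rename("sum32");
run("Duplicate...", "title=sum8");
run("8-bit");                           // work on an 8-bit copy
setAutoThreshold("Percentile dark");
run("Create Selection");                // background selection from the 8-bit threshold
selectWindow("sum32");
run("Restore Selection");               // reuse the 8-bit-derived ROI on the 32-bit image
run("Measure");                         // e.g. measure the background inside the transferred ROI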

Do thresholding and mask/selection creation work differently for >8-bit data? Why?

The live thresholding (i.e. the red pixel overlay) is implemented using a look-up table (LUT), which is inherently 8-bit, so on 16-bit and 32-bit images the overlay is only an approximation based on the displayed values. Creating a selection or a binary mask (Process > Binary > Make Binary) considers the true, calculated threshold values, which is why the resulting selection can cover only part of the red-highlighted pixels.
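One way to convince yourself of this is to compare the number of pixels that fall inside the true threshold range (from getThreshold) with the size of the ROI produced by Create Selection. A rough sketch, assuming the M51 sample and no spatial calibration (so the ROI area equals a pixel count):

run("M51 Galaxy (177K, 16-bits)");
run("32-bit");
setAutoThreshold("Percentile dark");
getThreshold(lower, upper);             // true, calculated threshold values
getDimensions(w, h, channels, slices, frames);
count = 0;
for (y = 0; y < h; y++) {
    for (x = 0; x < w; x++) {
        if (getPixel(x, y) >= lower && getPixel(x, y) <= upper)
            count++;
    }
}
run("Create Selection");
getStatistics(area);                    // ROI area; equals the pixel count if uncalibrated
print("pixels within true threshold: " + count);
print("pixels in Create Selection ROI: " + area);
// the two counts agree, even where the red overlay suggests otherwise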

This has been discussed in several other places, e.g. here:
