I have a weird issue with the signal intensity measurements across two channels.
I am trying to compare the signal intensities of two proteins in the same tissue sample. For this, I extract 12 optical sections from a hyperstack (containing two channels) and Z-project them with “Image → Stacks → Z Project… → Sum Slices”. Then I crop the region of interest, split the channels, and measure the intensity along the same line in both. The images are 32-bit greyscale.
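For reference, the measurement pipeline I mean is roughly the following (a minimal plain-Python sketch with hypothetical toy data; the real work is done in Fiji on the actual hyperstack):

```python
# Toy sketch of the pipeline: sum-project 12 slices, then measure along a line.
# All data here is made up for illustration.

def sum_project(slices):
    """Sum-project a list of 2-D slices pixel-wise,
    mimicking Image > Stacks > Z Project... > Sum Slices."""
    h, w = len(slices[0]), len(slices[0][0])
    return [[sum(s[y][x] for s in slices) for x in range(w)] for y in range(h)]

def line_profile(img, y):
    """Intensity values along a horizontal line at row y."""
    return img[y]

# Hypothetical 12-slice stack for one channel: 4x4 pixels, value 10.0 everywhere.
stack = [[[10.0] * 4 for _ in range(4)] for _ in range(12)]
proj = sum_project(stack)
profile = line_profile(proj, 1)
print(max(profile), sum(profile) / len(profile))  # 120.0 120.0
```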
I collected two datasets. In the first one, the brighter channel showed a higher mean and max grey value than the dimmer channel, as expected (not shown).
Then I moved on to a second dataset, collected with the same settings. Here, some of the images show a lower mean and max grey value for the brighter channel (A) than for the dimmer one (see screenshot, upper and lower left).
As a sanity check, when I convert the images to 8-bit, the values jump back to sensible ones:
144 for the brighter channel A and 53 for the dimmer channel B (see screenshot, upper and lower right).
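My understanding is that ImageJ's 8-bit conversion (with "Scale when converting" enabled, the default) linearly rescales each image's current display range to 0–255, so the resulting 8-bit numbers depend on the display range, not only on the raw data. A small sketch of that mapping, with hypothetical numbers:

```python
# Sketch of a linear 32-bit -> 8-bit rescale based on a display range
# [dmin, dmax], as I understand ImageJ's scaled conversion to work.

def to_8bit(v, dmin, dmax):
    """Linearly map one 32-bit value into 0-255, clamped at the ends."""
    scaled = (v - dmin) / (dmax - dmin) * 255.0
    return int(max(0.0, min(255.0, round(scaled))))

# Hypothetical example: the same raw value 120.0 lands on very different
# 8-bit values under two different display ranges.
print(to_8bit(120.0, 0.0, 212.0))  # 144
print(to_8bit(120.0, 0.0, 600.0))  # 51
```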
How is this possible, and how can I fix it for the 32-bit images?
I cannot work with 8-bit images because:
a) the conversion loses data, and
b) I measured the first dataset on 32-bit images, so the two datasets would no longer be comparable.
Many thanks in advance,
P.S. Find below the original images (in 32-bit):