Any unknown normalizing steps in pixel classification - ilastik

Does anyone know if there are any unknown/hidden normalizing steps in ilastik - pixel classification training?

Hi, my group is interested in comparing the intensity of fluorophores in bacteria between treated samples, and the output from ilastik does not show the drastic differences we observe between samples. The intensities measured through ilastik training are very close between samples, yet when we look at the original TIF files in ImageJ we see significant differences. If there are normalizing steps we are unaware of, that would help us clear up the discrepancy between the groups.

Any information is greatly appreciated!

Hello @eshunwilson,

do you mean whether the prediction images of ilastik are normalized? Or what output are you comparing?

Cheers

Hi @k-dominik ,

I am interested in comparing the mean intensity and total intensity between images, and I was wondering if there are any hidden normalizing steps that would affect the mean intensity or total intensity output. If there is a protocol describing any normalization applied to the measured intensity values, I would appreciate being referred to it. My goal is to double-check that I am not missing a step in training that would cause the measured intensities to be affected by an unknown normalizing step.
Thanks!

Ah, so are you in object classification? Where do you get the intensity values?

I am in object classification and I select standard object features (all) → e.g. mean intensity, maximum intensity etc. I then train the classifier with my labels and the objects (with live update) and move on to object information export. I was wondering if there are any normalizing steps I may be missing that alter the measured intensity values of the objects I train on.
Thanks!

Hi @eshunwilson,

there should be no normalization for the reported intensity values. A tiny exception is the histogram; there the bins are located between the minimum and maximum object pixel values (globally).
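
Roughly, what I mean is something like the following numpy sketch (not the actual ilastik code; the pixel values and the bin count are made up for illustration):

```python
import numpy as np

# toy example: pixel values of one segmented "object"
object_pixels = np.array([120.0, 130.0, 500.0, 2200.0, 2400.0], dtype=np.float32)

# the histogram feature places its bins between the (global) minimum
# and maximum of the object pixel values ...
counts, bin_edges = np.histogram(
    object_pixels,
    bins=16,  # bin count arbitrary here
    range=(object_pixels.min(), object_pixels.max()),
)

# ... while the plain statistics (mean, maximum, total intensity, ...)
# are reported on the raw values without any rescaling
print("mean intensity:", object_pixels.mean())
print("total intensity:", object_pixels.sum())
```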

Hi @k-dominik

Thank you for looking into this. For the histogram, is that an output directly from ilastik? When I export the object classification files I look at them in R, but the reported intensity values are not impacted by the object pixel min and max values? I was also wondering whether applying any smoothing effects to the original raw images would have an impact on the measured intensity values. I train only on the raw original images.

Hello @eshunwilson,

the histogram will be in the table exported from object classification. But only if you choose to calculate and export it.
In general, values are computed on whatever you supply as raw data. So if you do smoothing, and then supply this new, smoothed version of your raw data, then of course the intensities are affected.

Do you mean in thresholding? Thresholding only operates on what you supply as the probability image. If you smooth it, you might get a different object shape. So the “object” might cover different pixels compared to the non-smoothed setting. You will also most likely get different values for min/max and all of those intensity stats. But, again, those are measured on the raw data image, so the values of the raw data are reported as is.
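
To make that concrete, here is a small numpy/scipy sketch (again, not ilastik code; the toy image, sigma values and the 0.5 threshold are just placeholders) showing both effects:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# toy raw image: dim background with one bright blob ("bacterium")
raw = np.full((128, 128), 100.0, dtype=np.float32)
raw[60:70, 60:70] = 3000.0
prob = gaussian_filter((raw > 1000).astype(np.float32), sigma=1)  # stand-in probability map

# Case 1: smoothing the *raw data* and feeding that in ->
# the reported intensities themselves change
smoothed_raw = gaussian_filter(raw, sigma=3)
mask = prob > 0.5
print("mean on raw:         ", raw[mask].mean())
print("mean on smoothed raw:", smoothed_raw[mask].mean())

# Case 2: smoothing only the *probability map* before thresholding ->
# the object may cover different pixels, but intensities are still
# read from the untouched raw image
mask_smoothed_prob = gaussian_filter(prob, sigma=3) > 0.5
print("object size:", mask.sum(), "->", mask_smoothed_prob.sum())
print("mean still measured on raw:", raw[mask_smoothed_prob].mean())
```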


Hi @k-dominik

I work with the raw .h5 images only, using ilastik pixel classification + object features, without selecting any smoothing effects in the feature selection stage. Yes, the smoothing step for the size of objects in thresholding was what I was wondering about. That is the only step I alter, decreasing the size so we can pick up objects that are 1-2 pixels in size (we are working with small bacteria). Is it possible that the original .h5 images are altered by the “export h5 to ilastik” plugin from ImageJ? We have some images that appear very dim when we look at the channels individually in ImageJ, but when the images are exported as .h5 there is much more we can see. That might just be the nature of the raw images, but we are trying to turn over every stone to see if there is something we might be missing in our analysis.
We are also interested in implementing the neural network classification workflow and are wondering whether we would be a good candidate for it. We look at slides of bacteria and there is a lot of variability in the background and in the size of the objects (bacteria) depending on the testing condition.
Any information would be greatly appreciated

Hi @eshunwilson,

the difference is only in display. The data won’t change on export. You can use the window/level tool to adjust contrast.
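
If you want to double-check, one way is to compare the pixel values directly, e.g. with a small Python script (the file names and the dataset path "data" are placeholders for your own files):

```python
import h5py
import numpy as np
import tifffile

# placeholder file names / dataset path -- adjust to your own data
tif = tifffile.imread("original_image.tif")
with h5py.File("exported_image.h5", "r") as f:
    h5_data = f["data"][()]

# squeeze away singleton axes the exporter may have added, then compare
print(np.array_equal(np.squeeze(tif), np.squeeze(h5_data)))
```

Depending on how the plugin orders the axes you may need to transpose one of the arrays before comparing, but the values themselves should be identical.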

Small note (warning) on using the combined pixel + object classification workflow: we strongly recommend separating these two steps into two separate projects, using the output of pixel classification as input to object classification. The combined workflow exists mostly for convenience, e.g. to give quick demos.


Hi @k-dominik

Thank you for this insight; we have been using the combined pixel + object classification workflow for all of our work so far. Can I ask why the combined workflow is not recommended? I am wondering whether the work we have done so far is still valid.

Hi @eshunwilson,

what version of ilastik were you using?

With the combined workflow the computational graph becomes quite big, with a bottleneck request operating on the whole dataset at once. This can be unstable and run out of RAM easily.

Cheers
Dominik

Hi @k-dominik ,

I am using the 1.3.3post3-OSX version.
I do have some issues with ilastik slowing down sometimes; I have been using a computer with 16 GB of RAM.
Does the large computational graph and the bottleneck request impact the output of the training in any way?

Best,
Franceen

Hm, so whatever you produce with the project file while you have ilastik open is fine. However, when you reopen the file, the annotations in object classification are not loaded properly (they are in the file, but are not recovered on opening). Did you base a lot of your analysis on this workflow?

Hi @k-dominik

We based a portion of our findings on this workflow. When training in ilastik I usually complete the training in one go and immediately export the files, but if I ever did open the same training file twice, I noticed the object feature selection would drop from 50 to 42. Is it impossible to recover the annotations in object classification because of the bottleneck effect? I also noticed that the annotations in object classification would disappear and I would have to redo them.