Efficient quantification of background fluorescence intensity

Sorry I have a short basic question regarding ilastik.

After getting the pixel prediction map, I'd like to quantify the fluorescence intensity of its background in the most efficient way. Is there a possibility (or workaround) to quantify the mean fluorescence intensity of the background of the pixel classification map in ilastik (I need it to correct our signal)? My current approach is computationally quite expensive for a presumably simple task.


Hi @alexjov,

Hmmm. So naively I would take the probability map of the background channel, threshold it at 0.5 to get a background mask, check pixels where this mask is not zero and take the mean. How are you doing it?
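The naive approach described above could be sketched roughly like this (array names and the toy values are just assumptions for illustration): threshold the background probability channel at 0.5 to get a boolean mask, then average the raw image over the masked pixels.

```python
import numpy as np

def background_mean(raw, background_prob, threshold=0.5):
    """Mean raw intensity over pixels classified as background.

    raw             -- raw fluorescence image (2D array)
    background_prob -- ilastik probability map, background channel
    """
    mask = background_prob > threshold  # boolean background mask
    return raw[mask].mean()            # mean over masked pixels only

# Toy example: three background pixels (10, 12, 11) and one bright
# foreground pixel (100) that the mask excludes.
raw = np.array([[10.0, 12.0], [100.0, 11.0]])
prob = np.array([[0.9, 0.8], [0.1, 0.7]])
print(background_mean(raw, prob))  # -> 11.0
```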


Ciao Dominik!

Thanks (again) for your time 😊.

So you’d take the pixel prediction map from ilastik, go into Fiji for example, apply the background mask to the image, and do the quantification there, correct?

So I stuck with ilastik and tried two approaches.

  1. I used the pixel prediction map and the background signal for the Object Classification workflow. I set the threshold to 0.5 and cranked the size threshold quite high to omit small artefact backgrounds (the true background is > 2500000 units), essentially training the classifier on just one class (I ignored the other label). This worked, but I’m misusing the machine learning…

  2. The other approach was to make use of the small artefacts and the second label, so that I’d actually be making use of the ML. Here I kept the 0.5 signal threshold but omitted the size limit (I set the size range from 0 to maximum), trained the classifier on the true background and the false backgrounds, and ran it like this.

With both approaches, however, I noticed that the object classification of the background was often slower than the object classification of my cells (in a separate object classification workflow). That was a bit counterintuitive, since for the cells I have many more objects to classify (several hundred), as opposed to just a few background objects with either approach 1 or 2.

I think my approach is computationally inefficient, and maybe approach 1 is wrong… keen to hear your thoughts!

Sorry for the monologue :sweat_smile:


Hi @alexjov,

yeah, that was my suggestion, but after your last post I can see that you are doing something more elaborate than that.

Regarding your approach 1): as you state yourself, this doesn’t do much in terms of machine learning. It would be equivalent to going to Fiji, applying a threshold of 0.5, and then doing the quantification using that mask.

Regarding your approach 2): this can make a lot of sense if the background in smaller objects could be misleading, or if, in fact, the pixel classification output is simply not good enough for the background.

I’m pretty sure what you’re seeing here is the cost of the feature computation. Its duration depends on the size of the objects (the number of pixels included in the analysis), and your background objects are huge; the classification step itself is cheap by comparison.

It might help to decrease the number of features you compute. On the other hand, it might be worth investigating what background mean values you get using, e.g., Fiji with a threshold of 0.5 on the background channel vs. your approach 2), to at least quantify the improvement against the computation time.
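The suggested comparison could be sketched as follows (this is a hypothetical illustration, not ilastik's actual pipeline): the plain 0.5-threshold mean versus a mean where small connected components are discarded, loosely mimicking the size-based filtering of the object workflow. `min_size` and the toy arrays are assumptions.

```python
import numpy as np
from scipy import ndimage

def mean_plain(raw, prob, thr=0.5):
    """Background mean from a plain probability threshold."""
    return raw[prob > thr].mean()

def mean_size_filtered(raw, prob, thr=0.5, min_size=3):
    """Background mean after dropping connected components
    smaller than min_size pixels (mimics a size filter)."""
    mask = prob > thr
    labels, n = ndimage.label(mask)                        # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))     # pixels per component
    big = 1 + np.flatnonzero(np.asarray(sizes) >= min_size)
    keep = np.isin(labels, big)                            # keep only large ones
    return raw[keep].mean()

# Toy example: one 4-pixel background patch (value 10) and one
# single-pixel artefact with a bright value (100).
raw = np.array([[10.0, 10.0, 10.0],
                [10.0,  0.0,  0.0],
                [ 0.0,  0.0, 100.0]])
prob = np.array([[0.9, 0.9, 0.9],
                 [0.9, 0.0, 0.0],
                 [0.0, 0.0, 0.9]])
print(mean_plain(raw, prob))          # -> 28.0 (artefact pixel included)
print(mean_size_filtered(raw, prob))  # -> 10.0 (artefact pixel dropped)
```

The difference between the two numbers is one way to judge whether the extra computation of the object workflow is buying you anything.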

Cheers :slightly_smiling_face:

Hi @k-dominik ,

Perfect – it’s all clear now. I will do as you suggested. Thank you for your patience and for taking the time. Really appreciate it!

Wish you a happy Christmas!