Ilastik object classifier broken?

Hi there,

This might be a very simple issue (or at least I hope it is); the object classifier doesn’t seem to be working. See screenshot at the end of this post.

  • Setup: this is an object classification project with a .tif file as raw data and its corresponding .h5 file as the prediction mask (generated from ilastik), both in the same location as the project file
  • First of all, the smoothed image does not seem smoothed out, irrespective of the number I choose (here I’ve chosen 50 on purpose)
  • Then, setting the minimum size filter to 10,000 does not do anything; the final output still picks up all cells (when I don’t think it should…). I just tried 100,000; still nothing.
  • Finally, in the screenshot below you can see that I’ve selected Hysteresis thresholding mode. Yet, the after high threshold and after low threshold layers do not appear.
  • I’m expecting to see all of the above, since I checked the ‘show intermediate results’ box
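For context, here is roughly what I’d expect hysteresis mode to produce (a toy numpy/scipy sketch of my own understanding, not ilastik’s actual code): pixels above the high threshold seed objects, and pixels above the low threshold are only kept when they are connected to a seed.

```python
import numpy as np
from scipy import ndimage

def hysteresis(prob, low, high):
    # Keep only those connected components of the low-threshold mask
    # that contain at least one pixel above the high threshold.
    low_mask = prob > low
    labels, _ = ndimage.label(low_mask)
    keep = np.unique(labels[prob > high])
    keep = keep[keep != 0]  # drop the background label
    return np.isin(labels, keep)

# Toy probability map: one strong object and one weak, disconnected blob.
prob = np.zeros((20, 20))
prob[2:6, 2:6] = 0.9      # clearly above the high threshold
prob[10:14, 10:14] = 0.4  # only above the low threshold

mask = hysteresis(prob, low=0.3, high=0.6)
print(mask[3, 3])    # strong object is kept
print(mask[11, 11])  # weak blob is dropped (no connection to a seed)
```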

I’m experiencing this odd behaviour on both v1.3.3 and v1.4.0. I am on macOS Catalina 10.15.7. Please tell me I’m dumb and I’m missing something obvious :frowning: :cold_sweat:

Thanks in advance!


Hi @pablooriol2,

I’d be really interested in seeing all layers here :), but I’ll try to answer some of the questions by eyeballing the screenshot you have kindly provided (you definitely want to double-check whatever I write here, to be sure).

Okay, first of all, the order of the layer stack (bottom left) is important: if a layer with full opacity sits above another layer, you cannot see the one below.
In order to see the smoothed input you should disable the final output layer. Of course one could ask: “why do I not see anything in the areas with no objects?” Well, I think you do see it. Heavy smoothing with a sigma larger than your objects averages them out (assuming a rough image size of 514x514, with objects around 50 px in diameter). At every position both prediction channels then have roughly the same value, which results in a “flat” green image (with two channels this image is rendered in green/red; in your case green is background and red is objects. To be fair, that should be corrected to match the blue/yellow colormap).
Advice: in order to see the layers, try disabling the final result for now (click the eye icon), and use a smaller sigma (e.g. 3) to begin with, to make sure that you see the filtered result.
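To illustrate the averaging-out effect, here is a quick sketch (plain numpy/scipy with made-up sizes based on my estimates above, not ilastik’s actual pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy "object prediction" channel: ~50 px wide objects in a 514x514 image.
pred = np.zeros((514, 514))
pred[100:150, 100:150] = 1.0
pred[300:350, 300:350] = 1.0

mild = gaussian_filter(pred, sigma=3)    # objects survive
heavy = gaussian_filter(pred, sigma=50)  # objects smear out

# With sigma=3 there is still strong contrast between object and
# background; with sigma=50 the whole image is nearly flat.
print(mild.max() - mild.min())    # close to 1
print(heavy.max() - heavy.min())  # much smaller
```

With a sigma in the range of the object diameter, both channels end up almost uniform, which is exactly the “flat” image you are seeing.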

The only explanation I have here is that you might not (yet) have pressed apply. Why? Because you can see objects that look sort of reasonable. After filtering with such a high sigma on an image of your size (I’d estimate it at 510x510) I wouldn’t expect that anymore; you’d probably see more or less random shapes.
Advice: use a smaller sigma, make sure to hit apply.

Please let us know if this helps you at all!

Hi @k-dominik,

I must apologise; you are right, I simply didn’t click ‘apply’. I did click ‘apply’ on earlier occasions, but I thought it was an irreversible step, hence my hesitance to hit that button before I was happy with the settings. It’d been quite a while since I used the GUI (I’ve been using headless mode all day, all night), so I guess I’m well rusty at interacting with it. My bad!

It’s working as it should now. Again, apologies and thank you once again for helping out!

Hi @pablooriol2,

It’s not the best UI, having to press that button. The problem is that we always run the operation on the whole image, so it’s usually fairly expensive to run. In contrast, in pixel classification everything can be done block-wise, i.e. on smaller parts of the image…
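For anyone curious what block-wise means here, this is the general idea in a toy numpy/scipy sketch (not ilastik’s actual implementation): each block is filtered with a halo of surrounding pixels wide enough for the kernel, so the stitched result matches the whole-image computation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blockwise_gaussian(img, sigma, block=128):
    # Halo at least as wide as the truncated Gaussian kernel radius
    # (scipy's default truncate=4.0), so block interiors are exact.
    halo = int(4 * sigma + 1)
    out = np.empty_like(img)
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            # Expand the block by the halo, clipped to the image.
            y0, x0 = max(0, y - halo), max(0, x - halo)
            y1 = min(img.shape[0], y + block + halo)
            x1 = min(img.shape[1], x + block + halo)
            filtered = gaussian_filter(img[y0:y1, x0:x1], sigma)
            # Keep only the inner block, discarding the halo.
            out[y:y + block, x:x + block] = filtered[y - y0:y - y0 + block,
                                                     x - x0:x - x0 + block]
    return out

rng = np.random.default_rng(0)
img = rng.random((300, 300))
# Block-wise result agrees with filtering the whole image at once.
print(np.allclose(blockwise_gaussian(img, sigma=3), gaussian_filter(img, sigma=3)))
```

The per-block cost stays small and bounded, which is what makes interactive, on-demand updates feasible; object-level operations don’t decompose this neatly, since an object can span blocks.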

Hi @k-dominik,

Yeah, it makes sense that it’s computationally expensive - though I guess the image itself and the hardware can make things better or worse. A suggestion would be to add a button that lets users choose whether or not to run operations in real time? It’s just an idea.

Block-wise operations would be well useful for 3D data, I suppose, or just for massive 2D images.