Auto-context multi-channel view in Stage 2

Hi,
I am working with three-channel images. The auto-context workflow shows all the channels in stage 1 as an RGB composite, but only shows one channel at a time in stage 2. I think it would be more efficient if we could view all the (original) channels as a composite, the same way as in stage 1. Is there a way to change the stage 2 view settings?

We could add the results of stage 1 as a separate layer.

Thanks!

If I knew where in the code to look for the view settings, I might be able to experiment…

Hi @memphizz

Could be done, sure, that’s a good idea.

Personally, I find it really hard to annotate on color images, so I switch to showing separate channels right away when labeling. So even for 3-channel images (which are rendered as RGB by default) I would not go for a composite view. Of course, this is a bit different if you really have RGB images (e.g. photos); single channels might not make a lot of sense there.

Hi @k-dominik, how complex is adding this feature? What needs to be changed? Could you please point me to those lines in the Autocontext workflow?

Hi @k-dominik, I looked at the Autocontext workflow code but couldn’t pinpoint where the view settings are. I would appreciate any help.

Thanks.

So I don’t think adding this feature is easy. The probabilities of stage 1 are stacked on top of the original image for stage 2. The whole code is written in a way that one can stack as many pixel classification operators on top of each other as one wants, so its most important inputs are always the same (basically a “raw” image for display and a feature image for calculations).
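
To make the stacking concrete, here is a minimal numpy illustration of what stage 2 effectively receives as its “raw” input (the shapes and the number of classes are made up):

```python
import numpy as np

# Made-up shapes: a 3-channel image and 2-class stage-1 probabilities
raw = np.random.rand(256, 256, 3)           # original image, axes (y, x, c)
stage1_probs = np.random.rand(256, 256, 2)  # stage-1 probability maps

# Stage 2 receives the concatenation along the channel axis, so its
# "raw" input already mixes original channels and predictions:
stage2_input = np.concatenate([raw, stage1_probs], axis=-1)
assert stage2_input.shape == (256, 256, 5)
```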
In order to show the original raw image and the probabilities in a multi-stage scenario, one would have to

  • add additional slots to the 2nd-stage pixel classification operator (ilastik-meta/ilastik/ilastik/applets/pixelClassification/opPixelClassification.py) with the original image and the probabilities from stage one,
  • modify ilastik-meta/ilastik/ilastik/applets/pixelClassification/pixelClassificationGui.py, notably setupLayers, to expose those additional inputs to the GUI,

and all that in a manner that doesn’t break the current ability to stack the pixel classification pipeline and still allow for the chaining… I would say this is quite an effort. A rough sketch of both changes follows below.
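
Very roughly, the two changes might look like this. This is only a sketch, not a working patch: the slot names `OriginalRawImage` and `StageOneProbabilities` and the `_existing_setup_layers` placeholder are made up, while `InputSlot`, `setupLayers`, `createStandardLayerFromSlot` and `topLevelOperatorView` are the existing lazyflow/ilastik conventions:

```python
from lazyflow.graph import Operator, InputSlot

class OpPixelClassification(Operator):
    # ... existing slots and logic stay as they are (abbreviated here) ...

    # Hypothetical extra inputs, only connected in the multi-stage
    # (autocontext) case; optional, so plain pixel classification and
    # the chaining of stages keep working unchanged:
    OriginalRawImage = InputSlot(optional=True)
    StageOneProbabilities = InputSlot(optional=True)


# In pixelClassificationGui.py, setupLayers would then expose them:
def setupLayers(self):
    # start from the layers that setupLayers already builds today
    layers = self._existing_setup_layers()  # placeholder for the current code
    op = self.topLevelOperatorView

    if op.OriginalRawImage.ready():
        raw_layer = self.createStandardLayerFromSlot(op.OriginalRawImage)
        raw_layer.name = "Original raw (stage 1)"
        layers.append(raw_layer)

    if op.StageOneProbabilities.ready():
        prob_layer = self.createStandardLayerFromSlot(op.StageOneProbabilities)
        prob_layer.name = "Stage 1 predictions"
        prob_layer.opacity = 0.5
        layers.append(prob_layer)

    return layers
```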

If you don’t care so much about also showing the predictions from the previous stage, it could be done a bit more easily…
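
The reason it would be easier: the original channels are already contained in stage 2’s stacked input, so in principle only the display side needs to change, without touching the operator’s slots at all. Continuing the made-up shapes from the snippet above (and assuming the original channels come first in the stack):

```python
import numpy as np

# Continuing the made-up example: assume the original channels come
# first in the stacked stage-2 input, followed by the predictions.
stage2_input = np.random.rand(256, 256, 5)
n_original_channels = 3  # hypothetical; would come from the stage-1 metadata

# A display-only change could slice the original composite back out,
# without adding any new slots to the operator:
original_view = stage2_input[..., :n_original_channels]
assert original_view.shape == (256, 256, 3)
```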