[NEUBIAS Academy@Home] Webinar "ilastik beyond pixel classification" + Questions & Answers

NEUBIAS ilastik webinar - Q&A

Dear ilastik users,

Here we summarize the questions and answers that came up during the NEUBIAS ilastik webinar in May 2020 (recording, slides).
The wonderful participants provided the questions during the live sessions. The excellent team behind the scenes answering the questions was

Finally, thank you to the organizers of NEUBIAS for giving us this opportunity!

Table of contents

- General
- Pixel Classification
- Autocontext
- Object Classification
- Tracking
- Carving
- Multicut
- Neural Networks
- Fiji Plugin
- Headless

General

↑ back to table of contents

[Q1] Are the ilastik machine learning functions multi-threaded, e.g. do they run in parallel on all CPUs?

ilastik is multi-threaded and tries to utilize all available cores, but there are diminishing returns. The number of computational threads is configurable (see here for the ways to do it).

[Q2] Hi! Any option to open directly ndpi digital images? Thanks!

The best file format to use in ilastik at the moment is h5. My suggestion is to convert your files using the ilastik Fiji plugin.

[Q3] Great tool, congrats on such a well-deserved success, and thanks for continuing to develop it. Doing Pixel Classification I’m getting the following error: Failed to export file: [WinError123] The filename, directory name, or volume label syntax is incorrect: D:Data/....

It seems like you are missing a slash after D:; it should read D:/Data/...

[Q4] I shortened the path of the files to make it easy to read (D:/Data/folder/file), but I don’t see any problem with that. Where can I check it?

Try using the default value of {dataset_dir}/{nickname}_export.h5. Details about configuring the export are available here.

[Q5] Will ilastik be able to take advantage of GPU power in the future?

Yes, the Neural Network workflow can use it even now; stay tuned!

[Q6] Is watershed still good enough? Is there a better algorithm than watershed?

This will be covered later in the talk, see Multicut. Later addition: watershed has many flavours. The biased watershed used in the Carving workflow to extract objects one by one is indeed pretty good. If you need to segment everything, not one object at a time, Multicut and other graph agglomeration methods are much better. The ilastik Multicut workflow provides a user-friendly way to apply them to your data.

[Q7] Trying to count clumped cells in a brightfield image. Any advice for separating cells? No DAPI channel for seeded watershed.

Depending on the images, you may manage to train it to recognize the nuclei area. Or use the ilastik Counting workflow, which we won’t have time to show today, but we have video tutorials for it on YouTube.

[Q8] Is there also a way to segment crowded cells when markers (nuclei, for example, as shown in the demo) are not available?

Yes, if the boundaries are somewhat visible you can use the Multicut workflow.

[Q9] Maybe a silly question, but is it implemented to run on multiple GPUs (CUDA-like acceleration)? This way we could process huge datasets on high-performance computers.

The Neural Network workflow can use multiple GPUs. Other parts of ilastik are CPU-only because of the diminishing returns of using a GPU in our interactive setting (basically, we only compute in small blocks, and the overhead of transferring them to the GPU kills the advantage of faster computation).

[Q10] What about large 3D/4D data sets (xyz/xyzt) in regard to speed and physical memory utilization (large tile scans, time lapses)? Is any data caching (virtual memory) on RAID 0 arrays (NVMe SSDs) utilized when handling large data sets?

Generally speaking, you can use ‘any size’. However, I strongly recommend converting the ilastik input images to h5 and maybe using a set of cropped images for the training. H5 with chunking supports reading sub-blocks, so no additional caching is needed. There is a Fiji plugin for the conversion. Then apply the trained project to everything in batch processing.
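
If you prefer to script the conversion instead of using the Fiji plugin, here is a minimal sketch in Python with h5py and tifffile. File names, the dataset name "data", and the chunk size are assumptions; adjust them to your data.

```python
# Convert a TIFF stack to a chunked HDF5 file that ilastik can read block-wise.
import h5py
import tifffile

volume = tifffile.imread("my_stack.tif")  # hypothetical input, e.g. a zyx stack

# Chunked layout enables sub-block reading; clamp chunk dims to the data shape.
chunks = tuple(min(64, s) for s in volume.shape)

with h5py.File("my_stack.h5", "w") as f:
    f.create_dataset("data", data=volume, chunks=chunks, compression="gzip")
```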

[Q11] Can I use ilastik for quantitative analysis of z-stacks captured in multiple channels, e.g. reflective and fluorescence channels, i.e. 3D quantitative profiling?

ilastik can easily handle multi-channel data. You can even use only one or a few channels for segmentation, and use all channels for feature extraction on the object level afterwards.

[Q12] Hi! ilastik works if you do not move your data from the original folder/location used at the creation of the project. Is there a way to feed ilastik the new path if you change the location? Sometimes when you try to open a project (after moving the data’s location) it simply crashes and does not find the new path (even if you try to define the new path).

It used to be possible and then we broke it, sorry. It was fixed in the latest release.

[Q13] My data is not biological samples; I am measuring fluorescence from nonwoven fiber. Is the software suitable for this application?

Hard to say without seeing the images, but in general ilastik has been used for anything you can think of: satellite images of fields, detecting insects on leaves, all the way down to cryo-EM. Give it a try.

[Q14] CUDA only? What about OpenCL (broader spectrum of users), or, even better, professional GPU scripting languages?

All models that are currently available are TensorFlow and PyTorch models, and these frameworks rely on CUDA. In general it’s interesting to explore different backends and image processing languages, e.g. Halide. There is a related Fiji project that may interest you: https://clij.github.io/clij-docs/macro_intro.html

[Q15] I heard that TU Dresden has a large number of GPU nodes; how fast would it be on 400+ CPU nodes?

CPU or GPU? In any case, we have solutions to parallelize on a cluster; get in touch if you need this, we are happy to help.

[Q16] What version of Python is compatible with ilastik headless?

ilastik brings everything along in its binary distribution, including Python. It’s currently 3.7.

Pixel Classification

↑ back to table of contents

[Q17] Is it possible to import object labels generated from outside ilastik?

It is possible, but not straightforward at the moment. See e.g. here.

[Q18] Where could I find basic tutorial information about using pixel classification?

In our documentation and on YouTube.

[Q19] For pixel classification, can you comment on the length of brush strokes and also the number of them?

My suggestion is to use sparse labels that are not too large. This helps reduce the computational time and can give a better result.

[Q20] For pixel classification, is there an output about the precision of the classification?

In the logs and in the command line where you launched ilastik, we print the out-of-bag error. It’s over-optimistic, but still an estimate of the test error. We’ll move it to the main window; it’s on our todo list.
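
For readers unfamiliar with the out-of-bag error: it is the error a Random Forest makes on the samples each tree did not see during training. A minimal illustration with scikit-learn (not ilastik code; the toy dataset and parameters are made up):

```python
# Out-of-bag (OOB) error of a Random Forest, the same quantity ilastik logs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
rf.fit(X, y)

# oob_score_ is the accuracy on held-out (out-of-bag) samples.
print("OOB error:", 1.0 - rf.oob_score_)
```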

Autocontext

↑ back to table of contents

[Q21] In Autocontext, do we give the same labels and colors at both stages?

If your data has multiple classes, we recommend labeling them separately in the first stage and then concentrating on the class of interest vs. background in the second stage.

Object Classification

↑ back to table of contents

[Q22] Is object classification restricted to 2D?

No, it also works with 3D images.

[Q23] Does the labelling for object classification interpolate between adjacent frames when working on 3D data? I.e., can the algorithm roughly follow an object through the planes without segmenting each frame in a stack from scratch?

You can use it. But if you have large data, it is better to split the workflow between the Pixel Classification and Object Classification workflows. ilastik always works natively in 3D: it labels all the pixels that belong to the object, in the whole volume.

[Q24] Could you please expand on why one shouldn’t use Pixel Classification + Object Classification? Thanks!

Technically, the problem is that Pixel Classification needs to finish running for the whole volume before you can even start with object classification. If you change anything on the pixel classification side, everything on the object side is then discarded. So it’s good for a quick demo on a little data when you only follow one click path, but for anything serious you should use the workflows separately.

[Q25] Is it possible to run object classification based on an “alpha shape” instead of the “convex hull”?

Not sure what you mean by alpha shape? Our features are listed there; if you need more, you’d have to write your own (or convince us to do it for you). It’s not hard, there is a plugin system, but it requires a bit of Python coding skill.
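
For illustration, this is the kind of per-object computation such a feature plugin would implement, sketched here outside ilastik with scikit-image (this is not the ilastik plugin API; the function name and toy data are made up):

```python
# Compute a custom shape feature (solidity = area / convex-hull area)
# for every labeled object in a label image.
import numpy as np
from skimage.measure import regionprops

def solidity_per_object(label_image):
    """Return {object label: solidity} for all objects."""
    return {r.label: r.solidity for r in regionprops(label_image)}

labels = np.zeros((50, 50), dtype=int)
labels[10:20, 10:20] = 1  # a toy square object
print(solidity_per_object(labels))
```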

[Q26] Is it possible to import object annotations?

It’s not very straightforward, because the annotations depend on the object order, so any change in the segmentation will throw the annotations off. That said, we did have a work-around that allowed it. I’ll check if it’s still functional and add it to the docs. Update: the workaround is only available in debug mode, so it’s not a feature we would recommend using.

[Q27] Hi, can you tell us a bit more about how the initial map that Dominik mentioned was created?

The Object Classification workflow can start from an existing segmentation, or create a segmentation from a probability map (or your raw image, as long as it’s 8-bit) by simple or hysteresis thresholding. The probability map in Dominik’s case was created in ilastik ahead of time, following the usual procedure described in the Pixel Classification docs.

[Q28] Do we need to know the classes of objects in the image, or can ilastik also give a prediction on how many classes of objects are present in the image?

You need to know the classes; we don’t do unsupervised clustering.

[Q29] Can you configure the object classification algorithm?

Depends on what you mean by configure. You can select features, which is the most important control for the Random Forest. We have never found a need to tweak the RF hyperparameters, so they are all at their defaults.

[Q30] So in order to classify objects I always have to classify pixels first in ilastik?

In general, you need pixel classification first, or you can import a segmented image into object classification as a binary mask if you created it somewhere outside ilastik; for instance, you can generate your binary image in Fiji. If the data is simple, you can also use the raw data directly instead of a probability map and threshold it in ilastik, but Fiji has more powerful non-learned segmentation methods than we do.

[Q31] Is it also possible to compute an average (in order to average measurement errors) over all the identified objects (which will have different orientations and slightly different shapes)?

ilastik computes features per object, including average object intensity and other statistics. If you need to analyze the features further, you can export the table of object features and go wild.
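
As an illustration of what you can do with the exported table, here is a minimal sketch using pandas. The file name and the "Predicted Class" column are assumptions; check the headers of your own export.

```python
# Average the exported per-object features, overall and per predicted class.
import pandas as pd

table = pd.read_csv("my_data_table.csv")  # hypothetical exported feature table

print(table.mean(numeric_only=True))                           # over all objects
print(table.groupby("Predicted Class").mean(numeric_only=True))  # per class
```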

[Q32] What is the convexity feature used in object classification?

Convexity features measure the difference between the object and its convex hull, the tightest convex shape covering your object. It’s a powerful shape feature.

[Q33] Perhaps I missed that: what is the difference between block and halo?

The block is the segment of the image you are predicting on; the halo is an additional area around this block. See also the webinar video. We have also added slides (13 and 14) to our slide deck after the seminar.

[Q34] We had problems with object classification when trying to segment a large, convoluted object. We thought that this is because the object looks quite different in the cropped training data - could this be the case? Would we need to load a large portion of the whole data into the object classification training?

Yes, object classification needs to see the whole object to compute features. If you plan to use mostly shape features, perhaps downsampling the volume would help?

[Q35] For object classification, as you load a probability map: is it possible to obtain an instance segmentation and, later on, use its inverted mask to perform the object classification? (assuming you are using more than 2 classes for object classification)

You can in general load your own segmentation, there is a variant of object classification workflow for this. It doesn’t even have to be done in ilastik.

[Q36] So what would be a rule of thumb for block size, if there is any?

1.5 or 2 times the size of the object.

[Q37] How difficult does it get for non-binary object classification? Say, if features of interest are overlaid on some non-zero background and it is not straightforward to define a threshold.

It really depends on the data. Training the classifier in Pixel Classification to predict boundaries between objects can help. But in the end, if objects are merged and the boundary is not visible, you’d need a shape prior to separate them, which we don’t have.

[Q38] Is the software able to distinguish between overlapping objects like two touching cells?

It depends, if they are slightly touching it might work, if they are heavily overlapping, it’s not likely. There is no shape prior.

[Q39] OK, how does it relate to the prediction map? I thought that what the object classifier uses is an image where each pixel is described with a probability from 0 to 1, not 0 or 1.

I hope I understand the question. In object classification you threshold the probability map from pixel classification and generate a binary image. After that, you select the features and the different classes. The features are computed both on the binary image (for shape features) and on the raw image (for the object intensity statistics).
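
For reference, the same hysteresis thresholding step can be reproduced outside ilastik with scikit-image. A minimal sketch, assuming the probability map was exported to h5 under ilastik’s default dataset name "exported_data" and that channel 0 holds the class of interest:

```python
# Turn a probability map into a binary image by hysteresis thresholding.
import h5py
from skimage.filters import apply_hysteresis_threshold

with h5py.File("probabilities.h5", "r") as f:
    prob = f["exported_data"][..., 0]  # channel 0 = class of interest (assumed)

# Pixels above `high` are kept, plus those above `low` connected to them.
binary = apply_hysteresis_threshold(prob, low=0.5, high=0.8)
```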

[Q40] How is the issue of counting an object multiple times solved when analysing by blocks?

There is no multiple counting, since it’s all based on the same spatial map. An object will only be in one block, and potentially in several halos. The final prediction only takes the one from the block.

[Q41] Would it be possible to classify objects composed of multiple subobjects. Like, in the example of letters, would it be possible to classify words? Thanks!

No, we don’t have hierarchies like this. It would treat the whole word as one object then. Or you do it twice, first on letter level, then on the word level.

[Q42] Can object classification be applied to images containing multiple neurons (and their axons) that potentially overlap?

You could try to segment the cells in Fiji or any other software, for instance using seed detection followed by watershed, and then apply object classification.

Tracking

↑ back to table of contents

[Q43] I was wondering if there is a way to save the output of manual tracking as a CSV file containing the coordinates of each trajectory, the same way as the CSV file (plugin as source) of the export applet in automatic tracking.

I’m surprised it’s not possible now, I’ll check and add an issue, thanks for mentioning it.

[Q44] It could be useful to also allow fusion as a classification option (great for particle merging events).

Agreed, but we’d have to introduce another factor in the graphical model. It can handle false merges though, so if they “unfuse” after a few frames, it could work.

[Q45] Can we use tracking in 3D, including dividing cells?

Yes, absolutely. Go to a machine with more RAM though.

[Q46] How do we make measurements in the cytoplasm in a tracking project?

ilastik reasons on the object level. Whatever is in your objects will be measured.

[Q47] The export to Mamut for manual correction looks very interesting. But Mamut is a “spot” tracker. Is the centroid exported? Is it possible to merge the segmentation information from ilastik back in again somehow after correction?

Yes, we export the centroid. Not sure about the other parts, let me ask the MaMuT experts.

[Q48] Is it possible to do tracking on objects that change their class over time, for example cells that have been classified depending on their cell phase in time lapse?

As long as they are all recognized as objects, there shouldn’t be a problem.

Carving

↑ back to table of contents

[Q49] Can you use carving on fluorescent data, e.g. a cell mask or the nuclear envelope?

Yes, as long as the boundaries are clearly visible.

[Q50] Is the superpixel annotation performed in 2D only or is the 3D information taken into account in this step?

In ilastik, 3D data is always handled as 3D, unless you explicitly tell it not to (only possible in Pixel Classification feature selection).

[Q51] For carving, do you always need to do one object at a time?

Yes, to get all objects at once use Multicut (coming next).

[Q52] This example did not start with raw data, how does one obtain the input image to build superpixels?

You can do it on the raw data too as long as the boundaries are clearly visible.

[Q53] What is the project type selected for carving from the main page?

Carving (4th from the last).

[Q54] And the probability map he started with can be generated using normal pixel classification, correct?

Pixel Classification or Autocontext.

Multicut

↑ back to table of contents

[Q55] Not a question but a feature suggestion: swapping the colours for keeping/throwing away the borders would make the feature more intuitive to use.

Feature requests can be posted as GitHub issues.

[Q56] Is it possible to change red and green in the multicut?

Not now, but we are annoyed by it ourselves, we’ll change it.

[Q57] Maybe I missed it, but what is needed for the Multicut algorithm to work? Thanks!

Please look at our multicut documentation.

[Q58] Can you train Multicut on a small part of the data and then batch-export to a large dataset?

Batch processing is available in the Multicut workflow!

[Q59] Does carving/multicut work for segmentation of 3D filamentous data?

It works in 3D. If your boundaries are strong enough, it should do the job.

[Q60] Can you export the graph in the Multicut workflow? What can you output?

We never exposed this functionality; do you want to try your own solver on it, or why would you want it?

Neural Networks

↑ back to table of contents

[Q61] Could you further explain how to pick a suitable model?

The model zoo shows the data that each network has been trained on. Look for models that have been trained on similar data. In the future we will provide a matching algorithm, but it’s still in development.

[Q62] I didn’t understand the explanation. If I have a dedicated GPU in my workstation, would I still need to use tiktorch?

Yes, you would, but it’s easy to install. We are working on producing a single joint binary for local installs for the non-beta release.

[Q63] Does the neural network workflow support multi-gpu/SLI set ups?

The ilastik team is working on that. It will be available in the future!

[Q64] How long did it take to train the model for this dataset?

Several days, I think. But several weeks before that to find the best model.

[Q65] Can I run the neural network classification on data that is of different resolution than the data used to generate the model?

This is dangerous; networks are not scale-invariant. In the best case, performance will be compromised; in the worst case, it will be all wrong. I would recommend up/down-sampling your data.
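
A minimal sketch of such resampling with scikit-image; the file name is a placeholder and the scale factor of 0.5 is a made-up example, you would compute it from the ratio of your pixel size to the training data’s pixel size:

```python
# Rescale an image so its resolution matches the network's training data.
import tifffile
from skimage.transform import rescale

img = tifffile.imread("my_image.tif")  # hypothetical grayscale image

# e.g. your pixels cover half the physical size of the training data's pixels:
img_matched = rescale(img, 0.5, anti_aliasing=True, preserve_range=True)
```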

[Q66] On a technical note, can you suggest a resource on how to set up a GPU server such as yours to be shared with users in a core facility?

We tried to document it in the tiktorch docs (I’ll augment this answer with a link after the live webinar). The setting you have in mind (backend running on the server) is indeed the best way.

[Q67] The NN-based segmentation looked fantastic. Can you apply the NN part of ilastik to tracking problems as well?

The network itself is only trained to segment. But you can use it for the segmentation of objects that you would then track as usual.

Fiji Plugin

↑ back to table of contents

[Q68] What is the name of the update site for FIJI?

The Fiji plugin installation is documented here; the URL of our update site for Fiji is https://sites.imagej.net/ilastik/

[Q69] Is ilastik plugin compatible with ImageJ macro language?

Yes!

[Q70] What are the 3 channels created in FIJI?

Not sure anymore what this refers to, but probably the different semantic classes in the image.

[Q71] Can you write the ImageJ script in groovy/jython as well?

Good question, I don’t think we have ever tried. We’ll find out. Update: According to @Christian_Tischer this should work!

[Q72] Would you share this macro?

Yes!

Headless

↑ back to table of contents

[Q73] Is batch processing sequential or parallelized? Can the user assign 2-3 CPU cores/threads per job and run a user-defined number of jobs in parallel?

You can control CPU and RAM in headless mode with the LAZYFLOW_THREADS and LAZYFLOW_TOTAL_RAM_MB environment variables, so you can start several such restricted jobs in parallel (see the sketch under [Q74] below). Please check our documentation on this.

[Q74] Im currently writing a python script to uses several ilastik projects in headless operation and further analyzes the data. Is there documentation on how to use ilastik as python package in a script? Are the commands similar to headless operation?

Currently the best way to run ilastik from a Python script is to call it via subprocess.
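
A minimal sketch of such a script; all paths are placeholders, and the LAZYFLOW_THREADS / LAZYFLOW_TOTAL_RAM_MB environment variables are the documented way to limit CPU and RAM per job:

```python
# Run an ilastik project headlessly from Python, with bounded resources.
import os
import subprocess

# Limit this job to 3 threads and ~8 GB of RAM.
env = dict(os.environ, LAZYFLOW_THREADS="3", LAZYFLOW_TOTAL_RAM_MB="8000")

subprocess.run(
    [
        "/path/to/ilastik/run_ilastik.sh",  # ilastik.exe on Windows
        "--headless",
        "--project=MyProject.ilp",
        "--export_source=Probabilities",
        "my_data.h5",
    ],
    env=env,
    check=True,  # raise if ilastik exits with an error
)
```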
