I am currently using ilastik density counting to count the number of neurons in my brain sections, and I would like to compare neuron numbers for specific regions between my mutant and wild-type mice. To compare, I need to use the same box size across images - is there a copy/paste option, or some other way to make sure all of my boxes are the same size between different images?
May I ask what version of ilastik you are using?
Are the regions/boxes after loading still where you expect them to be?
So right now the answer is no: there is no straightforward way to use the boxes to compare different datasets.
It should, however, be possible to inject boxes from one project to the other with a small python script. Are you familiar with a bit of scripting/coding?
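To make the "inject boxes" idea concrete: ilastik project files (.ilp) are plain HDF5, so a group can be copied from one project to another with h5py. The group path that holds the counting boxes varies between ilastik versions, so the path you pass in is an assumption - inspect your own file first (e.g. with `h5ls -r project.ilp`) before trying this. A minimal sketch:

```python
import h5py

def copy_boxes(src_path, dst_path, group):
    """Copy one HDF5 group (e.g. box annotations) between .ilp files.

    `group` is the internal path of the boxes in your project file;
    this differs per ilastik version, so verify it with `h5ls -r`
    before running -- the path used here is NOT guaranteed.
    """
    with h5py.File(src_path, "r") as src, h5py.File(dst_path, "a") as dst:
        if group in dst:
            del dst[group]                      # replace any existing boxes
        if "/" in group:                        # make sure the parent group exists
            dst.require_group(group.rsplit("/", 1)[0])
        src.copy(group, dst, name=group)
```

Back up both project files before experimenting - writing the wrong group can leave a project ilastik no longer opens.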
Thanks for the reply @k-dominik.
I am using version 1.3.2. The boxes do shift and do not stay where I place them, which is a little annoying.
Oh I am not very familiar with coding and have just begun using python. Would you be able to help with this?
Sure, I’ll look into fixing the current version first, maybe not much scripting on your side will be needed then.
There has been quite some work on the Counting side: resizing should be smoother, and export/import of boxes has been implemented for our latest ilastik version.
The import/export function is accessible by right clicking into the list of boxes:
This will export a CSV (comma-separated values) table that can be read in Excel; it also includes the counts. You can import this file into a different ilastik project, which will add boxes at the same positions.
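If you later want to compare the counts for matching boxes across the mutant and wild-type projects programmatically rather than in Excel, the exported CSV can be read with Python's standard csv module. The column names below are made up for illustration - check the header line of your actual export and adjust them:

```python
import csv
import io

# Hypothetical export contents -- the real header depends on your
# ilastik version, so open your own CSV once and match the names.
example_export = """\
box_name,x_min,y_min,x_max,y_max,count
region_1,10,10,110,110,42.7
region_2,200,50,300,150,17.3
"""

# Map each box to its estimated count; with a real file you would
# pass open("boxes.csv") to DictReader instead of io.StringIO(...).
counts = {}
for row in csv.DictReader(io.StringIO(example_export)):
    counts[row["box_name"]] = float(row["count"])

print(counts)  # -> {'region_1': 42.7, 'region_2': 17.3}
```

With one such dictionary per project, comparing a region is just looking up the same box name in both.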
Hi @k-dominik, thank you for the reply. I have come across a different problem. When counting cells with density counting, I have noticed that the boxes always overestimate the number of cells by quite a lot. See the picture below for an example: I can count 7 cells, but the box says there are 20.5. This happens with every image I use. How can I fix this?
thank you so much for providing an image, that makes it a bit clearer.
First of all, will all the objects that you want to count be that well separated? The Density Counting workflow was designed for crowded scenes, where cells are overlapping or touching. Another prerequisite is that all cells are more or less circular and roughly the same size.
On your data I would not go for the counting workflow, but rather use two other workflows, one after the other, to get exact counts:
I illustrated it with an example that I think looks similar to yours:
Pixel Classification to do foreground/background segmentation. Save the prediction map and go to
Object Classification. Here you can train an object classifier to filter out false detections, or to classify cells that are touching. At the end you will get a table like the following. Depending on how you choose your class names (e.g. “single object”, “2 cells touching”, “not a cell”, …), you can then count the number of cells (in Excel or Google Sheets) by counting the rows of the respective classes:
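The row-counting step at the end can also be scripted instead of done in a spreadsheet. The column name and class labels below are illustrative only (the class names are whatever you chose during training, and the prediction column name may differ in your export), but the tally-and-weight idea is the same:

```python
import csv
import io
from collections import Counter

# Hypothetical excerpt of the object classification export table;
# the real file has more columns, and both the column name and the
# class labels must be matched to your own project.
example_export = """\
object_id,Predicted Class
1,single object
2,single object
3,2 cells touching
4,not a cell
5,single object
"""

per_class = Counter(
    row["Predicted Class"]
    for row in csv.DictReader(io.StringIO(example_export))
)

# Weight each class by how many cells it represents, then sum.
weights = {"single object": 1, "2 cells touching": 2, "not a cell": 0}
total_cells = sum(weights[c] * n for c, n in per_class.items())
print(total_cells)  # 3*1 + 1*2 + 1*0 -> 5
```

The weighting is why class names like “2 cells touching” are useful: touching objects that the segmentation merged still contribute the right number of cells to the total.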
oh alright thanks @k-dominik, for my less densely packed images, I’ll use the suggested workflow.
However, even with images from sections 4x thicker (very densely packed, overlapping cells), I still get many more cells than are actually present. Is there something I can do during the training stage, or a way to control for size?
Here is an example of an image I have used previously with densely packed cells (other images are even more packed). I used density counting and the numbers were overestimated. Not all of my cells are perfectly round, and some are bigger than others - with such densely packed cells, what workflow can I use to count them? Or how can I train the algorithm better?
These are the same pics - the second one has been labelled by the algorithm.
Thank you for the additional info.
For this kind of data I don’t know of any other solution than ilastik’s density counting. What value of sigma are you using, which features, and could you maybe show an image with your annotations?