Ilastik Carving: does it require more RAM or a better graphics card?

I’m trying to analyze a stack using “Carving” in ilastik (up to 35 GB for an image sequence of roughly 1000 images). Using the ImageJ plugin I already converted the file to HDF5 (.h5). However, I can only process a stack of roughly 100 images at a time. We are now trying to upgrade the PC (128 GB RAM at the moment) so that we can process more than 100 images at once.
The error message clearly states that we ran out of memory, but we wanted to check whether Carving also benefits from a better graphics card, because it wouldn’t make sense to increase the RAM if we then need a better graphics card as well.

Hi @Aira,

welcome to the forum!!

That is quite some big data you are processing there in Carving. More RAM would surely help. But do you really need to work at the full resolution, or would downsampling the data be an option?
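Downsampling could also be scripted outside of ilastik before import. Here is a minimal sketch in Python — the file name, the dataset name `"data"`, and the factor-of-2 in-plane reduction are just placeholders to adapt to your own export (a synthetic volume stands in for your real data so the snippet runs as-is):

```python
# Minimal sketch: downsample an image stack before Carving to cut memory use.
# A synthetic volume stands in for the real data; with your data you would
# read the dataset (e.g. "data") from your exported .h5 file instead.
import numpy as np
import h5py
from scipy.ndimage import zoom

volume = np.random.rand(20, 256, 256).astype(np.float32)  # (z, y, x) stand-in

# Halve the in-plane resolution, keep every z section (linear interpolation).
small = zoom(volume, (1.0, 0.5, 0.5), order=1)

# Write the result as a single 3D dataset, which ilastik can read.
with h5py.File("stack_downsampled.h5", "w") as f:
    f.create_dataset("data", data=small, compression="gzip")

print(small.shape)  # (20, 128, 128)
```

Keeping the full z resolution while reducing only x/y (as above) is often a good compromise when thin structures run mostly along z, but any per-axis factors can be passed to `zoom`.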


Hello Dominik,

Thank you very much for your reply!
I would like to process as much of the data as possible, because the cell forms smaller processes that I want to follow as well, and those will be hard to track if I either split the dataset or downsample the data too much.

I tried downscaling the data anyway so that the whole dataset could be split into just 2 stacks. However, in that case I need to label the cell body every ~20 sections, because Carving does not seem to keep the already segmented parts “in memory”.
So I figured I would first create a boundary map using Pixel Classification. But now the segmentation seems very “patchy”: some parts of the cell body are not predicted as such, while parts outside the cell (which I am quite sure are not part of it) are predicted as my object. I tried labeling those parts to improve the prediction, yet the workflow either “ignores” my labels and still won’t consider them part of my object, or it includes them but then labels parts outside the cell as object too. I have been working within the first 100 sections, but the problem does not seem to improve.

Is this a common problem? Should I maybe try to change something in the preprocessing step? (I used the ‘standard’ settings: dark lines (valley filter) for the downscaled raw data and bright lines (smoothed image) for the boundary map.)
Also, the prediction takes a while to load, but I guess there is nothing I can do about that :smiley:

Thanks a lot

Hi @Aira,

this sounds like the approach that we would recommend :). Could you maybe post screenshots of your boundary probability map and data? Also, which settings did you choose in preprocessing?


Hey @k-dominik

For the preprocessing I chose: filter scale 1.600, “agglomerate” ticked, superpixel regularity at 0.500, and reduce to 0.200.
I tried lowering the filter scale, as the filtered data seemed to look better that way. However, even lowering it to 1.500 caused a memory issue during preprocessing.
I already worked with downscaled data and lowered the number of pictures per stack to the minimum needed to cover my whole dataset in only 2 stacks (so just one merge of the two outputs).
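For reference, splitting one volume into 2 sub-stacks can be scripted with h5py. This is only a sketch of the idea: the dataset name `"data"`, the 10-section z-overlap (to make merging the two Carving outputs easier), and the synthetic stand-in volume are all assumptions, not ilastik requirements:

```python
# Minimal sketch: split one HDF5 volume into two z-sub-stacks with a small
# overlap, so the two segmentation outputs can later be aligned and merged.
# A synthetic volume stands in for the real data.
import numpy as np
import h5py

volume = np.random.rand(100, 64, 64).astype(np.float32)  # (z, y, x) stand-in
overlap = 10                      # assumed overlap; pick what suits your data
mid = volume.shape[0] // 2

first = volume[: mid + overlap]   # sections 0..59
second = volume[mid - overlap :]  # sections 40..99 (shares 20 with `first`)

for name, part in [("stack_part1.h5", first), ("stack_part2.h5", second)]:
    with h5py.File(name, "w") as f:
        f.create_dataset("data", data=part, compression="gzip")
```

The shared sections give a region where both segmentations cover the same voxels, which is what makes the later merge of the two outputs possible.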

Of course I would like to share some screenshots, but this is research data, so, sadly, I’m not allowed to post them on this forum. Is there another way to reach you and keep the screenshots confidential?