Comparison of some tools for 3D dense ground truth annotations

Dear community,

Just wanted to share with you my quick comparative test of three different software tools for 3D annotations:

BTW: a “no” could also mean I didn’t find the feature; it might still exist.

table link

From the hardware side, I tested using a keyboard + mouse or a keyboard + tablet (Wacom PTK-440), on Windows 10.

From this first test, my preferred setup would be tablet + keyboard using QuPath; here is a script to export label images

I’d be happy to get some feedback, e.g. if you think some criteria are missing …
I would also like to test other tools; I attended napari’s event at NEUBIAS Academy and plan to test it as well …



poke : @Christian_Tischer , @constantinpape , @mweigert , @uschmidt83 , @oburri , @esgomezm, @iarganda , @akreshuk , @maarzt , @sofroniewn , @jni

Update on 2020.06.19: Thanks to @mattrussell @mcdomart @petebankhead, @guiwitz, @constantinpape for their contribution to the table.
After testing ITK-SNAP, I would say that it is now my favourite solution (sorry @petebankhead :sweat_smile: ).
I’m really impressed by the flexibility of the tool: the snake detection gives you a first label that you can then correct, and you can draw on a few z-planes and interpolate between them.
BUT when testing it with my tablet (Wacom PTK-440) I encountered some issues while drawing and had to go back to the mouse.


Small one… toggling isn’t only global in QuPath; you can also switch it on a per-class level.

The quick way is to select the classes and press the spacebar; you can also toggle by selecting the classes and using the popup menu after a right-click.


You might also consider adding ‘Support for overlapping labels’ (if it is/might be of interest to you).

Hi @petebankhead,
Oops :sweat_smile: , I corrected it in the table :smiley:
Thank you for noticing it!


That’s an interesting point. As far as I understand, the deep learning tools (that I know) use a label image, so overlapping parts can’t be represented: one label takes over the other.

But I can add it to the table as “to be tested”.
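A tiny sketch (toy masks, numpy assumed) of why overlaps get lost in a plain label image: when two instance masks are painted into one integer image, the later label simply overwrites the earlier one in the shared pixels.

```python
import numpy as np

# Hypothetical toy example: two overlapping instance masks in one plane
shape = (1, 8, 8)  # a single-plane "3D" stack
cell_a = np.zeros(shape, dtype=bool)
cell_b = np.zeros(shape, dtype=bool)
cell_a[0, 1:5, 1:5] = True
cell_b[0, 3:7, 3:7] = True

# Flatten into a single label image: later labels overwrite earlier ones
labels = np.zeros(shape, dtype=np.uint16)
labels[cell_a] = 1
labels[cell_b] = 2

overlap = cell_a & cell_b
# In the overlap region only label 2 survives; label 1 is lost there
```

Embedding-based or instance-mask methods sidestep this by keeping one mask per object instead of a single flattened label image.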



There are several methods that can in principle work with overlapping labels.
E.g. embedding-based methods:

Or instance mask predictions:

I think another important point to check is whether existing segmentations can be imported and corrected.
This can often speed up the annotation task significantly.

Another tool to consider is
It’s made for annotating 3D EM images for connectomics, but we have used it for a few other tasks as well. It works for very large data, and in addition to the “normal” label modes, it also supports correcting an existing segmentation by splitting it into fragments and then reassembling those fragments.

The one downside is that it’s a bit difficult to set up and not so user friendly.


Hi @constantinpape,

That sounds like a good suggestion:

Can I ask you for more precision about your idea, to be sure we’re on the same page before testing?
It would mean: can we import a label image (format/convention to be determined :wink: )?

So far I just tested that we can save newly created labels and load them on the same platform.
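As a minimal round-trip sketch of that save/load test (numpy assumed; in practice most tools exchange multi-page TIFF label images, e.g. via tifffile): write an integer 3D label stack to disk, reload it, and check it survives unchanged before starting to correct it.

```python
import os
import tempfile
import numpy as np

# Hypothetical 3D label image: integer stack, 0 = background
labels = np.zeros((4, 16, 16), dtype=np.uint16)
labels[1:3, 2:6, 2:6] = 1    # object 1 spans z = 1..2
labels[0:2, 8:12, 8:12] = 2  # object 2 spans z = 0..1

# Round trip through disk (a multi-page TIFF would be the portable choice)
path = os.path.join(tempfile.mkdtemp(), "labels.npy")
np.save(path, labels)
reloaded = np.load(path)
# "reloaded" is what a tool would need to accept for checking/correcting,
# e.g. labels exported from ilastik
```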

What I like about your idea is that I could export objects as labels from ilastik and then check/correct those existing labels.
My only concern with that approach is: how can we be sure we check/correct well enough, and that we don’t stop at “yeah, this labeling is good enough”, ending up with a so-called “ground truth”?
Counter-concern: you never know whether the annotator did well anyway :sweat_smile:

Thanks for your input !

I’ve heard of it but never tried it (yet)

That’s also the feedback I got, and why I never tried it (yet)

Maybe I could add installation effort + learning curve as criteria


Yes, exactly.

True, there is always a trade-off when starting from an existing segmentation: the annotator might not correct it enough and then one ends up with bad ground-truth. Even if the annotator corrects a lot one might still be left with biases due to small systematic errors in the initial segmentation.
But on the other hand, starting from an existing segmentation is often much faster and can also be really helpful for difficult annotation tasks. I think that this is especially true in EM, where cells are usually “completely dense” in the volume so annotating everything from scratch is a huge effort.
Overall, I agree that this is not always the best option, but it would be very good to know whether it is supported by the labeling tools :).


Yes, that would be really good to know too :).

I think that for 3D annotations, one of the key features is having some way to “fill” 3D space faster than annotating each single plane, so you might want to add that to your table (unless it’s there and I just didn’t understand your keywords). I’m thinking of things like interpolating labels between annotated planes separated by non-annotated planes, or direct 3D drawing. I really like ITK-SNAP for 3D annotations, and it has those features. It’s also very user-friendly.
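One common approach behind that kind of inter-plane interpolation (not necessarily what ITK-SNAP does internally) is shape-based interpolation: convert each annotated plane to a signed distance map, blend the maps, and threshold. A hedged sketch, assuming numpy and scipy:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # Positive inside the object, negative outside
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def interpolate_slice(mask_a, mask_b, t=0.5):
    """Shape-based interpolation between two annotated binary planes."""
    sd = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return sd > 0

# Toy example: a small and a larger disk on two annotated z-planes
yy, xx = np.mgrid[:21, :21]
r = np.hypot(yy - 10, xx - 10)
small, large = r <= 3, r <= 7
middle = interpolate_slice(small, large)  # the plane halfway between them
```

The interpolated plane is an intermediate shape between the two annotations, which the annotator can then correct by hand.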


Another very good suggestion!

“Great minds think alike” :sweat_smile:, I made this prototype IJ macro using the interpolate ROIs function of the ROI Manager to complete 3D label images.
I was thinking of creating a new topic about “Do/can we allow using interpolation for ground truth creation?” … I guess it’s part of this thread now :smiley:


Just learned of another emerging tool that @oeway has recently started: which is a web-based viewer / annotation tool inspired by napari.


I’ve just updated the screenshot of the table with the update made by @petebankhead and @guiwitz !

After testing ITK-SNAP this afternoon, I have to say that I’m really impressed by the interpolation of labels, and even more by the snake detection!

Looking forward to more inputs,



Can you also try 3dmod (part of IMOD), and TrakEM2? Happy to help you get started with 3dmod, I’ve been training a lot of people on it while on lockdown! Cheers, Matt


If I remember correctly, TrakEM2 does have that functionality :wink:


Yes, you are right! And TrakEM2 should probably be added to the table. However, I have to say that while it’s certainly great software when you have complex tasks, including e.g. stitching tiles or registration, its power (complex annotation structure, canvas etc.) is a bit of an obstacle when you just need to annotate (or have someone annotate) “simple” structures like nuclei in a regular image stack.



Have you tried this:

It works as an add-on or patch to MITK; needs an NVIDIA card and CUDA…
Works well for circular cells, but needs a fair bit of parameter tuning…