In order to evaluate the quality of an automated segmentation, I am looking for a way to generate a manual segmentation by painting the ground-truth mask on an image (similarly to Ilastik, Weka… but without training a classifier).
With Ilastik, for instance, it is not possible to export the mask without training the classifier.
In Fiji, the Paintbrush tool can do the job, but then I have no idea how to recover the painting since it seems to be burned into the image. Is it possible to use some overlay?
With KNIME there is the Interactive Annotator, but it is currently limited to ROI annotation.
Currently the best solution I have found is to use an image editor like Paint.NET (see screencast below). It is quite convenient as we can add a layer for each class, adjust transparency, etc. After merging the layers and saving as a TIFF, the file can be opened in Fiji as RGB and converted to grayscale so that each class gets its own pixel/label value.
The only drawback is that if the painting of different classes overlaps for some pixels, the result is a combination of the values (I am not sure it is a sum, since the image is RGB).
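One way to handle this after the fact is to map each known brush colour to a label value and flag everything else as unmapped, so overlapping (blended) pixels can be spotted and fixed. A minimal sketch with NumPy, assuming the painted mask has been loaded as an RGB array (the colours and the sentinel value 255 here are just examples, not anything prescribed by Paint.NET or Fiji):

```python
import numpy as np

# Toy stand-in for a painted mask loaded as a (H, W, 3) uint8 RGB array,
# e.g. via skimage.io.imread on the exported TIFF.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[:2, :2] = (255, 0, 0)   # class 1 painted red
rgb[2:, 2:] = (0, 255, 0)   # class 2 painted green
rgb[0, 3] = (255, 255, 0)   # a blended pixel where two strokes overlapped

# Only the colours we actually painted with; everything else is suspect.
color_to_label = {
    (0, 0, 0): 0,      # background
    (255, 0, 0): 1,    # class 1
    (0, 255, 0): 2,    # class 2
}

# Start from a sentinel value so unmapped (overlap) pixels stand out.
labels = np.full(rgb.shape[:2], 255, dtype=np.uint8)
for color, label in color_to_label.items():
    labels[np.all(rgb == color, axis=-1)] = label

# Coordinates of pixels whose colour matched no class.
unmapped = np.argwhere(labels == 255)
print("unmapped (overlap) pixels:", unmapped.tolist())
```

The resulting `labels` array is a single-channel label image that can be saved and compared directly against the automated segmentation; the `unmapped` list tells you exactly which pixels need repainting.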
I am open to other suggestions, of course!
[Screencast: painting a mask in Paint.NET]