Export pixel classification in QuPath 0.2.0

We have been using QuPath for a while now and are even introducing it to pathologists. We are now using the pixel classifier, even though it is under constant improvement.

We need to integrate the results of this classification with code outside QuPath, and I was wondering what kind of “annotation” the resulting pixel classification is. Is it made of small squares, each with a center and a size? Is it an image?

My main question is how to access these annotations via code, so I can write a script to save them in a format I can use outside QuPath.

Would this script be it?

For those curious: we are using it to segment gastric tumor areas in TMAs, so we are happily using the de-arrayer.


Thanks in advance!

Internally, annotations are stored as their vertices, so you can (potentially at least…) export them however you like.

The script you linked to gives one way. There is at least one other post on my blog that describes exporting to a binary image, or you can send regions to ImageJ and get the annotations as ImageJ ROIs (which can be saved and reused… at least in ImageJ or Fiji).

In v0.2.0-m3 there is an early implementation of a new way to export, using GeoJSON. You may be able to get the result into Python using Shapely… but I haven’t got as far as trying that yet.
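As a rough sketch of that route: a GeoJSON feature exported from QuPath can be parsed in plain Python. The feature below is a made-up stand-in for a real export (the actual output may carry more metadata and more complex geometry), and I compute the area with the shoelace formula just to show the coordinates are usable; Shapely’s `shape()` would do the same job more robustly if it is installed.

```python
import json

# A minimal GeoJSON Feature, standing in for an annotation exported from QuPath.
# (A real export may include more metadata and multi-part geometries.)
feature_json = '''
{
  "type": "Feature",
  "geometry": {
    "type": "Polygon",
    "coordinates": [[[0, 0], [100, 0], [100, 50], [0, 50], [0, 0]]]
  },
  "properties": {"classification": "Tumor"}
}
'''

def shoelace_area(ring):
    """Area of a closed polygon ring via the shoelace formula."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

feature = json.loads(feature_json)
exterior = feature["geometry"]["coordinates"][0]  # first ring = outer boundary
print(shoelace_area(exterior))  # → 5000.0
```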

Thank you so much!

And how should I look for these vertices? Should I use getAnnotationObjects()?

The image in the Tweet shows a script you can use (only with v0.2.0-m3):

Otherwise there are some posts discussing other ways of accessing vertices, and the complications involved (simple shapes are fine, but complex ones with holes aren’t). I’m not entirely sure where they are though (here, GitHub or the old Google Groups forum) and not able to search currently…
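To illustrate why holes are the complication: in the GeoJSON convention (which Shapely also follows), a polygon with a hole is not a single vertex list but an exterior ring plus one or more interior rings, so a naive flat export of vertices loses the hole. A hypothetical sketch with made-up coordinates:

```python
# Hypothetical polygon with a hole, using the GeoJSON ring convention:
# the first ring is the exterior, subsequent rings are holes.
exterior = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
hole = [(4, 4), (6, 4), (6, 6), (4, 6), (4, 4)]

def ring_area(ring):
    # Shoelace formula for a single closed ring
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(ring, ring[1:]))) / 2.0

# Net area = exterior minus holes; a single flat vertex list cannot express this.
net_area = ring_area(exterior) - ring_area(hole)
print(net_area)  # → 96.0
```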


Yeah, I just saw the tweet. OK, I will try both GsonTools and exporting the image.

Thank you!


I remember now why I didn’t want to use m3: I cannot load my m2 projects for some reason. I guess I will export the pathologist annotations while in m2 and import them into a new project in m3.

Have you tried converting the project?


@lesolorzanov Pete’s script which Mike linked to worked like a charm for me. Even for larger projects with >400 annotated H&E whole slides.


Thanks! I am having trouble. As far as I can see, it is supposed to prompt me to select a project on my file system, but this is not happening. I removed these lines:

def project = qupath.getProject()
if (project == null) {
    print 'Please open a project! Entries will be imported from m2 to the current project.'
    return
}
Now it prompts me, but the project is from another computer. I have the data, so I changed the .qpproj file, but it can’t create the image server.

For instance:

INFO: Importing file:/home/leslie/Downloads/p_5B.svs p_5B.svs f2b33de8-6779-49b9-91b6-36b7e8934e21
INFO: java.lang.NullPointerException: Cannot invoke method addImage() on null object

I think I am going to take the annotations and just import them into m3… thankfully I don’t have hundreds of images.

Do you have a project open in m3 when you run the script?


Silly me, that was it. OK, so now I can see the annotations that existed before. But when I try to run the pixel classifier, it is not possible any more. I have attached a screenshot with a comparison.

Could you give more details on why it is not possible, and whether the m3 classifier has worked for you on other (non-imported) projects?

OK, so I just tried to create a project in m3, import images, run the de-arrayer, annotate, and select classes, and the pixel classifier is not working.

I show in this screenshot the two options for the pixel classifier in m2 and m3.

In m2 everything works nicely, while in m3 I can’t use anything or save and apply.

I don’t know if I am missing something


Are you having trouble finding the features? They are in the Edit menu. But the classifier itself works pretty much the same as it did in m2, I think.

Saving and reloading the pixel classifier isn’t in yet, as far as I know.

One of the times I was playing with the pixel classifier, I ran into a problem where Live Prediction showed nothing. I have not been able to reproduce it.


This is interesting. I will have to check it out further. But if and when I manage to get a live prediction, I am wondering how to export that classification, either as an image (no matter the size) or, preferably, as a polygon region. Do you export it in some way, or are you content just being able to look at the live prediction?

I know it is still experimental. It’s just that it works so well that we would be very happy to be able to use it further.

Converting the region to a polygon should be doable through the Create Objects button in the lower left corner. The downside is that you can’t then take that classifier and apply it to any other image, so there is no consistency. Once you can save and load pixel classifiers, I will be suuuper excited to explore all of the ways they can be used and abused. Until then, I have been sticking with the SLICs from 0.2.0-m2 and earlier for my area classifications.


Create Objects crashes my QuPath. I use only medium-sized pixels, and there will still be way too many objects, I believe, so it crashes. It seems to have something to do with having too many children and too many objects.

Quite possibly. Creating detection objects is better supported by the system, if you are looking for that sort of thing. Annotations are mostly intended for larger, simpler objects. There have been several instances of people running into problems by trying to do too much at too high a resolution with annotations rather than detections.

The size threshold is intended to prevent too many small objects from being incorrectly created during that step.

Sure, I understand why there is a size limit. I will try creating detections instead. I will also see if the superpixels are a possibility for my collaborators :smiley: this has been fantastic, thank you so much!
