Export a square region inside an image in QuPath

Hi,

My aim is to draw a square inside an image and create my annotations there. At the end I want to save not the whole image, but only the square with the annotations inside it. To do this I am using QuPath 0.2.0-m11 and running this script: https://raw.githubusercontent.com/mpicbg-csbd/stardist/master/extras/qupath_export_annotations.groovy
This script allows me to export the information from QuPath to StarDist.
With the script I am able to export the whole image, but I am not able (I do not know how) to export only the square I drew, with the annotations inside.
Can you help me?

If you remove line 26, or comment it out by prepending //, the script will process the currently selected annotation.

So if your square is selected, it should do what you want.
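
In case it helps, here is a minimal sketch of the same idea, independent of the linked script (QuPath 0.2.x Groovy; the output path is a placeholder):

```groovy
// Export the region covered by the currently selected annotation
import qupath.lib.regions.RegionRequest

def server = getCurrentServer()
def roi = getSelectedObject().getROI()   // assumes your square is selected

// The region request is built from the ROI's (axis-aligned) bounding box
def request = RegionRequest.createInstance(server.getPath(), 1.0, roi)
writeImageRegion(server, request, '/path/to/square.tif')  // placeholder path
```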

Best

2 Likes

This approach works if the square is not rotated, but in another set of images I need to rotate the square, annotate inside it, and then save the rotated square.
What is happening is that, instead of saving only the square, it is saving the whole image. Is there a way to solve this?

I don’t think you would want to rotate an image for export, from a deep learning perspective. The network uses the pixel values, which are effectively lots of points in an array that make up a rectangle. If you rotate that and try to save it as a rectangle again, all of the pixel values have to be interpolated in some way, which will significantly alter your data for that particular sample.

Another way to think about it: all of the outer lines of your object cross through different fractions of different pixels (if you zoom in). You can’t really make an image with 10%, then 20%, then 30% of a pixel along the outer edge. That is why most export processes use the bounding box (which is always a rectangle) rather than degrade the image by trying to rotate it. In the middle of the image you have to start blurring pixels together to get them to fit into rows and columns again.

The exception would be if you wanted to rotate by 90 degrees at a time, since that lines up with the pixel column and row orientation.
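
A quick way to see this concretely, as a sketch using ImageJ’s ImageProcessor (assuming ImageJ is on the classpath, e.g. in Fiji’s or QuPath’s script editor; the tiny array is just an illustration):

```groovy
import ij.process.FloatProcessor
import ij.process.ImageProcessor

// A tiny 4x4 "image" with 16 distinct pixel values
float[] pixels = (0..<16).collect { it as float } as float[]
def ip = new FloatProcessor(4, 4, pixels)

// A 90-degree rotation only re-indexes the array: every value survives exactly
def rot90 = ip.rotateRight()
println rot90.getf(0, 0)

// An arbitrary-angle rotation has to interpolate between neighbouring pixels,
// so the output contains blended values that never existed in the input
def rotated = ip.duplicate()
rotated.setInterpolationMethod(ImageProcessor.BILINEAR)
rotated.rotate(26.3)   // rotates in place about the centre
println rotated.getf(2, 2)
```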

Here is a great example from @haesleinhuepf of what happens to an image as you repeatedly rotate and resave it:

1 Like

I would argue that rotating an image once by 26.3 degrees and saving it for further analysis is absolutely ok. Check out the video to see how long it takes until the image is obviously screwed up :wink:

1 Like

My worry is that the pixel environment will change. Depending on the size of the training data set, all of these examples may “look different” to the deep learning algorithm – much like DL can learn which type of WSI scanner took the images when most of the positive samples come from the same hospital. I’m not especially knowledgeable about StarDist, though, and haven’t tried it myself. I suppose StarDist isn’t classifying the nuclei.

In QuPath there is a transformed image server you might be able to use to export a rotated image; @petebankhead had an example here.

I suspect it would need a bit of modification for this purpose.
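
Something like the following, though I’m going from memory of the 0.2.x API, so treat the class and method names as assumptions (and note it only handles 90-degree steps, as Pete explains below):

```groovy
import qupath.lib.images.servers.RotatedImageServer
import qupath.lib.images.servers.TransformedServerBuilder

def server = getCurrentServer()

// Wrap the current server in a rotated view (90-degree increments only)
def rotated = new TransformedServerBuilder(server)
        .rotate(RotatedImageServer.Rotation.ROTATE_90)
        .build()

writeImage(rotated, '/path/to/rotated.tif')   // placeholder path
```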

Re. @Research_Associate’s suggestion, I don’t recommend the transformed image server here – only increments of 90 degrees are fully supported; anything else is decidedly more experimental and subject to change (since it isn’t documented or used anywhere in QuPath itself).

@Mariya_Timotey_Mitev in general, I think there are three main ways to export the rotated region:

  1. Rotate the image before reading it in QuPath
  2. Rotate the image during export from QuPath
  3. Export the bounding box of the rotated rectangle from QuPath, and extract the part you want for rotation afterwards

Any existing export scripts that I have written (and I think Oli’s as well) don’t support rotating regions during export (option 2), and adapting a script to do that might be quite a substantial bit of work (not least because QuPath doesn’t have a ‘rotated rectangle’ ROI concept – once you rotate the rectangle, you have a polygon).

Therefore I’d personally recommend trying option 1 or option 3 first, since I think they are likely to be quite a lot easier.
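
For what it’s worth, a minimal sketch of the QuPath side of option 3 (0.2.x API; the output path is a placeholder). The rotated square is a polygon ROI, so this exports its axis-aligned bounding box; the rotation and cropping back to the square would then happen outside QuPath, e.g. in Fiji:

```groovy
import qupath.lib.regions.RegionRequest

def server = getCurrentServer()
def roi = getSelectedObject().getROI()   // the rotated square is a PolygonROI

// Axis-aligned bounding box of the polygon
int x = (int) roi.getBoundsX()
int y = (int) roi.getBoundsY()
int w = (int) roi.getBoundsWidth()
int h = (int) roi.getBoundsHeight()

def request = RegionRequest.createInstance(server.getPath(), 1.0, x, y, w, h)
writeImageRegion(server, request, '/path/to/bounding_box.tif')
```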

3 Likes

Pete’s response jogged my memory of my first thought when I read this, which was
@Mariya_Timotey_Mitev: why is there a need to rotate the square?

@oburri Do the exports for training all need to be the same size for some reason, or could @Mariya_Timotey_Mitev use a smaller square that is not rotated, which could avoid the whole export rotation problem?

The images need not be the same size, but they must be densely annotated. That means each and every cell in the (no matter what the heck you do) square region must be annotated if you want to detect it.

What is wrong with making a straight square, leaving either empty space at the top or other annotated cells at the bottom? I back @petebankhead and @Research_Associate: there is absolutely no need to rotate that square.

Interpolation caused by the rotation means that, unless you account for all rotations during network training, your training input will differ from the (unrotated) data you feed the model when applying it.

This is a case where non-expert assumptions about how machine learning works can cause issues. There is no ‘need to rotate the square’, and if there is, I have yet to hear the reason for it. Yes, you can rotate it, but why?

Best

Oli

3 Likes

Agree with @oburri (as always :slight_smile: ), just use the unrotated square, even if some portions of the crop do not contain any nuclei.

@Research_Associate: I would not expect the performance to drop if rotated images are suddenly used (and thus interpolation-related smoothing is introduced). This would happen anyway when adding data augmentation (rotation/elastic). In fact, I would be worried if the network picked up on subtle texture cues that would vanish when interpolating.

3 Likes

Ah, ok. It has been a while, and the last time I played around with a DL model (Keras, 2017?) I remember being locked into 90-degree rotations for regularization/augmentation. I wasn’t aware that non-90-degree rotations were common!

2 Likes

I guess non-90-degree rotations are not that common, as large portions of the rotated image would have to be filled with out-of-boundary pixels, and typical padding (e.g. mirror or zero) can introduce weird shapes. Elastic deformations, on the other hand, are pretty common and, being local enough, do not suffer as much from the same problem.

4 Likes

Thank you Pete,
yes, I managed to perform option 1:
rotating the image before using QuPath.
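
For reference, a rough sketch of that pre-rotation step using ImageJ (which Fiji and QuPath bundle); the paths and the angle are placeholders, and very large whole-slide images would need a different tool:

```groovy
import ij.IJ
import ij.process.ImageProcessor

def imp = IJ.openImage('/path/to/image.tif')
def ip = imp.getProcessor()

// Rotate the pixel data in place about the centre (canvas size is unchanged,
// so content rotated past the corners is lost)
ip.setInterpolationMethod(ImageProcessor.BILINEAR)
ip.rotate(26.3)

IJ.save(imp, '/path/to/image_rotated.tif')
```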

1 Like