I used SLIC super-pixels to slice up my images, then applied a pixel classifier to “trim off” the white space from the background. This left me with a data frame/spreadsheet of super-pixel data. My image sizes varied, so each export contained anywhere from 12,000 to 160,000 super-pixels. I then shuffled the rows (keeping each super-pixel's data attached to its class label) and sampled exactly 250 super-pixels from each of my 50 images. All of the steps after exporting the data frame/spreadsheet were done in Python.
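For reference, the shuffle-and-sample step can be sketched in pandas roughly like this. This is a minimal sketch with made-up data; the column names (`Image`, `Class`, the centroid columns) are assumptions, not necessarily the exact headers QuPath exports:

```python
import numpy as np
import pandas as pd

# Hypothetical example data standing in for the exported
# super-pixel measurements; column names are assumptions.
rng = np.random.default_rng(seed=0)
df = pd.DataFrame({
    "Image": np.repeat([f"img_{i}" for i in range(50)], 1000),
    "Class": rng.choice(["Tumor", "Stroma"], size=50_000),
    "Centroid X": rng.uniform(0, 5000, size=50_000),
    "Centroid Y": rng.uniform(0, 5000, size=50_000),
})

# Shuffle whole rows: each super-pixel's measurements and its
# class label travel together, so the pairing is preserved.
shuffled = df.sample(frac=1, random_state=42)

# Keep exactly 250 super-pixels per image.
subset = shuffled.groupby("Image", group_keys=False).head(250)
```

Sampling whole rows (rather than shuffling columns independently) is what keeps the class labels attached.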
To validate my findings, I wanted to go back into QuPath and look at the 250 SLIC super-pixels taken from each image, matching them by their Centroid X and Y values, but I was unable to do so without manually finding each one in the data frame. Scripting has been recommended to me, but I wanted to see whether anyone else has encountered a similar issue and, if so, what workaround you applied.
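On the Python side, one partial workaround I can sketch for the matching step is a nearest-neighbour lookup on the centroid coordinates, which tolerates any rounding introduced on export. This is only a sketch under assumptions (synthetic coordinates, hypothetical array names), not the QuPath-side solution, which would still need a Groovy script to highlight the objects:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical stand-in data: centroids of all super-pixels in
# one image, plus the 250 that were sampled from it.
rng = np.random.default_rng(1)
full_xy = rng.uniform(0, 5000, size=(12_000, 2))
sample_xy = full_xy[rng.choice(12_000, size=250, replace=False)]

# Build a KD-tree on the full centroid table, then query each
# sampled centroid for its nearest neighbour; a small distance
# tolerance absorbs floating-point/rounding differences.
tree = cKDTree(full_xy)
dist, idx = tree.query(sample_xy, k=1)
matched = idx[dist < 1.0]  # row indices into the full table
```

The matched indices could then be written out (e.g. as a list of centroid pairs) and used to locate the corresponding detections in QuPath, rather than searching the data frame by hand.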