Quick tutorial on generating annotation level areas from SLICs.
Image courtesy of La Jolla Institute of Immunology Microscopy Core shared image resource.
I'm not going to go into a whole lot of detail here, as the entire method will likely be obsolete once the Pixel Classifier is fully functional.
- Generate your primary area as an annotation, using Simple tissue detection, createSelectAllObject(), or something else, like drawing a square.
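As a scripted alternative to drawing the area by hand, QuPath's select-all command covers the whole image in one annotation:

```groovy
// Create a single annotation covering the entire image.
// The "true" argument also selects it, so subsequent plugin
// commands (SLICs, features) run within it.
createSelectAllObject(true);
```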
- Generate your SLICs. There isn’t any one right answer for this, as the right size, blur, and regularization settings will vary for every project. What you are looking for are puzzle pieces that conform to the edges of the objects you are looking for, and don’t cross over between areas very much. You might think that means “just make them very small,” but there is some tension there because you also need enough information within each SLIC to classify them, and too many SLICs along with too many measurements can make the whole process prohibitively slow or even impossible due to the size of the data file.
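The SLIC step can be scripted as well. This is a sketch only: the numeric values below are placeholders, and the right sigma, spacing, and regularization will differ for every project, so copy the exact line (with your tuned values) from the Workflow tab rather than reusing these.

```groovy
// Generate SLIC superpixels inside the selected annotation.
// All parameter values here are illustrative placeholders.
runPlugin('qupath.imagej.superpixels.SLICSuperpixelsPlugin',
    '{"sigmaMicrons": 5.0, "spacingMicrons": 25.0, "maxIterations": 10, "regularization": 0.25, "mergeSimilar": false}');
```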
- Adjust your color vectors to fit the colors of the things you are trying to annotate. This step should make the numbers for the features you are calculating a bit more relevant.
- Add feature measurements. I have frequently found that I needed more contextual information than could be contained in the small SLICs I needed for good area definition. I resort to using the Circular tile option in the dropdown menu for those situations, as with a large enough radius it will pull in pixel information from outside of the SLIC itself. This part of the process is both the most important and the most open ended, like image analysis in general. Find the right features, and you should be able to get away with a relatively small number of them. Throw dozens or hundreds of features at something, and you had better have a VERY LARGE TRAINING SET. An easy way to think about this: if I had two SLICs, one blue and one red, and I added every measurement I could think of to a classifier and trained it, it might end up deciding how to classify the two based on the circularity of the object, not anything relating to the colors I was actually interested in.
In this case I just used some standard measurements as shown.
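For reference, adding intensity features to each SLIC looks roughly like this when scripted. The parameter names and values here are illustrative, not a recommendation: the authoritative version is the line QuPath itself writes to your Workflow tab after you run the command from the menu.

```groovy
// Add intensity-based feature measurements to each SLIC detection.
// "Circular tiles" with a generous size pulls in pixel information
// from outside the SLIC itself, adding contextual information.
// All parameter values below are placeholders.
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin',
    '{"pixelSizeMicrons": 2.0, "region": "Circular tiles", "tileSizeMicrons": 25.0, ' +
    '"colorOD": true, "colorStain1": true, "colorStain2": true, ' +
    '"doMean": true, "doStdDev": true, "doMinMax": true, "doHaralick": false}');
```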
I then created a few more annotations that I assigned classes to in order to have some training data, as per the normal trained classifier steps for cell objects.
- Running the classifier on this small training set (which is a pretty terrible idea, since I have more measurements than I have training objects!), I can see around the green arrows that I need some more variation in my Eosin stained area. To address this, I generated a new training area that includes some of the misclassified areas, added some smoothed measurements (as described in the features link), and selected only some of the smoothed measurements using the Select button in the Create detection classifier dialog.
Once you are happy with your classifier, save that project and leave it. Or duplicate it if you want to keep using the same project (copy and paste the project folder), but always keep a copy of the project where you created the classifier, and keep it separate. This will let you go back to the classifier and fix it up if needed.
But now that I have a classifier, I need to test it on new data, or it doesn't really mean much. So I make a new area with new SLICs, using the exact same settings that I used to generate the originals: names and values have to match exactly, down to the decimal points. The easiest way is to use the Workflow tab on the left to create a script.
Looks decent, so I go to the bottom of the Create detection classifier dialog, and save that as a classifier file (.qpclassifier).
Now I can use runClassifier("Path to that classifier file including file name and extension") in any project to access that trained classifier.
Great, some classified SLICs. But I wanted to do some cell counts or something else within those areas. So we use the following:
Use the Tile classifications to annotations command to convert all of the detections into annotations. If you check both Delete existing child objects and Clear existing annotations, you will end up with one annotation per class and no remaining SLIC objects.
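The same conversion can be scripted. A sketch, assuming the parameter names QuPath writes to the Workflow tab (they may differ between versions, so copy your own Workflow line to be safe):

```groovy
// Convert classified SLIC tiles into annotations, merged per class.
// "deleteTiles" removes the SLIC detections; "clearAnnotations" removes
// prior annotations, leaving one annotation per class.
runPlugin('qupath.lib.plugins.objects.TileClassificationsToAnnotationsPlugin',
    '{"pathClass": "All classes", "deleteTiles": true, "clearAnnotations": true, "splitAnnotations": false}');
```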
This command works in most versions except 0.2.0m4.
Final point. There is frequently a lot of inter-sample/slide/image variation, and training on one slide frequently won't work very well. You can repeat the process described here on multiple images and then build a super-classifier from all of them by loading the training data from the whole project:
This will load every detection object within a classified annotation, which is another reason to have a separate project for creating the classifier. Just in case you decide in one image to generate some cells as a test…