Quantifying owl plumage - color, spot quantity and size

We will be photographing subjects (barn owls) with a DSLR. The white balance will be calibrated with a white balance card at each nest box rather than corrected in batch post-processing. Each photo will capture an individual’s ventral surface and will then be cropped to a 60 x 40 mm rectangle at the same spot on the breast (sample below) for measuring average plumage color (RGB values), the number of spots per cropped breast image, and the average vertical diameter of the breast spots in the cropped image.

Here is what we would like to do through Fiji:

  1. Is it possible to automate the cropping process by creating a macro that will crop all the images to a 60 x 40 mm rectangle? → We would presumably need to include some sort of scale bar (e.g. 40 mm) in a color that won’t otherwise appear in the photo (e.g. blue)?

  2. Is it better to:
    2a) write a macro that generates the RGB values for 5-10 random areas, excluding the black spots, and average those values to obtain one RGB value per individual or at the very least provide some output with the 5-10 RGB values per photograph that we can then average in another program; or
    2b) develop a macro that will output an RGB value for the entire non-spotted area of the cropped breast image?
    *If we go route 2a, do we manually move our random ROIs if they contain a spot, simply by dragging the ROI?

  3. Would a macro be able to identify all the black spots in the cropped image, measure their vertical diameter, and average all the spot measurements to get an average spot diameter measurement for each individual?
    3a) With this process, is it better to first create a binary image (spots & no spots) and then measure the vertical diameter of every spot?
    3b) Will we have to manually remove spots that fall on the edge of the 60 x 40 mm cropped photo? If so, how do we do that?

  4. With spot quantity:
    4a) Should we create a binary image (spots & no spots) or just leave as is and count the number of spots?
    4b) Will we have to manually remove spots that fall on the edge of the cropped image so they are not counted or is there a way to automate that in the code? If we do have to do that manually, how would we do that?

We are just starting out with Fiji and would really appreciate any help. Thank you so much in advance!

A few quick answers, mostly incomplete.

  1. Cropping is fairly easy to automate, but where would you put the rectangle? You would also need to automate the placement of the top-left corner of the crop region (makeRectangle takes the x, y of the top-left corner).
    Scale bars can be added via Analyze->Tools->Scale Bar…, but how confident are you that the camera is the exact same distance from the bird in every case?

  2. 2b seems easier as long as the imaging is consistent. I would say that if 2b turns out to be difficult, then the lighting is likely not consistent enough to validate your other measurements either, if you select random areas.

  3. Macroing the detection of black spots should be fairly standard. You could get the “vertical diameter” using the Bounding Box option in Analyze->Set Measurements… (the Height column of the results table).
    3b. If you are automatically detecting the spots, you can choose whether to exclude particles touching the edge by ticking the “Exclude on edges” checkbox in the Analyze Particles… dialog.

  4. If you use Analyze Particles… for automated counting, it requires a binary (thresholded) image.
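To make points 1, 3 and 4 concrete, here is a rough macro sketch of that pipeline: fixed crop, binarize, then Analyze Particles… with edge exclusion and the bounding-box height as vertical diameter. It is untested on your images; the crop coordinates, threshold method, and size filter are placeholders you would need to tune.

```
// --- Sketch only: coordinates, threshold and size filter are placeholders ---
// Assumes the breast sits in the same place in every photo.
makeRectangle(100, 150, 600, 400);   // pixel equivalent of the 60 x 40 mm region
run("Crop");

run("8-bit");                        // grayscale for thresholding
setAutoThreshold("Default");         // threshold method is a guess; try others
run("Convert to Mask");              // binary image: spots vs. no spots

// Bounding Box gives a Height column = vertical diameter of each spot;
// "exclude" drops spots touching the crop edge automatically (your Q3b/4b).
run("Set Measurements...", "area bounding redirect=None decimal=3");
run("Analyze Particles...", "size=20-Infinity exclude clear add");

// Spot count and mean vertical diameter, straight from the Results table
n = nResults;
sum = 0;
for (i = 0; i < n; i++)
    sum += getResult("Height", i);
print("Spots: " + n + ", mean vertical diameter (px): " + sum / n);
```

Heights come out in pixels; you would convert to mm via Analyze->Set Scale… using your in-photo scale bar.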

Thank you for your response! We appreciate your help. Can you offer any guidance on accomplishing #2b - developing a macro that will output an RGB value for the entire non-spotted area of the cropped breast image?

The most I can recommend is a bit of Gaussian blur to smooth the feather lines, combined with the spot-detection step.
So you detect the spots and quantify those. Then you can turn everything that is not a spot into an ROI that can be measured in turn.
There is a Combine (OR) function in the ROI Manager that can be used to merge the ROIs after you have measured the individual spots, followed by Edit->Selection->Make Inverse. The inverse of all of the spots is… the area you wanted to measure.
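In macro form, that blur -> detect -> combine -> invert -> measure sequence might look like the sketch below. Again, the blur sigma, threshold, and size filter are guesses, and it assumes the cropped RGB breast image is the active image:

```
// --- Sketch only: sigma, threshold and size filter are guesses ---
orig = getTitle();
run("Duplicate...", "title=mask");   // work on a copy, keep the RGB original
run("8-bit");
run("Gaussian Blur...", "sigma=2");  // smooth the feather lines
setAutoThreshold("Default");
run("Convert to Mask");
run("Analyze Particles...", "size=20-Infinity clear add");  // spot ROIs -> ROI Manager
close();                             // done with the mask

// ROIs transfer by position since both images have the same dimensions
selectWindow(orig);
roiManager("Combine");               // OR all spot ROIs into one selection
run("Make Inverse");                 // Edit->Selection->Make Inverse: non-spot area

// Mean of each channel over the non-spot area
run("Set Measurements...", "mean redirect=None decimal=3");
run("RGB Stack");                    // one slice each for R, G, B
for (s = 1; s <= 3; s++) {
    setSlice(s);
    run("Measure");                  // Mean column = mean R, G, B in turn
}
```

The three Measure calls give you one mean per channel, i.e. the single RGB value per individual you asked for in 2b.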

Actually getting the spots is likely to be tricky, though, since the spots seem to be buried in the feathers sometimes. You may have to accept missing some of the lighter spots to avoid picking up areas like the disturbed feathers on the left (which are dark).

I tried a few things and was not able to come up with something clever. I haven’t worked with this type of image much, though, so it is likely someone else will have better ideas. The darker area on the left always turns positive before the lighter spots in the top left are picked up. There seems to be an overall shading gradient from the bottom left to the top right, and the shine in the upper left is problematic.

I also tried color deconvolution (in QuPath), since the image looked quite a bit like DAB staining from histology. It did a bit better, but still ran into problems in the corners due to uneven lighting.

Randomly coming back to this one more time: a pixel classifier (e.g. Trainable Weka Segmentation in Fiji) might do a better job of segmenting than a simple threshold. The pixel classifier in QuPath was able to get this with minimal training, though it would need training across many examples to be robust.