We recently started using DeepLabCut for automated coding of tethered fruit fly behavior, recorded from the side using one camera.
Drosophila legs are transparent, so the "occluded" parts are still visible in the images. Frequently, two leg joints that are aligned in 3D overlap in the 2D images. Would you suggest labeling both, especially if we are using the skeleton function? In general, how useful is the skeleton function for estimating the position of occluded parts?
Thanks in advance, Ozlem
Labeling occlusions really depends on your specific scenario. I've found that to get good "guessing" I need about 10x more labeled images. If the occlusions are very brief (only a few frames), you're better off not labeling them: keep only high-confidence (i.e. non-occluded) results and interpolate across the gaps.
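To illustrate the "filter by confidence, then interpolate" idea, here's a minimal sketch in pandas. The coordinates and likelihood values are made up; in practice you'd load them from the .h5/.csv file that DeepLabCut's analyze_videos produces, and the 0.9 threshold and gap limit are assumptions you'd tune for your data:

```python
import numpy as np
import pandas as pd

# Hypothetical per-frame tracking output for a single bodypart:
# x/y coordinates plus the network's likelihood, as in DLC's output files.
df = pd.DataFrame({
    "x":          [10.0, 11.0, 50.0, 13.0, 14.0],
    "y":          [20.0, 21.0, 90.0, 23.0, 24.0],
    "likelihood": [0.99, 0.98, 0.10, 0.97, 0.99],  # frame 2 is an occlusion
})

# Mask out low-confidence detections (assumed threshold of 0.9).
low_conf = df["likelihood"] < 0.9
df.loc[low_conf, ["x", "y"]] = np.nan

# Linearly interpolate across the brief gaps; `limit` caps how many
# consecutive missing frames get filled, so long occlusions stay NaN.
df[["x", "y"]] = df[["x", "y"]].interpolate(method="linear", limit=3)

print(df)
```

The spurious jump at frame 2 (50, 90) is replaced by the interpolated point (12, 22), which is usually a better guess than the network's output during a brief occlusion.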
However, I’ve never dealt with transparent occlusions, so you may be able to get away with fewer labeled frames.
As far as the skeleton is concerned, you need to create a multi-animal project to take advantage of it. I have found it very helpful for occlusion guessing in the most complex scenarios, but training and analysis are slower and more complex.