Labeling GUI very slow with 30+ body parts

Hi - I have an issue with the labeling GUI becoming very slow/unresponsive in projects that have a relatively large number of body parts (e.g. 36 per animal in the current project) and multiple animals. The issue is not the labeling itself but using the GUI and changing parameters in it: adjusting the marker size, for instance, can take several minutes, and advancing to the next frame up to 40 seconds. When working on the exact same project but labeling only e.g. 12 body parts per animal, this issue does not appear (however, we need many body parts on these animals). I have encountered this problem with different frame sizes and on both Ubuntu 18.04 and macOS. If anyone has a workaround I’d love to hear it (e.g. would not plotting the color/scale bar/legend help?).

I’ve had the same problem. It’s just a plotting issue with matplotlib cycling through all of the body parts and individuals.

If you are at all comfortable working with Python scripts, I’ve written a few utilities that might help. They are designed to take data digitized in Argus (a Python GUI) and convert it to DeepLabCut formats. Argus works with multiple cameras, so the scripts are a bit of overkill for your purposes, but they will work fine for one camera.

For example, dlt2dlclabels.py finds the frames you extracted for a given video in DeepLabCut and pulls in the labels from the data points digitized in Argus. I recently updated (and tested) that conversion, and it works well. You could digitize just the frames that DeepLabCut extracted, to avoid having to digitize the full video in Argus. With multiple animals, digitize them in separate Argus files. Make sure your track names are the same as your DeepLabCut body parts (no support yet for “unique body parts”). Check the docstrings for details.
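If it helps to see the shape of the data involved, here is a stripped-down sketch of what that kind of conversion boils down to. This is not dlt2dlclabels.py itself: the paths, the Argus column names, and the single-animal column layout are assumptions (a multi-animal project adds an “individuals” level to the columns, and newer DeepLabCut versions may want the index split into path components), so treat it as an illustration only.

```python
# Illustrative sketch only (not the actual dlt2dlclabels.py): map Argus pixel
# coordinates onto the frames DeepLabCut extracted and write a CollectedData file.
import os
import glob
import pandas as pd

project = "/path/to/dlc-project"           # hypothetical paths - adjust to your project
video = "video1"
scorer = "me"                              # must match the scorer in config.yaml
bodyparts = ["head", "thorax", "abdomen"]  # must match the body parts in config.yaml

# Argus pixel-coordinate output: one row per video frame; column names like
# "<track>_cam1_x" / "<track>_cam1_y" are an assumption - check your file's header.
argus = pd.read_csv("video1-xypts.csv")

frame_dir = os.path.join(project, "labeled-data", video)
frames = sorted(glob.glob(os.path.join(frame_dir, "img*.png")))

columns = pd.MultiIndex.from_product(
    [[scorer], bodyparts, ["x", "y"]], names=["scorer", "bodyparts", "coords"]
)
index = [os.path.join("labeled-data", video, os.path.basename(f)) for f in frames]
df = pd.DataFrame(index=index, columns=columns, dtype=float)

for f, row in zip(frames, index):
    # frame number is encoded in the file name, e.g. img0042.png -> 42
    fnum = int(os.path.splitext(os.path.basename(f))[0].replace("img", ""))
    for bp in bodyparts:
        df.loc[row, (scorer, bp, "x")] = argus.loc[fnum, f"{bp}_cam1_x"]
        df.loc[row, (scorer, bp, "y")] = argus.loc[fnum, f"{bp}_cam1_y"]

out = os.path.join(frame_dir, f"CollectedData_{scorer}")
df.to_hdf(out + ".h5", key="df_with_missing", mode="w")
df.to_csv(out + ".csv")
```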

And if you have any problems/questions on Argus, I’m one of the authors and would be happy to help.

-Brandon

Thanks for your suggestion, Brandon. Just making sure I understand: you do the labeling in Argus, convert the labels to DeepLabCut format, and train in DeepLabCut?

Yes. We wrote Argus to do fun manual digitizing (and direct linear transformation 3D reconstruction), so manual labeling was the primary focus of that GUI’s development.

Basically, you can use Argus instead of the label frames GUI in your deeplabcut workflow.

Here’s what I would suggest:

  1. Create your deeplabcut project, add videos, extract frames (probably already done)
  2. Open the video in Argus and digitize just the frames you extracted. Save.
  3. Use dlt2dlclabels.py to import your digitized coordinates (in the file ending with xypts.csv) into the CollectedData files under labeled-data in your DeepLabCut project
  4. Train, evaluate, and analyze in DeepLabCut (see the sketch after this list for the corresponding calls)
  5. Repeat as necessary to add videos or frames from current videos
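And for reference, the DeepLabCut-side calls for steps 1 and 4 look roughly like this (standard deeplabcut API; the project name, experimenter, and video paths are placeholders, and exact arguments depend on your version and setup):

```python
import deeplabcut

# Step 1: create the (multi-animal) project and extract frames to label
config_path = deeplabcut.create_new_project(
    "insects", "felix", ["/videos/video1.mp4"], multianimal=True
)
deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")

# ... digitize the extracted frames in Argus, then convert with dlt2dlclabels.py ...

# Step 4: train, evaluate, analyze
deeplabcut.check_labels(config_path)
deeplabcut.create_multianimaltraining_dataset(config_path)
deeplabcut.train_network(config_path)
deeplabcut.evaluate_network(config_path)
deeplabcut.analyze_videos(config_path, ["/videos/video1.mp4"])
```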

The current wxPython-based GUI is indeed not optimized for >30 body parts, but may I ask how many animals are in a given frame? Also, you can adjust the marker size in the config.yaml before loading the GUI, as at least this part does not require waiting…
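For example, these are the relevant fields in the project’s config.yaml (the values here are just an illustration):

```yaml
# In the project's config.yaml, edit before launching the labeling GUI:
dotsize: 3        # marker size; smaller markers draw faster with many body parts
alphavalue: 0.7   # marker transparency
colormap: jet     # colormap cycled over body parts/individuals
```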

Thanks for your response, Alexander! We have up to 15 animals per frame (typically around 5, but in some cases it can reach 15). They’re insects, so we need quite a few body parts to capture all the legs (including joints). Do you think we’ll run into problems tracking over 30 body parts on 15 animals?

I don’t think that there is a problem as such. Totally possible.

But perhaps it would be easier (labeling-wise and speed-wise), to track the insects first (using DLC or something else) and then perform pose estimation on the centered images?

Thanks for the suggestion. That’s how I was doing it before multi-animal DLC came out :slight_smile: I have some custom code for that, but it wasn’t very user-friendly…

I think ‘dynamic cropping’ in analyze_videos takes care of part of this - is that correct? If there are utility functions available that would make it easy to 1) track using DeepLabCut and store centered crops, and 2) convert coordinates back to the original frame size, I would love to know.
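For context, the call I have in mind is something like this (paths are placeholders; as far as I understand it, the tuple is (state, detection threshold, margin in pixels)):

```python
import deeplabcut

deeplabcut.analyze_videos(
    "/path/to/project/config.yaml",
    ["/videos/video1.mp4"],
    dynamic=(True, 0.5, 10),  # (state, detection threshold, margin in pixels)
)
```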

Adding a small (yet maybe important) detail about our data: we do not have the same number of animals in all frames; animals can fly/walk into the field of view and leave again. I have code to compute centroids for all individuals (per frame), but I’m not sure what the most efficient way of feeding that into DeepLabCut would be.
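In case it clarifies what I mean, here is a minimal sketch of the crop-around-a-centroid idea (the names and window size are made up, and near the image border the crop is clipped, so the animal is no longer perfectly centered there):

```python
import numpy as np

def crop_around_centroid(frame, centroid, size=200):
    """Cut a size x size window around an (x, y) centroid; also return its top-left offset."""
    h, w = frame.shape[:2]
    cx, cy = int(round(centroid[0])), int(round(centroid[1]))
    x0 = int(np.clip(cx - size // 2, 0, max(w - size, 0)))
    y0 = int(np.clip(cy - size // 2, 0, max(h - size, 0)))
    crop = frame[y0:y0 + size, x0:x0 + size]
    return crop, (x0, y0)

def to_original_coords(keypoints_xy, offset):
    """Shift (N, 2) keypoints predicted on a crop back into full-frame coordinates."""
    return np.asarray(keypoints_xy, dtype=float) + np.asarray(offset, dtype=float)
```

Pose estimation would run on the crop, and the offset gets added back to the predictions afterwards.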

Hi Felix!
Do you have your centroid code available somewhere (e.g. on Github)? I have a similar case, where using centroids would be a great supplement to DLC tracking, also with insects.

Also, we can add that we have a very-much-in-the-works plan for a new GUI that is much faster :wink: so stay tuned!


Sure, some of it is on my GitHub: https://github.com/felixhol/biteOscope. More details are in our paper: https://elifesciences.org/articles/56829, though we’ve made quite a few improvements since the paper came out. It would be great if it’s useful for your work.
Feel free to reach out if you’d like to discuss more details.

Awesome!! We’ve managed to get quite a lot of labeling done using 36 body parts (with some patience :wink: )