Questions regarding using normal DLC to track multiple animals

This is a follow-up question from a previous topic; I thought I should start a new one so more people can see it.

I have switched to normal DLC and labeled 200 frames, where body parts are defined as "monkey1_nose", "monkey2_nose", …, "monkey1_tail base", …, as suggested above. I have 2 monkeys in the same cage in each video, and each monkey has 31 body parts, so I have 62 body parts in total.

Right now I have an issue with the performance of the model. When I was using maDLC (also with about 200 frames and 31 body parts), the model converged pretty fast (around 100k iterations) with a pretty good training error (around 3 px). With normal DLC, though, the loss won't reach a plateau even after 600k iterations, and the training error is pretty high (~6.7 px). I understand that normal DLC has a different setup compared to maDLC and may require more iterations for the network to converge.


  1. Is there a similar way to visually check the accuracy of the predictions in normal DLC, as with create_video_with_all_detections or extract_outlier_frames in maDLC?
    The images created by evaluate_network don't give me the general picture, because they are cropped into small pieces.
  2. In maDLC, the Euclidean distance statistics per bodypart (in pixels) are listed after running evaluate_network. Is there any way to do this with normal DLC, so I can find out which body parts are causing the errors?
  3. Should I reduce the number of body parts? I suspect that having too many body parts increases the difficulty for the network to make predictions.
  4. Any other suggestions on how to improve the performance in general, specifically on using normal DLC to track multiple animals? I haven't found much information online about this topic.

As for question 1, you can just create a labeled video after analyzing it. You may need to set pcutoff very low so that it also plots labels it's unsure of.
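Concretely (my own sketch, not from the thread): pcutoff lives in the project's config.yaml, so you can lower it before making the video. The paths below are placeholders, and the "config" here is a tiny stand-in file so the snippet is self-contained; in a real project you would edit your existing config.yaml.

```python
import os
import tempfile

# Stand-in for a DLC project's config.yaml (a real one has many more keys);
# this path is a placeholder for your actual project config.
config_path = os.path.join(tempfile.mkdtemp(), "config.yaml")
with open(config_path, "w") as f:
    f.write("pcutoff: 0.6\n")

# Lower pcutoff so that create_labeled_video also draws points the
# network is unsure about.
with open(config_path) as f:
    lines = f.readlines()
with open(config_path, "w") as f:
    for line in lines:
        if line.startswith("pcutoff:"):
            line = "pcutoff: 0.1\n"
        f.write(line)

# Then, with DeepLabCut installed and a real project:
# import deeplabcut
# deeplabcut.create_labeled_video(config_path, ["/path/to/video.mp4"])
```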

Your best bet to move forward is to analyze a video, extract outliers ('jump' probably works best), and refine labels. Set the number of frames to pick (numframes2pick in config.yaml) to something fairly low (10-20) before extracting outliers. It can be surprising how adding just a few well-selected outlier frames can quickly improve tracking.
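That loop, sketched as DLC calls (paths are placeholders, and the calls only run if DeepLabCut is importable, since this is just an outline of the workflow rather than a runnable project):

```python
# Outline of the refine loop: analyze -> extract outliers -> refine -> merge.
# Paths are placeholders; deeplabcut calls are skipped if it isn't installed.
steps = [
    "analyze_videos",
    "extract_outlier_frames",  # outlieralgorithm='jump' flags sudden jumps
    "refine_labels",
    "merge_datasets",          # then retrain the network
]

try:
    import deeplabcut
except ImportError:
    deeplabcut = None

config = "/path/to/config.yaml"   # placeholder
videos = ["/path/to/video.mp4"]   # placeholder

if deeplabcut is not None:
    deeplabcut.analyze_videos(config, videos)
    deeplabcut.extract_outlier_frames(config, videos, outlieralgorithm="jump")
    deeplabcut.refine_labels(config)   # opens the refinement GUI
    deeplabcut.merge_datasets(config)

print(steps)
```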

As far as the number of labels is concerned, unlike in maDLC, the skeleton is not used for tracking in single-animal DLC, as far as I know. There is no need to over-label or over-connect, as in maDLC. You might try sticking to just the labels you need for analysis.
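On question 2, I don't know of a per-bodypart error report built into single-animal DLC's evaluate_network, but the computation itself is simple. Here's a minimal sketch with toy coordinate arrays (in practice you would load the ground-truth CollectedData h5 file and the corresponding prediction file with pandas, and feed in their (x, y) columns):

```python
# Sketch: mean Euclidean error per bodypart, given ground-truth and
# predicted coordinates. All numbers below are toy values, not real data.
import numpy as np

bodyparts = ["monkey1_nose", "monkey2_nose"]  # example subset of the 62 labels

# shape: (n_frames, n_bodyparts, 2) for (x, y)
truth = np.array([[[10.0, 10.0], [50.0, 50.0]],
                  [[12.0, 10.0], [52.0, 50.0]]])
pred  = np.array([[[13.0, 14.0], [56.0, 58.0]],
                  [[12.0, 10.0], [55.0, 54.0]]])

per_frame_err = np.linalg.norm(pred - truth, axis=2)  # (n_frames, n_bodyparts)
mean_err = per_frame_err.mean(axis=0)                 # pixels, per bodypart
for bp, e in zip(bodyparts, mean_err):
    print(f"{bp}: {e:.2f} px")
```

Sorting mean_err in descending order immediately shows which body parts dominate an overall average like your 6.7 px.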

Hi Brandon,

Thank you for your response! I'll go ahead and do some outlier label refinement, and if that doesn't improve things I'll try reducing the labels and the skeleton.

With regard to extracting outliers, when I run extract_outlier_frames (after running analyze_videos), it gives me a "No unfiltered data file found" error. I know that in maDLC I can run convert_detections2tracklets, convert_raw_tracks_to_h5, and extract_outlier_frames in succession, but I can't find how to do these steps in normal DLC.


If you analyzed the video and the data is in the folder, the most obvious thing that comes to mind is that your path to the video is wrong; maybe you're choosing the labeled video instead of the raw one?

I found the problem. I directly copied the function call from my maDLC code, which includes a track_method parameter that shouldn't be there for normal DLC. It's running now after I removed it. Thanks!

After I extracted some of the outlier frames, I noticed that most of the labels of the two animals are mixed together (labels for both animals appear on a single animal). It's impossible to do refinement because there are too many mislabels. It seems that normal DLC is not good at distinguishing two animals that have almost the same appearance (except for their collar colors). I guess I'll have to switch back to maDLC in this case and wait for the next release. Has anyone done this successfully with normal DLC before?

Assuming the collars are relatively small compared to the animal, I think you would need a very large number of training frames to have much hope of getting single-animal DLC to detect the differences. That system works well if the animals are obviously different (e.g., completely different colors). But realize that you are trying to teach it that "this" is a head (for example), and that it's head-1 because it's fairly close to that orange collar down there, rather than the other-colored collar. The body parts farther away from the collar would be even more difficult for it to identify correctly.

I think your best bet is to work with maDLC, and label individuals consistently (as if identity matters). You might eventually be able to get it to track individuals well even before the next release, if you can't wait.

Thank you for your suggestion. It’s really helpful!