Queries on usage

Good morning everyone! Hope everyone is keeping safe.

I want to thank everyone who contributed to DLC; it’s a phenomenal tool!

I need some help with a few questions.

How do I decide which snapshot index is best for analyzing my videos? Do I run evaluations for multiple snapshot indices and choose the one with the lowest pixel error?

Do I follow the same process when deciding which neural network to use? Currently I pick the best model based on pixel error, since I find it hard to judge which model is better from the labeled images and hist.png plots; they look almost identical across models.

Also, how do I go about benchmarking neural networks on Google Colab? When I call deeplabcut.create_training_dataset(path_config_file, num_shuffles=3) to create different shuffles (where each shuffle will be used for a different network), it creates training datasets with different train/test indices, like here:

[Screenshot: create_training_dataset output showing different train/test indices for each shuffle]

How do I create 3 shuffles with the exact same train/test split, so that I can benchmark the networks fairly?

What should I consider when tuning the batch size in config.yaml and pose_cfg.yaml? Do I just increase it when data is being processed slowly? Are there downsides to increasing it? Should I also change the learning rate, since batch size affects it?

Thank you so much for taking the time out to reply!

Hi, here is a wiki with nearly all the answers, I think!
Check it out and let me know if you have further Q’s:
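For your shuffles question specifically, the wiki’s recipe is to freeze one train/test split with mergeandsplit and reuse it for every shuffle, so only the network differs. A minimal sketch of that pattern (assuming path_config_file is your project’s config; check the exact argument names in your DLC version):

import deeplabcut

# Freeze one train/test split, then reuse it for every shuffle so only
# the network architecture differs between shuffles.
trainIndexes, testIndexes = deeplabcut.mergeandsplit(path_config_file, trainindex=0, uniform=True)
deeplabcut.create_training_dataset(path_config_file, Shuffles=[1], trainIndexes=trainIndexes, testIndexes=testIndexes, net_type='resnet_50')
deeplabcut.create_training_dataset(path_config_file, Shuffles=[2], trainIndexes=trainIndexes, testIndexes=testIndexes, net_type='resnet_101')
deeplabcut.create_training_dataset(path_config_file, Shuffles=[3], trainIndexes=trainIndexes, testIndexes=testIndexes, net_type='resnet_152')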

And for your question about choosing the best snapshot:

Yes, you can set snapshotindex in the config.yaml file to ‘all’ to evaluate all snapshots. Then you get an h5/csv with all the values so you can easily compare!
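For example (a minimal sketch, assuming path_config_file points at your project’s config.yaml and you have already set snapshotindex to ‘all’ there):

import deeplabcut

# Evaluates every saved snapshot for the given shuffle and writes the
# combined train/test pixel errors to h5/csv files under
# evaluation-results, so you can pick the snapshot with the lowest
# test error.
deeplabcut.evaluate_network(path_config_file, Shuffles=[1], plotting=False)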

Hi @MWMathis! Thank you so much for the prompt reply. And happy belated 4th of July!

Thank you for pointing me in the right direction. It not only helped me answer the questions I asked but also clarified my understanding of other topics.

However, I am still confused about the following question:

What should I consider when tuning the batch size in config.yaml and pose_cfg.yaml? Do I just increase it when data is being processed slowly? Are there downsides to increasing it? Should I also change the learning rate, since batch size affects it?

Also, since I’ll be using DLC to label human body parts, is it better to use DLC 2.1.8 or 2.2 (given that these are single-animal videos)? And would you suggest the standard deeplabcut.create_training_dataset(config_path) and deeplabcut.train_network, or the model zoo’s deeplabcut.create_pretrained_project with full_human as the model? I am trying to track hand movements in some videos and the full body in others.

Thank you!!


What should I consider when tuning the batch size in config.yaml and pose_cfg.yaml? Do I just increase it when data is being processed slowly? Are there downsides to increasing it? Should I also change the learning rate, since batch size affects it?

You can make it as large as your GPU can handle, which typically isn’t very high. You can leave the learning rate “as is” up to a batch size of 16 or so, but you might want to switch from SGD to Adam; you can do so easily in the pose_cfg.yaml file in the train folder, details here (we will likely make this the default soon anyhow): https://github.com/DeepLabCut/DeepLabCut/wiki/Data-Augmentation

# use imgaug augmentation, Adam, and a lower learning-rate schedule
cfg_dlc['dataset_type'] = 'imgaug'
cfg_dlc['optimizer'] = 'adam'
cfg_dlc['multi_step'] = [[1e-4, 7500], [5.0e-5, 12000], [1e-5, 50000]]
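If you’d rather script the edit than open the file by hand, a sketch (assuming path_train_pose_cfg points at the shuffle’s train/pose_cfg.yaml; read_plainconfig/write_plainconfig are DLC’s yaml helpers):

from deeplabcut.utils import auxiliaryfunctions

# Load the shuffle's train/pose_cfg.yaml, apply the edits above, and save it back.
cfg_dlc = auxiliaryfunctions.read_plainconfig(path_train_pose_cfg)
cfg_dlc['dataset_type'] = 'imgaug'
cfg_dlc['optimizer'] = 'adam'
cfg_dlc['batch_size'] = 8  # raise until your GPU runs out of memory
cfg_dlc['multi_step'] = [[1e-4, 7500], [5.0e-5, 12000], [1e-5, 50000]]
auxiliaryfunctions.write_plainconfig(path_train_pose_cfg, cfg_dlc)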

Also, since I’ll be using DLC to label human body parts, is it better to use DLC 2.1.8 or 2.2 (given that these are single-animal videos)? And would you suggest the standard deeplabcut.create_training_dataset(config_path) and deeplabcut.train_network, or the model zoo’s deeplabcut.create_pretrained_project with full_human as the model? I am trying to track hand movements in some videos and the full body in others.

2.1.8 is always going to be faster, as it is less complex. And it’s quite good, so I’d say if it works for you, use that. Otherwise, you can always test out 2.2 with the same data; it’s an easy conversion to a 2.2 project (https://github.com/DeepLabCut/DeepLabCut/blob/5b53391d28c2d40fbd69542223b68ec6d7450b3d/docs/convert_maDLC.md)

You can also test out the full_human model right away and see how it goes for you, CLICK HERE
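A sketch of that model-zoo route (the project name, experimenter, and video path below are placeholders; create_pretrained_project downloads the full_human weights, builds a project around them, and can analyze the video right away with no labeling or training):

import deeplabcut

# Create a ready-to-use project from the model zoo's full_human model
# and immediately analyze the listed video.
deeplabcut.create_pretrained_project(
    'humantest',             # hypothetical project name
    'me',                    # hypothetical experimenter
    ['/path/to/video.mp4'],  # your video(s)
    model='full_human',
    analyzevideo=True,
    createlabeledvideo=True,
)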


Got it! Thank you so much