Plotting RMSE and # iter

Hello All,

I would like to know how to get data on how the RMSE changes with the number of training iterations, so that I can make a figure like Figure 2b&c in the Mathis et al. manuscript.

Thanks so much!

You can get the training and test errors from network evaluation, i.e. the deeplabcut.evaluate_network call. I think the simplest way to compute this for various numbers of training iterations is to point your config file to the different training snapshots (e.g. by changing ‘snapshotindex’) you saved along the way. (I would probably get bored of doing that and alter evaluate_network to take snapshotindex as an input and return train and test error directly, so I could loop…)
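A minimal sketch of that loop, without modifying evaluate_network itself: rewrite the `snapshotindex` entry in config.yaml between evaluations. The config path and the number of snapshots below are placeholders for your own project, and the evaluate_network call assumes DeepLabCut is installed.

```python
import re
from pathlib import Path

def set_snapshot_index(config_path, index):
    """Point the project config at a specific training snapshot by
    rewriting its 'snapshotindex' entry (simple text substitution)."""
    cfg = Path(config_path)
    text = cfg.read_text()
    new_text = re.sub(r"(?m)^snapshotindex:.*$",
                      f"snapshotindex: {index}", text)
    cfg.write_text(new_text)

if __name__ == "__main__":
    import deeplabcut  # requires a working DeepLabCut install

    config = "path/to/config.yaml"  # placeholder: your project's config
    for idx in range(5):            # placeholder: however many snapshots you saved
        set_snapshot_index(config, idx)
        deeplabcut.evaluate_network(config, plotting=False)
```

Each evaluation writes its results into the project's evaluation-results folder, so afterwards you can collect the train/test errors per snapshot from there.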



Appreciate your reply and good suggestion. As you suggested, I found this description: “snapshotindex: This specifies which checkpoint to use to evaluate the network. The default is −1. Use ‘all’ to evaluate all the checkpoints.”

I was wondering: if I set “snapshotindex: all”, does it evaluate all saved snapshots, so that we have information for every checkpoint?

Many thanks,


Yes, all snapshots are saved during training. Note that if you run the evaluation on all of them together, there will also be a .csv file that contains training-set iterations vs. RMSE, so it’s quite easy to plot. Otherwise you can load all the .h5 files to make a plot / analyze the data…
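A short sketch of reading that results CSV and pulling out iterations vs. error, using only the standard library. The column names below (`iter_col`, `err_col`) are assumptions, so check them against the header of the CSV in your own evaluation-results folder:

```python
import csv

def rmse_vs_iterations(csv_path,
                       iter_col="Training iterations:",  # assumed column name
                       err_col="Test error(px)"):         # assumed column name
    """Read an evaluation-results CSV and return (iterations, errors)
    as two lists sorted by iteration count."""
    rows = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            rows.append((int(row[iter_col]), float(row[err_col])))
    rows.sort()
    iters = [r[0] for r in rows]
    errs = [r[1] for r in rows]
    return iters, errs

# Plotting (requires matplotlib):
# import matplotlib.pyplot as plt
# iters, errs = rmse_vs_iterations("path/to/evaluation-results.csv")
# plt.plot(iters, errs, marker="o")
# plt.xlabel("Training iterations")
# plt.ylabel("Test RMSE (px)")
# plt.show()
```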

Many thanks! That will be very helpful!

Hi all,
I have a follow-up question. I would also like to plot the RMSE versus the iteration number. I set snapshotindex to “all” in the config.yaml file and evaluated my network and ran the cross-validation (maDLC). However, I can’t find any .csv file with the RMSE for the different iterations. Where is it supposed to be?

Sorry for the delayed answer… that feature was only added in later versions of 2.2, but now you should find it!

Do you mean in version 2.2b8 for multi-animal projects? Because then I still don’t know where I could get these RMSE values. As I said here:

These evaluation metrics are shown in the terminal when I evaluate the different iterations, but they are not written to the single results .csv (covering only one iteration) that I can find in the evaluation-results folder. Similarly, only one .h5 file is created after cross-validation (which I assume corresponds to the best iteration).