Retraining: start with the default init_weights or the latest snapshot?

I noticed that after I finished refine_labels and was ready to train, the init_weights in pose_cfg.yaml was still the default one. Would it be better to use the latest training results? Just like when I interrupt training and continue: set init_weights to the path of the latest snapshot.

Hey Sana,

If the refined frames are similar to the frames used for training, I would guess it is quicker and better to use the latest training results, since the weights of the network should already be good (from the previous iteration).

Yes, just modify init_weights: path_of_latest_snapshot/snapshot-x (where x is the latest iteration of your training).
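A small sketch of that edit (assumptions of mine: the function name and the simple regex rewrite are not DLC API, and TensorFlow snapshots are saved as `snapshot-<iter>.index`/`.meta`/`.data-*` files in the train folder):

```python
import re
from pathlib import Path

def point_to_latest_snapshot(pose_cfg_path, train_dir):
    """Rewrite init_weights in pose_cfg.yaml to the newest snapshot in train_dir."""
    train_dir = Path(train_dir)
    # init_weights should point at the checkpoint prefix "snapshot-<iter>"
    # (no file extension); find the highest iteration via the .index files.
    iters = sorted(
        int(p.stem.split("-")[1]) for p in train_dir.glob("snapshot-*.index")
    )
    if not iters:
        raise FileNotFoundError(f"no snapshots found in {train_dir}")
    prefix = train_dir / f"snapshot-{iters[-1]}"
    cfg = Path(pose_cfg_path)
    # Naive line rewrite; assumes a POSIX-style path with no regex-special chars.
    text = re.sub(r"(?m)^init_weights:.*$", f"init_weights: {prefix}",
                  cfg.read_text())
    cfg.write_text(text)
    return str(prefix)
```

After running it, the next call to `deeplabcut.train_network` for that shuffle should warm-start from the chosen snapshot.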


Okay, I got it, and thank you for your reply.
One more question: every time I resume training from a breakpoint, the loss is not the same as at the breakpoint where I stopped last time. It starts at a relatively large number, as if I were training from the first iteration. Is that normal?

Yes, that is the correct behavior. Also, if your original dataset had any errors, then re-train from the default weights, not your previously trained ones, as you don’t want to carry forward those errors! If not, then load as suggested above, and you can stop sooner, i.e. at 50K-100K iterations.

Hi, I am using Google Colab to run DLC. I changed my init_weights to ‘init_weights: /content/drive/My Drive/#203/dlc-models/iteration-0/#203Dec16-trainset95shuffle1/train/snapshot-67000’ in the train folder under dlc-models.
Why does the training process report still show ‘init_weights’: ‘/usr/local/lib/python3.6/dist-packages/deeplabcut/pose_estimation_tensorflow/models/pretrained/resnet_v1_50.ckpt’? Does it really retrain, or does it start from scratch?
And if it does retrain, why is the error at the first iteration far bigger than at my last iteration (67000)?
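One quick sanity check (a sketch of my own, not DLC API): read back what the pose_cfg.yaml on disk actually says just before launching training. If the printed value is still the resnet_v1_50.ckpt path, the edit never landed in the file the trainer reads (e.g. a different shuffle’s train folder was edited, or the file was not saved):

```python
from pathlib import Path

def read_init_weights(pose_cfg_path):
    """Return the init_weights value from a pose_cfg.yaml.

    Simple line scan (avoids needing PyYAML); assumes the key and its
    value sit together on one line, as DLC writes them.
    """
    for line in Path(pose_cfg_path).read_text().splitlines():
        if line.strip().startswith("init_weights:"):
            return line.split(":", 1)[1].strip().strip("'\"")
    return None
```

Calling `read_init_weights(".../train/pose_cfg.yaml")` right before `train_network` tells you whether your snapshot path or the pretrained ResNet path will be loaded.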

Can anyone answer this question, please?
I urgently need to solve this, since it is a recurring problem in my training.
Thank you for your help. @MWMathis @AlexanderMathis

Everything starts from zero when I edit ‘init_weights’ and retrain, and the run finally ends at a smaller iteration count than where the last one was interrupted. What is wrong? Can you please help me solve it? @MWMathis @AlexanderMathis

That’s not necessarily wrong. Firstly, the iteration counter starts counting from 0 again, and due to the sliding (running-average) loss calculation the loss can be higher initially (but it should quickly drop back to the same value).
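A toy illustration of that smoothing effect (the exponential moving average and the numbers here are made up for the sketch; this is not DLC’s exact averaging):

```python
def ema(values, alpha=0.05):
    """Exponentially smoothed losses, starting fresh at the first value."""
    out, avg = [], None
    for v in values:
        avg = v if avg is None else (1 - alpha) * avg + alpha * v
        out.append(avg)
    return out

# A noisy batch-loss stream hovering around 0.01, with a high first batch:
raw = [0.05, 0.012, 0.009, 0.011, 0.010, 0.008, 0.012, 0.009]
smoothed = ema(raw)
# Before a restart the display shows the smoothed tail (near 0.01);
# after a restart the running average resets, so the first reported
# values track the raw, noisier batch loss and look larger.
```

So a big loss right after resuming does not mean the snapshot weights were ignored; watch whether it drops back to the pre-restart level within a few report intervals.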
