Cross-Validation

Hey,

I am currently training on a subset of the MPII dataset with golf images only. The train error of around 1 px looks nice, but the test error of 15-20 px seems very high.


I think the 95/5 dataset split causes the high test error, as the pictures are diverse in terms of camera perspective and background. Therefore I would like to use cross-validation. I have seen the leave-one-folder-out option via trainIndexes, testIndexes = deeplabcut.mergeandsplit(config, trainindex=0, uniform=False). So should I train the model with the first folder left out, then train it again with the second folder left out, and so on until every folder has been left out once?
Training the models this way for 35 different videos/folders seems like overkill. Is there a better way to perform cross-validation?
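
To spell out what I mean, here is roughly the loop I have in mind. This is just a sketch: the config path and the number of folders are placeholders for my project, and the trainIndexes/testIndexes keyword names are taken from the docs I found, so they may differ between DeepLabCut versions (newer releases seem to call them trainIndices/testIndices).

```python
import deeplabcut

config = "/path/to/golf-project/config.yaml"  # placeholder path to my project
n_folders = 35  # one fold per video/folder in my dataset

for fold in range(n_folders):
    # Leave out folder `fold` as the test set (uniform=False splits by folder).
    trainIndexes, testIndexes = deeplabcut.mergeandsplit(
        config, trainindex=fold, uniform=False
    )
    # Create a separate shuffle that uses exactly this train/test split.
    deeplabcut.create_training_dataset(
        config,
        Shuffles=[fold + 1],
        trainIndexes=trainIndexes,
        testIndexes=testIndexes,
    )
    # Train and evaluate the network for this fold.
    deeplabcut.train_network(config, shuffle=fold + 1)
    deeplabcut.evaluate_network(config, Shuffles=[fold + 1])
```

That would mean 35 full training runs, which is exactly what feels like overkill to me.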

Thanks! Sebastian
