Problems with create_training_dataset

Hi, I'm using Colab to train the network since I don't have a GPU. But when I run the create_training_dataset() function I get an error that I don't know how to solve.
Could somebody help me?

Did you add new bodyparts at some point during labelling? If I understand correctly, the error seems to indicate that some files have more columns than others (more bodyparts in the CollectedData files).
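To illustrate what that mismatch looks like: DeepLabCut stores labels with a (scorer, bodyparts, coords) column MultiIndex, and files labeled with different bodypart sets end up with different column counts. A minimal pandas sketch (the scorer name and bodyparts here are made up for illustration):

```python
import numpy as np
import pandas as pd

def make_labels(bodyparts, scorer="me", n_frames=2):
    # Build a DataFrame shaped like a CollectedData file:
    # columns are a (scorer, bodyparts, coords) MultiIndex.
    cols = pd.MultiIndex.from_product(
        [[scorer], bodyparts, ["x", "y"]],
        names=["scorer", "bodyparts", "coords"],
    )
    return pd.DataFrame(np.zeros((n_frames, len(cols))), columns=cols)

# One video labeled with three body parts, another with only two:
df_a = make_labels(["nose", "ear", "tail"])
df_b = make_labels(["nose", "tail"])

# Stacking them yields NaN columns wherever a body part is missing,
# which is the kind of shape mismatch that can break dataset creation.
merged = pd.concat([df_a, df_b])
print(merged.shape)              # (4, 6): columns are the union of both files
print(merged.isna().any().any()) # True: df_b has no "ear" columns
```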


I didn't add new bodyparts, but I deleted some. Maybe that's the problem. Do you know how to solve it without having to label all my data again?

@Denisse, a trick I have in mind is to make sure all body parts are defined in your config.yaml. Then, when re-opening a data file you know contains missing body parts back in the GUI, you should be warned that new keypoints were found; just save and you’ll get a data file of the correct format.
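The same alignment can also be sketched directly in pandas: reindex each CollectedData frame onto the bodyparts list from config.yaml, so deleted keypoints are dropped and any never-labeled ones become NaN (i.e. missing labels). The function and names below are hypothetical, not part of the DeepLabCut API:

```python
import numpy as np
import pandas as pd

def align_to_config(df, bodyparts, scorer):
    """Reindex a CollectedData-style frame to the bodyparts listed in
    config.yaml: extra body parts are dropped, missing ones become NaN."""
    cols = pd.MultiIndex.from_product(
        [[scorer], bodyparts, ["x", "y"]],
        names=["scorer", "bodyparts", "coords"],
    )
    return df.reindex(columns=cols)

# Example: a file still carrying an "ear" body part that was deleted
# from config.yaml (all names here are made up):
old_cols = pd.MultiIndex.from_product(
    [["me"], ["nose", "ear", "tail"], ["x", "y"]],
    names=["scorer", "bodyparts", "coords"],
)
df = pd.DataFrame(np.ones((2, 6)), columns=old_cols)

fixed = align_to_config(df, ["nose", "tail"], "me")
print(fixed.shape)  # (2, 4): the "ear" columns are gone
```

One could then loop over the labeled-data folders, load each CollectedData .h5 with pd.read_hdf, align it, and save it back; but re-saving through the GUI, as suggested above, avoids having to guess the exact HDF key DeepLabCut expects.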


@jeylau I have already done that and it's not working. The data seem to have no problem: when I run "create training dataset" in the GUI it works fine, and I can also start training.
I think the problem only occurs when running it on Colab, but I really don't know what's going on there. :frowning:

Weird that there would be no problem on your computer but an error on Colab. Are you sure the whole project got uploaded fine?

btw: shouldn't there be some output produced when the conversion happens? Or only if windows2linux=True was set in the parameters?

Does the same thing happen if you set windows2linux=True when calling create_training_dataset?
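For context, my understanding is that this option exists because label files created on Windows index frames with backslash paths, which Linux (and thus Colab) cannot match against the actual image files. A minimal sketch of the kind of conversion involved (the paths here are made up):

```python
# Label files written on Windows store image paths with backslashes;
# on Linux these must use forward slashes to resolve correctly.
win_paths = [
    r"labeled-data\video1\img001.png",
    r"labeled-data\video1\img002.png",
]
linux_paths = [p.replace("\\", "/") for p in win_paths]
print(linux_paths[0])  # labeled-data/video1/img001.png
```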

@den_t29, is there any reason you don’t create the dataset locally and only train in Colab? I suppose that’s the simplest option.