DeepLabCut uses GPU intermittently

Hi, we have Red Hat 7.4 installed on our cluster, and the GPU nodes have environment modules for different versions of CUDA, including CUDA 10. We have the CUDA 10 module loaded for DeepLabCut, which we installed via Anaconda.
When a user runs DeepLabCut 2.0, it is much slower than on other servers available to her. When we check the GPU utilization, the card is used only intermittently rather than constantly.
Would you please help us troubleshoot this?
Thank you

Assuming you are running the same data everywhere and the computers are similar (GPU type, RAM, etc.), I would suppose the differences boil down to CUDA and TensorFlow. Is the performance difference dramatic?

Intermittent usage can happen, e.g., for huge frame sizes (when a lot of preprocessing is necessary) before feeding data into TF…

Hi Alex,
Yes, the difference is dramatic. May I provide any kind of log/configuration/description for troubleshooting?
Could the fact that we run Red Hat instead of Ubuntu be a factor?
Thank you for your help

How large are the frames?

In one video there are around 80,000–100,000 frames.

Mackenzie is asking for the number of pixels (width × height)… I still suspect it is the installation, given that you can run the same project on other computers with similar graphics cards…

The number of pixels is 1,088,000 (1600 × 680).

That’s a lot. You can always downsample during training (by setting global_scale; see the protocol paper). If you need that resolution, then it will take time. You can also check the imgaug loader, as it is more efficiently written and faster.
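For reference, a minimal sketch of how those two options could look in the training configuration (assuming the usual pose_cfg.yaml keys; exact key names and defaults may vary between DeepLabCut versions, so check your generated file):

```yaml
# pose_cfg.yaml (training configuration) — excerpt
# global_scale rescales frames before they are fed to the network;
# e.g. 0.5 would reduce a 1600x680 frame to 800x340.
global_scale: 0.5

# use the imgaug-based data loader (reported above as faster
# than the default loader)
dataset_type: imgaug
```

The downsampling trades some spatial resolution for throughput, so whether 0.5 is acceptable depends on how small the tracked body parts are in the frame.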

Hi Alex,
What surprises me and the user is that the same job runs about 10 times faster on a local machine. Do you think that if I provide some information about our configuration, it might shed some light on the problem?

It’s hard for us to troubleshoot a hardware issue.