DeepLabCut not using GPU

Hello,

We are running DeepLabCut and noticed that it is using the CPU instead of the GPU.


Here is the performance of the CPU and GPU while the model is training:

We also tried to specify the GPU by passing ‘gputouse=0’ when training our model.
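For reference, a minimal sketch of what `gputouse=0` amounts to under the hood: DeepLabCut restricts TensorFlow to one device by setting `CUDA_VISIBLE_DEVICES` before TensorFlow starts. You can set it yourself as a sanity check (the exact internals of DeepLabCut may differ; this is an illustration, not DLC's actual code):

```python
import os

# Restrict TensorFlow to GPU 0 by setting CUDA_VISIBLE_DEVICES *before*
# TensorFlow is imported. Setting it to an empty string would hide all
# GPUs and force CPU-only execution.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```

If TensorFlow was already imported when this runs, the setting has no effect, which is one common reason the GPU appears unused.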

The program is running very slowly due to this issue. If you can help us assign this task to the GPU, please let us know.

Thank you,
Akhila

Without more information, I cannot troubleshoot this. Namely, can you verify that you are in the GPU environment and that CUDA is installed? Here are some tips to verify:
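One quick, environment-independent check is to ask the NVIDIA driver directly. A small sketch using only the standard library and the real `nvidia-smi` tool (the query flags shown are standard `nvidia-smi` options; the function name is mine):

```python
import shutil
import subprocess

def nvidia_driver_version():
    """Return the NVIDIA driver version reported by nvidia-smi, or None.

    Returns None when nvidia-smi is not on PATH, which usually means the
    NVIDIA driver (and therefore CUDA) is not set up in this environment.
    """
    if shutil.which("nvidia-smi") is None:
        return None
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return result.stdout.strip() or None
```

If this returns None, TensorFlow has no chance of seeing the GPU either, so it is worth checking before debugging the DeepLabCut side.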

I don’t see CUDA listed. For comparison, here is what my NVIDIA-SMI output looks like:

I had trouble with the installation; most likely my CUDA wasn’t installed properly. Can you please tell me which versions of CUDA, cuDNN, and TensorFlow are compatible with GPU driver 385.54? Additionally, is NumPy necessary, and which version is the correct one? Final question: when you install TensorFlow, does it need to be in the same environment as DeepLabCut?

I tried to follow the instructions to the best of my ability; however, I found myself installing and re-installing different versions of the packages because I kept getting an error message saying ‘**** is compatible with ****v16.0.0 but you have ****v12.1’.

Thank you for your time,
Akhila



Hi, I am trying to run DeepLabCut using the GPU. Here is the information on my GPU and CPU:
I checked my computer and I can see the information for my GPU, but I do not know why it shows
‘GPU not supported’.
I also went to check ‘Num GPUs Available’, and it shows 0.
So should I install GPU support?
Thank you for your suggestion,
Best,
Chen

@shauyin520 How are you installing DLC? Are you using Anaconda or Docker?

  • If you are using Anaconda, you must use CUDA 10; CUDA 10.2 will not work.

Please see these instructions on how to install:

Also, you can’t import tensorflow like that; you first need to run:

ipython
import tensorflow
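Once TensorFlow imports, you can ask it directly how many GPUs it sees, which is the same check behind the ‘Num GPUs Available’ output mentioned above. A hedged sketch that handles both the TensorFlow 1.x API (which DeepLabCut used at the time) and the 2.x API; the wrapper function is mine, not part of either library:

```python
def num_gpus_available():
    """Return the number of GPUs TensorFlow can see, or None if TF is missing."""
    try:
        import tensorflow as tf
    except ImportError:
        return None
    # TensorFlow 2.x: list physical GPU devices directly.
    if hasattr(tf, "config") and hasattr(tf.config, "list_physical_devices"):
        return len(tf.config.list_physical_devices("GPU"))
    # TensorFlow 1.x: fall back to the older boolean check.
    return 1 if tf.test.is_gpu_available() else 0
```

If this returns 0 even though `nvidia-smi` shows your card, the usual culprit is a CPU-only TensorFlow build or a CUDA/cuDNN version mismatch.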

Hi @MWMathis, Dear Dr. Mathis,
I am using the Anaconda environment to run DLC. It looks like the program functions normally; at least right now I am on the step of evaluating the network. I do not know what you mean by ‘If you are using anaconda, you must use CUDA 10, CUDA 10.2 will not work.’ In my case, what will happen?
My videos are very big, 3.5 GB each, so I am trying to switch to GPU support, but when I check the performance of the GPU and CPU, as shown below, it says ‘GPU not supported’.
And indeed, I use:
ipython
import tensorflow as tf