Issue with analyzing a video using the DeepLabCut GUI

Hi,

I'm having trouble analyzing a video using the DeepLabCut GUI. I am using my laptop with Windows 8.1 Pro, so I don't have a GPU. The error appears to be related to cropping, but no matter how I change the cropping parameters, it does not work. Here is the output from the terminal: https://docs.google.com/document/d/1KYBIXCE8jXVijMgGhdfD1GHOnUJXzmVMjwcO21Sjn6A/edit

Thanks!

could you share an image of the GUI screen and your config.yaml file?

config.yaml (1.3 KB)

Thanks!!

It still does not work:
Overwriting cropping parameters: (0, 640, 277, 624)
These are used for all videos, but won’t be save to the cfg file.
Using snapshot-1000 for model C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-08\dlc-models\iteration-0\OFT_11Apr8-trainset95shuffle1
Initializing ResNet
INFO:tensorflow:Restoring parameters from C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-08\dlc-models\iteration-0\OFT_11Apr8-trainset95shuffle1\train\snapshot-1000
INFO:tensorflow:Restoring parameters from C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-08\dlc-models\iteration-0\OFT_11Apr8-trainset95shuffle1\train\snapshot-1000
Starting to analyze % C:\Users\QIAOLING\Videos\OFT_11_3.avi
Loading C:\Users\QIAOLING\Videos\OFT_11_3.avi
Duration of video [s]: 30.04 , recorded with 25.0 fps!
Overall # of frames: 751 found with (before cropping) frame dimensions: 928 576
Starting to extract posture
Cropping based on the x1 = 0 x2 = 640 y1 = 277 y2 = 624. You can adjust the cropping coordinates in the config.yaml file.
Traceback (most recent call last):
  File "C:\Users\QIAOLING\Anaconda3\envs\dlc-windowsCPU\lib\site-packages\deeplabcut\gui\analyze_videos.py", line 264, in analyze_videos
    destfolder=self.destfolder, crop=crop, dynamic=dynamic)
  File "C:\Users\QIAOLING\Anaconda3\envs\dlc-windowsCPU\lib\site-packages\deeplabcut\pose_estimation_tensorflow\predict_videos.py", line 220, in analyze_videos
    DLCscorer=AnalyzeVideo(video,DLCscorer,DLCscorerlegacy,trainFraction,cfg,dlc_cfg,sess,inputs, outputs,pdindex,save_as_csv, destfolder,TFGPUinference,dynamic)
  File "C:\Users\QIAOLING\Anaconda3\envs\dlc-windowsCPU\lib\site-packages\deeplabcut\pose_estimation_tensorflow\predict_videos.py", line 497, in AnalyzeVideo
    PredictedData,nframes=GetPoseF_GTF(cfg,dlc_cfg, sess, inputs, outputs,cap,nframes,int(dlc_cfg["batch_size"]))
  File "C:\Users\QIAOLING\Anaconda3\envs\dlc-windowsCPU\lib\site-packages\deeplabcut\pose_estimation_tensorflow\predict_videos.py", line 358, in GetPoseF_GTF
    ny,nx=checkcropping(cfg,cap)
  File "C:\Users\QIAOLING\Anaconda3\envs\dlc-windowsCPU\lib\site-packages\deeplabcut\pose_estimation_tensorflow\predict_videos.py", line 241, in checkcropping
    raise Exception('Please check the boundary of cropping!')
Exception: Please check the boundary of cropping!

If you suspect this is an IPython 7.12.0 bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@python.org

You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.

Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
%config Application.verbose_crash=True

Okay, I think I see the issue: if one does not delete the cropping values in the config.yaml, they will be used by default, due to this: https://github.com/AlexEMG/DeepLabCut/commit/6398f623b2b69a5df3eaf3bc8d5f4a70960962e9#diff-e1f7fa88206d6fd6451443092b057800 (In your case, y2 = 624 is larger than the video height of 576, which is why the cropping boundary check fails.)

This is a bug, and I will fix it ASAP! For now, please change this in your config.yaml file:

# Cropping Parameters (for analysis and outlier frame detection)
cropping: false
#if cropping is true for analysis, then set the values here:
x1: 0
x2: 640
y1: 277
y2: 624

TO:

# Cropping Parameters (for analysis and outlier frame detection)
cropping: false
#if cropping is true for analysis, then set the values here:
x1: 
x2: 
y1: 
y2: 
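(If you prefer to do this programmatically rather than by hand, an untested PyYAML sketch like the one below should blank those values; the path is just your project's config path from above, and note that rewriting the file this way drops the comments and may reorder the keys.)

import yaml

cfg_path = r'C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-08\config.yaml'

with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

# turn analysis-time cropping off and clear the stale crop box
cfg['cropping'] = False
for key in ('x1', 'x2', 'y1', 'y2'):
    cfg[key] = None

# caution: safe_dump rewrites the file without the original comments
with open(cfg_path, 'w') as f:
    yaml.safe_dump(cfg, f, default_flow_style=False)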

I appreciate your help very much! I tried, but it still didn't go through… maybe my config file is messed up.
errors with analyzing a video.pdf (47.5 KB)


can you try to run this in the terminal (in your DLC env):

ipython
import deeplabcut
deeplabcut.__version__

cfp = r'C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-08\config.yaml'

deeplabcut.analyze_videos(cfp, [r'C:\Users\QIAOLING\Videos\OFT_11_3.avi'])

and let me know the output?

Looks like it is working!! It just takes time.
(base) C:\windows\system32>cd C:\Users\QIAOLING\Desktop\DeepLabCut\conda-environments

(base) C:\Users\QIAOLING\Desktop\DeepLabCut\conda-environments>activate dlc-windowsCPU

(dlc-windowsCPU) C:\Users\QIAOLING\Desktop\DeepLabCut\conda-environments>ipython

Python 3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 15:18:16) [MSC v.1916 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 7.12.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import deeplabcut

In [2]: deeplabcut.__version__
Out[2]: '2.1.6.2'

In [3]: cfp = r'C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-^M
   ...: 08\config.yaml'
  File "", line 1
    cfp = r'C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-
    ^
SyntaxError: EOL while scanning string literal

In [4]: cfp = r'C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-08\config.yaml'

In [5]: deeplabcut.analyze_videos(cfp, [r'C:\Users\QIAOLING\Videos\OFT_11_3.avi'])
Using snapshot-1000 for model C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-08\dlc-models\iteration-0\OFT_11Apr8-trainset95shuffle1
Initializing ResNet
INFO:tensorflow:Restoring parameters from C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-08\dlc-models\iteration-0\OFT_11Apr8-trainset95shuffle1\train\snapshot-1000
INFO:tensorflow:Restoring parameters from C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-08\dlc-models\iteration-0\OFT_11Apr8-trainset95shuffle1\train\snapshot-1000
Starting to analyze % C:\Users\QIAOLING\Videos\OFT_11_3.avi
Loading C:\Users\QIAOLING\Videos\OFT_11_3.avi
Duration of video [s]: 30.04 , recorded with 25.0 fps!
Overall # of frames: 751 found with (before cropping) frame dimensions: 928 576
Starting to extract posture
 11%|████▎     | 80/751 [06:28<57:13,  5.12s/it]


Great, I will make a fix in the GUI soon; good catch, thanks :wink: Glad it's working! Since your model is trained, you can also make a simple script that you can run to analyze new videos any time they are placed inside a folder or subfolder.

Just edit the variables that matter to you (here is the file you can download: https://github.com/DeepLabCut/DLCutils/blob/master/SCALE_YOUR_ANALYSIS/scale_analysis_oversubfolders.py), and then run this in the terminal:

(DLC-CPU) your-computer-name$ python3 scale_analysis_oversubfolders.py

and it will analyze all the videos in the folder (and any subfolders) of any file type, e.g. avi, mp4, etc.
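For reference, here is a rough, untested sketch of what that kind of script does, assuming a project config path and a video folder on your machine (the paths and extensions below are placeholders to edit):

import os
import deeplabcut

# placeholders -- point these at your own project and video folder
config_path = r'C:\Users\QIAOLING\Desktop\OFT_11-Qiaoling-2020-04-08\config.yaml'
video_root = r'C:\Users\QIAOLING\Videos'
video_exts = ('.avi', '.mp4', '.mov')

# collect every video in the folder and all subfolders
videos = []
for dirpath, dirnames, filenames in os.walk(video_root):
    for name in filenames:
        if name.lower().endswith(video_exts):
            videos.append(os.path.join(dirpath, name))

# analyze them all with the trained network (also saving .csv output)
if videos:
    deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)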

Great, it's very nice of you to spend time troubleshooting!!


Actually, one more quick question: a Logitech webcam is not a good camera for DLC analysis, right? I saw this: "OpenCV and moviepy both have issues using logitech webcams, see post: https://github.com/AlexEMG/DeepLabCut/pull/476#issuecomment-551788921. It may appear to work fine, but you may have issues creating labeled videos with these types of videos as it does not detect the FPS correctly!" I tried with those videos, and as I remember they did not seem to work well at the evaluation step. Unfortunately, the behavior videos we previously collected are all from a Logitech webcam.

I have not tried it myself, but you could likely export to an mp4 with H.264 compression first?
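For example (untested on your particular files), a re-encode with ffmpeg along these lines should give an H.264 mp4 with an explicit frame rate; the file names and the frame rate are placeholders to adjust:

# placeholder file names; set -r to your camera's real frame rate
ffmpeg -i webcam_video.avi -c:v libx264 -crf 18 -r 30 webcam_video.mp4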

Hm, I can try! Thanks again!
