Error when calling filterpredictions on Google Cloud

Hi there,
I have used Google Cloud for single-video labeling several times without any problem calling filterpredictions. But when I tried to label multiple videos, filterpredictions reported the following error:
dlc.filterpredictions(config_file,video_file,videotype='mp4',filtertype='arima',p_bound=0.1,save_as_csv=True)
%%
Filtering with arima model /content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200202_182456.mp4-00.20.30-00.20.35.mp4
0it [00:00, ?it/s]/usr/local/lib/python3.6/dist-packages/statsmodels/tsa/statespace/sarimax.py:949: UserWarning: Non-stationary starting autoregressive parameters found. Using zeros as starting parameters.
warn('Non-stationary starting autoregressive parameters'
/usr/local/lib/python3.6/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
"Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
"Check mle_retvals", ConvergenceWarning)
1it [00:00, 1.59it/s]/usr/local/lib/python3.6/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
"Check mle_retvals", ConvergenceWarning)
3it [00:01, 1.72it/s]/usr/local/lib/python3.6/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
"Check mle_retvals", ConvergenceWarning)
4it [00:02, 1.84it/s]

LinAlgError Traceback (most recent call last)
in ()
----> 1 dlc.filterpredictions(config_file,video_file,videotype='mp4',filtertype='arima',p_bound=0.1,save_as_csv=True)
2 dlc.plot_trajectories(config_file,video_file,showfigures=True,filtered=False)
3 dlc.create_labeled_video(config_file,video_file,videotype='.mp4',filtered=False)

14 frames
/usr/local/lib/python3.6/dist-packages/statsmodels/tsa/statespace/representation.py in _initialize_state(self, prefix, complex_step)
759 raise RuntimeError('Initialization is incomplete.')
760 self._statespaces[prefix].initialize(self.initialization,
--> 761 complex_step=complex_step)
762 else:
763 raise RuntimeError('Statespace model not initialized.')

statsmodels/tsa/statespace/_representation.pyx in statsmodels.tsa.statespace._representation.dStatespace.initialize()

statsmodels/tsa/statespace/_initialization.pyx in statsmodels.tsa.statespace._initialization.dInitialization.initialize()

statsmodels/tsa/statespace/_initialization.pyx in statsmodels.tsa.statespace._initialization.dInitialization.initialize_stationary_stationary_cov()

statsmodels/tsa/statespace/_tools.pyx in statsmodels.tsa.statespace._tools._dsolve_discrete_lyapunov()

LinAlgError: LU decomposition error.
%%
And the video list defined both in config.yaml and in video_file is as follows:
['/content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200202_182456.mp4-00.20.30-00.20.35.mp4',
'/content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200202_183320.mp4-00.12.13-00.12.19.mp4',
'/content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200202_184224.mp4-00.06.58-00.07.02.mp4',
'/content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200202_184925.mp4-00.14.09-00.14.19.mp4',
'/content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200220_115428_WT.mp4-00.06.42-00.06.52.mp4',
'/content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200220_133923_NeoAbl.mp4-00.09.19-00.09.29.mp4',
'/content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200220_140531_NeoAbl.mp4-00.19.36-00.19.46.mp4',
'/content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200224_220445_254FT.mp4-00.00.08-00.00.18.mp4',
'/content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200224_220445_254FT.mp4-00.05.29-00.05.39.mp4',
'/content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200224_220445_254FT.mp4-00.07.21-00.07.31.mp4',
'/content/drive/My Drive/video1/test-baboon-2020-03-10/videos/VID_20200224_220445_254FT.mp4-00.14.51-00.15.01.mp4']
%%
Is there anything wrong with the code I used? Many thanks!

Hi lovebaboon,

Your code looks fine. The error you are facing is rare; it is caused by the statistical model's inability, by default, to handle strongly non-stationary time series. We have pushed a fix on GitHub (PR #603) to handle those cases better; it will be available through pip starting with DeepLabCut 2.2 very soon :slightly_smiling_face: Meanwhile, you could either build DeepLabCut from source by cloning the 'master' branch, or try median filtering.
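As a rough illustration of why median filtering is a robust fallback here (this is a simplified sketch, not DeepLabCut's actual implementation), a sliding-window median suppresses isolated tracking jumps without fitting any model, so it cannot fail to converge the way ARIMA can:

```python
from statistics import median

def median_filter(series, window=5):
    """Sliding-window median; the window is clamped at the edges.
    Simplified stand-in for a filtertype='median' pass."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo = max(0, i - half)
        hi = min(len(series), i + half + 1)
        out.append(median(series[lo:hi]))
    return out

# A coordinate trace with one spurious jump at index 3:
xs = [10.0, 10.5, 11.0, 90.0, 11.5, 12.0, 12.5]
print(median_filter(xs, window=5))
# the outlier 90.0 is replaced by 11.5, a value consistent with its neighbors
```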

Hi Jeylau,
Thanks for your reply. I tried several approaches, and when I changed the call to this:
dlc.filterpredictions(config_file,video_file,videotype='mp4',filtertype='arima',ARdegree=5,MAdegree=2,p_bound=0.1,save_as_csv=True)
the error went away. I don't know the exact reason, but it works well now. Thank you!

In Google Colab, DLC doesn't have statsmodels installed. You'd have to filter the data once you pull it back from the cloud. It's extremely fast and doesn't require a GPU.
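For anyone wanting to smooth the predictions locally without DeepLabCut installed, here is a minimal pandas sketch of the same idea. The column names and the toy data are illustrative (real DLC outputs are hierarchical .h5 files loaded with pd.read_hdf); only the masking-then-rolling-median pattern is the point:

```python
import pandas as pd

# Toy stand-in for one bodypart's columns in a DeepLabCut prediction table.
df = pd.DataFrame({
    "x": [10.0, 10.5, 90.0, 11.0, 11.5],
    "likelihood": [0.95, 0.90, 0.20, 0.92, 0.97],
})

# Mask low-confidence points (same role as filterpredictions' p_bound),
# then smooth with a centered rolling median.
p_bound = 0.1
masked = df["x"].where(df["likelihood"] > p_bound)
smoothed = masked.rolling(window=3, center=True, min_periods=1).median()
print(smoothed.tolist())  # → [10.25, 10.5, 11.0, 11.5, 11.25]
```

This runs in well under a second even for long videos, which is why no GPU is needed for the filtering step.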

Dear Mathis,
Thanks for your reply; I will try the filter function on my local computer.
