Only part of the video is created by 'create_labeled_video'

deeplabcut
#1

When I tried running create_labeled_video, only a small fraction of the video (15 seconds out of 11 minutes) was generated with the tracking data marked on it. Any clue on how to create the labeled video for the entire video dataset?


#2

You’ll have to tell us a bit more. Does the .h5 output of analyze_videos have the full frame count? You can look at the .csv or .h5; and if you didn’t create the .csv, you can run this: deeplabcut.analyze_videos_converth5_to_csv(videopath, videotype='.avi')
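
A quick way to check the frame count is to load the .h5 with pandas (a minimal sketch; the file name below is a placeholder for whichever .h5 analyze_videos wrote next to your video):

import pandas as pd

# Placeholder path: use the actual *.h5 that analyze_videos created for your video
h5_path = '/analysis/project/videos/reachingvideo1DLC_resnet50_shuffle1.h5'
df = pd.read_hdf(h5_path)  # DeepLabCut stores its predictions as a pandas DataFrame
print('frames in .h5:', len(df))  # should match the number of frames in the video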

If you check the docstring for create_labeled_video, there are lots of options, including what frames to include:


Signature: deeplabcut.create_labeled_video(config, videos, videotype='avi', shuffle=1, trainingsetindex=0, save_frames=False, Frames2plot=None, delete=False, displayedbodyparts='all', codec='mp4v', outputframerate=None)
Docstring:
    Labels the bodyparts in a video. Make sure the video is already analyzed by the function 'analyze_videos'.

    Parameters
    ----------
    config : string
        Full path of the config.yaml file as a string.

    videos : list
        A list of strings containing the full paths to videos for analysis or a path to the directory, where all the videos with same extension are stored.
    
    videotype: string, optional
        Checks for the extension of the video in case the input to the video is a directory.
        Only videos with this extension are analyzed. The default is ``.avi``

    shuffle : int, optional
        Number of shuffles of training dataset. Default is set to 1.

    trainingsetindex: int, optional
        Integer specifying which TrainingsetFraction to use. By default the first (note that TrainingFraction is a list in config.yaml).
     
    save_frames: bool
        If true, creates each frame individually and then combines them into a video. This variant is relatively slow as
        it stores all individual frames. However, it uses matplotlib to create the frames and is therefore much more flexible (one can set marker transparency, crop, and easily customize).

    Frames2plot: List of indices
        If not None & save_frames=True then the frames corresponding to the index will be plotted. For example, Frames2plot=[0,11] will plot the first and the 12th frame.
        
    delete: bool
        If true then the individual frames created during the video generation will be deleted.

    displayedbodyparts: list of strings, optional
        This selects the body parts that are plotted in the video. Either ``all``, then all body parts
        from config.yaml are used, or a list of strings that are a subset of the full list.
        E.g. ['hand','Joystick'] for the demo Reaching-Mackenzie-2018-08-30/config.yaml to select only these two body parts.

    codec: codec for labeled video. Options see http://www.fourcc.org/codecs.php [depends on your ffmpeg installation.]
    
    outputframerate: positive number, output frame rate for labeled video (only available for the mode with saving frames.) By default: None, which results in the original video rate.
    
    Examples
    --------
    If you want to create the labeled video for only 1 video
    >>> deeplabcut.create_labeled_video('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi'])
    --------

    If you want to create the labeled video for only 1 video and store the individual frames
    >>> deeplabcut.create_labeled_video('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi'],save_frames=True)
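
For instance (a sketch; the paths and frame indices are placeholders), if you only want specific frames plotted you can combine save_frames with Frames2plot:

>>> deeplabcut.create_labeled_video('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi'],save_frames=True,Frames2plot=[0,11])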

#3

The .h5 file has 15373 frames whereas the video has 676158 frames. The processing stops at 3%… any reason why only a select few frames are going through? Is it some training data issue?


#4

Seems the video may be corrupt, or you are filling your CPU memory (best to use a GPU or chunk your video into smaller parts), or your kernel dies in Jupyter. Is your video recorded at 1,000 FPS (i.e. is OpenCV reading the video correctly)? You might want to use the terminal (“cmd”) to avoid the kernel issue, but for such long videos I would suggest a GPU. You can also use Colab if you don’t have one.
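
To check whether OpenCV reads the video correctly, you can run something like this (a sketch; the path is a placeholder for your video file):

import cv2

cap = cv2.VideoCapture('/analysis/project/videos/yourvideo.avi')  # placeholder path
print('opened:', cap.isOpened())
print('frame count:', int(cap.get(cv2.CAP_PROP_FRAME_COUNT)))
print('fps:', cap.get(cv2.CAP_PROP_FPS))
cap.release()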


#5

Thanks for your reply. I will try using the cmd terminal and chunking the video first, followed by giving Google Colab a try.
