If you look at the docstring for each call, there are many options, e.g. ``userfeedback`` is one of them:
Signature: deeplabcut.extract_frames(config, mode='automatic', algo='kmeans', crop=False, userfeedback=True, cluster_step=1, cluster_resizewidth=30, cluster_color=False, opencv=True)
Extracts frames from the videos in the config.yaml file. Only the videos in the config.yaml will be used to select the frames.
Use the function ``add_new_video`` at any stage of the project to add new videos to the config file and extract their frames.
The provided function selects frames from the videos either uniformly at random across time (uniform),
by clustering based on visual appearance (k-means), or by manual selection.
Three important parameters for automatic extraction, ``numframes2pick``, ``start`` and ``stop``, are set in the config file.
Please refer to the user guide for more details on methods and parameters: https://www.biorxiv.org/content/biorxiv/early/2018/11/24/476531.full.pdf
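The three parameters mentioned just above live in the project's config.yaml. A minimal excerpt might look like this (values are illustrative; check your own project file):

```yaml
# excerpt from config.yaml (illustrative values)
numframes2pick: 20   # how many frames to extract per video
start: 0             # relative start of the usable video interval (fraction, 0-1)
stop: 1              # relative end of the usable video interval (fraction, 0-1)
```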
config : string
Full path of the config.yaml file as a string.
mode : string
String containing the mode of extraction. It must be either ``automatic`` or ``manual``.
algo : string
String specifying the algorithm used for selecting the frames. Currently, deeplabcut supports either ``kmeans`` or ``uniform`` based selection. This flag is
only relevant in ``automatic`` mode; per the signature above, the default is ``kmeans``. For ``uniform``, frames are picked in a temporally uniform way; ``kmeans`` performs clustering on downsampled frames (see the user guide for details, and the conceptual sketch at the end of this post).
Note: color information is discarded for kmeans, thus e.g. for clustering a camouflaged octopus one might want to change this (see ``cluster_color``).
crop : bool, optional
If this is set to True, the selected frames are cropped based on the ``crop`` parameters in the config.yaml file.
The default is ``False``; if provided it must be either ``True`` or ``False``.
userfeedback: bool, optional
If this is set to ``False`` during ``automatic`` mode, then frames for all videos are extracted without asking. If set to ``True`` (the default), a dialog asks,
for each video, whether (additional/any) frames should be extracted from it. Use this, e.g., if you have already labeled
some folders and only want to extract data for new videos.
cluster_resizewidth: number, default: 30
For k-means one can change the width to which the images are downsampled (aspect ratio is fixed).
cluster_step: number, default: 1
By default every frame is used for clustering, but for long videos one can use only every nth frame (set by ``cluster_step``). This reduces memory use before clustering starts; however,
reading the individual frames takes longer due to the skipping.
cluster_color: bool, default: False
If ``False``, each downsampled image is treated as a grayscale vector (discarding color information). If ``True``, the color channels are considered. This increases
the computational complexity.
opencv: bool, default: True
Uses OpenCV for loading & extraction (otherwise the legacy moviepy backend is used).
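Putting it together, a typical call looks like this (the project path is a placeholder; use your own):

```python
import deeplabcut

# Placeholder path -- point this at your own project's config.yaml
config_path = '/home/user/myproject-me-2023-01-01/config.yaml'

# k-means selection on every 5th frame; with userfeedback=True you will be
# asked, per video, whether frames should be extracted from it
deeplabcut.extract_frames(
    config_path,
    mode='automatic',
    algo='kmeans',
    userfeedback=True,
    cluster_step=5,
)
```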
(ps - if your videos are sufficiently similar, there is no need to use all of them for training! The idea is to make a network that can be used on novel videos.)
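For intuition about what the ``kmeans`` option does conceptually, here is a minimal sketch: subsample every nth frame, downsample to a fixed width (aspect ratio fixed), flatten each frame to a grayscale vector, cluster, and keep one frame per cluster. This is not DeepLabCut's actual implementation; the function name, defaults, and use of scikit-learn are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def sketch_kmeans_selection(video_path, n_pick=20, step=5, resizewidth=30):
    """Illustrative only: pick n_pick visually diverse frame indices via k-means."""
    cap = cv2.VideoCapture(video_path)
    vectors, frame_ids = [], []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # cluster_step: only use every nth frame
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # cluster_color=False: drop color
            h, w = gray.shape
            # downsample to a fixed width, keeping the aspect ratio fixed
            small = cv2.resize(gray, (resizewidth, max(1, int(h * resizewidth / w))))
            vectors.append(small.flatten().astype(np.float32))
            frame_ids.append(idx)
        idx += 1
    cap.release()
    X = np.stack(vectors)
    km = MiniBatchKMeans(n_clusters=n_pick, n_init=3).fit(X)
    picked = []
    for c in range(n_pick):  # keep the frame closest to each cluster center
        members = np.where(km.labels_ == c)[0]
        if members.size == 0:
            continue
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        picked.append(frame_ids[members[dists.argmin()]])
    return sorted(picked)
```

Note how color is dropped (``cv2.COLOR_BGR2GRAY``), mirroring ``cluster_color=False``; keeping the color channels triples the feature-vector length, which is the extra computational cost the docstring mentions.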