No labels in final step

Hello,

I’m using DLC v 2.26. Everything went smoothly without errors, but when I reached the final step, where I open a new GUI to view and modify the labels, I couldn’t find any! There were no labels anywhere in the video.
The pickled files are there; there are just no labels when I load the video and check. Any idea what might have gone wrong?
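Since the output files exist but nothing shows up, one quick sanity check is whether the analysis `.h5` actually contains any non-NaN coordinates at all. A minimal sketch (a tiny synthetic DataFrame stands in here for what `pandas.read_hdf` would return from your own output; the column level names follow the usual multi-animal DLC layout, so adjust to your file):

```python
import numpy as np
import pandas as pd

# In practice you would load your own analysis output, e.g.:
#   df = pd.read_hdf("A_croppedDLC_resnet50_...shuffle1_50000_bx.h5")
# Here a tiny synthetic frame with the usual DLC column levels stands in.
cols = pd.MultiIndex.from_product(
    [["DLC_scorer"], ["individual1", "individual2"],
     ["head", "butt"], ["x", "y", "likelihood"]],
    names=["scorer", "individuals", "bodyparts", "coords"],
)
df = pd.DataFrame(np.nan, index=range(3), columns=cols)
df.loc[0, ("DLC_scorer", "individual1", "head", "x")] = 10.0  # one fake detection

# If every value is NaN, the refinement GUI has nothing to draw.
n_valid = int(df.notna().sum().sum())
print("non-NaN values:", n_valid)

# Per-bodypart counts help spot parts that were never detected.
per_part = df.notna().sum().groupby(level="bodyparts").sum()
print(per_part.to_dict())
```

If `n_valid` is zero on your real file, the problem is upstream (detection or tracklet stitching), not in the GUI.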

The maximum number of iterations for this 20-minute video was set to 50,000, as we were just testing.

My config values are:

```yaml
individuals:
- individual1
- individual2

uniquebodyparts:

multianimalbodyparts:
- head
- butt

skeleton:
- - head
  - butt

bodyparts: MULTI!

start: 0
stop: 1
numframes2pick: 40

# Plotting configuration
skeleton_color: black
pcutoff: 0.6
dotsize: 12
alphavalue: 0.7
colormap: plasma

# Training, Evaluation and Analysis configuration
TrainingFraction:
- 0.95
iteration: 0
default_net_type: resnet_50
default_augmenter: multi-animal-imgaug
snapshotindex: -1
batch_size: 8

# Cropping Parameters (for analysis and outlier frame detection)
cropping: false
croppedtraining: true
# if cropping is true for analysis, then set the values here:
x1: 0
x2: 640
y1: 277
y2: 624

# Refinement configuration (parameters from annotation dataset configuration also relevant in this stage)
corner2move2:
- 50
- 50
move2corner: true
```

Try with the default augmenter and see if anything changes.
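Concretely, that suggestion amounts to editing one line of `config.yaml` before re-creating the training dataset (a sketch; `imgaug` is the standard default augmenter name, but check the options for your DLC version):

```yaml
# config.yaml — swap the multi-animal augmenter for the default one
default_augmenter: imgaug   # was: multi-animal-imgaug
```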

Hi @MWMathis,

I am not getting any data at the very end (when I refine the tracklets or when plotting), despite everything going smoothly. I read previous posts (Refine tracklets trouble, no markers) and tried your suggestions, but no luck! I tried modifying the tracklet length/trail length, I modified the inference_config, and I checked that my skeleton is connected properly, but nothing worked. Let me know what I should share with you to get some help. Much appreciated!

Please note that I’m not using the config above, but I added my comment here since it is the same topic. I’m using version 2.2b7.

Here is some more info:

The video is about 15 mins long.

The config file:

```yaml
# Project definitions (do not edit)
Task: smlocomotion
scorer: ws
date: Jul7
multianimalproject: true

# Project path (change when moving around)
project_path: /home/kt/Documents/ws/smloco-ws-2020-07-07

# Annotation data set configuration (and individual video cropping parameters)
video_sets:
  /Documnets/ws/smloco-ws-2020-07-07/videos/A_cropped.mp4:
    crop: 0, 400, 0, 400
individuals:
- individual1
- individual2
uniquebodyparts:
multianimalbodyparts:
- head
- neck
- rshoulder
- relbow
- rhand
- lsoulder
- lelbow
- lhand
- butt
- rknee
- rfoot
- lknee
- lfoot
skeleton:
- - neck
  - relbow
- - lsoulder
  - lelbow
- - rshoulder
  - butt
- - head
  - lhand
- - butt
  - rknee
- - neck
  - lelbow
- - head
  - rfoot
- - head
  - relbow
- - rshoulder
  - rhand
- - lsoulder
  - butt
- - neck
  - rshoulder
- - lelbow
  - lhand
- - butt
  - rfoot
- - neck
  - lsoulder
- - head
  - lknee
- - head
  - rhand
- - neck
  - rfoot
- - butt
  - lknee
- - neck
  - rhand
- - head
  - lfoot
- - rknee
  - rfoot
- - rshoulder
  - relbow
- - head
  - lsoulder
- - lknee
  - lfoot
- - head
  - neck
- - neck
  - lfoot
- - butt
  - lfoot
- - lsoulder
  - lhand
- - head
  - lelbow
- - neck
  - butt
- - neck
  - lhand
- - head
  - rknee
- - relbow
  - rhand
- - head
  - rshoulder
bodyparts: MULTI!
start: 0
stop: 1
numframes2pick: 100

# Plotting configuration
skeleton_color: black
pcutoff: 0.6
dotsize: 12
alphavalue: 0.7
colormap: plasma

# Training, Evaluation and Analysis configuration
TrainingFraction:
- 0.95
iteration: 0
default_net_type: resnet_50
default_augmenter: multi-animal-imgaug
snapshotindex: -1
batch_size: 8

# Cropping Parameters (for analysis and outlier frame detection)
cropping: false
croppedtraining: true
# if cropping is true for analysis, then set the values here:
x1: 0
x2: 640
y1: 277
y2: 624

# Refinement configuration (parameters from annotation dataset configuration also relevant in this stage)
corner2move2:
- 50
- 50
move2corner: true

video_sets_original:
  /Documnets/ws/smloco-ws-2020-07-07/videos/A.mp4:
    crop: 0, 1920, 0, 1080
```

The files produced:

```
ADLC_resnet50_smlocomotionJul7shuffle1_200000_full.pickle
ADLC_resnet50_smlocomotionJul7shuffle1_200000_full.mp4
ADLC_resnet50_smlocomotionJul7shuffle1_200000_bx.pickle
ADLC_resnet50_smlocomotionJul7shuffle1_200000_bx.h5
ADLC_resnet50_smlocomotionJul7shuffle1_200000_meta.pickle
ADLC_resnet50_smlocomotionJul7shuffle1_200000_sk.h5
ADLC_resnet50_smlocomotionJul7shuffle1_200000_sk.pickle
```

The inference output:
```
train_iter train_frac shuffle rmse_train hits_train misses_train falsepos_train ndetects_train pck_train rpck_train rmse_test hits_test misses_test falsepos_test ndetects_test pck_test rpck_test
200000.0 95.0 1.0 2.129516456852280 11.5 0.5 0.0 1.0 0.9583333333333330 0.8775203313392890
```

Let me know if you need more info.

Hello @WSin, are the detections looking good (i.e., after using deeplabcut.create_video_with_all_detections)?
Could you tell us a bit more about the tracking method you chose? Did you run the cross-validation step?

Yes! The detections look great, with very few problems. I used the default (box) tracker in this example, but I actually tried both. And yes, I cross-validated!
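If the detections look good but the tracklets come out empty, it can help to inspect the `*_full.pickle` and count detections per frame: frames with zero detections give the tracker nothing to stitch, which surfaces later as "no labels". A sketch of that check (the exact pickle layout varies by DLC version, so the frame dict below is an assumption modeled on typical output, not the guaranteed format):

```python
import pickle

# Synthetic stand-in for a DLC "*_full.pickle": a metadata entry plus
# per-frame detection dicts (structure is an assumption; check your file).
full = {
    "metadata": {"nframes": 2},
    "frame0000": {"coordinates": [[[(10.0, 12.0)], [(30.0, 31.0)]]],
                  "confidence": [[[0.9], [0.8]]]},
    "frame0001": {"coordinates": [[[], []]], "confidence": [[[], []]]},
}

# Round-trip through pickle, as you would with pickle.load(open(path, "rb")).
data = pickle.loads(pickle.dumps(full))

# Count detections per frame, skipping the metadata entry.
counts = {}
for key, frame in data.items():
    if key == "metadata":
        continue
    counts[key] = sum(len(pts) for bp in frame["coordinates"] for pts in bp)
print(counts)
```

On a real file, a run of all-zero frames would point at the detection or assembly stage rather than at the refinement GUI.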