Bias/Shift in evaluation result point locations

Hello,

We are using DeepLabCut for face and pupil tracking in head-fixed mice facing to the left of the camera image. We have been encountering a bias/shift in some point locations: most pupil points are shifted anteriorly/to the left of the image (sample below), although some face points are not (e.g., nose & nostril; unfortunately these points are bright yellow in the image below). I haven’t encountered this before in other setups.

Things we have checked/tried:

  • Confirmed points were placed where intended in labelled images
  • Confirmed points were placed where intended in evaluation results
  • Confirmed the bias/shift is present for both train and test images in evaluation results
  • Added more training images from different mice (>150 images; same experimenter/camera)
  • Trained longer
  • Used internal augmentation/rotation (see the pose_cfg.yaml sketch after this list)
  • Retrained a whole network from scratch using different mice (same experimenter/camera)
  • Retrained a whole network from scratch using different mice & a different experimenter (same camera)
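Regarding the augmentation bullet: we enabled rotation in the training pose_cfg.yaml, roughly like this (a sketch; the exact keys depend on the DLC version and dataset_type, and the value shown is illustrative):

    # excerpt from dlc-models/.../train/pose_cfg.yaml
    dataset_type: imgaug
    rotation: 25   # degrees of random rotation applied during training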

I am currently also trying to train a network with a cropped view of the eye to see if that changes anything. But I wonder if anyone else has experienced this? For what it’s worth, we are using Win 10 and a Dalsa Genie Nano M2020 (1/1.8" monochrome) GigE PoE camera, with files saved as .avi, and DeepLabCut installed using https://github.com/DeepLabCut/DeepLabCut/blob/a3ff917fc04ae0199b051b1d0cc2f75c3700191a/conda-environments/DLC-GPU.yaml from May 12, 2020 (last commit).

Thanks for any thoughts!
Eyal

Hi! Is this image from a frame of a video you created within DLC, or elsewhere? I’m also not clear on which points are shifted; can you post one of your labeled images and an evaluation image? Thanks!

Hi @MWMathis, thanks so much for replying! This is an evaluation-results image created from within DLC using the evaluate_network function.

I’ve reposted the image here, as both the labeled image and then the eval image. I changed the dotsize and colormap to make the points a little easier to see. The crosses are my placements and I think they approximate the pupil edge fairly well. The dots are the predicted points, which appear shifted to the left of the image.
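In case it’s useful, the eval image came from roughly this call (the dotsize and colormap tweaks were made in config.yaml beforehand; the path is a placeholder):

    import deeplabcut

    config_path = r"C:\path\to\project\config.yaml"  # placeholder
    # re-plots train/test predictions on the labeled frames
    deeplabcut.evaluate_network(config_path, plotting=True)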


Thanks again for any thoughts

Thanks! Can you tell me which version of matplotlib you have installed? We have noticed some inconsistencies before, and want to be sure it’s not a “simple” plotting issue (i.e., shifted by 1 pixel). @jeylau

Hi @ekimchi, could you also show us the maps obtained with deeplabcut.extract_save_all_maps()? Make sure to pass all_paf_in_one=False and the index corresponding to that image in Indices=[]. I am curious about the location refinement 🙂
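e.g., something along these lines (the config path and image index are placeholders):

    import deeplabcut

    config_path = "path/to/config.yaml"  # placeholder
    image_index = 0                      # placeholder: index of that image in the dataset
    deeplabcut.extract_save_all_maps(
        config_path,
        Indices=[image_index],
        all_paf_in_one=False,
    )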

@MWMathis matplotlib version was 3.0.3

@jeylau I tried to run deeplabcut.extract_save_all_maps(), but it looks like this function was added after the version that was installed using the DLC-GPU conda environment file from http://www.mousemotorlab.org/deeplabcut
(deeplabcut.__version__ = 2.1.8.2)

so I updated DeepLabCut to 2.2b7 and also updated matplotlib per the instructions at https://github.com/DeepLabCut/DeepLabCut/releases
matplotlib.__version__ = 3.3.1
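For completeness, the version checks were just:

    import deeplabcut
    import matplotlib

    print(deeplabcut.__version__)   # 2.2b7 after the upgrade
    print(matplotlib.__version__)   # 3.3.1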

When I try to use extract_save_all_maps and pass all_paf_in_one=False, I get

TypeError: extract_save_all_maps() got an unexpected keyword argument 'all_paf_in_one'

Looking at the source code in \Anaconda3\envs\DLC-GPU\Lib\site-packages\deeplabcut\pose_estimation_tensorflow\visualizemaps.py,
it looks like all_paf_in_one is not yet a valid argument in this version?

def extract_save_all_maps(
    config,
    shuffle=1,
    trainingsetindex=0,
    comparisonbodyparts="all",
    gputouse=None,
    rescale=False,
    Indices=None,
    modelprefix="",
    dest_folder=None,
    nplots_per_row=None,
):
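A quick way to check which keyword arguments a given version accepts, without digging through the source (sketch):

    import inspect
    import deeplabcut

    params = inspect.signature(deeplabcut.extract_save_all_maps).parameters
    print(sorted(params))              # all accepted keyword arguments
    print("all_paf_in_one" in params)  # False on this version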

Here is what I got running it as-is:
img1366_locref_train_2_0.95_snapshot-260000
img1366_locrefzoom_train_2_0.95_snapshot-260000
img1366_scmap_train_2_0.95_snapshot-260000

Thanks so much again!

Ah, that’s true; the upgraded function will soon be available in 2.2b8 🙂 Anyhow, looking at PupilW or PupilE for example, it looks like the confidence maps indeed predict a location that is 1–2 pixels off the pupil to the left, which is strange. Do you notice that shift too when loading the data in the refinement GUI?

Yes, the shift also appears to be present when loading the data in the refinement GUI. Curious if you have other thoughts/suggestions?
Thanks so much!


Sorry to make you test more, but if you downgrade, i.e., run pip install matplotlib==3.1.1 and pip install deeplabcut==2.2b8, is it the same in the GUI?

No worries. I created a new environment using the gpu yaml file.
That installed matplotlib-3.0.3-cp37-cp37m-win_amd64.whl

I then ran pip install matplotlib==3.1.1 and pip install deeplabcut==2.2b8;
installing deeplabcut==2.2b8 changed matplotlib to 3.1.3

It looks the same in the refine labels GUI:

I then downgraded again to matplotlib 3.1.1, and it still looks the same in the refine labels GUI:

My guess is that the plotting code is solid, and this shift might instead be due to something about our video and how it gets labeled/trained. I think our pupil image is relatively small: we don’t have a zoomed-in view of the eye, so the pupil is only about 20 pixels in diameter. Do unanticipated things happen when a feature is this small, perhaps due to internal rounding of pixel estimates? How is that handled internally when the image data is used for training? I’m just curious; I see that the csv coordinates are not rounded after labeling, which is great.
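To make my rounding question concrete, here is my rough mental model of the decoding (a toy numpy sketch, not DLC’s actual code; stride 8 is the typical ResNet output stride):

    import numpy as np

    # With a stride-8 backbone the score map is 8x coarser than the image,
    # so a ~20-px pupil spans only ~2-3 score-map cells; sub-pixel accuracy
    # then rests almost entirely on the location-refinement (locref) offsets.
    stride = 8.0
    scmap = np.zeros((16, 16))      # score map for one bodypart (toy values)
    scmap[5, 7] = 0.9               # peak cell
    locref = np.zeros((16, 16, 2))
    locref[5, 7] = (-2.4, 1.1)      # learned (dx, dy) offset in pixels

    row, col = np.unravel_index(np.argmax(scmap), scmap.shape)
    # map the coarse cell back to image coordinates, then add the offset
    x = col * stride + 0.5 * stride + locref[row, col, 0]
    y = row * stride + 0.5 * stride + locref[row, col, 1]
    print(x, y)  # a consistent bias in locref would shift every point the same way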

I’m currently also training a newly labeled batch of images using a maDLC model in 2.2b7 to make a fresh model, using the skeleton feature as well. I’ll let you know how that goes. Curious if you think this would make a difference?
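For reference, the skeleton is just pairs of connected bodyparts in config.yaml, roughly like this (PupilW/PupilE are real bodyparts in our project; PupilN is a placeholder name):

    skeleton:
    - - PupilW
      - PupilN
    - - PupilN
      - PupilE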

Thanks as always

For whatever it’s worth, we seem to be getting better results with ResNet-101, so we will roll with that instead of ResNet-50 for now. Thanks again!
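For reference, the backbone is selected when creating the training dataset, roughly (path is a placeholder):

    import deeplabcut

    config_path = "path/to/config.yaml"  # placeholder
    # net_type picks the backbone; "resnet_101" instead of the default "resnet_50"
    deeplabcut.create_training_dataset(config_path, net_type="resnet_101")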
