Tracking on multiple camera views at the same time

Sample image and/or macro code

cam1_20200303_0414122487-obj1.tif (117.3 KB)

Background

  • I have high-framerate videos of flying mosquitoes from 3 different views. I cropped the images around the mosquito positions and stitched the crops together (see image above).

Analysis goals

  • I want to get the 3D positions of the leg tips, the body tips, and the wing hinges and tips.
  • For that, I thought DeepLabCut would give me better results if I could train the network on all views at the same time. My hypothesis is that, this way, the network will learn the correlations between the positions of the different features across the camera views.

Challenges

  • Labeling the different features correctly (especially the legs, which might end up mixed up)
  • I currently track each feature (e.g. leg tips) separately for each view (e.g. front leg right #1, front leg right #2, and front leg right #3)
  • I am using DeepLabCut 2.1.8
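The per-view naming scheme above can be generated programmatically rather than typed by hand into the config's bodyparts list. A minimal sketch (the exact feature names and the view count are assumptions based on the description above):

```python
# Generate per-view bodypart names for a DeepLabCut config's "bodyparts" list.
# The feature set below is illustrative; adapt it to your actual landmarks.
features = [
    "front_leg_right_tip", "front_leg_left_tip",
    "mid_leg_right_tip", "mid_leg_left_tip",
    "hind_leg_right_tip", "hind_leg_left_tip",
    "head_tip", "abdomen_tip",
    "wing_right_hinge", "wing_right_tip",
    "wing_left_hinge", "wing_left_tip",
]
n_views = 3  # one suffix per camera view, matching the "#1/#2/#3" convention
bodyparts = [f"{feat}_{view}"
             for view in range(1, n_views + 1)
             for feat in features]
print(len(bodyparts))  # 12 features x 3 views = 36 labels
```

Keeping the naming systematic like this makes it easier to later split predictions back out per view for triangulation.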

Question

  • I already labeled 150 images and trained a network (ResNet-50). It gave me relatively good results, but the accuracy can definitely still be improved.
  • So I am wondering: is there any way to give 3D information to DeepLabCut? (I have my own calibration data.)
  • Is it indeed better to train the network on all 3 views at the same time?
  • Would it be possible to constrain the skeleton of each view to its own part of the image?
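Since calibration data is already available, one option (independent of how the network is trained) is to triangulate the matched 2D detections from the three views into 3D after tracking. A minimal linear DLT sketch, assuming you can express your calibration as one 3x4 projection matrix per camera:

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Linear DLT triangulation of one landmark seen in N views.

    proj_mats: list of 3x4 camera projection matrices (from calibration).
    points_2d: list of (x, y) pixel coordinates, one per view, expressed in
               each camera's ORIGINAL frame (undo the crop/stitch offsets first).
    Returns the 3D point as a length-3 array.
    """
    A = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous point.
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    A = np.asarray(A)
    # Homogeneous least squares: the smallest right singular vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

One caveat grounded in the setup described above: because the images were cropped and stitched, the tracked 2D coordinates must be shifted back by each crop's offset into the original camera frames before triangulating.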