3D functionality from single camera view (containing mirror)

Hi!

After briefly mentioning it in yesterday’s deeplabcut-zoom (thank you again, it was fun!),
here’s my question for the whole community:

I have videos of mice running head-fixed on top of a wheel (based on Richard Warren’s design), with a single camera positioned perpendicular to the mouse/wheel. Through an angled mirror inside the wheel, the resulting video shows the mouse both from the side and from below.

I have successfully used DeepLabCut to train a network and extract the positions of the snout, tailbase and 4 paws (everything x 2, for the side and below perspectives) separately, but it would be much more beautiful - and likely more meaningful - to use that information for 3D pose estimation.

  • I assume I have to create a new 3D project? Can I use my previously labeled images for this?
  • Do I have to calibrate the camera, and how would that differ from calibrating 2 cameras?
  • Or would it work to somehow split the video in half, covering each perspective, and treat the halves as two separate camera views? (A rough sketch of the splitting step is below.)
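
For reference, that splitting step could be as simple as two ffmpeg crop calls; a minimal sketch, assuming the side view fills the top half of the frame and the mirrored view the bottom half (file names are placeholders):

```python
# Hypothetical sketch: crop one recording into two "camera" videos with ffmpeg.
# Assumes the side view is the top half of the frame and the mirrored (below)
# view the bottom half; file names are placeholders.
import subprocess

src = "mouse_wheel.mp4"

# crop=out_w:out_h:x:y, where in_w / in_h are the input frame dimensions
subprocess.run(
    ["ffmpeg", "-i", src, "-filter:v", "crop=in_w:in_h/2:0:0", "side_view.mp4"],
    check=True,
)
subprocess.run(
    ["ffmpeg", "-i", src, "-filter:v", "crop=in_w:in_h/2:0:in_h/2", "below_view.mp4"],
    check=True,
)
```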

In any case, thank you for making DeepLabCut - it’s really cool to work with!

Best,
Sonja

Hi Sonja,

Cool data! Couldn’t you just use the x & y coordinates of the bodyparts in the top (side-view) part of the image and then “look up” the z dimension as the vertical coordinate in the bottom (mirrored) part? You could of course also average the two x coordinates from the top and bottom views, but I’m not sure that is necessary. This would give you 3D coordinates in a reasonable coordinate frame without any triangulation etc. The only problem I see with this approach is if your mirror is not perfectly aligned, but you could factor that effect out by performing e.g. a Gram-Schmidt orthogonalization.
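
A minimal sketch of this idea, assuming a single DeepLabCut output file in which every bodypart appears twice with hypothetical "_side" and "_bottom" suffixes (adjust the labels to match your config.yaml):

```python
# Sketch only (not DeepLabCut's built-in 3D pipeline): assemble pseudo-3D
# coordinates from the side view and the mirrored below view.
import numpy as np
import pandas as pd

df = pd.read_hdf("videoDLC_output.h5")       # placeholder DLC output file
scorer = df.columns.get_level_values(0)[0]   # columns are (scorer, bodypart, coord)

bodyparts = ["snout", "tailbase", "paw1", "paw2", "paw3", "paw4"]  # hypothetical labels

coords3d = {}
for bp in bodyparts:
    side = df[scorer][f"{bp}_side"]      # side view: x along the wheel, y = height
    below = df[scorer][f"{bp}_bottom"]   # mirrored view: x along the wheel, y = depth
    X = (side["x"] + below["x"]) / 2.0   # optionally average the two x estimates
    Y = side["y"]                        # vertical dimension, read off the side view
    Z = below["y"]                       # third dimension, "looked up" in the mirror
    coords3d[bp] = np.column_stack([X, Y, Z])  # (n_frames, 3), still in pixel units
# Image y runs downward, so you may want to flip signs, and if the mirror is
# slightly tilted you can re-orthogonalize the estimated axes (Gram-Schmidt / QR).
```

Note this keeps everything in pixel units; to get metric coordinates you would still need a known length in each view (e.g. the wheel diameter) to scale the axes.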

Cheers,
Alexander