After briefly mentioning it in yesterday’s deeplabcut-zoom (thank you again, it was fun!),
here’s my question for the whole community:
I have videos of mice running head-fixed on top of a wheel (based on Richard Warren's design), with a single camera positioned perpendicular to the mouse / wheel. Through an angled mirror inside the wheel, the resulting video shows the mouse both from the side and from below.
I have successfully used deeplabcut to train a network and extract the positions of snout, tail base and all 4 paws (everything x 2, for the side and below perspectives) separately, but it would be much more elegant - and likely more meaningful - to use that information for 3D pose estimation.
- I assume I have to create a new 3D project? Can I use my previously labeled images for this?
- Do I have to calibrate the camera, and how would that differ from calibrating 2 cameras?
- Or would it work to somehow split the video in half, so that each half covers one perspective, and treat the halves as two separate camera views?
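For what it's worth, the splitting idea from the last point could be prototyped in a few lines of numpy before touching the pipeline itself - here's a rough sketch of what I had in mind (`split_frame` and the split row are made-up names/values, and which half holds the side vs. the mirror view is an assumption about my own footage, not anything DeepLabCut-specific):

```python
import numpy as np

def split_frame(frame: np.ndarray, split_row: int):
    """Split one H x W x 3 frame into the two 'virtual camera' views.

    Assumes the mirror divides the frame horizontally: side view on top,
    bottom-up mirror view below (the actual split row would have to be
    read off the real footage).
    """
    side_view = frame[:split_row]      # direct side view of the mouse
    bottom_view = frame[split_row:]    # view through the angled mirror
    return side_view, bottom_view

# Example with a dummy 480 x 640 RGB frame, split at row 240:
frame = np.zeros((480, 640, 3), dtype=np.uint8)
side, bottom = split_frame(frame, split_row=240)
print(side.shape, bottom.shape)
```

Each half could then be saved as its own video and fed to the 2-camera workflow as if it came from a separate camera - though I imagine the mirror geometry still has to enter somewhere in the calibration.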
In any case, thank you for making deeplabcut, it’s really cool to work with!