Reconstruct 3D from 2D images


Hi there.

I have used DeepLabCut to capture motion from 2D images (frames).
DeepLabCut tracks points on 2D frame images, but how can I build a 3D structure from those 2D captures?

Is this possible with the software? If not, could you let me know which program can do it? And if it can, could you let me know how?




Hi @Walter_sh12,

I might be misunderstanding your question/intent, and I am not familiar with DeepLabCut, but generally speaking you cannot reconstruct volumetric (3D) information from a single 2D image if no other data encodes the third dimension. This is true for any software.
If you have a plain 2D image showing a still frame (from a video), a normal camera will not give you the additional information you would need for such a reconstruction.
One example where a reconstruction would be possible: if the pixel intensity served as representative information for the height of each position of a terrain, then reconstructing a landscape from such an image would be an option.
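That intensity-as-height idea can be sketched in a few lines of Python; this is a toy illustration with a synthetic image, not code from any of the tools discussed here:

```python
import numpy as np

# Toy illustration: treat each pixel's intensity as the height (z) of the
# terrain at that (x, y) position, turning one 2D image into a 3D point
# cloud. The "image" here is synthetic.
image = np.arange(16, dtype=np.uint8).reshape(4, 4)

ys, xs = np.indices(image.shape)          # pixel grid coordinates
heights = image.astype(float)             # intensity encodes the z axis
points_3d = np.column_stack([xs.ravel(), ys.ravel(), heights.ravel()])

print(points_3d.shape)  # (16, 3): one (x, y, z) point per pixel
```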



Are you only recording with one camera? As @biovoxxel said, you’ll need more information in order to reconstruct 3D.
A “simple” way to do 3D reconstruction is to use multiple cameras (the number depends a bit on your setup and what you are filming). There are toolboxes available for reconstructing 3D from multiple cameras; our lab is currently looking into argus. The main idea is that the cameras need to have fixed positions and be calibrated in relation to each other, which then makes 3D reconstruction possible.
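The core principle behind multi-camera reconstruction is triangulation from calibrated views. Here is a minimal sketch of linear triangulation (the direct linear transform) with two toy projection matrices; it illustrates the idea only and is not the API of argus or any particular toolbox:

```python
import numpy as np

# Linear triangulation (DLT): each camera contributes two rows to a
# homogeneous system whose null space is the 3D point.
def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point from its projections in two calibrated cameras."""
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean

# Toy setup: identity camera and a second camera shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
h = np.append(X_true, 1.0)
pt1 = (P1 @ h)[:2] / (P1 @ h)[2]
pt2 = (P2 @ h)[:2] / (P2 @ h)[2]

print(np.round(triangulate(P1, P2, pt1, pt2), 3))  # recovers [0.5, 0.2, 4.0]
```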

There are also some pointers on the basic idea of 3D reconstruction in this preprint:
Using DeepLabCut for 3D markerless pose estimation across species and behaviors



Thanks for the good advice!
We are looking for software that reconstructs the 3D structure, and we will follow your suggestion.
But we don’t know whether DeepLabCut can do that. Could you let me know the names of the toolboxes you mentioned?




You can train a single network with DeepLabCut on multiple camera angles, but you need to know the camera placement for proper triangulation (see the preprint mentioned above). DeepLabCut already comes with OpenCV installed, so you can use its camera calibration tools; you can then project the two (or more) camera views into 3D space.

Here is a good notebook that can help guide you:



Hey Mathis, there is no mention of “triangulation” in the preprint mentioned above. I am having trouble finding the relevant function in OpenCV.



This package (which seems to still be under development) has some easy-to-use functions to calibrate cameras and triangulate. I hope to try it this week after I collect some calibration videos. It also has nice functions to run videos in batches (2D).



DLC also runs videos in batches: the function analyze_videos accepts an input folder, so any videos placed there (or in subfolders thereof) can be analyzed automatically.
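A hedged sketch of that folder-based workflow (the paths are placeholders; only analyze_videos and save_as_csv are actual DeepLabCut names here):

```python
from pathlib import Path

# Placeholder paths: point these at your own DeepLabCut project and videos.
config_path = "path/to/your/project/config.yaml"
video_dir = Path("path/to/your/videos")

# Collect every .avi under the folder and its subfolders.
videos = sorted(str(p) for p in video_dir.rglob("*.avi"))
print(f"found {len(videos)} videos")

if videos:  # hand the whole batch to DeepLabCut in one call
    import deeplabcut
    deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)
```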

Here is the script to batch process:

Here is a simple script to run over many projects:



Hello, I’m the developer of anipose. It should be ready to use for 3D tracking with DeepLabCut! I set up the documentation last week, but there may still be some rough edges on first setup.

If you end up using it, please let me know if you have any issues, either through email or by filing a GitHub issue. I will do my best to help out and update anipose as needed to make it work.
