Camera calibration for large test environment


I’ve been following your work for a few years and am excited to finally have a project where I can test it out. I’ve been reading through the documentation for 3D tracking and had a few questions regarding the calibration.

  • Do you have any tips for calibrating a much larger area? I am looking at tracking a drone over a large space (20 x 10 ft).
  • How many depths do you recommend for this full-frame calibration?
  • What depths should be used? On a scale of 0–1, with 0 at the camera and 1 at the maximum tracking distance (e.g., 0.25, 0.5, and 1 for quarter, half, and maximum distance)?
  • Are there any alternatives you could recommend that wouldn’t require a scan over the entire frame? That will be difficult given the scale of the area I am trying to track.


Pawel Jaworski, P.Eng.

The easiest way is to separate the problem into steps. With Argus you can find the camera intrinsics for each camera in one lab session, set up your cameras, and then do a wand calibration to get DLT coefficients (an alternative 3D calibration strategy that is much easier in very large volumes in the field). Then use DeepLabCut to track (train it on the wand ends in one project, and the drone in another), and convert the DeepLabCut tracks back to Argus for the 3D output.
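For context on why the wand route scales so well: the wand calibration yields one set of 11 DLT coefficients per camera, and once you have those, reconstructing a 3D point from matched 2D tracks is just a small least-squares solve per frame. Here is a minimal sketch of that triangulation step (the function name and array layout are my own illustration, not Argus’s actual API):

```python
import numpy as np

def triangulate_dlt(dlt_coefs, uv):
    """Reconstruct one 3D point from 11-parameter DLT coefficients.

    dlt_coefs: (n_cams, 11) array, one row of DLT coefficients per camera
    uv:        (n_cams, 2) array, pixel coordinates of the point in each view

    For each camera, the DLT model
        u = (L1*X + L2*Y + L3*Z + L4) / (L9*X + L10*Y + L11*Z + 1)
        v = (L5*X + L6*Y + L7*Z + L8) / (L9*X + L10*Y + L11*Z + 1)
    rearranges into two linear equations in (X, Y, Z); stacking all
    cameras gives an overdetermined system solved by least squares.
    """
    A, b = [], []
    for L, (u, v) in zip(dlt_coefs, uv):
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.append(u - L[3])
        b.append(v - L[7])
    xyz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return xyz
```

With two or more cameras seeing the wand (or drone) in the same frame, this gives the 3D position; more cameras simply add rows and improve the fit.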

I have done this full workflow using 2 GoPro cameras in the field, flying the wand around attached to a drone, and tracking birds in a filming volume about 250 m x 250 m x 50 m.


Here is the link


This is great, thanks for the quick reply! I’ll have a read through this, as it is all new to me, but it sounds like just what I need.


Thanks, I just fixed it.