I’m just starting to learn how to use DeepLabCut.
I was wondering whether it’s possible to use depth-sensing images (a 16-bit distance value per pixel) for training and as the input. All I’m finding is material about using multiple cameras.
You can certainly use DeepLabCut with depth-sensing cameras. Basically, just use the RGB channels for one network and, e.g., the depth (D) channel for another (or train the same network on both). Only 3 channels are supported as input right now, though. As for the 16-bit data, the code will convert it to 8-bit, which should still give you great results. You could of course also adapt the code for your case. Please let me know if that makes sense.
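To make the suggestion concrete, here is a minimal sketch of how you might prepare a 16-bit single-channel depth frame as a 3-channel 8-bit image before feeding it to a network. This is a hand-rolled NumPy illustration (the function name `depth16_to_rgb8` and the min–max scaling choice are my own, not DeepLabCut's API); DeepLabCut's internal conversion may differ.

```python
import numpy as np

def depth16_to_rgb8(depth):
    """Scale a 16-bit depth map to 8-bit and replicate it to 3 channels.

    Min-max scaling is just one choice; a fixed range (e.g. sensor
    near/far limits) keeps intensities comparable across frames.
    """
    d = depth.astype(np.float32)
    d_min, d_max = d.min(), d.max()
    # Guard against division by zero on a constant-depth frame
    scale = 255.0 / (d_max - d_min) if d_max > d_min else 0.0
    d8 = ((d - d_min) * scale).astype(np.uint8)
    # Stack the single depth channel into a 3-channel image
    return np.stack([d8] * 3, axis=-1)

# Example with a synthetic 16-bit depth frame
frame = np.random.randint(0, 65535, size=(480, 640), dtype=np.uint16)
rgb = depth16_to_rgb8(frame)
print(rgb.shape, rgb.dtype)  # (480, 640, 3) uint8
```

Saving the result as ordinary PNGs lets you use the standard DeepLabCut labeling and training workflow without touching its image-loading code.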