Variables extracted from DLC show minimal results: a fault in our procedure, or in how we applied DLC?

Hi,

We recently completed a project analysing how people interact with a new technology that renders digital haptic feedback. To better understand usage, we filmed blindfolded participants' hands while they explored 2-D renderings of mazes on a tablet, before they went on to explore the physical mazes. We have 10 videos per participant, ranging from 20 seconds to 13 minutes per trial (a BIG dataset).

We ran DLC on these videos and the results looked pretty cool at first: visually, we could see clear trends in the x-y coordinates. We then extracted variables such as Euclidean distance, speed, and path length from the x and y coordinates of the first and last videos for the trained and untrained mazes (4 videos per participant), and computed the relative change between the first and last trials, i.e. ((first − last) / first) × 100. Finally, we ran permutation analyses comparing these changes between trained and untrained trials.

To our surprise, we found nothing. We also correlated tablet exploration with behaviour and with LEGO maze reconstruction, and again almost nothing: one correlation between path length and LEGO reconstruction, which disappeared after multiple-comparisons correction. It is possible that we are testing three different constructs, but they should be interdependent. Moreover, while running the experiment I saw clear differences between people trained on one maze versus the other, and we do find significant behavioural differences. So I suspect that either I applied DLC in the wrong way, or I am transforming the data too much.

Some details on our set-up, in case they matter:

- The network was trained on about 270 labelled frames from 7 different participants, with varying lighting, skin colour, etc.
- Because the set-up was mobile, the camera frame was not exactly the same across videos; might this be a problem?
- We tracked only one finger, the one exploring the tablet, even when participants switched fingers (they were allowed to switch because of skin adaptation and tablet heating). The network tracked even these changes well, but should we always track the same finger, or the middle of the hand, for better results?

I strongly believe in your application and its abilities (it is honestly a super cool app) and I want to make the most of it, so any suggestions would be welcome. I have pasted sketches of my extraction and analysis steps below, in case the fault is hiding in one of those transformations.
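For reference, this is roughly what the metric extraction does (a minimal sketch in Python; the file name, the body-part label "finger", the 0.9 likelihood threshold, and the frame rate are placeholders for our actual values):

```python
import numpy as np
import pandas as pd

# DLC writes a CSV with three header rows: scorer, bodypart, coords.
df = pd.read_csv("trial.csv", header=[0, 1, 2], index_col=0)

scorer = df.columns.get_level_values(0)[0]
bodypart = "finger"  # placeholder: our labelled body part

x = df[(scorer, bodypart, "x")].to_numpy()
y = df[(scorer, bodypart, "y")].to_numpy()
lik = df[(scorer, bodypart, "likelihood")].to_numpy()

# Drop low-confidence frames (0.9 is a judgment call; interpolating
# gaps instead would avoid artificial jumps across dropped frames).
keep = lik > 0.9
x, y = x[keep], y[keep]

fps = 30.0  # camera frame rate of our recordings

dx, dy = np.diff(x), np.diff(y)
step = np.hypot(dx, dy)  # frame-to-frame displacement (pixels)

path_length = step.sum()  # total distance travelled
euclidean_distance = np.hypot(x[-1] - x[0], y[-1] - y[0])  # start-to-end
mean_speed = (step * fps).mean()  # pixels per second
```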
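And this is the gist of the relative-change and permutation step (again a sketch: a two-sample permutation test on the difference of group means; if the trained and untrained trials are paired within participants, a sign-flip permutation might be more appropriate):

```python
import numpy as np

def relative_change(first, last):
    # ((first − last) / first) × 100, per participant and metric
    return (first - last) / first * 100.0

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    n = len(a)
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if abs(perm[:n].mean() - perm[n:].mean()) >= abs(observed):
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)

# usage: diff, p = permutation_test(changes_trained, changes_untrained)
```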

Thanks,
Ruxandra.