hi - after running the model evaluation, I’m trying to access the pixel errors for individual frames. I’m looking at the generated h5 file, but I can’t find the header names. Would anyone know where the headers are stored, or what the best way is to get the pixel errors for each individual frame? thank you!
I’m not sure what you mean by headers. The evaluation file contains the pixel error per body part and frame in an array-like fashion.
Thank you very much for the reply; I’ll try to be clearer. There is an image of the h5 I’m looking at below. There are 16 body parts tracked, and the h5 lists 48 values for each frame. For body part one, for example, which value should I be looking at for the pixel error?
You should open this in another way, e.g. as shown here: https://github.com/AlexEMG/DLCutils/blob/master/Demo_loadandanalyzeDLCdata.ipynb
then it becomes much more obvious:
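To illustrate what the notebook does: the 48 values per frame are most likely 16 body parts × 3 entries (x, y, likelihood), stored under a pandas MultiIndex of (scorer, bodyparts, coords). A minimal sketch below builds a mock DataFrame with that layout (the scorer and body-part names are placeholders, not your actual file’s names) so you can see how to pull out one body part’s coordinates:

```python
import numpy as np
import pandas as pd

# Mock DataFrame with the MultiIndex layout DeepLabCut h5 files use:
# columns are (scorer, bodypart, coord) with coord in {x, y, likelihood}.
# 16 body parts x 3 coords = 48 columns per frame, which is why the raw
# h5 shows 48 values per row. In practice you would load the real file:
#   df = pd.read_hdf("your_evaluation_file.h5")   # placeholder path
bodyparts = [f"bp{i}" for i in range(16)]  # placeholder body-part names
cols = pd.MultiIndex.from_product(
    [["DLC_scorer"], bodyparts, ["x", "y", "likelihood"]],
    names=["scorer", "bodyparts", "coords"],
)
df = pd.DataFrame(
    np.zeros((3, 48)),
    columns=cols,
    index=["img0000.png", "img0001.png", "img0002.png"],
)

# Select the x/y columns of a single body part across all frames:
xy = df["DLC_scorer"]["bp0"][["x", "y"]]
print(xy.shape)  # (3, 2)
```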
Thanks! What I am looking for is the variance associated with the pixel error between my labels and the model evaluation for each individual frame. I’d like to be able to plot the pixel error and include error bars.
For example, in the image I’ve drawn below, I’m looking for the pixel error in img0000.png for body part Ear_left_1. For img0000.png I labeled Ear_left_1 at about 756,567, and the evaluation put Ear_left_1 at about 769,569. The difference between the two coordinates is about 4. For image0013.png, I get a difference of 5.749393 between my label and the evaluation for Ear_left_1. If I ran deeplabcut.evaluate_network(config_path, comparisonbodyparts=['Ear_left_1'], plotting=True), I assume I get the mean of these difference values in the results_csv/h5. Is there a way to access the difference for each individual frame, like in the table drawn on the right?
So what you want to do is use the .h5 from the analyze_videos output, which has the img index, and compare it to the CollectedData.h5 file, which has the ground truth for each image index. We currently don’t have a function for that - i.e. comparing human labeling to network labeling. (but I can put this on our rolling updates backlog, and/or if you want to make that function, that’s great!)
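A minimal sketch of that comparison for one body part, assuming you have already aligned the two files on the same image index (the file names in the comments are placeholders; here synthetic coordinates stand in for the real data):

```python
import numpy as np

# Sketch: per-frame pixel error between human labels and network
# predictions for one body part. In practice you would load and align:
#   gt   = pd.read_hdf("CollectedData_yourname.h5")   # placeholder path
#   pred = pd.read_hdf("your_analyzed_video.h5")      # placeholder path
# and extract the (x, y) columns for the body part of interest.
gt = np.array([[756.0, 567.0],
               [754.0, 570.0]])     # human labels, shape (frames, 2)
pred = np.array([[759.0, 569.0],
                 [749.5, 567.0]])   # network output, shape (frames, 2)

# Euclidean distance per frame = the pixel error you want to plot:
err = np.linalg.norm(gt - pred, axis=1)
print(err)  # one value per frame; err.mean() / err.std() give error bars
```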
You can just call: pairwisedistances(DataCombined, scorer1, scorer2, pcutoff=-1, bodyparts=None)
In the same way as it is done in the evaluation! https://github.com/AlexEMG/DeepLabCut/blob/master/deeplabcut/pose_estimation_tensorflow/evaluate.py#L224
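To sketch what that computation boils down to (hedged reimplementation, not the library function itself; scorer and body-part names below are placeholders): given a DataFrame combining both scorers’ labels, the per-frame, per-bodypart pixel error is the root of the summed squared x and y differences.

```python
import numpy as np
import pandas as pd

# Placeholder scorers and body parts; DataCombined mimics a DataFrame
# that joins human labels and network predictions on the image index.
scorer1, scorer2 = "human", "network"
bodyparts = ["Ear_left_1", "Nose"]
cols = pd.MultiIndex.from_product([[scorer1, scorer2], bodyparts, ["x", "y"]])
data = np.array([
    [756.0, 567.0, 100.0, 200.0, 759.0, 571.0, 103.0, 204.0],
    [754.0, 570.0, 110.0, 210.0, 754.0, 570.0, 110.0, 210.0],
])
DataCombined = pd.DataFrame(data, columns=cols,
                            index=["img0000.png", "img0013.png"])

# Squared coordinate differences, then Euclidean distance per frame:
sq = (DataCombined[scorer1] - DataCombined[scorer2]) ** 2
RMSE = np.sqrt(sq.xs("x", level=1, axis=1) + sq.xs("y", level=1, axis=1))
print(RMSE)  # rows = frames, columns = body parts, values = pixel error
```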