I’m using DeepLabCut to analyse an object recognition test. When I labelled the frames, I labelled both the mouse and the objects, because I needed the coordinates of both objects in order to define ROIs for the analysis. However, looking at my results I’ve noticed that labelling both objects introduced some noise. I’ve checked the videos and seen that the labelling fails precisely when the animal is interacting: the mouse label gets lost or falls outside the ROI even though the animal is inside it. I think that having so many points may make the predictions confusing during interactions. That’s why I’d like to know whether there is a way to determine the coordinates of an object in a frame without labelling it, since I need those coordinates to define my ROI. Is there a way? I hope I explained myself.
As such, this should work. Perhaps you could specifically re-label some example frames during interactions where the predictions are wrong; there may not have been enough examples like that in the training set. In other words, just use the active-learning loop: add a few frames and retrain. Let me know if that works (see Fig. 2 & 6 in https://www.nature.com/articles/s41596-019-0176-0).
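On the ROI side: since the objects are stationary, one workaround is to keep the object labels but derive a single fixed ROI per object from the median of its tracked coordinates across all frames, which is robust to the occasional jumps that happen during interactions. A minimal sketch of the idea (the arrays, radius, and function names here are hypothetical illustrations, not DeepLabCut API):

```python
import numpy as np

def static_roi_from_tracks(xs, ys, radius):
    """Estimate a fixed circular ROI for a stationary object from its
    (possibly noisy) per-frame coordinates.

    The median centre is robust to a few frames where the object label
    jumps (e.g. onto the mouse during an interaction)."""
    cx = float(np.median(xs))
    cy = float(np.median(ys))
    return cx, cy, radius

def in_roi(x, y, roi):
    """Check whether a tracked point (e.g. the mouse) lies inside the ROI."""
    cx, cy, r = roi
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

# Hypothetical per-frame object coordinates, including one bad frame
# where the label jumped away from the true object position.
xs = np.array([100.0, 101.0, 99.0, 250.0, 100.5])
ys = np.array([200.0, 199.5, 200.5, 60.0, 200.0])

roi = static_roi_from_tracks(xs, ys, radius=30.0)
print(roi)                        # centre stays near (100.5, 200.0) despite the outlier
print(in_roi(105.0, 195.0, roi))  # mouse point close to the object → True
```

With a fixed ROI like this, a dropped or misplaced object label in a few frames no longer affects the interaction scoring; only the animal’s label needs to be reliable frame by frame.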