Bit of a strange one. I've noticed that changing the batch size during inference has no effect on the time it takes to analyse a video.
Please see this comparison of a batch size of 4 vs. a batch size of 64. I have tried multiple other batch sizes and resolutions too.
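For reference, here's a minimal sketch of how I'm timing the runs. It assumes `analyze_videos` still accepts a `batchsize` override; the config and video paths are placeholders:

```python
import time
import deeplabcut

CONFIG = "/path/to/config.yaml"       # placeholder project config
VIDEOS = ["/path/to/test_video.mp4"]  # placeholder test video

# Time a full analysis pass at each batch size. If the GPU forward
# pass were the bottleneck, wall-clock time should drop noticeably
# as the batch size grows -- but it doesn't.
for batch_size in (4, 16, 64):
    start = time.perf_counter()
    deeplabcut.analyze_videos(CONFIG, VIDEOS, batchsize=batch_size)
    elapsed = time.perf_counter() - start
    print(f"batch size {batch_size}: {elapsed:.1f} s")
```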
I wonder if this is due to a bottleneck elsewhere in the inference code, e.g. CPU-bound video reading/pre-processing rather than the GPU forward pass?
This is on a multi-animal project, but I have observed the same phenomenon on single-animal projects too.
I'll take a deeper look at the code and see if there's anything I can improve for now.
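As a first check, one rough way to see whether video decoding alone is the limit is to time a pure read loop with no network involved (a sketch, assuming OpenCV for reading; this just stands in for DLC's own frame reader):

```python
import time
import cv2

VIDEO = "/path/to/test_video.mp4"  # placeholder test video

# Time pure video decoding with no network involved. If this alone
# takes roughly as long as a full analyze_videos run, inference is
# I/O-bound and the batch size would have no effect, which would
# match what I'm seeing.
cap = cv2.VideoCapture(VIDEO)
n_frames = 0
start = time.perf_counter()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    n_frames += 1
cap.release()
elapsed = time.perf_counter() - start
print(f"decoded {n_frames} frames in {elapsed:.1f} s "
      f"({n_frames / elapsed:.0f} fps)")
```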