Dear Napari team (and everyone else!),
I am working with @joeljo on using Napari with APR data (GitHub - AdaptiveParticles/LibAPR: Library for producing and processing on the Adaptive Particle Representation (APR).). Briefly, APR makes it possible to represent data more efficiently than with pixels: the sparser the data, the greater the savings in computation time and memory footprint.
As a first step we implemented a simple hack consisting of giving Napari an object with a `__getitem__` method, so that the pixel image is reconstructed on the fly. This lets us benefit from all of Napari's features out of the box (3D rendering, blending, etc.), which is great.
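To make the hack concrete, here is a minimal sketch of the kind of array-like wrapper we mean (the class name is hypothetical, and a NumPy array stands in for the actual APR reconstruction call):

```python
import numpy as np

class LazyAPRArray:
    """Array-like object whose slices are reconstructed on demand.

    In the real wrapper, __getitem__ would call into LibAPR to
    reconstruct the requested pixel region; here a plain NumPy
    array stands in for that step.
    """

    def __init__(self, data):
        self._data = data          # would hold the APR object instead
        self.shape = data.shape    # napari reads these attributes
        self.dtype = data.dtype
        self.ndim = data.ndim

    def __getitem__(self, key):
        # Real version: reconstruct only the requested slice from the APR.
        return self._data[key]

arr = LazyAPRArray(np.zeros((4, 8, 8), dtype=np.uint16))
plane = arr[0]  # napari requests 2D planes like this when slicing in 3D
```

Passing such an object to `viewer.add_image` works because napari only needs `shape`, `dtype`, `ndim`, and slicing.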
Now we are using Napari to display the stitching results of many tiles, and it gets really slow. The conversion from APR to pixels should be fast: it takes Napari more than 1 second to display 16 tiles, even though we are able to load, stitch, and globally optimise the whole 4x4 tile volume in 2 s. I tried to profile Napari to understand where the bottleneck is. I first tried printing elapsed times inside the `__getitem__` method, but nothing appears in either the PyCharm terminal or the Napari terminal. I also tried inspecting the object used to create the layer after calling Napari, but it is unchanged from the input object, as if Napari deep-copies the data under the hood. I had a quick look at the source code but couldn't find anything useful. What's the best way to profile Napari (time and memory)?
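In case it helps frame the question, this is roughly what we have been attempting so far: wrapping the display call in Python's standard `cProfile` to find the hotspots (the `viewer.add_image` line is commented out here and replaced by a dummy call, since the sketch is illustrative):

```python
import cProfile
import io
import pstats

def profile_call(func, *args, **kwargs):
    """Run func under cProfile and return (result, text report of top hotspots)."""
    prof = cProfile.Profile()
    prof.enable()
    result = func(*args, **kwargs)
    prof.disable()
    buf = io.StringIO()
    # Sort by cumulative time and keep the 10 most expensive entries.
    pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(10)
    return result, buf.getvalue()

# In our case it would be something like:
#   _, report = profile_call(viewer.add_image, lazy_apr_array)
# Dummy call so the sketch runs standalone:
_, report = profile_call(sum, range(1000))
print(report)
```

This catches time spent on the calling side, but presumably misses work napari defers to the Qt event loop, which is part of why we are asking about the recommended approach.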
I am also curious about the best way to implement support for APR in Napari. Should we create a custom layer or an entire plugin? I'd be happy to give you more details on APR or on what we are trying to achieve.