Real-time 2D/3D rendering using napari?

Hi,

I’m thinking about possible architectures for a real-time rendering application capable of showing 2D and 3D volumes, and I was wondering whether napari is a viable option as a front end for this app.

I believe napari can currently only render static files, but in theory I can imagine that a plugin (which I would obviously develop) could feed the vispy renderer a stream of data.

My question mainly concerns the viability of the effort: should I invest my limited time in pursuing this option, or do you think that constraining myself to vispy and napari will be more of a burden than an opportunity?

Hey @HagaiHargil!

The sky is the limit here, and you can definitely use napari for this purpose. Depending on your single frame size you may spend more or less time optimising performance, but for moderate volumes napari works well. Have a look at napariboard.py, in which I create a dask array that is updated at each epoch of a PyTorch training cycle:

as well as the live-tiffs example together with the live-tiffs-generator script in the repo.

The key insight is that napari can display dask arrays, that dask arrays can wrap any arbitrary computation, and that you can trigger canvas re-draws (i.e. moving data from RAM to VisPy/GPU buffers) at arbitrary points using the thread_worker API.
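To make that concrete, here is a minimal sketch of the pattern (untested as written; the shapes and the random-number “source” are just placeholders for a real acquisition or training loop):

```python
import time

import numpy as np
import dask.array as da
from dask import delayed
import napari
from napari.qt.threading import thread_worker

shape = (512, 512)
# shared buffer standing in for a real data source (camera, training loop, ...)
buffer = np.zeros(shape, dtype=np.float32)

# a dask array whose single chunk runs arbitrary Python when computed:
# every time napari slices it, it re-reads the current buffer contents
stream = da.from_delayed(delayed(lambda: buffer)(), shape=shape, dtype=np.float32)

viewer = napari.Viewer()
layer = viewer.add_image(stream)

@thread_worker
def produce():
    while True:
        buffer[:] = np.random.random(shape)  # stand-in for new data arriving
        yield  # hand control back so the GUI thread can redraw
        time.sleep(0.05)

worker = produce()
# layer.refresh() re-computes the dask chunk and uploads it to the GPU
worker.yielded.connect(lambda _: layer.refresh())
worker.start()

napari.run()
```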

Let me know if those examples get you on the right track or if you need further clarification!


Thank you very much for these references. I somehow missed the live-tiffs examples; I’ll take a deeper look at them and at the napariboard example. It seems to be a very satisfactory solution.

Hey @jni,

are there any plans to integrate that procedure more deeply into napari? E.g. running plugins in a different thread by default and updating the viewer periodically… Then plugin developers wouldn’t have to deal with that in every plugin :upside_down_face:

Cheers,
Robert

Hi,
If you want to render in real-time using the vispy layer directly, I have a demo here:

It bypasses a lot of napari classes, so it’s probably not best practice, but it demonstrates manipulating the GPU texture buffer to enable real-time rendering of data as it comes in.
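For reference, the shape of the idea in pure vispy looks roughly like this (a sketch, not the code from my demo; the 30 fps timer and random frames are placeholders for a real stream):

```python
import numpy as np
from vispy import app, scene

canvas = scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
view.camera = scene.PanZoomCamera(aspect=1)

# the ImageVisual owns a GPU texture that we overwrite in place below
image = scene.visuals.Image(
    np.zeros((512, 512), dtype=np.float32), parent=view.scene, cmap='grays'
)
view.camera.set_range()

def update(event):
    # set_data re-uploads into the existing texture, no napari involved
    image.set_data(np.random.random((512, 512)).astype(np.float32))
    canvas.update()

# push a new "frame" roughly 30 times per second
timer = app.Timer(interval=1 / 30, connect=update, start=True)

if __name__ == '__main__':
    app.run()
```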


Awesome @VolkerH! Do you by any chance know whether it’s possible to bypass napari so far that you can hand an OpenCL buffer pointer over to Vispy, which takes an OpenGL pointer? You know, I process on the GPU and Vispy renders on the GPU, so every byte that goes through computer RAM is a detour. The “bypass” could be a shortcut :wink:


I had a similar idea (or the same, depending on whether I’m interpreting your comment correctly), i.e. running GPU processing code on the texture buffer, but I never looked into how you might get a pointer to the texture buffer from vispy so that you can use it in OpenCL. I’m sure the pointer is somewhere in vispy (or one of its dependencies). Whether it is safe to operate on it, and whether the rendering updates as the data changes without further callbacks, is another question. It would be quite fun to see the various image operations live in action as, e.g., a large convolution makes its way through the buffer.


Pretty awesome example @VolkerH! You mentioned there that you downsampled the data in space and time. Was that due to actual performance issues, or were you trying to mitigate them in advance?

Hi @HagaiHargil

There are two reasons I downsampled it:

  • to keep the download of the sample data set reasonable,
  • at the time I created the PR in December, I was working on an ultrabook with a Celeron mobile processor, very little RAM, and no dedicated GPU. Under those circumstances, that is already pushing it.

I had previously run this on a Ryzen workstation with an RTX 2060 GPU on larger volumes. I don’t know exactly how far one can push it.

Here is a variant of the code that live-streams from a webcam: texture live update from webcam · GitHub. (Note that the comments in that gist mostly refer to a previous version of the code and probably don’t make sense in that context.)
I was able to run this on my ultrabook, so I think that in many microscopy scenarios, live volume display of 3D data (maybe 8× the data rate of a webcam) should be feasible with a good workstation.

pyopengl/pyopencl do support interop and direct buffer exchange (though it requires that pyopencl was compiled with GL support). See the API here, and an example here:
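In essence, the pattern is something like this (an untested sketch; `gl_buffer_id` is assumed to be the GL name of a buffer created in the currently active GL context, e.g. one owned by vispy):

```python
import pyopencl as cl
from pyopencl.tools import get_gl_sharing_context_properties

def process_gl_buffer(gl_buffer_id):
    """Run OpenCL work directly on an existing OpenGL buffer object."""
    platform = cl.get_platforms()[0]
    # a CL context that shares memory with the current GL context
    ctx = cl.Context(
        properties=[(cl.context_properties.PLATFORM, platform)]
        + get_gl_sharing_context_properties()
    )
    queue = cl.CommandQueue(ctx)

    # wrap the GL buffer; no copy through host RAM is involved
    cl_buf = cl.GLBuffer(ctx, cl.mem_flags.READ_WRITE, gl_buffer_id)

    # GL must hand the buffer over to CL before kernels may touch it...
    cl.enqueue_acquire_gl_objects(queue, [cl_buf])
    # ... enqueue kernels that read/write cl_buf here ...
    cl.enqueue_release_gl_objects(queue, [cl_buf])  # ...and back again
    queue.finish()
```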


Awesome @talley, I was almost sure this was possible. Thanks for the pointer! Do you know whether that also works with CUDA? Asking for a friend :wink:

Yep, there is similar functionality in pycuda. API here, and example here.
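Roughly the same shape as the OpenCL version (again an untested sketch; `gl_buffer_id` is assumed to be the GL name of a buffer from the currently active GL context):

```python
import pycuda.driver as cuda
import pycuda.gl as cuda_gl

def process_gl_buffer(gl_buffer_id):
    """Map an existing OpenGL buffer into CUDA, with no host copy."""
    cuda.init()
    ctx = cuda_gl.make_context(cuda.Device(0))  # GL-sharing CUDA context
    try:
        reg_buf = cuda_gl.RegisteredBuffer(gl_buffer_id)
        mapping = reg_buf.map()
        dev_ptr, size = mapping.device_ptr_and_size()
        # ... launch kernels that read/write dev_ptr here ...
        mapping.unmap()  # return ownership to OpenGL for rendering
    finally:
        ctx.pop()
```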


I think that is something we’ll want to offer at some point. We might start by providing it for a more limited set of functionality/operations, as mentioned here: Discussion: Intended use of `experimental_provide_function...` hookspec · Issue #2150 · napari/napari · GitHub, but we’re very open to additional ideas too!
