Does "Toggle grid view" create multiple viewboxes?

Hi everyone,

I would like to learn the inner workings of how napari renders content on the canvas, mainly how napari renders the layers. As an initial step, I would like to understand what happens when I click on the "Toggle grid view" icon. When I click on that icon, all the layers appear on the canvas, so,

  • does this action produce multiple viewboxes (one for each layer) and then plot each layer within its corresponding viewbox? Or
  • does this action just plot all the layers within a single viewbox?
  • Is there any "low-level" function in napari that allows me to create a viewbox, render something to it, and then add it directly to the canvas? I would like to create something similar to "subplots" in napari, where I can manipulate each subplot independently (it would also be nice to be able to link the different subplots if necessary :nerd_face: )

Thank you very much for all your help! :wink:

The grid button is a QtGridViewButton which, when clicked, essentially just toggles viewer.grid.enabled, where viewer.grid is an instance of GridCanvas. That GridCanvas object emits events whenever any of its properties change. The ViewerModel listens to those events (set up here), and one of its reactions is to call the viewer._on_grid_change method. And, you'll notice, all that method does is translate the position of each layer in the viewer (if grid mode is enabled), based on the GridCanvas.position method.
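To illustrate that chain, here is a minimal sketch of the toggle → event → callback pattern. The class names below are simplified stand-ins invented for this example, not napari's actual classes, and the layout logic is deliberately naive (a single row):

```python
class GridModel:
    """Stand-in for napari's grid model: holds state, notifies listeners."""

    def __init__(self):
        self._enabled = False
        self._callbacks = []

    def connect(self, callback):
        self._callbacks.append(callback)

    @property
    def enabled(self):
        return self._enabled

    @enabled.setter
    def enabled(self, value):
        self._enabled = value
        for cb in self._callbacks:  # "emit" the change event
            cb()


class Viewer:
    """Stand-in viewer: listens to grid events and translates each layer."""

    def __init__(self, n_layers, layer_shape=(512, 512)):
        self.grid = GridModel()
        self.grid.connect(self._on_grid_change)
        self.layer_shape = layer_shape
        # each layer has a translation offset, initially all at the origin
        self.translations = [(0, 0)] * n_layers

    def _on_grid_change(self):
        height, width = self.layer_shape
        for i in range(len(self.translations)):
            if self.grid.enabled:
                # lay the layers out left-to-right in a single row
                self.translations[i] = (0, i * width)
            else:
                self.translations[i] = (0, 0)


viewer = Viewer(n_layers=3)
viewer.grid.enabled = True   # the "Toggle grid view" click
print(viewer.translations)   # → [(0, 0), (0, 512), (0, 1024)]
```

The key point the sketch tries to capture: the button mutates a model property, the property setter emits an event, and the viewer's handler responds by moving layers, not by re-rendering them.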

soo…

unfortunately nope, not yet.

yep. we still only have a single view canvas.

There is tremendous demand for this sort of thing! So we want it too :slight_smile: see the following issues for some discussion:

Thank you very much for your kind reply @talley :wink:.

If I understood correctly, in napari we only have one VispyCanvas (which inherits from VisPy's SceneCanvas), and within the canvas we have only one single VisPy ViewBox, and when we click on the grid button all the layers get plotted within that ViewBox, am I right? This ViewBox has only one associated camera, which in 2D mode corresponds to a PanZoom camera. Therefore, when I click on the toggle button and then do a mouse-wheel zoom over the canvas, among all the layers currently rendered in grid view, only one of them truly gets zoomed in (the zoom target being the layer where the mouse is currently located), is this correct?

When I took a look at the _on_grid_change method, I saw a call to _subplot there. In this "subplot" function I was expecting to see the actual code that renders the image, but instead I saw that this method focuses on configuring some parameters of a layer. Could you please point me to the code that does the actual "plotting" of the image when we go to grid view mode?

Thanks for pointing me to issue 760 and issue 561, very interesting! But after reading these issues I got very confused by the terminology :innocent:. So I went back to revisit this excellent SciPy talk, where Almar Klein, one of the founding fathers of VisPy, explains the main concepts behind VisPy, and then I went to check this VisPy example. In this example we can see one SceneCanvas that contains a grid of ViewBoxes, each viewbox has its corresponding camera, and according to this code, these cameras can be linked.

My confusion is the following:

  • why do these issues mention multi-canvas as the main way to realize the "subplot" functionality?
  • when they mention "multi-canvas", do they mean multiple ViewBoxes?

Please let me give you a little bit more context by explaining the use case I have in mind: let's say we have 3 layers (all of them the same size), which correspond to 2D images of the same object captured by 3 different sensors (e.g., x-rays, an RGB camera, and a depth camera). When I click on the grid button, I would see the 3 images next to each other. Then, when I do some mouse-wheel zoom-in on one of the images, I would like to see the exact same area get zoomed in on the other two images as well. At the moment, this is not possible in napari because we only have one single ViewBox (where all the layers get rendered next to each other) with one associated camera.

Please bear with me, I am just brainstorming here :nerd_face:. Taking some inspiration from the previous example, I think we can accomplish the desired functionality described in this use case by using a multi-ViewBox approach instead of a multi-canvas approach: we already have one SceneCanvas, so what if we create one ViewBox for each image, each one having a PanZoom camera (in the 2D case), and then link the cameras? This way, if we zoom in on one image, this would also zoom in on the other two images. What do you think about this approach? Is there any disadvantage of this approach with respect to the multi-canvas approach described in the issues?
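As a toy model of that linked-camera idea, the sketch below uses plain Python stand-ins rather than the real vispy API (vispy cameras do expose a link() method, as in the example mentioned above, but the class here is invented for illustration). The essence is that a zoom on one camera is broadcast to its linked peers:

```python
class PanZoomCamera:
    """Minimal stand-in for a 2D camera: a zoom factor, a center, and links."""

    def __init__(self):
        self.zoom = 1.0
        self.center = (0.0, 0.0)
        self._linked = []

    def link(self, other):
        # symmetric link between two cameras, loosely modelled on
        # vispy's camera.link()
        self._linked.append(other)
        other._linked.append(self)

    def set_zoom(self, zoom, center):
        self.zoom = zoom
        self.center = center
        for cam in self._linked:
            # copy state directly (no event) to avoid infinite recursion
            cam.zoom = zoom
            cam.center = center


# one camera per hypothetical viewbox: x-ray, RGB, and depth images
xray, rgb, depth = PanZoomCamera(), PanZoomCamera(), PanZoomCamera()
xray.link(rgb)
xray.link(depth)

xray.set_zoom(4.0, center=(100.0, 50.0))  # mouse-wheel zoom on the x-ray view
print(rgb.zoom, depth.center)             # → 4.0 (100.0, 50.0)
```

Note this toy only links pairs through the camera that zoomed; the real vispy linking machinery handles propagation between all linked cameras.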

Thank you very much for your attention, and please excuse me for the long text :blush:.


Wow, @jmontoyam! That’s some very impressive detective work! As always, I might add! :blush:

could you please point me to the code that does the actual “plotting” of the image when we go to Grid view mode?.

This doesn’t actually do any new plotting, which is why you couldn’t find it: all it does is change the transforms on each layer so that they are spread out on a grid instead of being on top of each other. The rendering doesn’t change at all! One canvas, one scene, one viewbox: just that the layers get spread out in space.
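To make "just changing the transforms" concrete, here is a hedged sketch of the kind of arithmetic involved. This is illustrative Python, not napari's actual GridCanvas.position implementation: the real grid-shape logic may differ, but the idea is that each layer's data is untouched and grid mode simply assigns it a translation derived from its index:

```python
import math


def grid_position(index, n_layers):
    """Return the (row, column) cell for a layer in a near-square grid.

    Illustrative only: napari's real GridCanvas has its own logic for
    choosing the grid shape.
    """
    n_columns = math.ceil(math.sqrt(n_layers))
    return divmod(index, n_columns)


def grid_translate(index, n_layers, layer_shape):
    """Translation (in data coordinates) that spreads out layer `index`."""
    row, col = grid_position(index, n_layers)
    height, width = layer_shape
    return (row * height, col * width)


# 4 layers of 256x256 pixels end up in a 2x2 grid:
for i in range(4):
    print(i, grid_translate(i, n_layers=4, layer_shape=(256, 256)))
```

Toggling grid view off just sets every translation back to (0, 0), stacking the layers on top of each other again; nothing is re-plotted in either direction.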

I haven't followed the technical details of the "multicanvas" discussion very closely, but I think you are right that the language in those issues is a bit muddled and doesn't follow the VisPy Canvas/Scene/ViewBox vocabulary, and you are probably also right that one canvas with multiple viewboxes could be used to implement at least some of the use cases we're interested in!

Having said that, it’s not trivial to hook things up to our napari architecture as it stands now. Specifically, it would not be trivial to show only a subset of layers in each viewbox. But I think it’s worth aiming for that.

At any rate, this is definitely the sort of discussion that might benefit from a higher-bandwidth conversation over video. Do you think you would like to come to one or more of our developer/community meetings? These are announced on the #dev-meeting channel on our Zulip chat room. We have one tomorrow (Thursday) at 4pm Pacific time (America/Asia-friendly time), and every other week on Wednesdays we have one at an Atlantic-friendly (America/Europe) time (8:30am Pacific) and one at a Eurasia-friendly (Europe/Asia) time (9am Central European time). Tomorrow's weekly meeting is probably a really good one to come to because Nick is planning to give a (fast) overview of napari architecture to some devs from Quansight who are joining the team!

Thank you very much @jni for your kind reply! :wink:

I am really interested in understanding the napari rendering architecture :nerd_face:

Let’s go a step back: Let’s say I have 10 image layers and I am not in “grid view mode”.

  • Are all of these 10 layers currently "living" in the SceneGraph, each one within a ViewBox, but with only one of them visible as the top-level viewbox? If so, when I click on the "eye icon" (layer visibility), this just triggers an event that changes the top-level viewbox, am I right?

  • If the previous mental model is not correct, what about this one: there is one and only one
    viewbox at any time, and this viewbox has one and only one child (the currently visible image layer).
    If so, when I click on the "eye icon" (layer visibility), this just triggers an event, and within this event the currently visible image layer is removed from the viewbox and another image layer gets added to it, am I right? In this case, every time I click on the "eye icon", am I triggering an OpenGL draw call to render the newly visible image? Or, every time we add an image layer, do we trigger an OpenGL call to render the image to a framebuffer object, so that when we click on the layer visibility icon, we just trigger an event that takes one of those 10 framebuffer objects and adds it to the viewbox?

  • Could you please point me to the code where napari calls the vispy function to render an image layer?

  • I would really love to get to know the napari rendering architecture. Do you guys have any document or recorded video where I can find this information?

Thank you very much for inviting me to the napari dev meetings. Can I join even if I am not a core developer? (I have not contributed a single line yet, only one tiny bug report :innocent:.) If time permits, I would really like to join the dev meetings, but today's (Thursday) meeting is not possible for me, because according to Google, 4pm Pacific time is 1am Belgium time. If it is ok with you guys, could you please record the video? That would be an excellent resource for getting to know the napari architecture ;).

Hey!

Yeah, I forgot you were in Belgium until after I posted. =) Not to worry, hopefully we can see you this coming Wednesday, 9am Belgium time? =) Absolutely, non-devs are very welcome! Our favourite meetings are when members of the broader community show us cool stuff they were able to do with napari. =) But even just asking questions is great!

Regarding your architecture questions:

  • No, all the layers are actually visible in the same viewbox: you can tell this by changing the opacity or blending of the topmost layer — the other ones are still there. It’s just that they have a “z-ordering” in how they are rendered.
  • As above: they are all children of this viewbox and are all being rendered, except the ones where visible=False (the eye icon).
  • This one is a bit tricky, but the short answer is that the code you're after lives in napari/_vispy/vispy_image_layer.py. Briefly, we have an Image class that holds our data (nD), but we can only display 2D or 3D slices of the data. So when we change the slider/slice, we trigger some events, and those events are hooked up to the vispy class I just linked to. So, if we e.g. change the slider position, we want to change the data in the image node, and _on_data_change() is called, which changes the texture being displayed. When you change whether something is visible, we change the corresponding attribute on the vispy node here. There are a few complications to all of the above because of async rendering, but hopefully this gets you oriented.
  • We don’t have this doc, yet. We definitely should work on it. :grimacing: It’s actually something we discuss often but haven’t got around to yet.
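A stripped-down model of that model→view hookup might look like the sketch below. The names are invented for illustration (the real classes live in napari/_vispy/), and the "node" here is just an attribute standing in for a real vispy scene node:

```python
class ImageLayer:
    """Model side: holds data plus a `visible` flag, and emits events."""

    def __init__(self, data, visible=True):
        self.data = data
        self._visible = visible
        self.events = {"visible": [], "data": []}

    def _emit(self, name):
        for callback in self.events[name]:
            callback()

    @property
    def visible(self):
        return self._visible

    @visible.setter
    def visible(self, value):
        self._visible = value
        self._emit("visible")


class VispyImageLayer:
    """View side: mirrors the model onto a (stand-in) vispy scene node."""

    def __init__(self, layer):
        self.layer = layer
        self.node_visible = layer.visible  # stand-in for node.visible
        layer.events["visible"].append(self._on_visible_change)

    def _on_visible_change(self):
        # in napari this sets the attribute on the real vispy node, which
        # decides whether the node gets drawn at all
        self.node_visible = self.layer.visible


layer = ImageLayer(data=[[0, 1], [2, 3]])
vispy_layer = VispyImageLayer(layer)
layer.visible = False          # clicking the eye icon
print(vispy_layer.node_visible)  # → False
```

So clicking the eye icon never adds or removes anything from the scene graph: it flips a flag on the model, the event fires, and the view object toggles visibility on the node it already owns.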

I’m not sure whether we’ll be able to record the meeting tomorrow and anyway it might amount to not much more detail than what I wrote above. We can schedule another more polished one anyway, don’t worry. =)

Thank you very much for your kind reply @jni!

Yes! I will join you guys next Wednesday! :wink: I really want to learn the inner workings of napari! :nerd_face: