Question about accessing volume rendering coordinates

Hi everyone,

I would like to access the 3D coordinates of the voxels that end up in the maximum intensity projection, starting from their position on the 2D canvas.

From what I found, this is not possible because this information is not stored (napari/volume.py at 45cdcc85f17442dcb8eab7f65311ba21467419c8 · napari/napari · GitHub).

Is this correct?

Since I only need the coordinates for a single position on the 2D canvas, I’m thinking about computing them using the viewer’s camera data, but I don’t understand what the resulting coordinates are when clicking on the 3D rendered view.

Is there a more practical approach to solving this?

Thanks,

Sounds like you’re interested in 3D picking, which isn’t supported in napari yet. See some discussion here and in the links within 3D interactivity / picking · Issue #515 · napari/napari · GitHub. I’m not quite sure what machinery already exists for this in vispy. It is something that has been requested by others too. I’m sure we could figure out something reasonable if you wanted to work on this.

Did I understand your need?

Yes, you did.

I was able to implement this for MIP rendering using the viewer’s camera data, but it still needs some debugging.

When I’m done I’m going to continue the discussion on the GitHub issue. I’m not sure what the most appropriate way to integrate this into napari is, especially because each rendering algorithm requires a different computation.
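For anyone following along, here is a rough sketch of the idea. It assumes you have already derived a ray (start point and direction in data coordinates) from the viewer’s camera; the `mip_pick` helper name and the fixed-step ray march are my own illustration, not napari code. It simply samples the volume along the ray and returns the coordinates of the maximum-intensity sample, which is the voxel a MIP renderer would have displayed at that canvas position:

```python
import numpy as np

def mip_pick(volume, ray_start, ray_direction, n_samples=200):
    """Find the voxel contributing to the MIP along a view ray.

    Hypothetical helper: samples the volume at evenly spaced points
    along the ray (nearest-neighbour lookup) and returns the integer
    coordinates of the maximum-intensity sample, or None if the ray
    misses the volume entirely.
    """
    direction = np.asarray(ray_direction, dtype=float)
    direction /= np.linalg.norm(direction)
    start = np.asarray(ray_start, dtype=float)

    # March through the volume; the ray length is bounded by the
    # volume's diagonal, so this covers any crossing of the box.
    length = np.linalg.norm(volume.shape)
    ts = np.linspace(0.0, length, n_samples)
    points = start + ts[:, None] * direction

    # Round to voxel indices and keep only samples inside the volume.
    idx = np.round(points).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
    if not inside.any():
        return None
    idx = idx[inside]

    values = volume[idx[:, 0], idx[:, 1], idx[:, 2]]
    return tuple(idx[np.argmax(values)])
```

A real implementation would interpolate instead of rounding and choose the step count from the voxel spacing, but this shows the shape of the computation, and why each rendering mode (MIP, average, attenuated, …) needs its own picking rule: the reduction along the ray differs.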

Thanks,


Hmm, OK, let’s pick that up on the issue. I think if we have some code samples working for some of the different modes, we can then piece something together that works more generally. Thanks for picking this up!!