Hi everyone,
I would like to ask your advice on how to accomplish the following task in Napari in the most efficient way:
- During an experiment, a machine records images of size 64x10240 (uint16). The number of recorded images depends on the duration of the experiment: it could be hundreds, thousands, tens of thousands, or hundreds of thousands of images. All the recorded images are stored in a specific folder as follows:
/experiment1/image000000000.tif
/experiment1/image000000001.tif
/experiment1/image000000002.tif
/experiment1/image000000003.tif
/experiment1/image000000004.tif
.
.
.
/experiment1/image000000100.tif
.
.
.
/experiment1/image000001000.tif
.
.
.
I would like to address the following use-cases, if possible, using Napari:
- Right after the experiment is finished, I would like to visualize the whole data set as fast as possible, to check that everything was acquired as expected during the experiment (visual inspection to detect errors). Even though the machine acquires one image (64x10240, uint16) at a time, rendering one single image is not particularly interesting (the resolution of the image is 0.1 mm, therefore 64 rows represent only 6.4 mm, but we are interested in seeing "big objects", bigger than 6.4 mm). Therefore, if possible, I would like to:
- Render not one, but a group of images at a time, for example 10 images at a time, which would correspond to 64 mm, and explore them using a slider (each time I move the slider I would render a single "medium" image of size 640 x 10240 instead of a small one of size 64 x 10240).
I have tried the following beginner approach, but Napari shows me only a single image at a time:
import napari
from dask_image.imread import imread

# Lazily read all tif files into one dask array:
# images -> dask.array<concatenate, shape=(398, 64, 10240), dtype=uint16, chunksize=(1, 64, 10240), chunktype=numpy.ndarray>
images = imread("/experiment1/*.tif")

viewer = napari.Viewer()
viewer.add_image(images)
napari.run()  # not needed when running interactively (e.g. in IPython)
When I execute this code, Napari lets me see only a single image at a time, and it also lets me explore the images using a slider (the slider goes from 0 to 397, because shape=(398, 64, 10240)).
What would you suggest I change if I want to visualize, let's say, 10 images at a time (that is, a single image of size 640 x 10240) instead of a single image of size 64 x 10240?
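For reference, this is roughly what I had in mind, although I am not sure it is the right approach (the group size of 10 and dropping the few leftover images so the reshape is exact are just my assumptions):

import napari
from dask_image.imread import imread

images = imread("/experiment1/*.tif")              # shape (398, 64, 10240)

group = 10                                         # images per slider step (my assumption)
n_groups = images.shape[0] // group                # 398 // 10 = 39

# Trim the remainder, then stack every 10 consecutive images into one 640 x 10240 frame.
# grouped -> shape (39, 640, 10240), so the slider goes from 0 to 38.
grouped = images[: n_groups * group].reshape(n_groups, group * 64, 10240)

viewer = napari.Viewer()
viewer.add_image(grouped)
napari.run()  # not needed when running interactively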
- Instead of exploring groups of images at a time, I would like to be able to assemble and visualize all the images as a single big image. Let's say during the experiment I capture 1000 images of size 64x10240; I would then like to assemble them into a single big image of size 64000 x 10240.
I can accomplish this by executing the following code, but it is extremely slow (around 21 seconds using a standard Napari installation, and around 11 seconds when using export NAPARI_ASYNC=1):
import napari
from dask_image.imread import imread

# Concatenate all rows into a single tall 2-D mosaic:
# images -> dask.array<getitem, shape=(3184, 1280), dtype=uint16, chunksize=(8, 1280), chunktype=numpy.ndarray>
images = imread("/experiment1/*.tif").reshape(-1, 10240)

viewer = napari.Viewer()
viewer.add_image(images)
napari.run()  # not needed when running interactively
What would you suggest so that I can visualize all those images as a single big image and render it as fast as possible?
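One idea I had (but I do not know if it is the recommended way) is to hand Napari a multiscale pyramid built lazily with dask, so that at low zoom only a small downsampled level has to be rendered. The 2x2 downsampling factor, the stopping size of 512, and the fixed contrast limits are arbitrary choices on my side, and I suppose persisting the levels to disk (e.g. as zarr) would avoid recomputing them on every pan/zoom:

import dask.array as da
import numpy as np
import napari
from dask_image.imread import imread

# Lazily assemble the full mosaic into one tall 2-D image.
full = imread("/experiment1/*.tif").reshape(-1, 10240)

# Build coarser levels by 2x2 mean-downsampling until the smallest side is below 512 pixels.
levels = [full]
while min(levels[-1].shape) > 512:
    coarser = da.coarsen(np.mean, levels[-1], {0: 2, 1: 2}, trim_excess=True)
    levels.append(coarser.astype("uint16"))

viewer = napari.Viewer()
# Passing contrast_limits avoids Napari having to inspect the data to guess them
# (assuming the full 16-bit range is acceptable for display).
viewer.add_image(levels, multiscale=True, contrast_limits=[0, 65535])
napari.run()  # not needed when running interactively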
- As a small variant of the previous case, I would like to visualize all those images as a single big image, but in this case I would like to show the user how the canvas gets painted. Does Napari wait until it has received all the dask array chunks and then render the full image in one go, or is it possible in Napari to render each chunk as soon as it is available in VRAM?
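Just to clarify the effect I am after, here is a rough sketch of how I imagine doing it by hand: a background thread computes one block at a time and paints it into a pre-allocated layer. The block size of 100 source images and the use of thread_worker are only my guesses; if Napari can already do this for me, I would much prefer that:

import numpy as np
import napari
from dask_image.imread import imread
from napari.qt.threading import thread_worker

full = imread("/experiment1/*.tif").reshape(-1, 10240)    # lazy tall mosaic

viewer = napari.Viewer()
canvas = np.zeros(full.shape, dtype=np.uint16)            # empty canvas, painted block by block
layer = viewer.add_image(canvas, contrast_limits=[0, 65535])

rows_per_block = 64 * 100                                 # 100 source images per block (my assumption)

@thread_worker
def load_blocks():
    # Compute one block at a time off the GUI thread and hand it over as soon as it is ready.
    for start in range(0, full.shape[0], rows_per_block):
        stop = min(start + rows_per_block, full.shape[0])
        yield start, stop, full[start:stop].compute()

def paint(block):
    start, stop, data = block
    canvas[start:stop] = data
    layer.refresh()                                       # redraw the layer with the newly painted block

worker = load_blocks()
worker.yielded.connect(paint)
worker.start()

napari.run()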
Thank you very much for all your kind help!