Saving image from napari

Hi,

I absolutely love napari! I was wondering if it is possible to save/export a napari canvas, with channels 405, 488, 561 and 647, as an RGB image. It would be ideal if it could be exported for further use with matplotlib. Currently, the Viewer.screenshot() functionality works, but it would be great if there were a way to grab just the image, independent of how large or small the napari window is.

Thanks for the help!

Best,

Otto.

You should be able to do this from the File menubar by selecting the layer you are interested in and using the “Save Selected Layer(s)” option with a .png extension. You should also be able to do this from a Python script using the imageio imwrite function on the array, which is accessible as layer.data (you can get the layer with viewer.layers[layer_name], for example).
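For the scripted route, a minimal sketch (assuming a layer named "488" is already loaded in the viewer; the layer name and output filename are just examples):

import imageio

layer = viewer.layers['488']            # look the layer up by name
imageio.imwrite('488.png', layer.data)  # write the raw layer array to disk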

Please let us know if neither of those options works for you and we can try to provide a more detailed answer. It’s great to hear that you’ve loved using napari too!!

Thanks, Nicholas, for your reply! I’m sorry, I should have explained more clearly. What I am after is the ability to get an RGB array from napari that is a composite of a four-channel image, e.g. 405, 488, 561 and 647. Ideal functionality would be something like:

img.shape # e.g. (4, 2048, 2048)

view.add_image(
    img,
    channel_axis=0,
    name=('405', '488', '561', '647'),
    colormap=["cyan", "yellow", "magenta", "red"],
    contrast_limits=[(0, 2000), (0, 4000), (0, 4000), (0, 4000)],
    scale=(0.325, 0.325),
)

img_RGB = view.layers.RGB_image() 
# access RGB composite image array, including added scale bar and using set contrast limits

img_RGB.shape # (2048, 2048, 3)

view.screenshot() gets close to what I would like. However, the returned RGB array depends on the size of the napari window.

Such functionality would be really helpful, as I could then add those images into matplotlib subplots etc. Here’s an example subplot that I would like to generate. At the moment it only shows a single channel, but I would like to add RGB composites.

BMP_day_3_test.pdf (8.4 MB)

Thanks so much for your help! If napari is not currently able to do this, could you possibly recommend any alternative libraries or functions for such a task?

I really appreciate all the awesome work you do with napari! It is really helpful for my research!

Ah sorry! I understand what you want now - we don’t have anything directly in napari to do this yet. The combine_stains method from skimage might be a good starting point for you.
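Just as a hedged illustration of the combine_stains idea (it works on exactly three stain channels plus a 3x3 stain-to-RGB conversion matrix; the rgb_from_hed matrix below is simply one of skimage's built-in examples, not something tuned for fluorescence channels):

import numpy as np
from skimage.color import combine_stains, rgb_from_hed

# three normalized channels stacked on the last axis
stains = np.random.random((256, 256, 3))

# rgb_from_hed is one of skimage's built-in conversion matrices; for your
# fluorescence channels you would supply your own 3x3 matrix of colors
rgb = combine_stains(stains, rgb_from_hed)
rgb.shape  # (256, 256, 3)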

Alternatively, you could do something like this in napari:

import numpy as np

# Create an empty RGBA image
blended = np.zeros(viewer.layers[0].data.shape + (4,))
# Set the alpha channel to max
blended[..., 3] = 1
for layer in viewer.layers:
    # normalize the data by the contrast limits
    normalized_data = (layer.data - layer.contrast_limits[0]) / (
        layer.contrast_limits[1] - layer.contrast_limits[0]
    )
    colormapped_data = layer.colormap.map(normalized_data.flatten())
    colormapped_data = colormapped_data.reshape(normalized_data.shape + (4,))

    # perform an "additive"-style blend
    f_dest = normalized_data[..., None]
    f_source = 1 - f_dest
    blended = blended * f_source + colormapped_data

Assuming you just have your 4 channels loaded in viewer.layers.

If I run this on some example data and then add blended as an extra layer, I get the following.
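For the "add blended as an extra layer" step, something like this should work (rgb=True tells napari to interpret the trailing axis as RGB(A); the layer name is just illustrative):

viewer.add_image(blended, rgb=True, name='blended')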

Hope this helps!!


Hi Nicholas,

Thanks so much for your help! The combine_stains method from skimage looks very interesting! I’ll have a play around with this to see if I can get it working for four-channel images.

Thanks also for your napari code! Unfortunately, on my data, this code generates some strange results, on both four-channel and three-channel images. Here is an example from a three-channel image:

blended:

The original 4-channel image is here:
test_data.tif (16.0 MB)

What example data are you using? It would be great to test out your data to help figure out what is happening when I try with mine.

Thanks so much for the help!


Hmm, I must have something off in my blending formula; maybe @jni or someone else can take a deeper look. This comment contains some useful links to the OpenGL blending that happens under the hood: Blending modes · Issue #20 · napari/product-heuristics-2020 · GitHub

As for data, I’m using this tiff (which I believe comes from the ImageJ demos and is under a CC0 data-sharing license): cells.tif (772.0 KB)

This is a link to the full script image-demos/cells_blended.py at b6dd23f649bb484bd9704828fd0154314b9fd8fd · sofroniewn/image-demos · GitHub

Hope that helps!!


Hi Nicholas,

sorry for the delayed reply - things got busy at work. Thanks so much for your help and for providing the example image data!

I’ve had a go at a different blending approach; as you suggested, the problem may have been the additive blend formula.

def blended_img(viewer):
    import numpy as np

    blended = np.zeros(viewer.layers[0].data.shape + (4,))
    for layer in viewer.layers:
        # normalize the data by the contrast limits
        normalized_data = (layer.data - layer.contrast_limits[0]) / (
            layer.contrast_limits[1] - layer.contrast_limits[0]
        )
        colormapped_data = layer.colormap.map(normalized_data.flatten())
        colormapped_data = colormapped_data.reshape(normalized_data.shape + (4,))

        # simple additive blend of the colormapped channels
        blended = blended + colormapped_data

    blended[..., 3] = 1  # set the alpha channel to 1

    return blended
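For completeness, here is a hedged sketch of dropping the result into matplotlib (my original goal); the np.clip guards against sums above 1 from the additive blend, and "composite.png" is just an example filename:

import numpy as np
import matplotlib.pyplot as plt

blended = blended_img(viewer)

fig, ax = plt.subplots()
ax.imshow(np.clip(blended, 0, 1))  # clip since the additive sum can exceed 1
ax.axis("off")
fig.savefig("composite.png", dpi=300, bbox_inches="tight")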

When using this, I get the expected results with my data:

And on the example data:

Fingers crossed this is correct! Thanks for your help; it would be great if this functionality could be added to napari. I would be very happy to help if I can.


Amazing, that’s great. Agreed this would be nice to expose in napari one day!!