Napari issues (it works but no GPU and the GUI is all squashed!)

Hi, so I've got napari working in JupyterLab, but I seem to be running into a couple of issues…

The GUI is all squashed and I can barely read the text or see any of the options (any idea how to make the window and the buttons resize automatically?)

There’s also an error in the command line of the window (which I notice is loaded as a Spyder window).

I can load an image into the viewer, but it seems to use mostly my CPU rather than my GPU (judging from Task Manager)… The file is only about 2.8 GB. There’s a lot of empty space if you look at the image, but it’s strange that the GPU isn’t used.

When I try to use a Dask array from a zarr file (42 GB), the whole thing hangs and just doesn’t do anything because of a memory stall… The error isn’t shown below, but I can recreate it if needed. If this is too big for napari, that’s fine; I’ll have to use matplotlib to process the image and look at it later once it’s segmented.


My code is very simple, and I saw comments about 100 GB files being read with napari, so unless that was on something more powerful than my laptop, I’m not sure what to do.

Laptop specs are:
{"platform": "Windows", "platform-release": "10", "platform-version": "10.0.18362", "architecture": "AMD64", "processor": "Intel64 Family 6 Model 158 Stepping 9, GenuineIntel", "ram": "16 GB"}
with an Nvidia GeForce GTX 1060 6GB.

Any help would be appreciated!

I should add that I can take one slice and view it using matplotlib. I was hoping to use a viewer to look at the stack as I process the images. Otherwise I’m going to have to go back to my old plan of splitting everything, processing it all in ImageJ, and then trying to restitch. I should also mention that this data is one array that needs to be segmented together, so it would be really helpful to get this going.

One last thought: I have a GPU, as mentioned above. I also have an eGPU (the GPU on it isn’t as good). Is there a way to point napari to a specific GPU, or even spread the work across both?

OK, maybe that’s more of an aside than the original problem, but I’d be interested to hear your thoughts!

Hi @Sh4zKh4n, and thanks for your interest in napari!

This is definitely not too big for napari! What’s happening is that the zarr lazy loading we do to check for image pyramids is failing right now. See this issue. It turns out it’s easier to accidentally instantiate a complete zarr array in memory than a dask array! The workaround is to load the array not with zarr.open but with dask.array.from_zarr. Of course, we will work on fixing the situation for zarr.open arrays directly.


Aaaand I just looked at your code more closely, and I see that you’re using da.from_zarr already! :man_facepalming: So this is a different problem. My next guess is that the chunking is the problem. Currently we calculate the contrast limits by looking at the top, middle, and last slices, which in your case results in 12 GB being loaded! Can you try view_image(..., contrast_limits=[0, 255]) and see whether loading becomes zippy again? Even then, it’s still 4 GB of data being loaded per slice (chunk group), which isn’t going to be super zippy. If your primary use is looking at slices, I’d recommend rechunking to e.g. (20, 2051, 2051), which will load about 100 MB at a time, which is much friendlier!
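Put together, the two suggestions might look like this (the array here is a zero-filled stand-in for your volume, and the napari call is commented out since it needs a GUI):

```python
import numpy as np
import dask.array as da

# Stand-in for the real volume: uint8, chunked in thick z-blocks.
stack = da.zeros((100, 2051, 2051), dtype=np.uint8, chunks=(100, 2051, 2051))

# Rechunk to ~20 slices per chunk, roughly 84 MB per chunk.
thin = stack.rechunk((20, 2051, 2051))
print(thin.chunksize)  # (20, 2051, 2051)

# viewer = napari.view_image(thin, contrast_limits=[0, 255])
# Passing contrast_limits explicitly skips the sample-slice min/max scan.
```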

This is also a known problem; see this issue. Sorry about that! It appears that some of our Qt configuration isn’t interacting well with HiDPI (aka retina) display scaling on some machines. Unfortunately we don’t yet have a workaround for this, other than turning off scaling on your display, which is probably not something you want to do (everything will be tiny). We’ll make it a priority to fix this soon!

We still don’t have any good leads on this I’m afraid… Follow progress on this issue. As I understand it though, it does not affect functionality?

I’m not sure; perhaps your eGPU is not being detected by OpenGL. Having said that, only rendering (the stuff in the black canvas) is done on the GPU; things like IO (obviously), thumbnail drawing, and slicing are done on the CPU, but they are generally not limiting.

Would you mind copying the contents of the Help > napari info window here?

Thanks again! I hope the comment about rechunking at least is helpful :grimacing:, please bear with us as we work on the other issues!


Hi @jni, thanks for the reply. I’ll try those options tomorrow, though I think you got my first and last points mixed up; sorry for making it confusing. Let’s start with the eGPU question: that was an aside and isn’t really important at the moment, so we can come back to it.

The main issue is that when I look at Task Manager, I can see all this memory and CPU being used while the GPU sits idle… even in the case of the smaller 2 GB NumPy array image. All the work is done by the CPU and motherboard RAM instead of the GPU. I’ll send a screenshot of Task Manager tomorrow when I reattempt it.

I’ll send the help info to you tomorrow morning. The array I want is chunked in 1.05 GB chunks, but I will rechunk it, which should work and might actually be helpful when I get to registration. I’ve got six of these arrays, and they make up a bigger one once the overlap is registered, so having an easy way of looking at the data like napari would be really helpful!

Well, I think this is expected… You are displaying a 2D slice of shape what, 2K x 2K? i.e. 4 MB? That’s not going to strain the GPU in the slightest. All of the logic for slicing through a volume runs on the CPU. Only when you switch to 3D rendering would you expect to push the GPU.

Perhaps you are expecting that we “preload” the 2 GB into GPU memory? That is an optimisation that we might consider some day, but right now, lazy loading was more important, so we chose to instantiate only one slice at a time, and we haven’t found that slicing is so slow that we feel it is a bottleneck. Please let us know if your experience is different.
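A quick way to see this, with made-up sizes for illustration:

```python
import numpy as np
import dask.array as da

# A lazy 4 GB-scale volume (zeros, so nothing is actually stored);
# the shape and chunking here are illustrative only.
stack = da.zeros((1000, 2048, 2048), dtype=np.uint8, chunks=(1, 2048, 2048))

# Viewing one plane materialises a single ~4 MB slice, not the 4 GB volume.
one_slice = np.asarray(stack[500])
print(one_slice.nbytes)  # 4194304 bytes, i.e. ~4 MB
```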

So I had a go at this, @jni, and unfortunately hit the same problem this morning: there’s insufficient memory to load the file, even though it’s a dask array and it’s chunked in the z direction.

“You are displaying a 2D slice of shape what, 2K x 2K? ie 4MB?”

My 2K x 2K 2D slice is more like 130 MB; see the chunk information from Dask (screenshot). The total array is 43.61 GB, as you can see. I chunked the data from the source and used auto selection on the z axis, as dask can determine a 128 MB chunk size based on the other dimensions (2051 x 2051). I’ve even tried chunking along the x axis, with the same issue.


When I load napari, as you can see from Task Manager:


My RAM and CPU are being absolutely hammered, which is weird considering the napari GitHub says they’ve tested this with a 100 GB file.

The code I wrote is as follows:



import numpy as np
import scipy
import matplotlib 
import matplotlib.pyplot as plt
%matplotlib inline
import skimage
import dask
import dask.array as da

plt.rcParams["figure.figsize"] = (15,15)

%time daZ_Col3_load = da.from_zarr('XCT/Col3_chXY_z1010367x2051y2051.zarr')

daZ_Col3_load 

daZ_Col3_load.shape # check shape ~ output is in z, x, y axis 

%time plt.imshow(daZ_Col3_load[5184], cmap='gray'); plt.colorbar() # load middle slice and check time

%gui qt5
import napari

%time viewer = napari.view_image(daZ_Col3_load)
The error output is below:
---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)
<timed exec> in <module>

~\Anaconda3\lib\site-packages\napari\view_layers.py in view_image(data, channel_axis, rgb, is_pyramid, colormap, contrast_limits, gamma, interpolation, rendering, name, metadata, scale, translate, opacity, blending, visible, path, title, ndisplay, order)
    118         blending=blending,
    119         visible=visible,
--> 120         path=path,
    121     )
    122     return viewer

~\Anaconda3\lib\site-packages\napari\components\viewer_model.py in add_image(self, data, channel_axis, rgb, is_pyramid, colormap, contrast_limits, gamma, interpolation, rendering, name, metadata, scale, translate, opacity, blending, visible, path)
    507             opacity=opacity,
    508             blending=blending,
--> 509             visible=visible,
    510         )
    511         self.add_layer(layer)

~\Anaconda3\lib\site-packages\napari\layers\image\image.py in __init__(self, data, rgb, is_pyramid, colormap, contrast_limits, gamma, interpolation, rendering, name, metadata, scale, translate, opacity, blending, visible)
    145 
    146         ndim, rgb, is_pyramid, data_pyramid = get_pyramid_and_rgb(
--> 147             data, pyramid=is_pyramid, rgb=rgb
    148         )
    149 

~\Anaconda3\lib\site-packages\napari\util\misc.py in get_pyramid_and_rgb(data, pyramid, rgb)
    191         max_layer = np.floor(np.log2(largest) - 9).astype(int)
    192         data_pyramid = fast_pyramid(
--> 193             data, downscale=downscale, max_layer=max_layer
    194         )
    195         data_pyramid = trim_pyramid(data_pyramid)

~\Anaconda3\lib\site-packages\napari\util\misc.py in fast_pyramid(data, downscale, max_layer)
    231     for i in range(max_layer - 1):
    232         pyramid.append(
--> 233             ndi.zoom(pyramid[i], zoom_factor, prefilter=False, order=0)
    234         )
    235     return pyramid

~\Anaconda3\lib\site-packages\scipy\ndimage\interpolation.py in zoom(input, zoom, output, order, mode, cval, prefilter)
    589     if order < 0 or order > 5:
    590         raise RuntimeError('spline order not supported')
--> 591     input = numpy.asarray(input)
    592     if numpy.iscomplexobj(input):
    593         raise TypeError('Complex type not supported')

~\Anaconda3\lib\site-packages\numpy\core\_asarray.py in asarray(a, dtype, order)
     83 
     84     """
---> 85     return array(a, dtype, copy=False, order=order)
     86 
     87 

~\Anaconda3\lib\site-packages\dask\array\core.py in __array__(self, dtype, **kwargs)
   1313 
   1314     def __array__(self, dtype=None, **kwargs):
-> 1315         x = self.compute()
   1316         if dtype and x.dtype != dtype:
   1317             x = x.astype(dtype)

~\Anaconda3\lib\site-packages\dask\base.py in compute(self, **kwargs)
    163         dask.base.compute
    164         """
--> 165         (result,) = compute(self, traverse=False, **kwargs)
    166         return result
    167 

~\Anaconda3\lib\site-packages\dask\base.py in compute(*args, **kwargs)
    435     postcomputes = [x.__dask_postcompute__() for x in collections]
    436     results = schedule(dsk, keys, **kwargs)
--> 437     return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
    438 
    439 

~\Anaconda3\lib\site-packages\dask\base.py in <listcomp>(.0)
    435     postcomputes = [x.__dask_postcompute__() for x in collections]
    436     results = schedule(dsk, keys, **kwargs)
--> 437     return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
    438 
    439 

~\Anaconda3\lib\site-packages\dask\array\core.py in finalize(results)
    963     while isinstance(results2, (tuple, list)):
    964         if len(results2) > 1:
--> 965             return concatenate3(results)
    966         else:
    967             results2 = results2[0]

~\Anaconda3\lib\site-packages\dask\array\core.py in concatenate3(arrays)
   4311         return type(x)
   4312 
-> 4313     result = np.empty(shape=shape, dtype=dtype(deepfirst(arrays)))
   4314 
   4315     for (idx, arr) in zip(slices_from_chunks(chunks), core.flatten(arrays)):

MemoryError: Unable to allocate array with shape (10367, 2051, 2051) and data type uint8

The only thing I can think to do, then, is to unpack and merge all of the chunks and load them into memory for the viewer. It’s so strange, because when I take just one slice from Dask to plot with matplotlib, there’s no issue at all. Just here…

Any thoughts would be appreciated.

Hi @Sh4zKh4n

There are a couple of things still going on here, unfortunately. One is that right now we automatically compute image pyramids for very large axes unless we are told not to, which is maybe not a sensible default. That is causing us to instantiate all your data, which is causing the hanging you are observing.

As was noted above, we also try to guess your contrast limits. This is a little more robust, but it’s probably still better to explicitly set your contrast limits at the beginning too.

Finally, your chunk size of 130 MB is fine for showing one image, but if you start moving the slider around to browse slices you’ll notice that it’s still a bit slow. If you can chunk each z-slice independently, i.e. (1, 2051, 2051), then I definitely would.

Can you try this minimal example

import dask.array as da
import napari


data = da.random.random((10367, 2501, 2501), chunks=(1, 2501, 2501))
print(data)

with napari.gui_qt():
    viewer = napari.view_image(data, contrast_limits=[0, 1], is_pyramid=False)

and see if it works for you?
You should see

dask.array<random_sample, shape=(10367, 2501, 2501), dtype=float64, chunksize=(1, 2501, 2501), chunktype=numpy.ndarray>

I get pretty good performance on my laptop:

You can then try swapping out the random dask array for your dask array from zarr, and performance shouldn’t change too much. We should be able to get your use case working really well, as it’s something we really want to support, so please bear with us as we iron out some of the API kinks and improve our documentation to make this all clearer.


Hi @sofroniewn,

I did try the contrast change, but I still had the same problem with my data. I then tried the dummy data in a Python console and it worked perfectly! I then tried the console with my data and it performed just as well; the RAM and CPU were not being overworked. This is at least a good workaround: if I need to look at some data, I can use a Python terminal. I didn’t rechunk the data into individual slices and kept the chunk size at ~130 MB, and it worked perfectly fine. The lag on the slider wasn’t far off the random data set’s performance.

I went back to JupyterLab and the code worked perfectly! The tutorial on the napari website advises using the %gui qt5 magic for Qt, which is what I did. I would say the performance was nearly on par with the random data set!

It looks like if you just call it from the IPython console commands in Jupyter, with the contrast limits and is_pyramid=False, the whole thing works perfectly in JupyterLab. I actually accidentally opened two napari GUIs on the same file, and though it’s a bit sluggish (both pointed at the same file), it worked. That’s great! I’m guessing you’d been working on this fix for a while, but thank you for getting it working! That makes a big difference for me. This is already a sub-volume of my total data set; cutting it up further would have been a pain when it came to working out the right image-processing pipeline! Well, goodbye ITKwidgets for now; I’m going to be using napari!

Hi @sofroniewn and @jni, so good news and bad news! I can interact with my data, and I have tried a smaller slice, which is a bit faster. (I originally stumbled onto napari from @jni’s blog.)

One of the cool things I thought about the module was the on-the-fly processing in the Python console! The problem is that when I click on the console button, the console opens but the terminal is missing! This isn’t a showstopper, as you are still working on napari; I can come back and call napari after I’ve processed the data, but it looks like the Python terminal is now missing!

I went back to a Python terminal and tried the minimal example again, but the same thing occurred: the terminal is missing, so I can’t do any further analysis on the data set.

Hi @Sh4zKh4n can you explain a little more what you mean by

when I click on the console button, the console opens but its missing the terminal

and maybe provide a screenshot? With the example I used previously, if I click the console button the terminal appears and I can access the viewer and its data, as in my screenshot below. Does that work for you?

If you launch from an IPython console, clicking the console button does nothing (we should disable it), but you can interact with napari from the IPython console if you launch with something like viewer = napari.view_image(data).

This functionality should be working well now, so if you run into any bugs, we’ll definitely work on fixing them ASAP.

Hi @sofroniewn

When I run my data, I get a very small GUI using JupyterLab.


Once expanded, everything is tiny (I suspect this is because I have a 4K laptop screen).
When I click on the console, the IPython terminal no longer loads. It was loading for me before, when I launched things the previous way.

Unfortunately it’s the same issue with IPython, though as you say, the button shouldn’t do anything there. I just would have thought that from JupyterLab this would be more important, since running the IPython terminal to do some quick analysis on a sample and then returning to Jupyter to record the successful steps would be really beneficial. I hope that makes sense?

For the console: is it possible the console is actually there but you are scrolled way down, so you don’t see the launch message or the typing line? I’m just looking at the scroll bar on the right-hand side of your screenshot. I have noticed before that sometimes when we launch the console you have to scroll up to see the text; it might have something to do with your GUI scaling issue, which we can work on fixing too!

So, while trying to work out some other stuff, I ended up ruining my Anaconda environment badly enough that even with an uninstall, my JupyterLab distribution was bricked! I’ve had to do a complete wipe of the laptop, so it’s back to square one. I couldn’t see the bar you were talking about due to the DPI scaling issue, but I have found a workaround for Windows:

Basically, load a napari window, then right-click its icon => Properties => Compatibility tab => Change high DPI settings => tick “Override high DPI scaling behaviour”, and in the drop-down menu select System (Enhanced). Scaling is now fixed for me.
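On the software side, a possible (untested by me) alternative is asking Qt itself to handle the scaling via an environment variable. This is a standard Qt 5.6+ setting, but whether it fixes napari’s layout on any given machine is an open question:

```python
import os

# Qt reads this variable once at startup, so it must be set before
# Qt (and therefore napari) is imported.
os.environ["QT_AUTO_SCREEN_SCALE_FACTOR"] = "1"

# import napari  # import only after the variable is set
```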

You can see the Jupyter Qt console now working, but I have called this from an IPython kernel and not Jupyter. I have noticed I am getting a warning about the PyQt version, saying the Anaconda version 5.9.2 isn’t tested but 5.12.3 is. Previously I installed 5.12.3, but unfortunately it had issues with the Spyder console that I couldn’t fix. So at least I have a working system after a complete reinstall.

There is still the issue of the Jupyter QtConsole from JupyterLab. I dragged the scroll bar up (now that I can see the scroll bar!), but unfortunately no luck; it is still not loading (left side called by JupyterLab, right side by a Python terminal). As you can see, even with a different PyQt5 install the issue occurs, so there must be a Jupyter-based conflict. I know it’s not a priority right now, since I have a working system from the console. I will just swap back and forth until you have a more stable version.

At least there is a fix for Windows users with DPI scaling issues.
(I know I know I should move to linux… one day, one day)

Cheers, guys, for all the help. I’ll let you know in a week how it’s all going with my data processing!

Ah! Nice find! Thank you for sharing that, I’m sure many people will find it useful. I wonder if the same thing will work on Linux GNOME, but can’t test that right now.

I wouldn’t call it not a priority, it’s just so difficult to reproduce that we can’t effectively chase it up. So, it’s not a priority when weighted by difficulty. We certainly appreciate you sharing all the progress you make, as it helps us to narrow down causes and issues.

For what it’s worth, I typically have an IPython terminal and a napari instance side by side, rather than using the napari built-in console. With the right window layout it is quite pleasant to work this way. =)

Thanks, I look forward to hearing about it!


Hi Juan,

I’m a postdoc at UCSF (next to the Biohub). I’m using a Lenovo X1 Extreme Gen 2 and I have both Windows 10 and Ubuntu installed. Both systems have the scaling issue with napari; I think it’s because of the high-resolution screen. I can bring my laptop to you if needed.

Hi Wanpeng!

Well, I’m in Melbourne, Australia, so it probably won’t be me helping you with this! :joy: Thanks very much for the offer though, it is very helpful! Maybe @sofroniewn can arrange to meet you and explore the issue, though! We know for sure that it’s because of the HiDPI display, but what we don’t know is how to configure Qt properly so that it scales everything correctly on such displays in both Linux and Windows. (It should also do the right thing on low-resolution displays and, importantly, in mixed environments where one display might be high-res and another low-res, such as when connected to a projector.)

Hi @Wanpeng-Wang, thanks for the offer. I am often at the Biohub on Thursdays, so it could be fun to meet up, but I will be travelling for the next couple of weeks before the holidays! I do have another Linux machine at CZI that I can access in the meantime, and we will try to get the scaling issue fixed soon. Sorry about this!