Brainreg Tutorial Error in Napari

Hello,

I’m running into an IndexError followed by a MemoryError when I open the test_brain output directory in napari and toggle the allen_mouse_50um layer’s visibility on:


IndexError Traceback (most recent call last)
~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/layers/labels/labels.py in _raw_to_displayed(self=, raw=array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0,…],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32))
704 try:
→ 705 image = self._all_vals[raw]
image = undefined
self._all_vals = array([0. , 0.1210181 , 0.7342237 , 0.35524179, 0.96844739,
0.58946549, 0.21048359, 0.82368919, 0.44470729, 0.06572538,
0.67893098, 0.29994908, 0.91315468, 0.53417278, 0.15519087,
0.76839647, 0.38941457, 0.01043267, 0.62363827, 0.24465637,
0.85786196, 0.47888006, 0.09989816, 0.71310376, 0.33412186,
0.94732746, 0.56834555, 0.18936365, 0.80256925, 0.42358735,
0.04460545, 0.65781104, 0.27882914, 0.89203474, 0.51305284,
0.13407094, 0.74727654, 0.36829463, 0.98150023, 0.60251833,
0.22353643, 0.83674203, 0.45776012, 0.07877822, 0.69198382,
0.31300192, 0.92620752, 0.54722562, 0.16824371, 0.78144931])
raw = array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32)
706 except IndexError:

IndexError: index 1030 is out of bounds for axis 0 with size 50

During handling of the above exception, another exception occurred:

MemoryError Traceback (most recent call last)
~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/_qt/widgets/qt_layerlist.py in changeVisible(self=, state=2)
593 """
594 if state == Qt.Checked:
→ 595 self.layer.visible = True
self.layer.visible = False
596 else:
597 self.layer.visible = False

~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/layers/base/base.py in visible(self=, visibility=True)
365 def visible(self, visibility):
366 self._visible = visibility
→ 367 self.refresh()
self.refresh = >
368 self.events.visible()
369 if self.visible:

~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/layers/base/base.py in refresh(self=, event=None)
927 """Refresh all layer data based on current view slice."""
928 if self.visible:
→ 929 self.set_view_slice()
self.set_view_slice = >
930 self.events.set_data()
931 self._update_thumbnail()

~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/layers/base/base.py in set_view_slice(self=)
791 def set_view_slice(self):
792 with self.dask_optimized_slicing():
→ 793 self._set_view_slice()
self._set_view_slice = >
794
795 @abstractmethod

~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/layers/image/image.py in _set_view_slice(self=)
588 # Load our images, might be sync or async.
589 data = SliceDataClass(self, image_indices, image, thumbnail_source)
→ 590 self._load_slice(data)
self._load_slice = >
data =
591
592 def _load_slice(self, data: SliceDataClass):

~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/layers/image/image.py in _load_slice(self=, data=)
599 if self._slice.load(data):
600 # The load was synchronous.
→ 601 self._on_data_loaded(data, sync=True)
self._on_data_loaded = >
data =
global sync = undefined
602 else:
603 # The load will be asynchronous. Signal that our self.loaded

~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/layers/image/image.py in _on_data_loaded(self=, data=, sync=True)
623
624 # Pass the loaded data to the slice.
→ 625 if not self._slice.on_loaded(data):
self._slice.on_loaded = >
data =
626 # Slice rejected it, was it for the wrong indices?
627 return

~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/layers/image/_image_slice.py in on_loaded(self=, data=)
136
137 # Display the newly loaded data.
→ 138 self._set_raw_images(data.image, data.thumbnail_source)
self._set_raw_images = >
data.image = array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32)
data.thumbnail_source = None
139 self.loaded = True
140 return True # data was used.

~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/layers/image/_image_slice.py in _set_raw_images(self=, image=array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0,…],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32), thumbnail_source=array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0,…],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32))
94 image = np.clip(image, 0, 1)
95 thumbnail_source = np.clip(thumbnail_source, 0, 1)
—> 96 self.image.raw = image
self.image.raw = array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32)
image = array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32)
97
98 # save a computation of view image if thumbnail and image is equal

~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/layers/image/_image_view.py in raw(self=, raw_image=array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0,…],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32))
71
72 # Update the view image based on this new raw image.
—> 73 self._view = self.image_converter(raw_image)
self._view = array([[0.]])
self.image_converter = >
raw_image = array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32)

~/anaconda3/envs/brainreg/lib/python3.8/site-packages/napari/layers/labels/labels.py in _raw_to_displayed(self=, raw=array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0,…],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32))
707 max_val = np.max(raw)
708 self._all_vals = low_discrepancy_image(
→ 709 np.arange(max_val + 1), self._seed
global np.arange =
max_val = 576073699
self._seed = 0.5
710 )
711 self._all_vals[0] = 0

MemoryError: Unable to allocate 4.29 GiB for an array with shape (576073700,) and data type int64

I’ve gotten the MemoryError with both int and float data types. Any insight? I’m an undergraduate student looking to use some of this for a research project, but I’m not proficient enough to really debug this.

Thanks!

Are you loading the data directly into napari (e.g. dragging and dropping), or do you have your own script?

P.S. When you post, could you also tag the post with the specific software you’re using (e.g. brainreg) so that the right person sees your post? I’ve added it to this post. Thanks!

Dragging and dropping at the moment. (Hope to have my own script eventually, but I’ll get there later :slight_smile: )

Thank you for the tag, I’m new to the site. :slight_smile:

Don’t worry, so are we (yours is question number 3)!

How much memory does your machine have?

Could you try installing brainreg into a new conda environment to see if there’s something wrong with the napari installation?

If you can send me the brainreg output directory, I can check nothing has gone wrong there.

It looks like the computer has 16 GB. I’ll try a new conda environment, but to be honest this one was only created today.

Couldn’t upload here but maybe this will work to get the output data?

That output directory looks fine (assuming you aren’t trying to open the .gz compressed files). It opens in my napari, using ~250 MB of memory.

What operating system are you using? It would be great if you could try in a fresh conda environment:

  • pip install napari[all]
  • pip install brainglobe-napari-io

And then try to drag and drop the directory into napari.

I’m using Ubuntu at the moment. I will try a fresh environment.

I set up a fresh environment and am still getting errors, although the visualization doesn’t freeze up like it was doing before:


IndexError Traceback (most recent call last)
~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/labels/labels.py in _raw_to_displayed(self=, raw=array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0,…],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32))
704 try:
→ 705 image = self._all_vals[raw]
image = undefined
self._all_vals = array([0. , 0.1210181 , 0.7342237 , …, 0.42812353, 0.04914162,
0.66234722])
raw = array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32)
706 except IndexError:

IndexError: index 1123 is out of bounds for axis 0 with size 1108

During handling of the above exception, another exception occurred:

MemoryError Traceback (most recent call last)
~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/_qt/widgets/qt_dims.py in _set_frame(self=, axis=0, frame=198)
347 # disable additional point advance requests until this one draws
348 self._play_ready = False
→ 349 self.dims.set_current_step(axis, frame)
self.dims.set_current_step =
axis = 0
frame = 198
350
351 def enable_play(self, *args):

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/components/dims.py in set_current_step(self=Dims(ndim=3, ndisplay=2, last_used=0, range=((0…0), order=(0, 1, 2), axis_labels=(‘0’, ‘1’, ‘2’)), axis=0, value=198)
234 full_current_step = list(self.current_step)
235 full_current_step[axis] = step
→ 236 self.current_step = full_current_step
self.current_step = (198, 0, 0)
full_current_step = [198, 0, 0]
237 self.last_used = axis
238

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/utils/events/evented_model.py in __setattr__(self=Dims(ndim=3, ndisplay=2, last_used=0, range=((0…0), order=(0, 1, 2), axis_labels=('0', '1', '2')), name='current_step', value=[198, 0, 0])
152 are_equal = self.eq_operators.get(name, operator.eq)
153 if not are_equal(after, before):
→ 154 getattr(self.events, name)(value=after) # emit event
global getattr = undefined
self.events =
name = ‘current_step’
value = [198, 0, 0]
after = (198, 0, 0)
155
156 # expose the private EmitterGroup publically

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/utils/events/event.py in __call__(self=, *args=(), **kwargs={'value': (198, 0, 0)})
548 continue
549
→ 550 self._invoke_callback(cb, event)
self._invoke_callback = >
cb = , , , ], scale_bar=ScaleBar(visible=False, colored=False, ticks=True, position=‘bottom_right’), active_layer=, help=’’, status=‘Registered image [198 187 85]’, theme=‘dark’, title=‘napari’, mouse_move_callbacks=, mouse_drag_callbacks=, mouse_wheel_callbacks=, _persisted_mouse_event={}, _mouse_drag_gen={}, _mouse_wheel_gen={}, keymap={})>
event =
551 if event.blocked:
552 break

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/utils/events/event.py in _invoke_callback(self=, cb=, event=)
565 cb(event)
566 except Exception:
→ 567 _handle_exception(
global _handle_exception =
self.ignore_callback_errors = False
self.print_callback_errors = ‘reminders’
self =
global cb_event = undefined
cb = , , , ], scale_bar=ScaleBar(visible=False, colored=False, ticks=True, position=‘bottom_right’), active_layer=, help=’’, status=‘Registered image [198 187 85]’, theme=‘dark’, title=‘napari’, mouse_move_callbacks=, mouse_drag_callbacks=, mouse_wheel_callbacks=, _persisted_mouse_event={}, _mouse_drag_gen={}, _mouse_wheel_gen={}, keymap={})>
event =
568 self.ignore_callback_errors,
569 self.print_callback_errors,

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/utils/events/event.py in _invoke_callback(self=, cb=, event=)
563 def _invoke_callback(self, cb: Callback, event: Event):
564 try:
→ 565 cb(event)
cb = , , , ], scale_bar=ScaleBar(visible=False, colored=False, ticks=True, position=‘bottom_right’), active_layer=, help=’’, status=‘Registered image [198 187 85]’, theme=‘dark’, title=‘napari’, mouse_move_callbacks=, mouse_drag_callbacks=, mouse_wheel_callbacks=, _persisted_mouse_event={}, _mouse_drag_gen={}, _mouse_wheel_gen={}, keymap={})>
event =
566 except Exception:
567 _handle_exception(

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/components/viewer_model.py in _update_layers(self=Viewer(axes=Axes(visible=False, labels=True, col…ouse_drag_gen={}, _mouse_wheel_gen={}, keymap={}), event=, layers=[, ])
250 layers = layers or self.layers
251 for layer in layers:
→ 252 layer._slice_dims(
layer._slice_dims = >
self.dims.point = (198.0, 0.0, 0.0)
self.dims.ndisplay = 2
self.dims.order = (0, 1, 2)
253 self.dims.point, self.dims.ndisplay, self.dims.order
254 )

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/base/base.py in _slice_dims(self=, point=[198.0, 0.0, 0.0], ndisplay=2, order=[0, 1, 2])
856 # Update the point values
857 self._dims_point = point[offset:]
→ 858 self._update_dims()
self._update_dims = >
859 self._set_editable()
860

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/base/base.py in _update_dims(self=, event=None)
543 self._ndim = ndim
544
→ 545 self.refresh()
self.refresh = >
546
547 @property

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/base/base.py in refresh(self=, event=None)
927 """Refresh all layer data based on current view slice."""
928 if self.visible:
→ 929 self.set_view_slice()
self.set_view_slice = >
930 self.events.set_data()
931 self._update_thumbnail()

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/base/base.py in set_view_slice(self=)
791 def set_view_slice(self):
792 with self.dask_optimized_slicing():
→ 793 self._set_view_slice()
self._set_view_slice = >
794
795 @abstractmethod

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/image/image.py in _set_view_slice(self=)
588 # Load our images, might be sync or async.
589 data = SliceDataClass(self, image_indices, image, thumbnail_source)
→ 590 self._load_slice(data)
self._load_slice = >
data =
591
592 def _load_slice(self, data: SliceDataClass):

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/image/image.py in _load_slice(self=, data=)
599 if self._slice.load(data):
600 # The load was synchronous.
→ 601 self._on_data_loaded(data, sync=True)
self._on_data_loaded = >
data =
global sync = undefined
602 else:
603 # The load will be asynchronous. Signal that our self.loaded

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/image/image.py in _on_data_loaded(self=, data=, sync=True)
623
624 # Pass the loaded data to the slice.
→ 625 if not self._slice.on_loaded(data):
self._slice.on_loaded = >
data =
626 # Slice rejected it, was it for the wrong indices?
627 return

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/image/_image_slice.py in on_loaded(self=, data=)
136
137 # Display the newly loaded data.
→ 138 self._set_raw_images(data.image, data.thumbnail_source)
self._set_raw_images = >
data.image = array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32)
data.thumbnail_source = None
139 self.loaded = True
140 return True # data was used.

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/image/_image_slice.py in _set_raw_images(self=, image=array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0,…],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32), thumbnail_source=array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0,…],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32))
94 image = np.clip(image, 0, 1)
95 thumbnail_source = np.clip(thumbnail_source, 0, 1)
—> 96 self.image.raw = image
self.image.raw = array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32)
image = array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32)
97
98 # save a computation of view image if thumbnail and image is equal

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/image/_image_view.py in raw(self=, raw_image=array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0,…],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32))
71
72 # Update the view image based on this new raw image.
—> 73 self._view = self.image_converter(raw_image)
self._view = array([[0.]])
self.image_converter = >
raw_image = array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32)

~/anaconda3/envs/brainregv2/lib/python3.8/site-packages/napari/layers/labels/labels.py in _raw_to_displayed(self=, raw=array([[0, 0, 0, …, 0, 0, 0],
[0, 0, 0,…],
[0, 0, 0, …, 0, 0, 0]], dtype=uint32))
707 max_val = np.max(raw)
708 self._all_vals = low_discrepancy_image(
→ 709 np.arange(max_val + 1), self._seed
global np.arange =
max_val = 599626927
self._seed = 0.5
710 )
711 self._all_vals[0] = 0

MemoryError: Unable to allocate 4.47 GiB for an array with shape (599626928,) and data type int64

I think this is caused by “Change raw_to_displayed to only compute colours for all labels when required” by @DragaDoncila (Pull Request #2415 · napari/napari · GitHub) - maybe @DragaDoncila and @jni can add more, but it looks like, because the maximum label value is so large, napari is trying to create a colour lookup array that is too big to allocate.

We’ve already run into this problem a couple of times, so we should think about how to fix it. Sorry about this!! I’m not sure there’s an easy way to get around it right now either.
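
For context, here is the arithmetic behind that allocation, using the numbers from the first traceback above (just an illustration, not napari code):

```python
import numpy as np

# Maximum label value reported in the first traceback above.
max_val = 576_073_699

# napari 0.4.7 builds a colour lookup table with one entry per possible
# label value from 0 to max_val, even if only a handful of labels exist.
n_entries = max_val + 1

# np.arange(max_val + 1) defaults to int64 on Linux, i.e. 8 bytes per entry.
bytes_needed = n_entries * np.dtype(np.int64).itemsize
print(f"{bytes_needed / 1024**3:.2f} GiB")  # ~4.29 GiB, matching the MemoryError
```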


Yeah, this is definitely an issue with using the max label value to generate a colour lookup. We need to either revert that change, or check what the max label value actually is and fall back to the original approach if it’s very large.

This has also been an issue where the label values don’t start at 1, so even just a few labels with large integer values will cause the same problem.

There is no easy way around it at the moment. @sofroniewn, we can chat at the dev meeting; I can put up a PR today to either revert the change or do the introspection, depending on what we decide.
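
To illustrate the kind of check being discussed, here is a hypothetical sketch (the threshold, helper name, and fallback behaviour are assumptions, not the actual napari patch):

```python
import numpy as np

# Hypothetical cap on lookup-table size (~16M entries, i.e. ~128 MiB as int64).
MAX_LOOKUP_ENTRIES = 2**24


def use_lookup_table(raw_slice, max_entries=MAX_LOOKUP_ENTRIES):
    """Decide whether a per-label colour lookup table is affordable.

    Only build the np.arange(max_val + 1) table when the implied size is
    modest; otherwise fall back to converting the visible slice directly,
    roughly as napari did before 0.4.7.
    """
    max_val = int(np.max(raw_slice))
    return max_val + 1 <= max_entries


labels_slice = np.zeros((100, 100), dtype=np.uint32)
labels_slice[0, 0] = 599_626_927  # max label from the second traceback above
print(use_lookup_table(labels_slice))  # False -> take the fallback path
```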


Regarding what to do for now from the user side: you do need to be using a script, but you could use skimage.segmentation.relabel_sequential to relabel your image to a sane range :joy: before displaying. The only reason we don’t use that function in napari by default is that it’s a bit slower than the array indexing; see this issue for more details. It’s OK as a one-time cost, just not something you want to happen every time you move the slider in napari.
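
For example, something along these lines (the file path and layer name are placeholders; point it at the annotation/atlas image in your brainreg output directory):

```python
import napari
import tifffile
from skimage.segmentation import relabel_sequential

# Placeholder path: the registered annotation image from your brainreg output.
annotations = tifffile.imread("test_brain_output/registered_atlas.tiff")

# Map the original (possibly huge) atlas IDs onto 1..N.
# relabel_sequential also returns forward/inverse maps, so you can convert
# back to the original atlas IDs later if you need them.
relabelled, forward_map, inverse_map = relabel_sequential(annotations)

viewer = napari.Viewer()
viewer.add_labels(relabelled, name="registered atlas (relabelled)")
napari.run()
```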

But yes we need to come up with a general fix for the issue on the napari side. Hold please! :telephone_receiver: :hourglass_flowing_sand: :pray:


Thanks all. It’s like having the napari avengers swooping in to help :rofl:

Unfortunately, we’re often stuck with an “insane” range. These images are brain atlases, where we need to keep the labels the creators chose, to try to keep some compatibility with other software.

I’m not sure how I haven’t noticed this before, but this has killed a few of my plugins. Even if you have the memory available, it can take minutes to make the labels layer visible.


0.4.7 killed it, or large labels images? We generally weren’t doing amazingly on those before anyway, if your labels image was large in xy. That’s because, even before the “speedup” :grimacing:, we needed (and still need) to convert from the labels range to floats in [0, 1] for VisPy, and that turns out to be rather tricky to do. In the long term we want to pass the uint32s (or whatever) directly to VisPy and do the colormapping in a shader, but we are not close on that one.
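
For anyone curious, that conversion is conceptually something like the sketch below: each label goes to a quasi-random float in (0, 1) that VisPy can then push through a colormap. This is a simplified stand-in for napari’s low_discrepancy_image, not its exact implementation.

```python
import numpy as np


def labels_to_floats(labels, seed=0.5):
    """Simplified sketch: map integer labels to floats in (0, 1) for display.

    Uses the golden-ratio (low-discrepancy) sequence so that neighbouring
    label values get very different colours; background (0) stays at 0.
    """
    phi = (1 + np.sqrt(5)) / 2
    out = (seed + labels.astype(np.float64) * phi) % 1
    out[labels == 0] = 0
    return out


slice_2d = np.array([[0, 1, 2], [3, 1030, 576_073_699]], dtype=np.uint32)
print(labels_to_floats(slice_2d))
```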

0.4.7 :cry: Loading (actually, making them visible) has gone from pretty much instantaneous to not possible on some machines (e.g. @kelfie’s).

They’re not necessarily large images (often ~100 MB), but they do sometimes have a few very large numbers in them.


OK, we could maybe try to fix this before 0.4.8 - it looks like that release is getting delayed until Monday/Tuesday anyway for other reasons. Thoughts @jni?

Yup. I had a nice discussion with @DragaDoncila and @talley this morning; for now we’ll probably add a check for the implied array size and revert to the old behaviour if it’s too big. @DragaDoncila suggested she’d have some time to fix it, but please correct me if I’m wrong there @DragaDoncila! :sweat_smile:

Yep, I can’t guarantee a PR this afternoon, but I can guarantee one tomorrow morning! And certainly we can get it in for 0.4.8 early next week.


We’ve now got an improved version here in the 0.4.8rc6 release candidate. It sounds like it’s still using a lot of memory, but it’s at least better than before!! We’ll keep working on it.