StarDist3D_Error: 'StarDist3D' object has no attribute '_tile_overlap'

Hi @Guillaume_Jacquemet and @uschmidt83,
I am trying to run the ZeroCostDL4Mic StarDist 3D notebook, based on Uwe Schmidt's paper. Unfortunately, I keep running into session terminations on Google Colab, and when I run the notebook on my workstation I get the following error:

number of images: 3

  • training: 2
  • validation: 1

empirical anisotropy of labeled objects = (2.6666666666666665, 1.391304347826087, 1.0)
Number of steps: 5
Config3D(anisotropy=(2.6666666666666665, 1.391304347826087, 1.0), axes='ZYXC', backbone='resnet', grid=(1, 2, 2), n_channel_in=1, n_channel_out=97, n_dim=3, n_rays=96, net_conv_after_resnet=128, net_input_shape=(None, None, None, 1), net_mask_shape=(None, None, None, 1), rays_json={'name': 'Rays_GoldenSpiral', 'kwargs': {'n': 96, 'anisotropy': (2.6666666666666665, 1.391304347826087, 1.0)}}, resnet_activation='relu', resnet_batch_norm=False, resnet_kernel_init='he_normal', resnet_kernel_size=(3, 3, 3), resnet_n_blocks=4, resnet_n_conv_per_block=3, resnet_n_filter_base=32, train_background_reg=0.0001, train_batch_size=2, train_checkpoint='weights_best.h5', train_checkpoint_epoch='weights_now.h5', train_checkpoint_last='weights_last.h5', train_dist_loss='mae', train_epochs=400, train_foreground_only=0.9, train_learning_rate=0.0003, train_loss_weights=(1, 0.2), train_n_val_patches=None, train_patch_size=(48, 96, 96), train_reduce_lr={'factor': 0.5, 'patience': 40, 'min_delta': 0}, train_steps_per_epoch=100, train_tensorboard=True, use_gpu=False)
Using default values: prob_thresh=0.5, nms_thresh=0.4.

AttributeError Traceback (most recent call last)
c:\users\vineetku\appdata\local\programs\python\python38\lib\site-packages\stardist\models\base.py in _axes_tile_overlap(self, query_axes)
673 try:
--> 674 self._tile_overlap
675 except AttributeError:

AttributeError: 'StarDist3D' object has no attribute '_tile_overlap'

During handling of the above exception, another exception occurred:

InternalError Traceback (most recent call last)
in
135 #Here we check the FOV of the network.
136 median_size = calculate_extents(Y, np.median)
--> 137 fov = np.array(model._axes_tile_overlap('ZYX'))
138 if any(median_size > fov):
139 print("WARNING: median object size larger than field of view of the neural network.")

c:\users\vineetku\appdata\local\programs\python\python38\lib\site-packages\stardist\models\base.py in _axes_tile_overlap(self, query_axes)
674 self._tile_overlap
675 except AttributeError:
--> 676 self._tile_overlap = self._compute_receptive_field()
677 overlap = dict(zip(
678 self.config.axes.replace('C',''),

c:\users\vineetku\appdata\local\programs\python\python38\lib\site-packages\stardist\models\base.py in _compute_receptive_field(self, img_size)
659 z = np.zeros_like(x)
660 x[(0,)+mid+(slice(None),)] = 1
--> 661 y = self.keras_model.predict(x)[0][0,...,0]
662 y0 = self.keras_model.predict(z)[0][0,...,0]
663 grid = tuple((np.array(x.shape[1:-1])/np.array(y.shape)).astype(int))

c:\users\vineetku\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\keras\engine\training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
1627 for step in data_handler.steps():
1628 callbacks.on_predict_batch_begin(step)
--> 1629 tmp_batch_outputs = self.predict_function(iterator)
1630 if data_handler.should_sync:
1631 context.async_wait()

c:\users\vineetku\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds)
826 tracing_count = self.experimental_get_tracing_count()
827 with trace.Trace(self._name) as tm:
--> 828 result = self._call(*args, **kwds)
829 compiler = "xla" if self._experimental_compile else "nonXla"
830 new_tracing_count = self.experimental_get_tracing_count()

c:\users\vineetku\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
892 *args, **kwds)
893 # If we did not create any variables the trace we have is good enough.
--> 894 return self._concrete_stateful_fn._call_flat(
895 filtered_flat_args, self._concrete_stateful_fn.captured_inputs) # pylint: disable=protected-access
896

c:\users\vineetku\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
1916 and executing_eagerly):
1917 # No tape is watching; skip to running the function.
--> 1918 return self._build_call_outputs(self._inference_function.call(
1919 ctx, args, cancellation_manager=cancellation_manager))
1920 forward_backward = self._select_forward_and_backward_functions(

c:\users\vineetku\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\function.py in call(self, ctx, args, cancellation_manager)
553 with _InterpolateFunctionError(self):
554 if cancellation_manager is None:
--> 555 outputs = execute.execute(
556 str(self.signature.name),
557 num_outputs=self._num_outputs,

c:\users\vineetku\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
57 try:
58 ctx.ensure_initialized()
--> 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:

InternalError: Blas SGEMM launch failed : m=262144, n=96, k=128
[[node model/dist/Conv3D (defined at c:\users\vineetku\appdata\local\programs\python\python38\lib\site-packages\stardist\models\base.py:661) ]] [Op:__inference_predict_function_632]

Function call stack:
predict_function
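
For reference, the user-level code that raises this (lines 135-139 in the traceback) is the notebook's field-of-view check; stripped down, it does roughly the following, where `model` is the StarDist3D instance built from the config above and `Y` stands for the list of label volumes:

```python
# Rough sketch of the field-of-view check from the failing notebook cell.
# model: the StarDist3D instance created above; Y: list of label volumes.
import numpy as np
from stardist import calculate_extents

median_size = calculate_extents(Y, np.median)      # median object extent per axis
fov = np.array(model._axes_tile_overlap('ZYX'))    # network field of view per axis
if any(median_size > fov):
    print("WARNING: median object size larger than field of view of the neural network.")
```

The crash itself happens inside `_axes_tile_overlap`, which runs a `keras_model.predict` call to measure the receptive field, and that prediction is what fails with the Blas SGEMM error.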


I did manage to get one result on Google Colab, and it looks promising. I hope that with your help I will be able to run the notebook more smoothly.

Thank you very much for your time.

I’m not sure what the problem is, but it could be as simple as not having enough (available) GPU memory. Try reducing the train_patch_size and/or train_batch_size.
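
As a rough sketch of what lowering those two settings looks like outside of the notebook (the concrete values below are only placeholders, not a recommendation for this particular dataset):

```python
# Minimal sketch: building a StarDist3D config with a smaller patch and batch
# size to reduce GPU memory usage (placeholder values, adjust to your data).
from stardist import Rays_GoldenSpiral
from stardist.models import Config3D, StarDist3D

anisotropy = (2.67, 1.39, 1.0)                     # as estimated from the labels
rays = Rays_GoldenSpiral(96, anisotropy=anisotropy)

conf = Config3D(
    rays             = rays,
    grid             = (1, 2, 2),
    anisotropy       = anisotropy,
    n_channel_in     = 1,
    train_patch_size = (24, 64, 64),               # smaller than (48, 96, 96)
    train_batch_size = 1,                          # smaller than 2
)
model = StarDist3D(conf, name='stardist3d_small_patches', basedir='models')
```

In the ZeroCostDL4Mic notebook, these should correspond to the patch size and batch size fields in the training parameters cell.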

Best,
Uwe


Hi @uschmidt83,
Thanks a lot for your help, the notebook is working great now that I have reduced the train_patch_size!
