ZeroCostDL4Mic - Cellpose

Dear ZeroCostDL4Mic team, Dear @Guillaume_Jacquemet

I’m testing the Cellpose Colab notebook. Running it went very smoothly. I had only two minor issues: 1) with the model folder (see the end of this post for more details), and 2) my label/mask images were 32-bit while the notebook expects 8/16-bit (might be worth mentioning this in the already detailed section about data).
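In case it helps others with the same 32-bit issue, here is a minimal sketch of such a conversion (the helper name is mine, not from the notebook; it assumes label values are whole-number object IDs, and reading/writing with e.g. `tifffile.imread`/`imwrite` is left out):

```python
import numpy as np

def to_uint16_labels(labels):
    """Cast a float label image to uint16, checking the cast is safe."""
    lab = np.asarray(labels)
    # Label values should be whole-number object IDs even when stored as float32.
    assert np.allclose(lab, np.round(lab)), "labels are not integer-valued"
    assert lab.max() < 2**16, "more than 65535 objects; uint16 would overflow"
    return lab.astype(np.uint16)
```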

My main question is about the prediction I got, see below:

I guess the main issue here comes from the “incomplete annotations” in the ground truth, which confused the training process? (not my annotations :sweat_smile: , just some existing ones that we are trying to make “good” use of)

Another hypothesis is the input data. When I downloaded the Cellpose dataset, I realized that the “grayscale” training images are all “green.png” files. Should I resave my 8-bit gray TIFs as green PNGs?
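If that conversion did turn out to be necessary, a rough sketch of placing a grayscale image into the green channel of an RGB array could look like this (pure NumPy; the function name is mine, and saving to PNG with e.g. `imageio` is left out):

```python
import numpy as np

def gray_to_green_rgb(gray):
    """Put a single-channel 8-bit image into the green channel of an RGB stack."""
    gray = np.asarray(gray, dtype=np.uint8)
    rgb = np.zeros(gray.shape + (3,), dtype=np.uint8)
    rgb[..., 1] = gray  # channel order is R, G, B; green is index 1
    return rgb
```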

Thank you again for making this notebook available and for your suggestions to improve the training!



NOTE about the minor issue with model folder:
When running step 4.2 ‘Start training’, I got the error:

** MXNET CUDA version installed and working. **
>>>> using GPU
>>>> pretrained model /root/.cellpose/models/cyto_0 is being used
>>>> during training rescaling images to fixed diameter of 30.0 pixels
NOTE: computing flows for labels (could be done before to save time)
  0% 0/20 [00:00<?, ?it/s]Traceback (most recent call last):
  File "/usr/lib/python3.7/", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.7/dist-packages/cellpose/", line 258, in <module>
  File "/usr/local/lib/python3.7/dist-packages/cellpose/", line 237, in main
  File "/usr/local/lib/python3.7/dist-packages/cellpose/", line 633, in train
    train_flows = dynamics.labels_to_flows(train_labels, files=train_files)
  File "/usr/local/lib/python3.7/dist-packages/cellpose/", line 90, in labels_to_flows
    veci = [masks_to_flows(labels[n][0])[0] for n in trange(nimg)]
  File "/usr/local/lib/python3.7/dist-packages/cellpose/", line 90, in <listcomp>
    veci = [masks_to_flows(labels[n][0])[0] for n in trange(nimg)]
  File "/usr/local/lib/python3.7/dist-packages/cellpose/", line 148, in masks_to_flows
    slices = scipy.ndimage.find_objects(masks)
  File "/usr/local/lib/python3.7/dist-packages/scipy/ndimage/", line 305, in find_objects
    return _nd_image.find_objects(input, max_label)
TypeError: 'numpy.float32' object cannot be interpreted as an integer
  0% 0/20 [00:00<?, ?it/s]
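For reference, the `TypeError` at the bottom of the traceback is what `scipy.ndimage.find_objects` raises when it receives a float-typed label array. A small check like this (my own helper, not part of the notebook) makes the condition visible before training:

```python
import numpy as np

def check_mask_dtype(mask):
    """Flag the condition behind the TypeError above: find_objects
    needs an integer-typed label image, not float32."""
    mask = np.asarray(mask)
    if not np.issubdtype(mask.dtype, np.integer):
        return "mask dtype {} will break find_objects; cast with astype(np.int32)".format(mask.dtype)
    return "ok"
```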


FileNotFoundError                         Traceback (most recent call last)

<ipython-input-18-e83cc56c5829> in <module>()
     43   shutil.rmtree(model_path+'/'+model_name)
---> 45 destination = shutil.copytree(Saving_path+"/train_folder/models", model_path+"/"+model_name)
     47 # Displaying the time elapsed for training

/usr/lib/python3.7/ in copytree(src, dst, symlinks, ignore, copy_function, ignore_dangling_symlinks)
    317     """
--> 318     names = os.listdir(src)
    319     if ignore is not None:
    320         ignored_names = ignore(src, names)

FileNotFoundError: [Errno 2] No such file or directory: '/content/first_test/train_folder/models'

After entering the cell of step 4.2, I had a look at
destination = shutil.copytree(Saving_path+"/train_folder/models", model_path+"/"+model_name)
before going to the cell at step 4 ‘Create the model and dataset objects’, and I added 2 lines:

model_folder = train_folder+"/models"

After this modification training was able to start!



Hi @romainGuiet

Thanks for reaching out and glad you find the notebook useful.

Regarding 1), I will check and correct the bug ASAP.
For 2), yes, the notebook expects 8/16-bit images (so that PNG images can be loaded directly); I will add this info.

I am however unsure what you mean regarding the training data. So far I have only used label images to train (or retrain) Cellpose. Are you training using masks?



I also cannot reproduce the error you describe with the model folder.
Did you choose a path that already exists as “model_path” in 3.1?
The notebook expects that this path already exists and will create a model folder inside it (with your chosen model name).

Now I get it :slight_smile:

You get an error because your training crashed and did not complete, so there are no models to copy over at the end of the training.

Looking at the message, the reason is likely that your training dataset contains masks instead of label images.

Thank you @Guillaume_Jacquemet for replying so quickly!

and sorry for not being clear on that part!

The images I use are like these (32-bit gray levels vs. labels):

  1. They are not fully annotated, e.g. some cells (in some fields, a lot of cells) are missing from the label image. My guess is that this is bad for the training, and I should have a label for every cell in the image.

  2. The other possible explanation for the failed prediction comes from the Cellpose dataset, in which the “gray images” are actually green RGB PNGs, see below:

    So I’m not sure whether I should convert my images to that format.

Is it a bit clearer?

Thank you again,



Yes, thanks a lot for clarifying!

A lack of annotations in the images would not cause your issue. You would still be able to train, and it clearly looks like the training is not starting.

I have trained using RGB PNGs as well as RGB TIFFs, but also 16-bit grayscale TIFFs. If you indicate the correct channel to train on in the settings cell, anything should be possible. The notebook also shows you which channel is used for training, so this should give you a good indication of whether the data is loaded properly.
But you get an error during the label flow computation, which occurs just before training, so my guess is that the issue may come from the label images, or from the way they are loaded.

Do you get a sensible data display in cell 3.1, just after the data is loaded?
Also, how many images do you have in your training dataset?


That’s already some good news!

3.1 input :

3.1 output looks ok :

3.2, augmentation enabled




You mentioned that:

But mine are 32-bit (and are not normalized, with positive and negative values).
Would it be possible that the notebook skips a normalization step if the dataset is already 32-bit?
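To rule the data in or out, a quick inspection helper like this (my own, not from the notebook) can report dtype, value range, and negative values before training:

```python
import numpy as np

def describe_image(img):
    """Report dtype, value range, and presence of negative values."""
    img = np.asarray(img)
    return {
        "dtype": str(img.dtype),
        "min": float(img.min()),
        "max": float(img.max()),
        "has_negatives": bool((img < 0).any()),
    }
```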

Thank you again for your efforts to solve this!


But now your error is gone, no?
It looks like you are training fine.

Maybe it was a bad idea to mix two topics :grimacing: . I had a minor issue with the model folder location that prevented the training from starting, but I was able to solve it and start the training, as mentioned in the first post:

My main issue now is that the prediction images are empty, see below,

while if I evaluate the original “cytoplasm” model (instead of the new model based on cytoplasm) I get:

So it seems I managed to make the network forget what a cell is :face_with_hand_over_mouth:

NOTE: this was trained with 16-bit source images (so the issue might not be due to the source being 32-bit).