ZeroCostDL4Mic - YOLO2

Dear ZeroCostDL4Mic team
(@Romain-F-Laine , @Guillaume_Jacquemet, @Ricardo_Henriques )

I’m testing the YOLO2 Colab notebook. Running it went very smoothly (congratulations on that!). The only minor issue I encountered was that ‘My Drive’ has been changed to ‘MyDrive’ (a recent change by Google, I guess).

I’m now facing an issue with the training and I’m pretty sure that the problem comes from the images I used.

Following @Bertrand_Vernay’s suggestion in this Twitter thread, I used his macro, which I modified to better fit my input data, and then used LabelImg.
BTW, maybe a paragraph could be added about these steps in the notebook.

About the images: I have 2 large images (10,000 × 10,000 pixels, each made of 25 positions)

[zoom of the image]

[zoom in LabelImg]

with approximately 1,000 cells per image, annotated as either dead or alive, cf. the output from the notebook below.

Fortunately (from a biologist’s POV) we have more alive cells than dead ones, but that means we have unbalanced classes (from an analyst’s POV).
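For the imbalance, one common trick is to weight the loss by inverse class frequency. This is only a sketch of my own (I don’t know whether the YOLO2 notebook exposes such an option); the counts are the ones from the training output in this thread:

```python
# Sketch: inverse-frequency class weights for an unbalanced dataset.
# Counts taken from the "Seen labels" line of the training output.
label_counts = {"alive": 1472, "dead": 617}

total = sum(label_counts.values())
n_classes = len(label_counts)

# Rare classes get a larger weight so they contribute more to the loss.
class_weights = {label: total / (n_classes * count)
                 for label, count in label_counts.items()}

print(class_weights)  # 'dead' ends up weighted more heavily than 'alive'
```

In a plain Keras classifier such a dict could be passed as `class_weight` to `fit`; for a YOLO-style detection loss it would have to be wired into the loss itself.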

From the training output, it seems it isn’t learning anything:

Seen labels:	 {'alive': 1472, 'dead': 617}
Given labels:	 ['alive', 'dead']
Overlap labels:	 {'dead', 'alive'}
(13, 13)
Epoch 1/30
 - 58s - loss: nan - val_loss: nan

Epoch 00001: val_loss did not improve from inf

/content/gdrive/My Drive/keras-yolo2/ RuntimeWarning: invalid value encountered in greater
  netout[..., 5:] *= netout[..., 5:] > obj_threshold

alive 0.0000
dead 0.0000
mAP: 0.0000
mAP did not improve from 0.
Epoch 2/30
 - 22s - loss: nan - val_loss: nan

Epoch 00002: val_loss did not improve from inf

alive 0.0000
dead 0.0000
mAP: 0.0000
mAP did not improve from 0.
Epoch 3/30
 - 48s - loss: nan - val_loss: nan

Epoch 00003: val_loss did not improve from inf

My guesses here are:

  • having such a big image with 1,000 annotations was not a good idea
    AND/OR
  • the cells are defined by too few pixels
  • something else? :sweat_smile:
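One more thing worth ruling out: a loss that is NaN from epoch 1 can come from a single malformed box in the annotations. A quick sanity check over the LabelImg / Pascal VOC XML could look like this (my own sketch; the helper name is made up):

```python
# Sketch: scan a LabelImg / Pascal VOC annotation for boxes that commonly
# cause NaN losses: zero-size boxes or coordinates outside the image.
import xml.etree.ElementTree as ET

def find_bad_boxes(xml_text):
    root = ET.fromstring(xml_text)
    img_w = int(root.findtext("size/width"))
    img_h = int(root.findtext("size/height"))
    bad = []
    for obj in root.iter("object"):
        b = obj.find("bndbox")
        xmin, ymin = int(b.findtext("xmin")), int(b.findtext("ymin"))
        xmax, ymax = int(b.findtext("xmax")), int(b.findtext("ymax"))
        degenerate = xmax <= xmin or ymax <= ymin
        outside = xmin < 0 or ymin < 0 or xmax > img_w or ymax > img_h
        if degenerate or outside:
            bad.append((obj.findtext("name"), (xmin, ymin, xmax, ymax)))
    return bad

# Minimal example: one degenerate 'dead' box, one valid 'alive' box.
demo = """<annotation>
  <size><width>100</width><height>100</height></size>
  <object><name>dead</name>
    <bndbox><xmin>10</xmin><ymin>10</ymin><xmax>10</xmax><ymax>20</ymax></bndbox>
  </object>
  <object><name>alive</name>
    <bndbox><xmin>5</xmin><ymin>5</ymin><xmax>30</xmax><ymax>30</ymax></bndbox>
  </object>
</annotation>"""
print(find_bad_boxes(demo))  # only the degenerate 'dead' box is reported
```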

Thank you for your inputs,



PS: small issue with the TRAINING. I first tried to use one image for the training and the other one as the “Quality Control” (since each image has more than 1,000 annotations, I thought it was fair).
But doing so, the training step raised an AttributeError (‘ProgbarLogger…’). Adding the second image to the training set solved this issue, but now I don’t have QC anymore! :grimacing:


Maybe also because the bounding boxes are too wide and capture too much of the background around the dead cells?
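To check that quickly, one could measure the box sizes directly in the LabelImg / Pascal VOC XML and compare them with the typical cell diameter (a sketch; the helper name is made up):

```python
# Sketch: summarise bounding-box sizes in a Pascal VOC annotation, to see
# whether boxes are much larger than the cells they enclose.
import xml.etree.ElementTree as ET

def box_stats(xml_text):
    root = ET.fromstring(xml_text)
    widths, heights = [], []
    for b in root.iter("bndbox"):
        widths.append(int(b.findtext("xmax")) - int(b.findtext("xmin")))
        heights.append(int(b.findtext("ymax")) - int(b.findtext("ymin")))
    return {"n_boxes": len(widths),
            "mean_width": sum(widths) / len(widths),
            "mean_height": sum(heights) / len(heights)}
```

If the mean width/height is several times the cell diameter, the boxes are indeed mostly background.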



Hi @romainGuiet,

Sorry for the very late answer. Yes, I think you would have a much better chance of success by using smaller images for the training. I would recommend training YOLO on your tiles rather than on the full image. If you need help with YOLO, feel free to reach out here or on our GitHub page! I can forward your question to Lucas, who is our YOLO expert.
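To illustrate the tile-based approach, here is a minimal sketch (my own, not from the notebook; 416 px is the usual YOLOv2 input size, but any tile size works):

```python
# Sketch: cut a large image array into fixed-size, non-overlapping tiles
# before annotation/training (tile size is an arbitrary example).
import numpy as np

def tile_image(img, tile=416):
    """Yield (row, col, tile_array) for each full, non-overlapping tile."""
    h, w = img.shape[:2]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield y, x, img[y:y + tile, x:x + tile]

# A 10,000 x 10,000 image like the ones in this thread gives
# 24 x 24 = 576 full 416-pixel tiles (the remaining border is dropped).
big = np.zeros((10000, 10000), dtype=np.uint8)
tiles = list(tile_image(big))
print(len(tiles))  # 576
```

The LabelImg boxes would of course need to be remapped to tile coordinates as well, with boxes straddling a tile border either clipped or discarded.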
