DeepMIB Output Models are blank and rotated

Hi,

I’m new to MIB but really excited about the capabilities of DeepMIB. I have trained several models with what appears to be success. However, I am having a few issues viewing the output of the predictions.

If I click the “Load images and models” button, the images load and the model appears with its Materials listed, but no prediction is visible on the image itself. If I try to load the predicted model manually, it is also blank. However, I can load a model from the .mat files generated by DeepMIB. These open for square images, but they are always rotated 90 degrees relative to the original image. If the image is rectangular rather than square, the model fails to load because its dimensions do not match the dimensions of the image.

I am using the standalone MIB version. If these issues could be solved by using the Matlab version, I would be happy to purchase Matlab to get DeepMIB up and running smoothly.

Thanks for any help!

Hi Lee,
thank you for the comments!
The standalone version (is it Windows?) should be completely fine. The Matlab version gives more potential in the long run, but for now there is no big difference.
Could you check the Results\PredictionImages\ResultsScores files? These are AmiraMesh files with prediction scores. You can load them directly into MIB as normal image files. You should see multicolor images, where the number of color channels matches the number of classes in the model.
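
If you want to sanity-check the scores outside MIB, here is a minimal Python sketch; it assumes you have re-saved the scores as a TIFF (the file name is hypothetical, and the channel axis may differ in your export):

```python
# Minimal sketch: check each score channel for signal outside MIB.
# Assumes the AmiraMesh scores were re-saved as a TIFF; the file name
# below is hypothetical, and the channel axis may differ in your export.
import numpy as np
import tifffile

scores = np.atleast_3d(tifffile.imread("ResultsScores_image01.tif"))

# One channel per class; a working prediction should show non-zero
# signal in more channels than just the Exterior/background one.
for ch in range(scores.shape[-1]):
    band = scores[..., ch]
    print(f"channel {ch}: min={band.min()} max={band.max()} mean={band.mean():.3f}")
```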

Ilya

Hi Ilya,

Thanks for the quick reply!

If I open the .am files as you suggested, they are pure black and have no model associated with them.

I know this model is actually working well because I can go to “explore activations” and see that the “custom Dice Segmentation Layer 2D” looks very good.

But if I load the original image and the prediction model for that image, I get the proper segmentation layers showing up in the “Material” list but there is no segmentation in the image. If I run “get statistics” on the material, there are zero objects.

I would be happy to send you any files that would help to troubleshoot.

Thanks again,

Lee

If there is no signal in the different color channels of the score files, it means that the prediction did not work for some reason. If you want, you can send me some files, or we can check it via Zoom.

Hi both,
I’m also new to deepMIB and I’m trying to train my first model.

Did you find what went wrong? I have exactly the same problem with my data. I’m trying to do segmentation of mitochondria on EM images. But when I load the original image and the corresponding prediction model, I get the segmentation layer (mitochondria) but there is no segmentation in the image.

I tried with the data you used in the YouTube tutorial and it works, so I guess it’s probably a problem with my data. (PS: my images are not square but rectangular too; was that the problem?)

Thanks for any help,
best,
Anaëlle

Hi Anaëlle,
there was a problem with the setting of the input patch size. It should be [height, width, depth, color-channels]. If you work with 2D EM datasets, you should use something like “128, 128, 1, 1”; you can also make larger patches if you wish (for 3D EM it would be something like “96, 96, 96, 1”).
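
To illustrate what the patch size constrains, here is a minimal numpy sketch (the image size is just a stand-in): square patches can be sampled from a rectangular image, so the image itself does not need to be square:

```python
# Minimal sketch: a square patch sampled from a rectangular image.
# The patch size [height, width, depth, color-channels] constrains only
# the patch, so a 437 x 675 px image works fine with 128 x 128 patches.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((437, 675))      # stand-in for one rectangular EM slice
ph, pw = 128, 128                   # patch height and width

top = rng.integers(0, image.shape[0] - ph + 1)
left = rng.integers(0, image.shape[1] - pw + 1)
patch = image[top:top + ph, left:left + pw]
print(patch.shape)                  # (128, 128)
```
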
What did your loss function look like?
Could you load the prediction scores and see how they look (Predict tab->Load prediction scores)?

Ilya

Hi Ilya,

Thank you for your quick reply. My input patch is 124, 124, 1, 1 and it doesn’t work. My validation loss was around 19% while the validation accuracy was around 89%, but I stopped the training after 3 epochs to see if I could see the predictions.
For information, my images are 437 px high and 675 px wide, with 11 nm/px in X and Y and 50 nm in Z.

I also tried with a larger patch (256, 256, 1, 1) but had the same problem. Here the validation loss was around 22% and the accuracy around 87%, but I also stopped it after 3 epochs.

For both models, when I load the prediction scores I can only see a fully red window.

Thanks again,
Anaëlle

For both models, when I load the prediction scores I can only see a fully red window.

This indicates that the training was not yet successful. The scores are multichannel images, where each channel represents the probability of a pixel belonging to one class or another. Red is the first channel, which is linked to the Exterior (background) class. If you see only red, only Exterior was predicted.
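
For intuition, here is a minimal numpy sketch with synthetic data (not DeepMIB’s actual internals) of how a score stack maps to one class per pixel:

```python
# Minimal sketch (synthetic data): per-pixel class from a score stack.
# Channel 0 corresponds to the Exterior (background) class, shown in red;
# an all-red score window means argmax lands on channel 0 everywhere.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random((437, 675, 2))  # 2 classes: Exterior + mitochondria

labels = scores.argmax(axis=-1)     # winning class per pixel
counts = np.bincount(labels.ravel(), minlength=scores.shape[-1])
print(counts)                       # all pixels in class 0 = the all-red case
```
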
How large was your training set, and what were the settings? You can post a screenshot of the Train tab here.

PS: my images are not square but rectangular too; was that the problem?

No, that is not the problem.

Thanks for your reply.
The training set contains 157 images (20% of these images are taken for validation in the preprocess step).
I ran the model once again and didn’t stop the training. Now I can see a prediction.

But the settings are probably not well suited to my data. Here is a screenshot of the Train tab:

[screenshot of the Train tab]

And here are the training settings:

[screenshot of the training settings]

Do you have any suggestions to improve it? Is the learning rate too small, resulting in a failure to train?

Ohh, I see. Technically it should work, but there are some difficulties. For example, you have large empty nuclei, which may take up a whole patch.
Here are some suggestions:

  1. increase the patch size; you may not really need that many layers
  2. create a modified training set that is concentrated around the mitochondria: i.e., you can make crops from the training datasets that contain only mitochondria (it is possible to do that in MIB from Get Statistics). After that, you will train only on patches that actually contain mitochondria. Unfortunately, that operation is not yet adapted for deep learning and cannot create automatic crops of a defined size (see the sketch after this list for one way to do it outside MIB). An alternative solution is to use the Mask to select areas with mitochondria and crop those areas from the training set. This should give the network a nice warm-up run.
  3. We did some tests and found that better training can be achieved if you decrease the number of patches per image to 1 (and increase the mini-batch size to whatever is possible) but train the network for more epochs. You should also have Shuffle->every-epoch enabled. The drawback is that you will create many checkpoint files, which you will have to remove manually later.
  4. You can also drop the validation fraction to increase the amount of data available for training.
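
As a rough illustration of point 2, here is a minimal Python/scipy sketch (synthetic arrays and a hypothetical crop size, done outside MIB) of taking fixed-size crops centered on labelled objects:

```python
# Minimal sketch (synthetic arrays, outside MIB): fixed-size crops
# centered on each labelled mitochondrion, clipped to the image bounds.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
image = rng.random((437, 675))           # stand-in EM slice
mask = np.zeros(image.shape, dtype=np.uint8)
mask[100:140, 200:260] = 1               # pretend mitochondrion label

crop = 128                               # desired training patch size
labelled, n = ndimage.label(mask)
centers = ndimage.center_of_mass(mask, labelled, range(1, n + 1))
for cy, cx in centers:
    top = int(np.clip(cy - crop // 2, 0, image.shape[0] - crop))
    left = int(np.clip(cx - crop // 2, 0, image.shape[1] - crop))
    print(image[top:top + crop, left:left + crop].shape)   # (128, 128)
```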

If you want we can arrange a zoom chat to check those things.


Hi Ilya,

Thanks a lot for the suggestions. I will try them and get back to you.

Happy new year,

Best,
Anaëlle