Improving Neural Network Segmentation Output for Automated Analysis


I’m very new to image analysis and I’m trying to develop an automated, high-throughput method for measuring the distance between the outer and inner membranes of bacteria.
I have tomograms of bacterial cells and I’m feeding them into EMAN2.22’s convolutional neural network.
However, the segmentation output is still quite noisy and I end up with large clusters which are not part of the membrane.
I’m looking for advice on the best way to tackle these unwanted clusters, which I think will skew my measurements, and I’d be really grateful for any pointers towards basic methods I might not have found yet.
Is it possible to tackle this at the neural network output stage?

I’ve attached my current workflow. The segmented image was produced with EMAN2, so I’m hoping an upgrade to 2.22 will also improve the results. I’m currently working in Python with a single 2D slice of the tomogram.

  1. Created a binary image using Otsu’s method (only the inner membrane is shown here)
  2. Skeletonised the binary image (using skimage.morphology.skeletonize)
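For reference, a minimal sketch of those two steps, assuming the slice is already loaded as a NumPy array (a random array stands in for the real tomogram slice here):

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

# Hypothetical 2D slice; in practice this would be the EMAN2
# segmentation output loaded as a float NumPy array.
rng = np.random.default_rng(0)
slice_2d = rng.random((64, 64))

# Step 1: binarise with Otsu's method
binary = slice_2d > threshold_otsu(slice_2d)

# Step 2: reduce the binary mask to a 1-pixel-wide skeleton
skeleton = skeletonize(binary)
print(skeleton.shape, skeleton.dtype)
```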

The ultimate aim is to have two lines representing the inner and outer membranes and use something like scipy’s cKDTree (at the moment) to calculate the distance between them.
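For that distance step, `scipy.spatial.cKDTree` can query, for each inner-membrane point, the nearest outer-membrane point. A sketch on synthetic concentric circles standing in for the two skeletons (the circle radii are illustrative, not real data):

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical skeleton coordinates: two concentric circles standing in
# for the inner and outer membrane skeletons, as (row, col) points.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
inner = np.column_stack([50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)])
outer = np.column_stack([50 + 30 * np.cos(theta), 50 + 30 * np.sin(theta)])

# Build a KD-tree on the outer membrane, then query the nearest
# outer point for every inner point.
tree = cKDTree(outer)
distances, _ = tree.query(inner)
print(distances.mean())  # ~10 pixels for this synthetic geometry
```

In practice the point sets would come from `np.argwhere(skeleton)` on each membrane mask.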


Thank you so much! Any help would be really great!

Hi est, are you referring to the clusters visible in the Otsu-thresholded image and the first draft overview?
If yes, I have two options in mind:

  • binary morphological operations (a few rounds of erosion followed by dilation), which are certainly available in skimage
  • after the segmentation, run a connected-component analysis so that you have a set of segments, and discard all segments that are too small. You will have to find the optimal size threshold, but it should not be too hard
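Both options are a few lines in skimage. A sketch on a synthetic noisy mask (the mask and the `min_size` value are illustrative; you’d tune them on your data):

```python
import numpy as np
from skimage.morphology import binary_opening, remove_small_objects, disk
from skimage.measure import label

# Hypothetical noisy mask: one membrane-like stripe plus isolated speckles
mask = np.zeros((100, 100), dtype=bool)
mask[40:43, 5:95] = True             # membrane-like structure
mask[10, 10] = mask[80, 70] = True   # noise pixels

# Option 1: morphological opening (erosion followed by dilation)
# removes features smaller than the structuring element
opened = binary_opening(mask, disk(1))

# Option 2: connected-component analysis, then drop small components
cleaned = remove_small_objects(mask, min_size=50)
print(label(cleaned, return_num=True)[1])  # number of surviving segments
```

`remove_small_objects` does the labelling and size filtering in one call; using `skimage.measure.label` plus `regionprops` explicitly gives the same result with more control over which properties you filter on.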

Thanks so much. This has put me on the right track.