I’m very new to image analysis and I’m trying to develop an automatic, high-throughput method for measuring the distance between the outer and inner membranes of bacteria.
I have tomograms of bacterial cells and I’m feeding them into the convolutional neural network segmentation tool in EMAN2.
However, the segmentation output is still quite noisy, and I end up with large clusters that are not part of the membrane.
I’m really looking for advice on the best way to tackle these unwanted clusters, which I think will distort my measurements. I’d be really grateful for any pointers towards basic methods I might not have found yet.
Is it possible to tackle this at the neural network output stage?
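One simple post-processing option (just a sketch, not from your workflow): treat the unwanted clusters as small connected components and filter them out with scikit-image. The size threshold and the synthetic `seg` array below are placeholders you would tune for your data.

```python
# Sketch: removing small spurious clusters from a binary segmentation.
# `seg` is a synthetic stand-in for a 2D boolean network output;
# min_size=500 is a placeholder threshold to tune on real slices.
import numpy as np
from skimage import measure, morphology

seg = np.zeros((200, 200), dtype=bool)
seg[50:55, 10:190] = True               # long membrane-like ribbon (900 px)
seg[120:126, 80:86] = True              # small spurious blob (36 px)

# Option 1: drop every connected component below a pixel-count threshold
cleaned = morphology.remove_small_objects(seg, min_size=500)

# Option 2: keep only the N largest components (e.g. the two membranes)
labels = measure.label(seg, connectivity=2)
props = sorted(measure.regionprops(labels), key=lambda p: p.area, reverse=True)
keep = [p.label for p in props[:1]]     # here N=1; you would use N=2
largest = np.isin(labels, keep)

print(cleaned.sum(), largest.sum())     # blob removed either way
```

Option 2 is handy if you know exactly how many structures should survive (two membranes), while option 1 is more robust when membrane fragments vary in size.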
I’ve attached my current workflow. The segmented image was produced with an older version of EMAN2, so I’m hoping an upgrade to 2.22 will also improve results. I’m currently working in Python with a single 2D slice of the tomogram.
- Created a binary image using Otsu’s method (only the inner membrane is shown here)
- Skeletonised the binary image (using skimage.morphology.skeletonize)
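For what it’s worth, the two steps above can be sketched like this (the `img` array is a synthetic stand-in for one tomogram slice, just to show the calls):

```python
# Sketch of the two steps above: Otsu threshold, then skeletonize.
# `img` is a synthetic stand-in for a single 2D tomogram slice.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

img = np.zeros((100, 100), dtype=float)
img[40:46, 10:90] = 1.0                 # bright band standing in for a membrane
img += 0.05 * np.random.default_rng(1).standard_normal(img.shape)

binary = img > threshold_otsu(img)      # step 1: Otsu binarisation
skel = skeletonize(binary)              # step 2: reduce to a 1-px-wide line
print(binary.sum(), skel.sum())
```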
The ultimate aim is to have two lines representing the inner and outer membranes, and (at the moment) to use something like scipy’s cKDTree to calculate the distance between them.
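In case it helps to see that last step concretely, here is a minimal cKDTree sketch: query each inner-membrane skeleton pixel against the outer-membrane pixels to get nearest-neighbour distances. The two coordinate arrays are synthetic stand-ins, and `pixel_size_nm` is a placeholder you would take from the tomogram metadata.

```python
# Sketch: nearest-neighbour membrane distances with scipy.spatial.cKDTree.
# `inner`/`outer` are synthetic (row, col) skeleton coordinates;
# pixel_size_nm is a placeholder scale factor from your metadata.
import numpy as np
from scipy.spatial import cKDTree

inner = np.array([[r, 20] for r in range(10, 90)], dtype=float)  # line at x=20
outer = np.array([[r, 45] for r in range(10, 90)], dtype=float)  # line at x=45

tree = cKDTree(outer)
dists, _ = tree.query(inner)            # nearest outer pixel for each inner pixel
pixel_size_nm = 1.0                     # placeholder: nm per pixel
print(dists.mean() * pixel_size_nm)     # → 25.0 for these parallel lines
```

One caveat with nearest-neighbour distances: at gaps or ends of a noisy skeleton the nearest point may not be the perpendicular one, so stray clusters left in either skeleton will bias the mean, which is another reason to clean them up first.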
Thank you so much! Any help would be really great!