Choosing filters for mitochondrial segmentation using Trainable Weka Segmentation

fiji
weka
imagej
segmentation

#1

Hi everyone,

I am new to image analysis using Fiji and to this forum. I am trying to use TWS (Trainable Weka Segmentation) to classify individual mitochondria in HeLa cells. My problem right now is finding the right balance of filters. I have looked online for other researchers using the same method but could not find any information regarding the filters. I have also looked at the filter list available, but I couldn’t really figure out which ones would be suitable for my current experiment.

In my current analysis, I first subtract the background using a 40 px rolling-ball radius and then use the Gaussian, Sobel, Hessian, difference of Gaussians, and Gabor filters with TWS, where processing seemed to take quite a bit of time. Therefore, I would like to find a balance so as not to use too many or too few filters.
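In case it is relevant, the pre-processing before opening TWS is just the following (a minimal Jython sketch for the Fiji Script Editor; the radius is the 40 px mentioned above):

```python
# Jython (Fiji Script Editor, language: Python)
from ij import IJ

imp = IJ.getImage()  # the open mitochondria image/stack
# rolling-ball background subtraction, radius 40 px ("stack" applies it to all slices;
# drop it for a single image)
IJ.run(imp, "Subtract Background...", "rolling=40 stack")
# then open the plugin on the pre-processed image and pick filters/training examples by hand
IJ.run(imp, "Trainable Weka Segmentation", "")
```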

Here are some of the sample images I took:


I would appreciate any suggestions, thank you!


#2

Your images are strongly over-exposed and not suited for image analysis.

Please optimize image acquisition!

Regards

Herbie


#3

Hi Herbie. Thank you very much for the quick reply; I will try to take more images with less exposure. Meanwhile, I have other images like these, which had less exposure:


Would these pictures be suitable or what kind of improvements would you suggest?

Regards,
Nico


#4

Nico,

are you sure the posted sample images are the raw original images?
If not, please post those.

Exposure is much better. Please check it via the histogram of the images.
You should use the whole range of 8-bit gray values! Even better would be more gray levels, which requires the 16-bit image format.

Furthermore, please tell us exactly what you are interested in beyond segmentation, e.g. number, size, position etc.
Please tell us, for at least one of your sample images, what makes one of these dots a mitochondrion and another not, i.e. just encircle the structures you are interested in.

Regards

Herbie


#5

Dear Herbie,

Thank you very much for your reply.

I tried posting the TIF image, but somehow I couldn’t, so I’m just gonna link it from my Google Drive:
goo.gl/4sE9LZ . I have also attached the RAW stack just in case.

You mentioned using the whole range of 8-bit gray values and checking via the histogram of the images. I’m not really sure about this step; should I just use the menu Image > Type > 8-bit/16-bit? Also, I’m not really sure what you mean by checking via the histogram.

Here are some of the examples I circled. For the file OPA1 KO.TIF, the mitochondria are expected not to form networks and thus appear as individual circles, while for DRP1 KO.TIF, the mitochondria are expected to form a network, connected with each other. WT.tif is the trickiest, as it lies in between those two. I couldn’t make an example for the latter, as chances are most of the mitochondria are connected. In general, the mitochondria that form networks do so through their shorter axis and not their longer axis.

So the information I’m interested in is the area, perimeter, circularity, form factor, aspect ratio and solidity.
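In case it helps, this is roughly how I plan to read those out once I have a binary result (a Jython sketch; the size filter is just a placeholder, and as far as I understand ImageJ’s “Circ.” column is 4π·area/perimeter², i.e. the form factor):

```python
# Jython sketch: measure shape descriptors on a binary/thresholded segmentation result
from ij import IJ

imp = IJ.getImage()  # binary (or thresholded) mask of the mitochondria
# area, perimeter and the shape descriptors (Circ., AR, Round, Solidity)
IJ.run("Set Measurements...", "area perimeter shape display redirect=None decimal=3")
# size filter in pixels is a placeholder; adjust it to exclude noise specks
IJ.run(imp, "Analyze Particles...", "size=5-Infinity pixel show=Outlines display clear")
```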

Regards,
Nico


#6

Thanks Nico,

for the provided details, which are extremely helpful for those who may be able to help.

I can read the original z?-stacks and I understand that you are mainly interested in a single slice of every stack, e.g. slice #4 or #5 of stack “OPA1 KO.lsm”.

For a reasonable classification, as indicated in the image “sample opa1 ko.png”, I see little chance, no matter what method you use.
Please note that in your case, segmentation is not the proper term, because segmentation doesn’t care much about the semantics. In the first place, segmentation would separate all dot-like structures but would hardly differentiate between mitochondria and non-mitochondria.
For me, as an untrained observer in this field, it is even impossible to distinguish the two.
In short, a machine needs to know what you know, but it only understands mathematics. Consequently, we need a formal description of what distinguishes mitochondria from non-mitochondria. And for this purpose I fear that the spatial resolution of the images is insufficient, but I may be wrong.

If you have formal definitions of separating features and if these features can be extracted from the images, you could train a classifier. Whether the pixel-based classification of TWS is suited for this can be doubted.

Regarding histograms and bit-depth:
If you have an image and look at the histogram, you will see how many pixels have a certain gray level. If the image is 8-bit, it can have 2^8 = 256 gray levels. The histogram tells you whether you really use this range or not. If an image has more than a few pixels of value 255, it must be regarded as over-exposed. If, however, you have no pixels with values above, say, 180, then you don’t really use the range, which means you lose information.

If you have a camera that is able to grab images with more than 256 gray levels, then please do so. Such images will be represented as 16-bit images (they can show up to 2^16 = 65536 gray levels) even if they actually contain, e.g., only 1000 gray levels (10 bit). This has to do with how computers store data …
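If you prefer a script over Analyze > Histogram, a quick saturation check could look like this (a minimal Jython sketch, assuming an 8-bit image):

```python
# Jython sketch: inspect the histogram of the front image for clipping / unused range
from ij import IJ

imp = IJ.getImage()
stats = imp.getStatistics()   # includes a 256-bin histogram for 8-bit images
hist = stats.histogram

print("min gray value: %d, max gray value: %d" % (stats.min, stats.max))
print("pixels at 255 (possibly clipped): %d" % hist[255])
print("pixels above 180: %d" % sum(hist[i] for i in range(181, 256)))
```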

I hope you got the points up to now.
In any case please study the ImageJ User Guide:
https://imagej.nih.gov/ij/docs/guide/index.html

Regards

Herbie


#7

Hi Nico,
Here’s what I got: Image > Color > Split Channels (discard the two blank channels if you are using the PNG). Then go to Image > Adjust > Threshold, set it to a 75 to 255 range, and apply. Then run Analyze Particles on the thresholded image (I used the same 75 to 255 range).
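In script form the steps are roughly as follows (an untested Jython sketch; the threshold values are the ones I used on the PNG and will need adjusting for your own images):

```python
# Jython sketch of the steps above: split channels, threshold, count particles
from ij import IJ

imp = IJ.getImage()
IJ.run(imp, "Split Channels", "")   # discard the blank channels
red = IJ.getImage()                 # pick the channel that actually carries the signal
IJ.setThreshold(red, 75, 255)       # threshold range I used on the posted PNG
IJ.run(red, "Analyze Particles...", "size=0-Infinity pixel show=Outlines display clear")
```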
Hope this helps
Bob

Summary.csv (103 Bytes)
Results.csv (264 Bytes)


#8

Bob,

maybe I got it wrong, but simple thresholding doesn’t separate the encircled dots from all others. Please read the posts of the OP carefully and inspect the desired result marked in this image
https://forum.image.sc/t/choosing-filters-for-mitochondrial-segmentation-using-trainable-weka-segmentation/20635/5?u=herbie
posted by the OP.

Regards

Herbie


#9

Hi Herbie,

Thanks for the response. What I’m trying to segment is the Z-projection of the slices stacked into one image, not the individual slices.

I tried TWS after reading this: https://open.bu.edu/bitstream/handle/2144/29974/Miller_bu_0017E_13620.pdf?sequence=5&isAllowed=y . I followed the methods there, but just couldn’t find information on the filters used, and therefore I sought help on the forum. I might also ask the author about the conditions used in their experiments, though.

By looking at their experiments, the results seemed decent, and that’s why I tried similar methods. I think that if TWS can differentiate pixel patterns, it should also be able to differentiate between supposedly connected and disconnected mitochondria.
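For what it’s worth, the same run can also be scripted so that the filter subset is explicit. This is an untested Jython sketch following the “Scripting the Trainable Weka Segmentation” examples on the ImageJ wiki; the filter names and the two example ROIs are placeholders, and the exact method signatures may differ between TWS versions:

```python
# Jython sketch: train TWS with an explicit, reduced filter set
from ij import IJ
from ij.gui import Roi
from jarray import zeros
from trainableSegmentation import WekaSegmentation, FeatureStack

imp = IJ.getImage()                      # background-subtracted projection
seg = WekaSegmentation(imp)

# enable only the filters of interest; all names are listed in FeatureStack.availableFeatures
wanted = ["Gaussian_blur", "Sobel_filter", "Hessian", "Difference_of_gaussians", "Gabor"]
enabled = zeros(len(FeatureStack.availableFeatures), 'z')   # Java boolean[]
for i in range(len(FeatureStack.availableFeatures)):
    enabled[i] = FeatureStack.availableFeatures[i] in wanted
seg.setEnabledFeatures(enabled)

# placeholder example ROIs: class 0 = mitochondria, class 1 = background
seg.addExample(0, Roi(10, 10, 20, 20), 1)      # (class index, ROI, slice number)
seg.addExample(1, Roi(100, 100, 20, 20), 1)

seg.trainClassifier()
result = seg.applyClassifier(imp, 0, False)    # 0 = use all threads, no probability maps
result.show()
```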

Here is one example I did using my previous data, which was overexposed but nevertheless worked as expected:

Here are the files: Z projected TIF file and arff data from TWS
https://drive.google.com/open?id=13l2W9yxoI18TYs5tZSaj8dS8JEbPZiv7

and here is the result:

Regards,
Nico


#10

Hi Bob,

Thanks for the suggestion. Unfortunately, I don’t think I can apply that, as sometimes smaller and fainter mitochondria might have about the same brightness value as the background noise, and therefore I am trying to use TWS to segment them.

Regards,
Nico


#11

Nico,

actually I’m quite puzzled.

Does that mean that the encircled dots are only a few mitochondria?
This would imply that perhaps all dots are mitochondria.
This changes the situation completely.

Please be more specific

Herbie


#12

Hi Herbie,

Yes, I am sorry for not circling all of the mitochondria in the previous example, as I thought you only needed some of the mitochondria to be classified. Also, yes, all the dots are mitochondria; in fact, everything red should be mitochondria.

Regards,
Nico


#13

Yes, I am sorry for not circling all of the mitochondria

You just could have mentioned it.
This was highly misleading and my comments are to be interpreted accordingly!

I think that if TWS can differentiate pixel patterns, it should also be able to differentiate between supposedly connected and disconnected mitochondria.

I don’t agree with your conclusion here, although it might be possible.
TWS doesn’t really “differentiate pixel patterns”. It does pixel classification after various pre-processing steps, which is not the same!

BTW, why do you do a maximum projection, and why do you change images in this thread?

Good luck for your work

Herbie


#14

Hi Herbie,

Thank you for your response and my apologies for not stating clearly the individual mitochondria.

I see; I might not have really understood the algorithm of TWS. I am really new to this and have probably misunderstood the original paper published on TWS.

I have done a maximum projection for all the TIF images, as longer mitochondria might span several different Z positions. I used the overexposed image because I first tried classification on that one and the results came out as expected.

Thank you very much for the suggestions and sorry for not stating clearly the processing parameters.

Regards,
Nico


#15

I have done a maximum projection for all the TIF images, as longer mitochondria might span several different Z positions.

Why not use sum or mean projection?
In science you have to justify your processing steps.
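For reference, each of these is a single call to Image > Stacks > Z Project… (a minimal Jython sketch):

```python
# Jython sketch: maximum, mean and sum projection of the open z-stack
from ij import IJ

imp = IJ.getImage()
IJ.run(imp, "Z Project...", "projection=[Max Intensity]")       # maximum projection
IJ.run(imp, "Z Project...", "projection=[Average Intensity]")   # mean projection
IJ.run(imp, "Z Project...", "projection=[Sum Slices]")          # sum projection (32-bit result)
```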

I see, I might not have really understood the algorithm of TWS

I think this is not possible for a beginner.
Signal/image processing and classification theory are master’s-level studies!

[…] probably have misunderstood the original paper published on TWS.

Does this study already report the differentiation of the various states of the mitochondria or does it just report what you’ve replicated above?

Still unclear

Herbie


#16

Hi Herbie,

Thank you for your response.

Thank you very much for this suggestion. Indeed, using a sum projection showed better results that might be more scientifically acceptable, and I will use sum projections for the data processing from now on. Up until now, my lab has only used max projections and manually classified the mitochondria, and I just followed their methods.

Sorry, I meant the TWS paper available on https://academic.oup.com/bioinformatics/article/33/15/2424/3092362

Regards,
Nico


#17

Wow Nico!
You seem to be doing it quite right in your attempts. Have you tried analyzing them by size (although I think you have)?
Just don’t give up, these look great.
Bob


#18

Nico,

here is the result of my first attempt to segment the following image.

The approach doesn’t use classification.

Regards

Herbie


#19

Hi Bob,

Thank you. I have just been using TWS and, as Herbie said before, analyzed it based on pixels.

Regards,
Nico


#20

Hi Herbie,

I don’t really understand what you mean by not using classification. Did you just do thresholding without TWS?

Regards,
Nico