Is there any way to determine the accuracy of 3D skeletonization with the BoneJ plugin? The reason I want to measure accuracy is that, so far, I have analyzed 7 3D stacks of binary images, and every time I get a similar trend in the distribution of branch lengths. I am doubtful that the branch-length distributions of different stacks should really be so similar.
Currently, I first segment the images using Trainable Weka Segmentation and then skeletonize them with Skeletonize3D. Finally, I use the Analyse Skeleton plugin to determine the branch lengths.
Thanks for getting in touch. I just brainstormed a bit with @rimadoma about your question and this is what we came up with:
I don’t know enough about Weka to help you more with that, but I am guessing you use it to generate a binary image that represents the structure you’re interested in, which you then skeletonize.
Some minor caveats first, which may help you think about your problem differently:
I’m not entirely sure what you mean by “accuracy”, but I am guessing you mean how well the skeletonization represents the original structure? Unfortunately, this really depends on the structure you’re trying to represent, so it almost turns into a philosophical question. The only thing I know how to do is to visualize the skeleton and the original side by side and check (very unscientifically) whether I think the skeleton is a good representation. If you or someone else has a better answer, I’d be really interested in hearing it.
In connection with the above, what makes you think the branch lengths should be different?
Be aware of the difference between Euclidean distance and branch length in the output of Analyse Skeleton!
Essentially, you can specify how “long” a branch needs to be to “count” via the “minimum trabecular length” parameter. Connected branches shorter than that are simplified into a node. You can find the end points of the remaining, longer branches in the “Edge endpoints” table, which is part of the “Inter-trabecular angle” output (make sure you tick the “print centroids” box), and then analyse the (Euclidean) length distributions from those.
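To illustrate the distinction between the two measures (with made-up voxel coordinates, not real Analyse Skeleton output): the branch length follows the skeleton path voxel by voxel, while the Euclidean distance is the straight line between the branch's two end points. A minimal Python sketch:

```python
import math

# Hypothetical branch: voxel coordinates along the skeleton path.
# (These numbers are illustrative, not from an Analyse Skeleton table.)
branch = [(0, 0, 0), (1, 1, 0), (2, 2, 0), (3, 2, 1), (4, 2, 2)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Branch length: sum of distances between consecutive voxels along the path.
branch_length = sum(dist(branch[i], branch[i + 1]) for i in range(len(branch) - 1))

# Euclidean distance: straight line between the two end points only.
euclidean = dist(branch[0], branch[-1])

# The path length is always at least as long as the straight-line distance,
# so the two columns in the results table generally differ for curved branches.
print(branch_length, euclidean)
```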
If your length distributions look very similar because noisy short branches from the skeletonization are obscuring the distribution of the larger branches, this may improve your results.
I really appreciate your help. I will present my responses in the same order as your questions in the last message.
I am using Weka Segmentation to create binary images with the required structure.
I tried the side-by-side check, and visually the results don’t look very good to me. I am doing this kind of analysis for the first time, which is why I am not sure how to quantify the accuracy of the skeletonization.
I am comparing the branch lengths for different stacks, and I think they should be different because the specimens I am analyzing were subjected to different experimental conditions. So, the similarity of the branch-length distribution across all 7 stacks seems like a big coincidence. By “distribution of branch length” I mean that the number of branches in the 4–5 pixel branch-length interval is highest for all 7 stacks, and as I move to other intervals the branch counts decrease in a similar fashion.
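For what it's worth, the binned count described above can be sketched like this (with hypothetical branch lengths and 1-pixel bins; not real data from the stacks):

```python
from collections import Counter

# Hypothetical branch lengths in pixels from one stack (illustrative only).
lengths = [4.2, 4.8, 4.5, 5.9, 6.3, 4.1, 7.5, 4.9, 8.2, 5.1]

# Bin into 1-pixel intervals: 4-5 px, 5-6 px, 6-7 px, ...
histogram = Counter(int(l) for l in lengths)

for lo in sorted(histogram):
    print(f"{lo}-{lo + 1} px: {histogram[lo]} branches")
```

Comparing such histograms across the 7 stacks (rather than eyeballing raw tables) makes the "same shape, different conditions" observation easier to check.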
The link for the latest version of BoneJ2 points to a beta version, and there is a warning that it may break your system. So, should I give it a try or not?
Yes, as stated above, I have no better answer than visual inspection for the “accuracy” of the skeletonization; maybe someone else does?
One explanation may be that the structures you are getting are not suited to be represented by a skeletonization.
Another explanation may be that your binary images are noisy and that this noise affects the skeletonization. There are, as far as I know, several ways to reduce noise in a binary image: you could try BoneJ>Purify, successive dilations and erosions (Process>Binary>Dilate/Erode) (as described e.g. in this paper), Gaussian or Median filtering (Process>Filters>Gaussian blur 3D/Median 3D), despeckling (Process>Noise>Despeckle) or combinations of these. I don’t know which one, if any, will work well for you.
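As a toy illustration of the dilation-then-erosion idea (a morphological "closing"), here is a pure-Python sketch on a tiny 2D binary image. The real Process>Binary commands operate on 3D stacks with configurable iterations and neighbourhoods, so this only shows the principle:

```python
def dilate(img):
    """4-neighbour binary dilation on a 2D list of 0/1 values."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                   for dy, dx in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]):
                out[y][x] = 1
    return out

def erode(img):
    """4-neighbour binary erosion, implemented as invert -> dilate -> invert."""
    inv = [[1 - v for v in row] for row in img]
    return [[1 - v for v in row] for row in dilate(inv)]

# A foreground blob with a one-pixel hole in the middle (the "noise").
noisy = [[0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 0, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 0]]

# Dilation followed by erosion ("closing") fills small holes like this one,
# though it can also slightly alter the outline of the shape.
closed = erode(dilate(noisy))
```

Since the skeletonization is very sensitive to small holes and specks in the binary image, even this simple kind of cleanup can change the resulting branch structure noticeably.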
A priori I would say different experimental conditions do not necessarily mean different outcomes, but I don’t know the specifics of your experiment, so I think you should keep this possibility in mind. In fact, the paper we mentioned earlier used Analyse Skeleton to show that the angles between the bone trabeculae of different people are very similar.
Yes, it is still in an experimental state - if you have the time and patience and reducing the noise as explained above doesn’t help, I would recommend giving it a try, but installing it alongside your current ImageJ/Fiji version. Maybe the inter-trabecular angle algorithm helps you, maybe not. As stated in the previous post, if your distributions are similar because of noisy short edges, I think there is a chance it will help, but I don’t know, unfortunately.
Please give it a try - but do it in a sandboxed version of ImageJ. Install a fresh ImageJ in a new directory, that you can use for testing BoneJ2. If it breaks horribly you simply delete the directory. If you try it, please share your experiences with us here.
It’s really super unlikely to do anything outside your ImageJ installation. If you have a special setup that is working well for your experiments, it’s better not to muck around installing and uninstalling experimental software because it might cause something that you depend on to change in an unexpected way - hence the advice to use a ‘burner’ ImageJ installation.
I tried the Inter-trabecular Angle algorithm. It worked fine regarding the minimum count and iterations. And the good part is that, unlike the previous BoneJ, where Skeletonize and Analyse Skeleton are run separately, in this command both run at the same time.
The only problem I found is that it doesn’t account for more than one skeleton at a time while analyzing the data. It only considers one skeleton, unlike the previous BoneJ, which could handle multiple skeletons at the same time. In my images, skeletons are distributed throughout the image, and I need to analyze all of them.
Can you suggest how I should proceed? I also think the noisy edges may be why I am getting similar results in all 7 stacks, with the actual branch lengths being obscured. So, the idea of turning edges shorter than the minimum count into nodes sounds good.
Is there any way, in the previous BoneJ > Skeletonize 3D, to ignore branches shorter than a certain number of pixels (x) and treat them as part of the root branch, only identifying nodes at points where another branch is longer than x pixels? This is the same minimum-pixel-count idea adopted in the Inter-trabecular Angle algorithm.
Yes, analyzing more than one skeleton is a functionality that we’d like in the future - I just haven’t had the opportunity to implement this yet and I can’t really give you a timeline for it at the moment.
If you’re just interested in the length distributions of edges longer than x (and not in the angles and numbers of connections, which is what ITA is primarily designed for), would it be possible to use the Analyse Skeleton output and simply ignore/delete all edges shorter than x in your analysis? It won’t be exactly the same as what you get from ITA, but maybe it’s a good enough approximation?
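As a sketch of that post-hoc filtering (with made-up branch lengths and an assumed threshold, not real Analyse Skeleton output):

```python
# Hypothetical "Branch length" column from an Analyse Skeleton results table.
branch_lengths = [1.2, 4.7, 0.8, 6.1, 2.9, 5.5, 3.3, 7.0]

# Analogous to ITA's minimum trabecular length; the value 3.0 is assumed.
min_length = 3.0

# Keep only edges at least min_length long and analyse their distribution.
kept = [l for l in branch_lengths if l >= min_length]
print(kept)  # [4.7, 6.1, 5.5, 3.3, 7.0]
```

Note that this only drops the short edges from the table; unlike ITA's pruning, it does not merge them back into their neighbouring branches, which is exactly the limitation raised in the next post.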
The above-suggested method will remove the edges that are shorter than the desired length, but it will affect the actual lengths of the branches: since the undesired branches have created junction voxels rather than slab voxels, the branch lengths in the output are not the actual lengths.
Some time ago, @iarganda uploaded a script that can prune those end edges of a skeleton that are shorter than a threshold length and may be creating slab voxels. I believe this method may do a better job of removing the undesired short branches while preserving the actual lengths. However, when I run the script I keep getting an error that an ImagePlus is required but doesn’t exist. I am not sure how to resolve that issue so that I can give it a try and see the output. The script is shown below. Can anybody help me with it?
Exactly, this is what I meant by [quote=“alessandrofelder, post:9, topic:6098”]
It won’t be exactly the same as what you get from ITA
[/quote] in an earlier post. Again, I cannot tell when I will manage to find the time to implement the multiple-skeleton functionality in ITA - sorry! I would ask you to keep an eye on BoneJ2’s GitHub site to see when this is implemented.
I think people other than me will be better at explaining this in the correct technical terms, but I believe it means what it says: your script can’t find an image (i.e. an object of type ImagePlus) to run on. Either it doesn’t know which image to use, or your image is of the wrong type - would anything like that make sense? I am still a bit wobbly on using these terms correctly, so maybe someone more experienced should answer this, or you could check whether a similar question has already been answered somewhere on the forum?
Hello everybody and sorry to come late to this interesting conversation,
Regarding the skeleton accuracy, apart from what @alessandrofelder already said, I would suggest creating a ground-truth skeleton by manually skeletonizing your data or by using a semi-automatic tool such as the Simple Neurite Tracer. After that, you will be able to measure the differences (with the metric you prefer) between your skeleton and the ideal one.
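One simple choice of metric, sketched in Python with toy coordinates, is the mean nearest-neighbour distance between the two skeletons' voxels, averaged in both directions. This particular metric is just one illustrative option, not a BoneJ feature:

```python
import math

def nearest(p, points):
    """Distance from point p to its nearest neighbour in points."""
    return min(math.dist(p, q) for q in points)

def avg_symmetric_distance(a, b):
    """Mean nearest-neighbour distance, averaged over both directions.
    A simple symmetric skeleton-comparison metric; 0 means identical
    voxel sets, larger values mean the skeletons diverge more."""
    d_ab = sum(nearest(p, b) for p in a) / len(a)
    d_ba = sum(nearest(q, a) for q in b) / len(b)
    return (d_ab + d_ba) / 2

# Toy voxel coordinates for an automatic and a manual (ground-truth) skeleton.
auto_skel   = [(0, 0, 0), (1, 0, 0), (2, 1, 0)]
manual_skel = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
print(avg_symmetric_distance(auto_skel, manual_skel))
```

Stricter alternatives (e.g. the maximum rather than the mean, as in the Hausdorff distance) penalize single badly-placed branches more heavily, so the right choice depends on what kind of error matters for the experiment.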
About the script, you need to run it from the Script Editor with Beanshell selected as the language and with the skeleton image already open and selected (otherwise you will get the ImagePlus error message).
Slab voxels can be created by removing end-points of 1-pixel branches or by removing branches that start on a junction that stops being a junction after the removal.