Extracting "local diameters" from 3D objects

Hi! I’ve constructed a pipeline to segment and skeletonize axons in a 3D volume. However, I’m stuck with an analysis issue for extracting certain measurements from my images:

Analysis goals

  • I want to measure the local diameter along the axons to construct a cumulative probability distribution plot of axon diameters in my 3D volumes.
  • My idea is as follows: in the rough schematic above, at a given point along the skeleton (blue), I would take the average length of a few rays (red) cast outward from the skeleton to the axon boundary. This would approximate the diameter at that point. Then I would repeat this for all points along the skeleton in that image volume.


  • I think the hardest part to wrap my head around is determining which direction the axon is traveling in order to determine the proper plane where the cross-section lies (i.e. differentiate between a cross-section vs. a surface).
  • I am also open to measuring local cross-sectional area instead of local diameter, if that is too hard.
  • Put simply, I don’t know where to start or what to do. Is there an algorithm in skimage or opencv that is similar to what I describe?
  • I’ve read some examples about Rayburst sampling that seem to get at what I’m trying to do:
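The ray-casting idea described in the bullets above could be sketched roughly as follows. This is only an illustrative sketch, not a reference to any existing skimage/opencv function: the `mask`/`skeleton` names, the ray-marching step size, and the number of rays are all assumptions. The key part is estimating the local tangent from neighboring skeleton points and casting rays only in the plane perpendicular to it, which addresses the cross-section vs. surface question:

```python
# Hypothetical sketch: estimate local diameter at skeleton points by casting
# rays in the plane perpendicular to the local skeleton tangent.
# `mask` is a 3D boolean array, `skeleton` an (N, 3) array of ordered
# skeleton coordinates; both names are illustrative.
import numpy as np

def perpendicular_basis(tangent):
    """Two unit vectors spanning the plane perpendicular to `tangent`."""
    tangent = tangent / np.linalg.norm(tangent)
    # Pick any helper vector not parallel to the tangent.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, tangent)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(tangent, helper)
    u /= np.linalg.norm(u)
    v = np.cross(tangent, u)
    return u, v

def ray_length(mask, origin, direction, step=0.25, max_len=50.0):
    """March along `direction` from `origin` until the ray leaves the mask."""
    t = 0.0
    while t < max_len:
        p = np.round(origin + t * direction).astype(int)
        if (np.any(p < 0) or np.any(p >= np.array(mask.shape))
                or not mask[tuple(p)]):
            return t
        t += step
    return max_len

def local_diameter(mask, skeleton, i, n_rays=8):
    """Average diameter at skeleton point i from n_rays opposing ray pairs."""
    # Local tangent from neighboring skeleton points (central difference).
    lo, hi = max(i - 1, 0), min(i + 1, len(skeleton) - 1)
    u, v = perpendicular_basis((skeleton[hi] - skeleton[lo]).astype(float))
    diameters = []
    for a in np.linspace(0, np.pi, n_rays, endpoint=False):
        d = np.cos(a) * u + np.sin(a) * v
        # A ray plus its opposite spans one chord (one diameter estimate).
        diameters.append(ray_length(mask, skeleton[i], d)
                         + ray_length(mask, skeleton[i], -d))
    return float(np.mean(diameters))

# Toy example: a straight cylinder of radius 5 along the first axis.
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
mask = (yy - 20) ** 2 + (xx - 20) ** 2 <= 5 ** 2
skeleton = np.array([[z, 20, 20] for z in range(5, 35)])
print(local_diameter(mask, skeleton, 15))  # close to the true diameter of 10
```

On a real, curved skeleton you would want a smoothed tangent (e.g. a spline fit through several skeleton points) rather than the raw central difference, since voxelized skeletons are jagged.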

Please let me know if I can provide any further information. Any help is very much appreciated, thank you so much!

Hi @kamodulin,
This looks like a super interesting and very challenging problem.
It’s not something that people usually do, but here are my two cents:

Using rays pointing from the center reminds me of StarDist, which is used for segmentation. Maybe you can check how they compute those rays and try to reproduce it; their Python code is open source on GitHub.
Might be worth tagging the authors @mweigert and @uschmidt83 for additional feedback.

As a simpler alternative to the rays, you could derive the average radius (or perimeter) from the area of the axon cross-section, if you can approximate the cross-section as a circle (area = π·r²).
You would need to check that this assumption holds, though, by measuring a few cross-sections manually.
To get the area, you would of course need to segment the axon cross-section (with StarDist, maybe?).
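The circle approximation above is a one-liner once you have a segmented cross-section. A minimal sketch with a toy binary image (the disk here is just synthetic test data; in recent versions of scikit-image, `measure.regionprops` exposes the same quantities directly via the `area` and `equivalent_diameter_area` properties of a labeled region):

```python
# Circle approximation: if a cross-section with area A is roughly circular,
# its equivalent radius is r = sqrt(A / pi).
import numpy as np

# Toy cross-section: a filled disk of radius 8 in a 32x32 binary image.
yy, xx = np.mgrid[0:32, 0:32]
section = (yy - 16) ** 2 + (xx - 16) ** 2 <= 8 ** 2

area = section.sum()             # pixel count of the segmented region
r_equiv = np.sqrt(area / np.pi)  # equivalent-circle radius
print(r_equiv)                   # close to the true radius of 8
```

Comparing `r_equiv` against a few manually measured radii would be a quick way to test whether the circular-cross-section assumption holds for your axons.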

If the axon cross-section is not in the imaged plane but tilted, so that one section extends over multiple slices, then it’s quite tricky indeed.
You would probably need to reconstruct the axon first, so that you have some kind of 3D shape representing it. Do you have sufficient resolution for that with the setup you used?