Skeleton branch length producing zero values for most of the soil pores in BoneJ2

11.tif (15.7 MB)

Hi all,

I am trying to measure pores in soil. For this, I am using the Particle Analyser in BoneJ2, which provides the skeleton length and volume for each individual pore. I want to obtain an equivalent radius for each pore from the volume and length given by the Particle Analyser, modelling each pore as a cylinder.
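For reference, treating a pore as a cylinder of volume V and length L gives r = sqrt(V / (πL)); a minimal sketch (the numbers are only illustrative, not from my data):

```python
import math

def equivalent_radius(volume, length):
    """Radius of a cylinder with the given volume and length (V = pi * r^2 * L)."""
    return math.sqrt(volume / (math.pi * length))

# e.g. a hypothetical pore with volume 10 mm^3 and skeleton length 5 mm
r = equivalent_radius(10.0, 5.0)
```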

Attached below is a binary image of soil at 6 cm depth (cropped from the original 50 cm). However, BoneJ reports a skeleton length of 0 mm for most of the pores. Why does it do that?

Also, I am interested in the surface area of each pore. However, BoneJ cannot create a surface mesh for most of the pores, whereas the 3D Objects Counter gives me non-zero surface area values for all pores (I think the Objects Counter uses a different method rather than generating a mesh, unlike BoneJ). Is it possible to take the surface area value from the 3D Objects Counter and relate it to the volume and length from the BoneJ Particle Analyser for each individual pore?

Can anyone please help in solving this problem?

Thanks,
Suman


BoneJ uses Skeletonize3D to do the medial axis thinning to reduce the particles to skeletons. You can get an idea of what is happening by running Skeletonize3D on your input image. Small round particles with no branches get reduced to single pixels that have no length, which is perhaps not what you were expecting. You may get a better estimate of particle ‘length’ by calculating the Feret diameter. Bear in mind though that as a brute-force method the Feret diameter calculation can be quite slow on large particles, such as the one large pore running through your example image. You may have to filter it out based on size or touching edges, or just be prepared to go for lunch while it’s running.

You should set the Surface resampling value to a lower value, for example 1, to ensure that small particles are represented. Note though that this can result in ‘jaggies’ where the rectangular pixel grid is represented in the surface mesh. You will get non-zero surface area values but for small particles it’s likely that you’ll have a lot of error due to the feature size:pixel spacing ratio. I’ve updated the manual to make this clearer.


Thank you for your reply. I got some idea after comparing all the results from BoneJ, the skeletonisation, and the Objects Counter. To get the cross-sectional area of each pore, I think I need to take the ratio of the volume to the length (total branch length) provided by BoneJ's Particle Analyser. In this context, might the Feret diameter underestimate the length?

Setting the surface resampling value to 1 helped. Thanks!

However, I am getting unexpected results for the skeletons. All the Euclidean distances are greater than the branch lengths. Can you give me an idea of how this is possible?

Thanks,
Suman

Let’s go back to your original model assumption, that pores can be modelled as cylinders. Is that a reasonable assumption when your pore has many branches? Are your pores even very cylinder-shaped? What is the purpose of considering a pore as a cylinder?

I would be very careful about using the total branch length as an input parameter for this analysis. Please look in detail at the skeletons that are produced by Skeletonize3D in the 3D Viewer, and see if they are a fair representation.

If you want to get a cross-sectional area only, to e.g. estimate how well fluid might flow through a pore, you may be better off calculating the thickness (i.e. diameter d) of the pores directly, then estimating cross sectional area as A = π(d/2)² assuming a circular cross-section.
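That estimate can be sketched as follows, assuming the local thickness d is taken as the diameter of a circular cross-section:

```python
import math

def cross_section_from_thickness(d):
    """Circular cross-sectional area A = pi * (d/2)^2 from a pore thickness d."""
    return math.pi * (d / 2) ** 2

# e.g. a hypothetical pore of thickness 0.7 mm
area = cross_section_from_thickness(0.7)
```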

A nice addition to this analysis would be to locate pore throats and measure them, but we’re short on resources to do that right now.

That’s true if you have many branches, because the branches in the middle won’t be ‘seen’ by the Feret diameter.

By Euclidean distances, I guess you mean the Feret diameters? In that case it is because Feret diameters always pass from one surface point to another surface point, whereas the medial axis thinning in the skeletonisation method can prune pixels off the ends of branches, depending on the local topography and topology. So you should expect that the skeleton does not extend all the way to the tips of pores.


Hi Michael,

I got your point about using thickness in this case. In fact, I tried it today and got the expected results. You are right; I think the skeleton lengths are not a fair representation for this analysis.

By Euclidean distances, I just meant the Euclidean column obtained from the skeleton "Branch information" results within the BoneJ2 plugin. I get the expected values from the standalone skeleton plugin outside of BoneJ, but if I use BoneJ2's "Analyze Skeleton", it gives me larger Euclidean values. Because of this, I am getting tortuosity values smaller than 1 for all branches.

[Screenshot: Euclidean distance results]

Thanks again for your help!

That’s a bug, thanks for pointing it out. If you compare the result from AnalyzeSkeleton and BoneJ’s AnalyzeSkeleton, you can see there is a constant ratio which is the pixel spacing in the image. That needs to be fixed. I’ve made a bug report about it. In the meantime if it’s important for you, just multiply linear measurements by pixel spacing to calibrate them.
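That workaround can be sketched like this (the branch values are hypothetical; the 0.35 mm pixel spacing is the resolution mentioned in this thread). Tortuosity is branch length divided by Euclidean distance, so once the uncalibrated Euclidean distances are multiplied by the pixel spacing, the values come out at or above 1 as expected:

```python
pixel_spacing = 0.35  # mm per pixel

branches = [
    # (branch_length_mm, euclidean_px) -- illustrative values only
    (1.4, 3.0),
    (0.9, 2.0),
]

# Calibrate the Euclidean distance to mm, then compute tortuosity.
tortuosity = [length / (euclid * pixel_spacing) for length, euclid in branches]
```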

Thank you for this information. It helped a lot!

I was wondering why the surface area results are so much different with the surface resampling value of 2 vs 1. I just wanted to get non zero values in place of 0, but, if I used surface resampling of 1, I get a huge number for all the surface area results. I could not get a clear idea of what’s happening in there. With surface resampling value as it is, i.e 2., I get lower total surface area and it’s quite comparable with past results, but I cannot totally ignore the reason behind this huge difference.

The surface area values from the 3D Objects Counter and BoneJ are quite comparable if I use a surface resampling value of 1 (both producing the larger values).

Thanks,
Suman

Most likely it’s because with surface resampling of 1, the mesh is representing the pixel grid itself, meaning that the mesh is ‘blocky’, or has jaggies. That means that instead of smoothing between pixels (like the hypotenuse of a right-angle triangle, c² = a² + b²), you are measuring the sum of the other two sides (a + b of the right-angle triangle) which is always longer. In the 3D case consider the difference in surface area between a pyramid with perfectly smooth faces (like I.M. Pei’s one at the Louvre in Paris) and an Egyptian pyramid.
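The leg-sum versus hypotenuse comparison can be checked numerically; the "jaggy" staircase path along the two legs is always at least as long as the smooth diagonal:

```python
import math

# Staircase ("jaggy") path along the legs vs the smooth hypotenuse
a, b = 3.0, 4.0
staircase = a + b            # what a blocky, pixel-aligned mesh measures
smooth = math.hypot(a, b)    # what a smoothed mesh approximates
```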

So you have to choose: a smoother result that probably represents the true geometry a bit better but loses under-resolved structures, versus an over-estimated surface area that includes all structures in the final result.


Comparably correct or comparably incorrect? Two programs agreeing doesn't mean that either of them is right. You should make a test image of e.g. a filled sphere with a radius on the scale of your image features and see if you get the predicted surface area (4πr²).
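A sketch of that sanity check, comparing a measured surface area against the analytical value for a sphere (the acceptance tolerance is only illustrative):

```python
import math

def relative_error(measured, r):
    """Relative error of a measured surface area against a sphere of radius r."""
    truth = 4 * math.pi * r ** 2
    return abs(measured - truth) / truth

# e.g. accept a mesh measurement if it is within 10% of 4*pi*r^2
def passes_check(measured, r, tolerance=0.10):
    return relative_error(measured, r) < tolerance
```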

Thanks for the detailed description of smoothing. That helps!

Below are the two surface images produced with resampling factors of 2 and 1. Your point about using a smoothed surface sounds good to me. However, looking at these images, it seems that a factor of 2 considerably underestimates the surfaces. Would you suggest using a resampling factor of 1.5 instead?

Thanks,
Suman

You can’t tell how much it’s “underestimating” anything until you have established a ground truth. You should try making some test images. Try this which makes a cylinder of diameter 8 pixels:

diameter = 8; //<- change this value 

newImage("Untitled", "8-bit black", 128, 128, 1024);
run("Specify...", "width="+diameter+" height="+diameter+" x=64 y=64 slice=1 oval centered");
setForegroundColor(255, 255, 255);
run("Fill", "stack");
run("Select None");

Surface area of the cylinder should be πd(d/2 + h) where d is the diameter and h is the height.
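A quick check of that formula, using the 8-pixel diameter and 1024-slice height from the macro above (πd(d/2 + h) is just the two end caps 2πr² plus the side wall 2πrh, with d = 2r):

```python
import math

def cylinder_surface_area(d, h):
    """Total surface area of a cylinder: two end caps plus the side wall."""
    return math.pi * d * (d / 2 + h)

# diameter 8 px, height 1024 slices, as in the macro
expected = cylinder_surface_area(8.0, 1024.0)
```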

Trying it out shows that changing the diameter (2, 4, 8 pixels) and resampling (1, 2) leads to variation in surface area on the order of 20%, with results quite close to the theoretical prediction (not 2× out; more like 10%). So I suspect the main issue in your images is very small features that disappear when the surface mesh is made. If about half of the structure is very small, that would explain the 2× surface area variation.


I’m not sure if non-integer resampling factors will just be rounded or floored to an integer. You can try and see but I suspect you will get the same answer as resampling with either 1 or 2.

Again, thank you so much; I really appreciate your help. I, too, see that it varies by 15–25% compared with the theoretical prediction. As you said, around 30% of the total pores fail to produce a surface mesh.

The CSV file that I have shared might make clear what's going on. A very large pore in the middle section has contributed much of the difference between the two results. I can see that all of the pores that fail to produce a surface mesh have a thickness of about twice the resolution (2 × 0.35 mm) or so.

I was wondering if the thickness stack would allow me to filter out pores in 3D according to the thickness of a particular cluster, without disturbing the connectivity (unlike Analyze Particles).

Thanks!
Suman

surface.csv (732.8 KB)