How to extract patterns from images of 2D meshes?

Dear forum,

I have several images of a 2D mesh. The mesh is a collection of what appear to be triangles, rhombuses, pentagons and hexagons. Does anyone know if there is a way, with ImageJ (or any other software), to extract the proportion of each of them in an image and later use this information to attempt a reconstruction of the mesh?

Many thanks.

Hello @rhjpires,

If this is the type of image you have, you can apply a threshold to preserve only the yellow pixels and then (if you have consecutive images of the same sample) call the 3D viewer to visualize those regions in 3D.
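In macro form, the thresholding step could look roughly like the sketch below. This is only a rough outline: the hue band for "yellow" is a guess you would have to tune for your data, and the Color Threshold tool itself can generate the exact code for you via its Macro button.

    // Rough sketch: isolate yellow-ish pixels of an RGB image via HSB thresholding.
    run("Duplicate...", "title=hsb");
    run("HSB Stack");          // RGB -> Hue / Saturation / Brightness slices
    run("Stack to Images");    // windows should be named Hue, Saturation, Brightness
    selectWindow("Hue");
    setThreshold(20, 45);      // assumed hue band for yellow (0-255 scale); tune it
    run("Convert to Mask");    // binary mask of the yellow regions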

ignacio


Hi Ignacio,

Thanks a lot for the reply. My aim is not so much to visualize the mesh as to model it. Based on the pattern composition of the structures laid out by the yellow pixels, I would like to find a 2D statistical representation of the mesh… so that, for example, if the mesh is mostly made up of hexagons rather than rhombuses, the final image should be a representation of that.

I’m now trying to Skeletonize the image, hoping that it can give me some statistical measures of the images, such as branch length or the number of branches at each intersection…
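Something like the macro sketch below is what I have in mind, assuming Fiji with the Skeletonize (2D/3D) and Analyze Skeleton (2D/3D) plugins and an already-thresholded (binary) image; the exact option strings may differ between versions, so the macro recorder is the authority here.

    // Rough sketch, assuming a binary image is open in Fiji.
    run("Skeletonize (2D/3D)");
    // Tabulates branches, junctions, end points and branch lengths per skeleton.
    run("Analyze Skeleton (2D/3D)", "prune=none show display");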

If you know any further way to go through this, please let me know!!

Cheers,

R.

I see. In any case, the 3D viewer rendering provides 3D meshes as well, so that might help you.

Another option is to use shape descriptors of the yellow areas, in 2D or 3D, so you can better define their shapes.
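In a macro that could look roughly like this (a sketch only; the minimum particle size is a placeholder you would adjust):

    // Sketch: area and shape descriptors of the thresholded yellow areas.
    run("Set Measurements...", "area perimeter shape redirect=None decimal=3");
    // Records circularity, aspect ratio, roundness and solidity per particle.
    run("Analyze Particles...", "size=10-Infinity show=Outlines display clear add");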


Thanks again Ignacio! The shape descriptors could give some interesting info. In the meantime, Skeletonize makes it easier to see the number of vertices in each yellow patch; the distribution of the number of vertices should give me an idea of the basic pattern underlying the mesh. Unfortunately, I’m doing this manually… and have a lot of images to process. Please let me know if you find something that can help with this; I would very much appreciate it.

Cheers,
R.

Why don’t you use the Color Threshold tool to extract the boundaries of your yellow areas as ROIs? That way you could read all the coordinates automatically within a macro or script.

Have a look at this screenshot (ROIs are magenta):
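Once the ROIs are in the ROI Manager (e.g. via Analyze Particles with "Add to Manager" checked), a loop like this reads them out; a minimal sketch:

    // Sketch: read boundary coordinates and area of every ROI in the ROI Manager.
    n = roiManager("count");
    for (i = 0; i < n; i++) {
        roiManager("select", i);
        Roi.getCoordinates(xpoints, ypoints);   // boundary points of this ROI
        print("ROI " + i + ": " + xpoints.length + " points, area=" + getValue("Area"));
    }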


Ignacio, thanks a million for your input.

I used this approach to determine the area, which is also a very important parameter for me. I am now left with trying to determine the basic shape of the mesh. Defining the yellow areas on the basis of a circularity parameter, for example, can be useful, but I think it is hard to conceptualize. It is a (flexible) filamentous network, so it can be approximated as a stiff mesh composed of some assembly of polygons… but which ones? And in which proportions?

Another perspective on the problem is to imagine that one of the ROIs has a rhombus shape (4 vertices), but once you look at the filaments you see that only 3 of the vertices are involved in connecting to the rest of the mesh, and that the 4th vertex is merely a kink resulting from the flexibility of the filament, which locally bends. This means that in reality that shape, although resembling a 4-sided polygon, is structurally a triangle. See the example below; maybe it makes this clearer:

Better image to illustrate.


I see. Maybe you can render the yellow regions in 3D as isosurfaces using the 3D viewer and then simplify the mesh as much as you want using any of the available tools for polygon/mesh reduction.


I got the impression that a stack is required to convert into a volume - I have only a 2D image…

Sorry, I thought you had some consecutive slices of the same sample.

In any case, you can extract the polygons of each ROI and simplify them in 2D as well, no?
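A very crude 2D reduction can be done by resampling each ROI outline coarsely with Edit > Selection > Interpolate; a sketch, where the 15 px interval is a guess (a proper polygon-simplification step, e.g. Douglas-Peucker, would do better):

    // Sketch: coarse resampling of each ROI outline to merge small kinks away.
    n = roiManager("count");
    for (i = 0; i < n; i++) {
        roiManager("select", i);
        run("Interpolate", "interval=15 smooth");   // resample the outline coarsely
        Roi.getCoordinates(x, y);
        print("ROI " + i + " reduced to " + x.length + " points");
    }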


Hi Ignacio,

Thanks for the continued input. I applied the Skeletonize plugin and used the resulting wireframe/skeleton image to calculate the areas with the “Analyze Particles” tool. This is not a problem for me: the filament structure is extremely convoluted, so an area calculation based on the wireframe model is likely to be more accurate than one based on the original data.
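In macro form, the pipeline looks roughly like this (a sketch; depending on the black/white background convention the Invert step may be unnecessary, and the minimum size is a placeholder):

    // Sketch of the pipeline above, assuming a binary input image.
    run("Skeletonize (2D/3D)");    // reduce the filaments to a 1 px wireframe
    run("Invert");                 // enclosed mesh cells become foreground particles
    run("Set Measurements...", "area redirect=None decimal=3");
    run("Analyze Particles...", "size=10-Infinity show=Nothing display clear");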

I see you were involved in implementing the Skeletonize 2D/3D tool based on a 1994 paper by Lee et al., but there is a 1995 paper by Bertrand and Malandain proposing some corrections. I am not a developer, and was wondering whether the implementation also contemplated their comments, or whether their corrections could in any way have an impact on the analysis of 2D images. Can you help?

This is the reference:
Bertrand, G. and G. Malandain. 1995. A Note on Building Skeleton Models Via 3-D Medial Surface Axis Thinning Algorithms. Graphical Models and Image Processing. 57(6): 537-538.

Many thanks,
Ricardo

Hello Ricardo,

Thanks for pointing this out. If I understood Bertrand and Malandain’s paper correctly, they are only responding to some of Lee’s statements that pointed out potential errors in Bertrand and Malandain’s assumptions.

Are you unhappy with part of the Skeletonize3D behaviour?

Hi Ignacio,

Skeletonize works well for the most part. However, it seems to miss a few branches; in my case this is statistically not very relevant, it is only an issue as a measure of accuracy. Have a look at the composite image below. The white branches stem from the original data, the skeleton is shown in green, and the colored areas highlight the regions that “Analyze Particles” used to determine the area.

Cheers,

R.


I see. In this case it looks like the missing branches are due to the lack of a global border (white frame). Another option would be to use the watershed lines produced by any of the available watershed plugins; see the Morphological Segmentation plugin, for instance.
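As a quick fix for the border issue, you could also draw a white frame on the binary image before skeletonizing, so that cells touching the image edge are closed off; a minimal sketch:

    // Sketch: 1 px white frame along the border of an 8-bit binary image.
    run("Select None");
    setColor(255);                             // white on an 8-bit mask
    drawRect(0, 0, getWidth(), getHeight());   // frame along the image border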
