Creating 3D image based on 2D images

Hi. I have 2D binary images of the top and bottom of 10 discs which are cut from a cylindrical rock sample. I want to create a 3D model of that cylindrical sample based on those binary images and then I want to estimate MIL by running BoneJ on the 3D model. Could you please guide me on how I can create a 3D model with those images in Fiji? Thank you.

What is the reason for measuring MIL? If it is to measure pore and particle width then use Thickness. Anisotropy uses the MIL point cloud to calculate degree of anisotropy. For Anisotropy and the Separation option in Thickness you need to crop your image down to a stack containing only the sample and no ‘outside’ of the cylinder (i.e. a maximally contained ‘box’ within the cylindrical sample).

You might also benefit from using Particle Analyser, which can give you results on a per-particle basis rather than a bulk / continuum basis.

Thank you for your reply. The reason for running MIL is that I want to estimate the fabric tensor so I can use it in my plasticity framework for this rock. So I need an accurate 3D model.

Anisotropy will produce a fabric tensor for you, but it assumes the whole image stack is filled with sample. So you have to crop down to a brick-shaped sample and cut off the ‘air’ outside your cylindrical sample.

Hi @Pouneh_Pakdel,

If by a 3D model you mean a mesh, it’s easy enough to get with the marching cubes implementation available in e.g. the old 3D Viewer library. See the 3D Viewer GitHub repository. For example, in Jython in the Script Editor:

from marchingcubes import MCTriangulator
from customnode import CustomTriangleMesh, WavefrontExporter
from ij import IJ

imp = IJ.getImage() # your ImagePlus image stack with 0 for background and 255 for foreground

mct = MCTriangulator()
threshold = 1
channels = [True, True, True]
resamplingF = 4 # resample at 25%. Adjust as required; minimum 1 (no resampling)
triangles = mct.getTriangles(imp, threshold, channels, resamplingF)

mesh = CustomTriangleMesh(triangles)
filepath = "/path/to/my-mesh.obj"
WavefrontExporter.save({"my-mesh": mesh}, filepath) # takes a Map of name -> mesh

# See:
# https://github.com/fiji/3D_Viewer/blob/aa2ae016f08ac47058135009a8801b81d3c1c0bf/src/main/java/customnode/WavefrontExporter.java
# https://github.com/fiji/3D_Viewer/blob/aa2ae016f08ac47058135009a8801b81d3c1c0bf/src/main/java/customnode/CustomTriangleMesh.java
# https://github.com/fiji/3D_Viewer/blob/aa2ae016f08ac47058135009a8801b81d3c1c0bf/src/main/java/marchingcubes/MCTriangulator.java

Alternatively, you could do all the above by simply loading your label image stack into the 3D Viewer as a mesh, and then exporting it in STL or OBJ (Wavefront) format.

I wonder if #sciview can already do the above? @kephale?

You don’t need a surface mesh for Anisotropy - it works directly on the binary 3D voxels.

But if you do want a mesh, BoneJ has a one-click solution to make a binary STL.

OK - now I get it. You can try loading them all as a stack using Image > Stacks > Images to Stack, or by importing an image sequence (File > Import > Image Sequence). The biggest problem is that you have a lot of missing information: the material that was removed by cutting the slices, and the rock in between the two faces of each disc. Furthermore, if the voxels are anisotropic (better resolution in xy than in z, usually), MIL approaches can break down.
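For scripting the stack assembly, the slices can also be loaded from a folder in one call. A sketch, assuming the 20 binary images sit alone in one folder and sort into the correct order by filename ("/path/to/slices" is a placeholder):

```python
from ij.plugin import FolderOpener
from ij import IJ

# Load every image in the folder as a single stack, in filename order.
imp = FolderOpener.open("/path/to/slices")
imp.show()

# Equivalently, with all 20 images already open as separate windows:
# IJ.run("Images to Stack", "name=Stack use")
```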

Thank you very much @mdoube. You are totally right about the missing information. Unfortunately I don’t have any other choice or more data. Do you know what I should do about the z axis? I have only 20 images. How can I give depth to that stack? I want a depth of 300 cm and a diameter of 150 cm. Is it correct to scale it and set the size under the Image tab, so I can change z from 20 to 300, and then run Anisotropy? Is this approximation correct?

Thank you @albertcardona. I don’t need a mesh. However, I want an accurate 3D model of the sample, based on those 2D images. I tried ImageJ but I don’t know how to create it. When I add my images to a stack and open it in the 3D Viewer, the whole sample looks flat, but every 2D binary image should have a depth of 15 cm, so that stacked on top of each other the 3D Viewer shows the total sample with a depth of 300 cm.

Just change the z-spacing (voxel depth) in Image > Properties to the actual z-spacing, in mm (and also set the xy pixel spacing in mm).

Anisotropy (and other fabric tensor estimation methods) assumes isotropically sampled data. My advice is either to give up trying to get a fabric tensor this way, or to reduce the resolution in xy to match the resolution in z before running Anisotropy.

Thank you very much for your help. I really appreciate it. I will reduce the resolution and give depth based on that for estimating fabric tensor.