From point cloud to triangular mesh

I would like to go from a noisy, irregularly spaced 3D point cloud positioned roughly along a surface, to a surface mesh with somewhat regularly sized triangles. I am not familiar with this field, so I’m having trouble finding the appropriate solution. So far, this is my understanding:

  • NURBS would be a nice way to parametrically describe the surfaces, but to use them I would need data that’s mostly on a single plane in order to sample them regularly to get a mesh. This won’t work for surfaces whose normals vary significantly (e.g. parts of spheres, tubes, blobs).
  • a simple neighbour-to-neighbour mesh would be heavily distorted by the noise (some points may even lie “on top of” each other, along the normal of another point), and its face density would vary based on the sampling. I don’t know if this would be solvable, and I have a hard time finding information on this.
  • a more general approach may be to transform the point cloud into a binary volume segmentation, and use distance transforms similarly to how they are used to find the center point of a volume, but instead to find a “center surface” as a collection of voxels. Is this doable?
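To make the third idea concrete, here is a minimal sketch of the voxelization-plus-distance-transform step using NumPy and SciPy (the library choices, grid resolution, and synthetic sphere data are my own assumptions, not something prescribed by a particular method):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic stand-in for a noisy cloud: points near a unit sphere
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts += rng.normal(scale=0.02, size=pts.shape)

# Voxelize into a binary occupancy grid
res = 32
mins, maxs = pts.min(axis=0), pts.max(axis=0)
idx = ((pts - mins) / (maxs - mins) * (res - 1)).astype(int)
vol = np.zeros((res, res, res), dtype=bool)
vol[idx[:, 0], idx[:, 1], idx[:, 2]] = True

# Close small sampling gaps, then compute the distance of every empty
# voxel to the nearest occupied one; thresholding this field gives a
# thickened shell from which a "center surface" could be extracted
shell = ndimage.binary_dilation(vol, iterations=1)
dist = ndimage.distance_transform_edt(~shell)
```

From here one would still need a surface-extraction step (e.g. marching cubes on the distance field) to get triangles, so this only covers the segmentation half of the bullet above.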

In general, my google-fu is quite weak on this subject. Can someone point me in the right direction? If possible, I would like an approach using Python libraries, though I’m up for implementing the code myself, if it’s within my skills.

This might be a useful starting point to try a few different algorithms without needing to write too much code: Surface reconstruction — Open3D 0.12.0 documentation

I’m not sure how well they’ll be able to handle high amounts of noise though.

Unfortunately, Open3D is a bit outdated and requires a Python version lower than 3.8, while the rest of my code needs later versions. I will take a look at their implementation, though, and maybe I can find some inspiration!