Resampling for 3D viewer

I have been segmenting several axons as area lists from an SEM stack using TrakEM2.
To visualize them in 3D and to export them as Wavefront .obj files, I have used the 3D Viewer in Fiji.
I would like to import these .obj files into Blender and make some measurements, such as surface area and volume. The 3D Viewer asks for a resampling factor. What would be the best value to use, and how does this factor influence the object mesh and the subsequent area/volume analysis?
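For context on the measurements themselves: once you have a closed, consistently oriented triangle mesh (which is what a watertight .obj gives you), both surface area and volume can be computed directly from the triangles, with no voxel data needed. A minimal sketch in plain Python; the cube at the bottom is just hypothetical test data:

```python
# Surface area and volume of a closed, outward-oriented triangle mesh.
# Area: sum of per-triangle areas |(b - a) x (c - a)| / 2.
# Volume: divergence theorem, sum of signed tetrahedra dot(a, b x c) / 6.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def sub(u, v):
    return (u[0]-v[0], u[1]-v[1], u[2]-v[2])

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def mesh_area_volume(verts, faces):
    area = 0.0
    volume = 0.0
    for i, j, k in faces:
        a, b, c = verts[i], verts[j], verts[k]
        n = cross(sub(b, a), sub(c, a))
        area += 0.5 * dot(n, n) ** 0.5
        volume += dot(a, cross(b, c)) / 6.0  # signed; needs consistent winding
    return area, abs(volume)

# Hypothetical test data: unit cube, 12 outward-wound triangles.
verts = [(0,0,0), (1,0,0), (1,1,0), (0,1,0),
         (0,0,1), (1,0,1), (1,1,1), (0,1,1)]
faces = [(0,2,1), (0,3,2), (4,5,6), (4,6,7), (0,1,5), (0,5,4),
         (3,6,2), (3,7,6), (0,4,7), (0,7,3), (1,2,6), (1,6,5)]
print(mesh_area_volume(verts, faces))  # area ~ 6.0, volume ~ 1.0
```

The volume formula only makes sense if the mesh is watertight and the winding is consistent, which is worth checking on an exported mesh before trusting the numbers.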

Thank you for your time,

Naively, I would suggest a resample factor of 1. This uses your original pixels, rather than downsampling them for performance reasons. Your rendering should look crisper without downsampling.

I do not know whether the value affects exported object meshes, though.

Thanks, Curtis.
In fact, that is also what I thought. But in the case of very long axons, a resampling factor of 1 generates quite large .obj files that challenge Blender a lot (it crashes very frequently) once they are imported. So I was trying to understand whether there is a good compromise.

As Curtis says, a resampling factor of 1 will give you a more accurate mesh. That said, AFAIK, the marching cubes algorithm used to produce the mesh from the original image makes no attempt to minimize the number of vertices it produces (although I believe the 3D Viewer implementation does involve some smoothing). This means there is probably both a need for, and some wiggle room for, mesh simplification.
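On the size question: the resampling factor downsamples the image volume before the surface is extracted, and since the triangles live on the object's surface, the triangle budget should shrink roughly with the square of the factor. A rough illustration in plain Python, counting exposed voxel faces of a synthetic sphere as a stand-in for mesh size (the sizes here are made up for the demo):

```python
# Toy model of how a resampling factor shrinks a surface mesh: count the
# exposed faces of a voxelized sphere before and after downsampling the
# volume by striding. Exposed-face count tracks how many triangles a
# marching-cubes-style mesher has to spend on the surface.

def sphere_voxels(size, radius):
    """Set of foreground voxel coordinates for a centered sphere."""
    c = (size - 1) / 2.0
    return {(x, y, z)
            for x in range(size) for y in range(size) for z in range(size)
            if (x - c)**2 + (y - c)**2 + (z - c)**2 <= radius**2}

def downsample(voxels, factor):
    """Keep every factor-th voxel along each axis, like a resample factor."""
    return {(x // factor, y // factor, z // factor)
            for (x, y, z) in voxels
            if x % factor == 0 and y % factor == 0 and z % factor == 0}

def exposed_faces(voxels):
    """Count voxel faces adjacent to background -- a proxy for mesh size."""
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    return sum(1 for (x, y, z) in voxels for (dx, dy, dz) in steps
               if (x + dx, y + dy, z + dz) not in voxels)

full = sphere_voxels(24, 10)
half = downsample(full, 2)
print(exposed_faces(full), exposed_faces(half))
```

In this toy model the face count drops roughly quadratically with the factor, which is why a factor of 2 or 3 often cuts the .obj size dramatically while changing the measured area and volume comparatively little. It is worth meshing the same axon at a couple of factors and comparing the numbers to see what you can tolerate.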

If you can successfully create the mesh in the 3D Viewer, then one thing you can do is decimate it. If you right-click on the mesh itself, you will get a context menu with an item “Decimate mesh”, which simplifies the mesh by reducing the number of vertices. You may want to run the procedure a few times and observe how the quality of your mesh changes.
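If the built-in decimation still leaves the files too large, simplification can also be done outside the viewer. The simplest family of methods, vertex clustering, fits in a few lines: snap vertices to a coarse grid, merge those landing in the same cell, and drop triangles that collapse. A sketch in plain Python; the flat patch at the bottom is hypothetical test data, and `cell_size` is the knob you would tune:

```python
def decimate_by_clustering(verts, faces, cell_size):
    """Merge all vertices falling into the same grid cell of width
    cell_size, then drop triangles that collapse to an edge or point."""
    cell_of = {}      # grid cell -> new vertex index
    remap = []        # old vertex index -> new vertex index
    new_verts = []
    for (x, y, z) in verts:
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        if cell not in cell_of:
            cell_of[cell] = len(new_verts)
            new_verts.append((x, y, z))  # keep first vertex as representative
        remap.append(cell_of[cell])
    new_faces = []
    for i, j, k in faces:
        a, b, c = remap[i], remap[j], remap[k]
        if a != b and b != c and a != c:  # skip degenerate triangles
            new_faces.append((a, b, c))
    return new_verts, new_faces

# Hypothetical test data: a finely triangulated flat patch (9 x 9 vertices).
n = 8
verts = [(i / n, j / n, 0.0) for i in range(n + 1) for j in range(n + 1)]
faces = []
for i in range(n):
    for j in range(n):
        v = i * (n + 1) + j
        faces.append((v, v + 1, v + n + 2))
        faces.append((v, v + n + 2, v + n + 1))

small_verts, small_faces = decimate_by_clustering(verts, faces, 0.3)
print(len(verts), "->", len(small_verts))
```

Real tools, such as Blender's Decimate modifier or MeshLab's quadric edge-collapse filter, use error-aware variants of the same idea and will preserve the shape (and hence the area/volume measurements) much better than this naive clustering, so they are the better choice for the actual analysis.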