How to create this quantitative 3D visualization (4th column) of the distance differences between segmentation and ground truth?
Hi @Ma_Edward, @ThomasBoudier’s 3D ImageJ suite (https://imagejdocu.tudor.lu/plugin/stacks/3d_ij_suite/start) looks like it might have what you need. When I have a chance, I’ll have a quick play around and see if I can get it to do what you need.
Just to check @Ma_Edward, is this starting from having two binary segmentations, where you need to calculate the difference and then visualise, or do you already have the difference values and you just need to visualise them in 3D?
Thank you very much for your reply.
I attach a segmentation and ground truth 3D pair.
compute_and_visualize_difference.zip (577.9 KB)
I still have not found a way to compute the voxel-wise distance. Here is a demo that computes the surface distance; however, the result is a 1D vector rather than a 3D volume.
The data can be loaded with:

```python
import nibabel as nib

# compute_surface_distances is from DeepMind's surface-distance package
from surface_distance import compute_surface_distances

gt_nii = nib.load("path to ground-truth.nii.gz")
gt_data = gt_nii.get_fdata()
seg_data = nib.load("path to segmentation.nii.gz").get_fdata()

# demo code to compute surface distance
distance = compute_surface_distances(gt_data == 1, seg_data == 1,
                                     gt_nii.header.get_zooms())
print(distance['distances_pred_to_gt'].shape)  # result is (287873,)
```
Any guidance would be highly appreciated.
This might be something that ITK would be a good option for; there are nice Python wrappings for the C++ code, so you could keep it all in Python. @thewtex, do you have any suggestions for Edward’s problem?
@Martin_Jones @Ma_Edward Yes, one option is to find the voxels where the segmentation differs, then use a signed distance image filter to find the voxel-wise distance. False positives and false negatives could be identified separately.
For your data:
The signed distances look like:
This was generated with this notebook:
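In case it helps, the same signed-distance idea can be sketched without the notebook using SciPy's Euclidean distance transform (the file names are replaced here by synthetic cube masks, and all variable names are illustrative, not taken from the notebook):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask, spacing):
    """Signed Euclidean distance to the mask surface: negative inside, positive outside."""
    outside = distance_transform_edt(~mask, sampling=spacing)  # 0 inside the mask
    inside = distance_transform_edt(mask, sampling=spacing)    # 0 outside the mask
    return outside - inside

# Synthetic cubes standing in for the .nii.gz volumes in the thread
gt = np.zeros((32, 32, 32), bool)
gt[8:24, 8:24, 8:24] = True
seg = np.zeros_like(gt)
seg[8:24, 8:24, 10:26] = True  # segmentation shifted along one axis

# Pass the voxel spacing (e.g. header.get_zooms()) for distances in mm
dist = signed_distance(gt, spacing=(1.0, 1.0, 1.0))

# Keep the distance only where segmentation and ground truth disagree,
# giving a 3D volume that can be rendered directly
diff = gt ^ seg
dist_volume = np.where(diff, dist, 0.0)
```

The resulting `dist_volume` is a full 3D array: positive values mark false positives outside the ground truth, negative values mark false negatives inside it, and agreeing voxels are zero, so it can be saved as a NIfTI and viewed in any 3D viewer with a diverging colormap.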
Thank you very much for your kind help.
Your notebook is very useful to me.
I am very interested in comparing a prediction made with 3D StarDist against its corresponding ground truth, cell nuclei predictions to be precise. In this case, I would like to compare two 16-bit .tiff stacks of 340×310×80 voxels, and compute the volume and centroid of each nucleus to see how accurate the prediction is. I was wondering if this notebook is the correct one to perform this task, and whether it needs anything special, such as a specific anaconda environment or images of a certain type. Thank you very much for your help in advance.
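For the volume and centroid part, a StarDist label stack can be measured directly with `scipy.ndimage`; here is a minimal sketch on a synthetic label image (the label positions and voxel size are placeholders for your data):

```python
import numpy as np
from scipy import ndimage

# Synthetic label stack standing in for a 3D StarDist prediction (0 = background)
labels = np.zeros((16, 16, 16), np.uint16)
labels[2:6, 2:6, 2:6] = 1      # nucleus 1: a 4x4x4 cube
labels[8:14, 8:14, 8:14] = 2   # nucleus 2: a 6x6x6 cube

ids = np.unique(labels)
ids = ids[ids != 0]  # drop the background label

# Per-nucleus voxel counts; multiply by the voxel volume for physical units
counts = ndimage.sum(np.ones_like(labels), labels, index=ids)
voxel_volume = 1.0  # replace with dz * dy * dx in your units (e.g. um^3)
volumes = counts * voxel_volume

# Per-nucleus centroids in (z, y, x) voxel coordinates
centroids = ndimage.center_of_mass(np.ones_like(labels), labels, index=ids)
```

The same calls work on a ground-truth label stack, so predicted and true nuclei can then be matched (e.g. by nearest centroid) and their volumes compared.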