As far as I can tell, this is not trivial to do, because the distances in one block depend on the distances in the neighbouring blocks. If the distance transform in SciPy were generalized, it would be possible to propagate the distances through the blocks, but unfortunately this is not the case (the docstring of `distance_transform_edt` specifies that the input will be converted to bool).
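To see why the bool conversion blocks this: `scipy.ndimage.distance_transform_edt` treats any non-zero value as foreground, so distances pre-computed in a neighbouring block carry no information when fed back in. A minimal demonstration:

```python
import numpy as np
from scipy import ndimage

# Two inputs with the same zero pattern but different non-zero values,
# as if one of them carried distances propagated from another block.
a = np.array([[0, 1, 1],
              [0, 1, 1],
              [0, 0, 1]])
b = np.array([[0, 7, 3],
              [0, 2, 9],
              [0, 0, 5]])

# Both inputs are converted to bool internally, so the outputs are
# identical: the non-zero magnitudes are simply ignored.
print(np.array_equal(ndimage.distance_transform_edt(a),
                     ndimage.distance_transform_edt(b)))  # True
```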
My suggestion would be to (a) raise an issue in SciPy, since the implementation appears to be quite wasteful (as far as I can tell, the indices array is created even if the user doesn’t ask for it; that’s where I’m getting a MemoryError anyway) and they might be able to improve it relatively easily, and (b) maybe read the Felzenszwalb paper and implement it for scikit-image? It’s a fun paper, I promise!
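For anyone curious, the core of the Felzenszwalb–Huttenlocher paper is a 1-D squared distance transform computed as the lower envelope of parabolas; the n-D transform is just this linear-time pass applied along each axis in turn. A rough sketch (variable names follow the paper; a large finite `INF` stands in for infinity):

```python
INF = 1e20  # large finite stand-in for infinity


def dt1d(f):
    """Squared distance transform of a sampled 1-D function f,
    after Felzenszwalb & Huttenlocher, "Distance Transforms of
    Sampled Functions".  O(n): each x gets the lower envelope of
    the parabolas q -> (x - q)**2 + f[q]."""
    n = len(f)
    d = [0.0] * n                 # output distances
    v = [0] * n                   # positions of parabolas in the envelope
    z = [0.0] * (n + 1)           # boundaries between envelope parabolas
    k = 0
    z[0], z[1] = float("-inf"), float("inf")
    for q in range(1, n):
        # intersection of the parabola rooted at q with the last one kept
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:          # new parabola hides the previous one
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k], z[k], z[k + 1] = q, s, float("inf")
    k = 0
    for q in range(n):            # read the envelope back out
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d


# Binary 1-D example: background (f = 0) at positions 0 and 4.
print(dt1d([0, INF, INF, INF, 0]))  # -> [0, 1, 4, 1, 0]
```

For a binary image you seed `f` with 0 at background pixels and `INF` elsewhere, run `dt1d` along rows, then along columns of the result, and take a square root at the end.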
Otherwise, from me it’s the usual answer: it’s on our radar, and we hope to get to it “soon”!
I implemented a version that does not return the indices array: the active points are kept in a list, and each step grabs a point’s neighbours and propagates outwards. It is a little faster than the scipy.ndimage one, and it costs little memory: a 2048^3 array can be computed on a 32 GB machine.
The output dtype can be specified by the user, and is filled with `distance * scale`. The memory required is:

- an output array in the specified dtype, with the same shape as the input;
- an active-point list in uint64, of length `img.size // 8`;
- an indices-point list in uint64, of length `img.size // 8`.
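Taking those three buffers as the dominant allocations (my reading of the description above, not the author’s exact accounting), the budget for a 2048³ volume works out roughly as:

```python
# Rough memory budget for a 2048**3 volume, assuming the three
# buffers listed above dominate.
size = 2048 ** 3                       # number of voxels

output_bytes = size * 2                # uint16 output, same shape as input
active_bytes = (size // 8) * 8         # active-point list, uint64
indices_bytes = (size // 8) * 8        # indices-point list, uint64

GiB = 2 ** 30
for name, b in [("output (uint16)", output_bytes),
                ("active list (uint64)", active_bytes),
                ("indices list (uint64)", indices_bytes)]:
    print(f"{name:22s} {b / GiB:5.1f} GiB")
total = output_bytes + active_bytes + indices_bytes
print(f"{'total':22s} {total / GiB:5.1f} GiB")  # 16 + 8 + 8 = 32 GiB
```

That is tight on a 32 GB machine, so presumably the point lists do not both stay at full length for the whole run.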
Most of the time a uint16 output is enough: the distance should not be bigger than 65536, and in many cases not bigger than 10000. So with `output=np.uint16` and `scale=4` we get a result rounded at 4× resolution, with an error smaller than 0.25. Of course, we can also pass `output=np.float32` or `output=np.float64` to get accurate values.
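The fixed-point trick can be checked against SciPy directly; a small sketch with `scale=4` (i.e. quarter-pixel distances stored in uint16):

```python
import numpy as np
from scipy import ndimage

# Foreground everywhere except a single background seed.
img = np.ones((64, 64), dtype=bool)
img[32, 32] = False

exact = ndimage.distance_transform_edt(img)    # float64 reference

# Fixed-point storage: round(distance * scale), kept in uint16.
scale = 4
stored = np.round(exact * scale).astype(np.uint16)
recovered = stored / scale

# Rounding to the nearest 1/scale keeps the error within 0.5/scale.
print(float(np.abs(recovered - exact).max()) <= 0.5 / scale)  # True
```

With rounding (rather than truncation) the worst-case error at `scale=4` is 0.125, comfortably inside the 0.25 bound quoted above, and the largest distance must fit in `65535 / scale` for the uint16 not to overflow.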