Transform an image and raster it at specified voxelDimensions

Hi @bogovicj @NicoKiaru @maarzt @hanslovsky ,

I have a short question about Views.raster.

To set the stage, a typical pattern for me is (pseudo-code):

// transform an image
RandomAccessibleInterval rai;
rra = RealViews.transform( Views.interpolate( Views.extendZero( rai ), new NLinearInterpolatorFactory<>() ), transform );
// create a "rastered" image (estimateBounds returns a RealInterval, so wrap it into an integer Interval)
transformedRai = Views.interval( Views.raster( rra ), Intervals.smallestContainingInterval( transform.estimateBounds( rai ) ) );

This is fine; however, sometimes I would like to create my transformedRai at particular voxelDimensions, because the transform might, e.g., include a voxel calibration (as is, for example, the case for the AffineTransform specified in bdv.xml).

The only way I currently know how to achieve this is to put a scaling transform on top of the actual transform:

// scale the transformed image such that integer coordinates correspond to the desired voxel spacing:
double[] scalings = createScalings( originalVoxelSpacings, desiredVoxelSpacings );
Scale scalingTransform = new Scale( scalings );
// note: on an AffineTransform3D, preConcatenate modifies the transform in place and returns it
transformIncludingScalingToVoxelGrid = transform.preConcatenate( scalingTransform );
// and the rest as above

This works, but somehow I feel that something like a Views.raster( rra, voxelDimensions ) would make sense and be more readable. Is there a reason that this does not exist?

How are you guys achieving the task of going back from an rra to a voxel grid?


I am not aware of such a method, but you could use Views.subsample after rastering. This works, of course, only for integer subsampling steps.
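
A minimal sketch of that, reusing transformedRai from the first post (the step sizes 2, 2, 1 are only an example):

// keep every 2nd voxel in x and y and every voxel in z; steps must be integers
subsampled = Views.subsample( transformedRai, 2, 2, 1 );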

I personally do not see a lot of benefit in having a Views.raster( rra, dims ), because it is much less flexible than the AffineTransform3D approach. I typically map into some arbitrary global space, apply the transform A in global space, then map back, pretty much the same thing that you do:

T = G_2^{-1} A G_1, where G_1 maps the source voxel grid into the global space, A is the transform applied in global space, and G_2 maps the target voxel grid into the global space.
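
In imglib2-realtransform terms the composition could look roughly like this (a sketch; a stands for A as an AffineTransform3D, and srcSpacingX/Y/Z and tgtSpacingX/Y/Z are placeholder names for the source and target voxel spacings):

// G1: source voxel grid -> global (physical) space
Scale3D g1 = new Scale3D( srcSpacingX, srcSpacingY, srcSpacingZ );
// G2^{-1}: global space -> target voxel grid
Scale3D g2Inverse = new Scale3D( 1.0 / tgtSpacingX, 1.0 / tgtSpacingY, 1.0 / tgtSpacingZ );
// compose T = G2^{-1} A G1; preConcatenate appends a transform that is applied after the current one
AffineTransform3D t = new AffineTransform3D();
t.preConcatenate( g1 );        // source voxels -> global space
t.preConcatenate( a );         // A in global space
t.preConcatenate( g2Inverse ); // global space -> target voxel grid
// t then goes into the usual interpolate / RealViews.affine / Views.raster pipeline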

Piggybacking on this thread with a question: how is interpolation implemented for AffineTransform3D in imglib2?

I recently did something in Python where I calculated a combined affine transformation on a non-isotropic volume, such as combined = scaling @ shearing @ rotation @ rescaling, where @ stands for matrix multiplication. Applying the combined transformation meant a single transform (interpolating only once), but because the affine transform functions I used had no concept of voxel spacing, the interpolation weights between slices in the source volume were the same as the weights for x/y neighbours, which led to unpleasant artifacts. I had to work around this by applying the scaling transform first and then the other transforms; interpolating twice was better in this case than interpolating incorrectly.

Having a library function that allows passing voxel spacings in as weights for the interpolation would be very useful. Does this work in the imglib2/Java world?

In imglib2, the AffineTransform and the interpolation are handled separately:

// the second argument is an InterpolatorFactory, e.g. an NLinearInterpolatorFactory
realAccessImage = Views.interpolate( voxelGridImage, new NLinearInterpolatorFactory<>() );

In the above line you generate, from an image that lives on a voxel grid, an image that you can query for values at any location in real space. The interpolation scheme that you would like to use is passed in as an argument. Since this is interface-driven, you can write your own InterpolatorFactory (and the interpolator it creates) and thus do whatever you want.
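
For example (a sketch; these three factories ship with imglib2, and voxelGridImage is the same image as above):

// nearest-neighbour interpolation
nearestAccess = Views.interpolate( voxelGridImage, new NearestNeighborInterpolatorFactory<>() );
// (tri-)linear interpolation
linearAccess = Views.interpolate( voxelGridImage, new NLinearInterpolatorFactory<>() );
// Lanczos interpolation (for real-valued pixel types)
lanczosAccess = Views.interpolate( voxelGridImage, new LanczosInterpolatorFactory<>() );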

The AffineTransform is then applied to the realAccessImage:

transformedRealAccessImage = RealViews.transform( realAccessImage, myAffineTransform );
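
To get back onto a voxel grid, the transformed image is then rastered and bounded, just like in the first post (a sketch; the interval is only one possible choice and assumes myAffineTransform is an AffineTransform3D):

// sample at integer coordinates and clip to the transformed bounding box
transformedRai = Views.interval( Views.raster( transformedRealAccessImage ), Intervals.smallestContainingInterval( myAffineTransform.estimateBounds( voxelGridImage ) ) );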