Assessment of chromatic aberration and channel shift

I’m looking for advice on how best to assess the chromatic aberration of a dual-camera microscope. We acquired image stacks of auto-fluorescent beads, and we know there are (at least) two kinds of aberration/distortion:

  • Misalignment of the two cameras (rotation about the optical axis)
  • Chromatic aberration due to the different wavelengths used in the two channels

In the past, we’ve been correcting for these distortions by measuring a combination of:

  • a 2D affine transformation (to account for rotation, shear and scaling in the focal plane)
  • a 3D translation model (to account for possible shifts in the z direction, along the optical axis)

This worked mostly to our satisfaction.
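
In pseudo-Python terms, the fit I mean looks roughly like the sketch below. This is a minimal sketch, not our actual code; the array names and file paths are placeholders, coordinates are in (z, y, x) order, and it assumes scikit-image:

```python
import numpy as np
from skimage import transform

# Placeholder inputs: matched bead centres from the two channels,
# shape (N, 3), in (z, y, x) order.
beads_ref = np.load('beads_channel0.npy')
beads_mov = np.load('beads_channel1.npy')

# 2D affine in the focal plane (rotation, shear, scaling, translation),
# estimated by least squares from the (y, x) coordinates.
affine_2d = transform.estimate_transform(
    'affine', src=beads_mov[:, 1:], dst=beads_ref[:, 1:])

# Global translation along z (the optical axis).
z_shift = np.mean(beads_ref[:, 0] - beads_mov[:, 0])

# Map the moving channel's bead centres into the reference channel.
corrected = np.column_stack(
    [beads_mov[:, 0] + z_shift, affine_2d(beads_mov[:, 1:])])
```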

For some background, see also this related topic from 2015:


Now we have a project where we’d like to determine, as accurately as possible, the 3D distance between two corresponding points in the two channels, so I’d like to revisit the options for measuring the aberration.

I’d expect the transformation field (at least along the z axis) to be somewhat non-linear, e.g. when looking at an xz view of a 3D stack:

[image: xz view of a 3D bead stack]
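
A rough way to check this expectation would be to fit a simple parametric model to the bead displacements. The sketch below assumes matched bead centres as above and, purely for illustration, a radially symmetric quadratic z-shift:

```python
import numpy as np

# Placeholder inputs: matched bead centres, shape (N, 3), (z, y, x).
beads_ref = np.load('beads_channel0.npy')
beads_mov = np.load('beads_channel1.npy')

# z displacement of each bead, and its squared radial distance from
# the centre of the field in the focal plane.
dz = beads_ref[:, 0] - beads_mov[:, 0]
centre = beads_mov[:, 1:].mean(axis=0)
r2 = np.sum((beads_mov[:, 1:] - centre) ** 2, axis=1)

# Least-squares fit of dz = a * r^2 + b; residuals much smaller than
# those of a pure translation (a = 0) would confirm the non-linearity.
a, b = np.polyfit(r2, dz, deg=1)
rms = np.sqrt(np.mean((dz - (a * r2 + b)) ** 2))
print(f'quadratic term: {a:.3g}, offset: {b:.3g}, RMS residual: {rms:.3g}')
```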

So here are my questions:

  • Does anybody have experience measuring/correcting this kind of aberration in 3D microscopy data?
  • Is there anything available (in ImageJ, ITK, scikit-image, or any other tool) to measure non-linear (geometric) transformations and apply them to coordinates of a point cloud (i.e. bead centers), or to an image?

I briefly had a look at the source of multiview-reconstruction by @StephanPreibisch et al., but was overwhelmed by the complexity of the project (and the sparsity of javadoc in some of the classes). Is there maybe some example code illustrating how to interact with the API? Also, I was not sure whether multiview-reconstruction supports non-linear transformations or only affine ones.


I can only answer your second question. =)

If you have any kind of coordinate transformation model, it is reasonably straightforward to optimise an arbitrary function of your model and data using scipy.optimize. You can then apply this transformation to your image using scipy.ndimage.geometric_transform. We provide a template for how to do this with a pixel-to-pixel error model in chapter 7 of Elegant SciPy (also available as a runnable Jupyter Notebook on Binder here). If you have a strong idea of the parameterisation of your transformation (and it sounds like you do), it should not be too hard to generalise to an error model based on your marker coordinates.
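
To make that concrete, here is a minimal sketch of the optimise-then-transform idea. The translation-only model, file names, and (z, y, x) coordinate order are assumptions for illustration; a richer parameterisation only means changing `cost` and `mapping`:

```python
import numpy as np
from scipy import optimize, ndimage

# Placeholder inputs: matched bead centres, shape (N, 3), (z, y, x).
src = np.load('beads_channel1.npy')  # channel to be corrected
dst = np.load('beads_channel0.npy')  # reference channel

def cost(shift):
    """Sum of squared residuals for a global 3D translation model."""
    return np.sum((src + shift - dst) ** 2)

shift = optimize.minimize(cost, x0=np.zeros(3)).x

def mapping(output_coord):
    # geometric_transform calls this once per output coordinate and
    # expects the matching coordinate in the *input* image.
    return tuple(np.asarray(output_coord) - shift)

# moving = ...  # 3D image of the channel to be corrected
# corrected = ndimage.geometric_transform(moving, mapping)
```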

Some further resources:

  • Calling a function in Python is expensive, so if you end up using this in production you will need to pass a LowLevelCallable to ndimage.geometric_transform, because it calls your mapping function once per output coordinate. My blog posts [1, 2] on the topic and the corresponding library should help with this (see the first sketch after this list).
  • Having said this, if your transformation function is vectorized (i.e. it can be applied to NumPy arrays of coordinates directly, e.g. a quadratic formula), then you can use ndimage.map_coordinates as in this Stack Overflow question (second sketch below).
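
For the LowLevelCallable route, here is a sketch of the pattern from the blog posts, using a numba cfunc with the C signature that geometric_transform expects (the constant half-pixel shift is just a stand-in for a real mapping):

```python
import numpy as np
from numba import cfunc, types
from scipy import ndimage, LowLevelCallable

# C signature expected by geometric_transform's callback:
# int mapping(npy_intp *output_coords, double *input_coords,
#             int output_rank, int input_rank, void *user_data)
@cfunc(types.intc(types.CPointer(types.intp),
                  types.CPointer(types.double),
                  types.intc, types.intc, types.voidptr))
def shift_half_pixel(output_coords, input_coords, output_rank,
                     input_rank, user_data):
    # Stand-in mapping: shift every axis by half a pixel.
    for i in range(input_rank):
        input_coords[i] = output_coords[i] - 0.5
    return 1  # nonzero return value signals success

image = np.random.random((128, 128))
shifted = ndimage.geometric_transform(
    image, LowLevelCallable(shift_half_pixel.ctypes))
```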
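
And for the vectorized alternative with map_coordinates, a minimal sketch (the quadratic warp is an arbitrary example):

```python
import numpy as np
from scipy import ndimage

image = np.random.random((128, 128))

# Build the full grid of output coordinates and transform it in one
# vectorized NumPy step, with no per-pixel Python call.
rows, cols = np.mgrid[0:128, 0:128].astype(float)
src_rows = rows + 1e-3 * (cols - 64) ** 2  # arbitrary quadratic warp
src_cols = cols

warped = ndimage.map_coordinates(image, [src_rows, src_cols], order=1)
```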

Finally, we are actually hoping to get this kind of functionality into scikit-image sooner rather than later; we just haven’t had the bandwidth. So please report back if you come up with a nice framework using the above, or maybe even open a pull request! :wink: I can probably work on this in May/June if you (or someone else on this forum) are still interested.
