Hello friendly image analysts!
I’m using a microscope that relies on 3 cameras to collect simultaneously in each of 3 different channels. We’ve attempted to align these cameras as well as possible, but obviously it’s very difficult (nigh impossible) to achieve pixel-perfect alignment. Using 200 nm TetraSpeck beads, I have collected an image with landmarks visible in each channel, and hope to use this image for alignment (files available through the link at the bottom of this post).
The workflow I would like to use is:
- Record alignment image
- Conduct my other imaging
- Calculate alignment from alignment image
- Apply to all images
Using the Descriptor-based registration (2d/3d) plugin I can accurately calculate the transformation between, for example, channel 0 and channel 1. I can also return the transformation from this (e.g. [3,3](AffineTransform[[0.999970929479262, 0.007624971893748, -11.695266931561417], [-0.007624971893748, 0.999970929479262, -16.720713275268054]]) 1.654253271494951). But I have no easy way to apply this transformation to a fresh stack. Is there something I’m missing here?
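In case it helps frame the question: outside of Fiji, the 2×3 affine above can in principle be applied to a fresh stack with a few lines of Python. This is only a sketch of what I mean by "apply to all images", using scipy — the coordinate convention (x/y vs. row/column) may need swapping depending on how the plugin reports its matrix, and the `apply_registration` helper is just something I've named for illustration:

```python
import numpy as np
from scipy import ndimage

# Affine reported by Descriptor-based registration (copied from the log):
# [[a, b, tx], [c, d, ty]]
A = np.array([[0.999970929479262,  0.007624971893748, -11.695266931561417],
              [-0.007624971893748, 0.999970929479262, -16.720713275268054]])

M = A[:, :2]  # 2x2 linear part (rotation/scale/shear)
t = A[:, 2]   # translation

def apply_registration(img, M, t):
    """Warp a 2D channel with the forward affine x' = M @ x + t.

    scipy's affine_transform maps *output* coordinates back to *input*
    coordinates, so we pass the inverse of the forward transform.
    """
    Minv = np.linalg.inv(M)
    return ndimage.affine_transform(img, Minv, offset=-Minv @ t, order=1)

# Apply the same transform to every slice of a fresh (z, y, x) stack:
stack = np.random.rand(5, 256, 256)
registered = np.stack([apply_registration(sl, M, t) for sl in stack])
```

That works for me as a one-off, but I'd much rather stay inside Fiji so the interpolation and axis conventions are guaranteed to match what the plugin computed.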
I contacted Stephan Preibisch (author of SPIM Registration and this Descriptor-based registration plugin) for advice, and he suggested using their new tool suite, BigStitcher. As BigStitcher seems to be designed mostly for multi-view reconstruction or image stitching, it’s not immediately clear to me how to conduct channel registration with it. The method that seemed most promising (Multiview -> Detect Interest Points) pulls up the same dialog as Descriptor-based registration, but fails to detect any points, even with the same settings that worked above.
Any recommendations for a scriptable protocol for doing what I’m aiming for?