Locate field of view (cell bodies) in larger image (nuclei)

Sample image and/or macro code

Background

I have a field of view from calcium imaging which represents the cell bodies. I then use the coverslip from the experiment to do some stainings, including nuclei.

Analysis goals

I am interested in locating the field of view (cell bodies) from the calcium imaging inside the larger image of nuclear stainings.

Challenges

  • What stops you from proceeding?

I am not able to find a package/method that could achieve what I want.
The three main problems I see are as follows:

  1. The problem is not finding an exact copy of one image inside another. I am trying to find the position of the cell bodies given only the nuclei.
  2. The histological stainings can cause some of the cells to detach, so the template image will probably not be found 100% intact in the nuclei image.
  3. The orientation of the field of view is arbitrary. Since the coverslip is a disc, it could have been turned, so all 360° are possible.
  • What have you tried already?

I tried the SIFT/MOPS algorithms that come with Fiji. MOPS finds 5 similar features, but when checked they do not actually correspond.

  • Have you found any related forum topics? If so, cross-link them.
  • Idea for solution:

Detect cells and nuclei separately (maybe using cellpose: http://www.cellpose.org/) and then calculate the relative distances between the cell bodies. Then use these relative distances to find the region of the nuclei image with the highest number of matching relative distances.

I have no clue how to actually implement that; it is just a thought, roughly sketched below.
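Roughly, the shape of what I imagine is something like this (completely untested; it assumes I already have centroids from both images as (N, 2) arrays, e.g. from cellpose masks, and all names are placeholders):

```python
# Very rough sketch of the "relative distances" idea (untested).
import numpy as np
from scipy.spatial import cKDTree

def knn_distance_signature(points, k=8):
    """Sorted distances to the k nearest neighbours of every point
    (rotation- and translation-invariant)."""
    tree = cKDTree(points)
    # query k+1 because the nearest neighbour of a point is itself
    dists, _ = tree.query(points, k=k + 1)
    return np.sort(dists[:, 1:], axis=1)

def score_points(sig_fov, sig_nuclei, tol=2.0):
    """For each nuclei point, count how many of its neighbour distances
    agree (within tol pixels) with its best-matching FOV point."""
    scores = np.zeros(len(sig_nuclei))
    for i, s in enumerate(sig_nuclei):
        agree = np.abs(sig_fov - s) < tol
        scores[i] = agree.sum(axis=1).max()
    return scores

# cells_xy: centroids from the calcium FOV, nuclei_xy: centroids from the
# big nuclear staining (both hypothetical (N, 2) arrays)
# scores = score_points(knn_distance_signature(cells_xy),
#                       knn_distance_signature(nuclei_xy))
# The highest-scoring points should cluster in the region of the nuclei
# image that corresponds to the field of view.
```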

Other related options: https://en.wikipedia.org/wiki/PatchMatch

Hi @Cumol

SIFT features are no good in this case, as they compute features based on texture, and the texture of each of these blobs is very similar.
The best approach would be to find the centroid coordinates of each cell (as you suggest, cellpose or StarDist can do that easily, but thresholding + watershed should also work).
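For the thresholding + watershed route, a minimal scikit-image sketch could look something like this (generic, and the parameter values are guesses that will need tuning for your images):

```python
# Minimal sketch: Otsu threshold + distance-transform watershed, then
# take the centroid of each label.
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, filters, measure, segmentation

def detect_centroids(image, min_distance=10):
    """Return an (N, 2) array of (row, col) centroids of bright blobs."""
    mask = image > filters.threshold_otsu(image)
    distance = ndi.distance_transform_edt(mask)
    # local maxima of the distance map act as watershed seeds
    peaks = feature.peak_local_max(distance, min_distance=min_distance,
                                   labels=measure.label(mask))
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = segmentation.watershed(-distance, markers, mask=mask)
    return np.array([r.centroid for r in measure.regionprops(labels)])

# hypothetical usage:
# cells_xy = detect_centroids(calcium_fov_image)
# nuclei_xy = detect_centroids(nuclei_image)
```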
Then you have two point clouds, and you want to look for point pattern matching algorithms.
If you don’t have texture features, you can try Iterative Closest Point (ICP). This is more commonly used in 3D, but there are 2D implementations, e.g. here: https://github.com/richardos/icp/blob/a955cc674ef8da6f3ed4460eb132c4e150e8ad1b/examples/example.py
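A bare-bones 2D ICP looks roughly like this (just a sketch to show the structure; the linked repo is more complete, and ICP needs a reasonable initial alignment, so with an arbitrary rotation you may have to try several starting orientations):

```python
# Bare-bones 2D ICP sketch (rigid: rotation + translation).
import numpy as np
from scipy.spatial import cKDTree

def best_fit_rigid(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:      # avoid reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def icp(src, dst, iterations=50):
    """Iteratively align the src point cloud to the dst point cloud."""
    tree = cKDTree(dst)
    current = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)   # closest dst point for each src point
        R, t = best_fit_rigid(current, dst[idx])
        current = current @ R.T + t
    # recover the accumulated transform from the original to the final points
    return best_fit_rigid(src, current)

# hypothetical usage with centroid arrays:
# R, t = icp(cells_xy, nuclei_xy)
```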
The other option (similar to what you suggest yourself) is to calculate a feature vector for each point based on the number, distance and direction (difficult, as the orientation is arbitrary) to its nearest neighbours, and then use a more traditional affine estimation.
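For that second option, one possible sketch is: use the sorted distances to the k nearest neighbours as a rotation-invariant descriptor, match descriptors by brute force, then fit an affine model robustly with RANSAC (untested, thresholds are guesses):

```python
# Descriptor matching + RANSAC affine estimation sketch.
import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import ransac
from skimage.transform import AffineTransform

def neighbour_descriptors(points, k=6):
    """Sorted distances to the k nearest neighbours (orientation-free)."""
    dists, _ = cKDTree(points).query(points, k=k + 1)
    return np.sort(dists[:, 1:], axis=1)

def match_points(src, dst, k=6):
    """Match each src point to the dst point with the most similar descriptor
    (brute force; fine for up to a few thousand points)."""
    d_src, d_dst = neighbour_descriptors(src, k), neighbour_descriptors(dst, k)
    idx = np.argmin(((d_src[:, None, :] - d_dst[None, :, :]) ** 2).sum(-1), axis=1)
    return src, dst[idx]

# cells_xy / nuclei_xy: hypothetical (N, 2) centroid arrays
# src, dst = match_points(cells_xy, nuclei_xy)
# model, inliers = ransac((src, dst), AffineTransform, min_samples=3,
#                         residual_threshold=5, max_trials=2000)
# model then maps calcium-FOV coordinates into the nuclei image.
```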


Thank you for your great suggestions, I will give them a try and report back! 🙂

In Fiji you also have options for doing descriptor-based registration.
There are also links to some videos describing it:

Plugins > Registration > Descriptor-based_registration_(2d/3d)


Also, this thread might be relevant if you want to do this in the Java/Fiji ecosystem:
