Development of Python-based 2D section -> 3D atlas registration tool

Hi all,

During the development of cellfinder and amap, I have been asked many times about registration of 2D tissue sections to a 3D anatomical atlas, and there appears to be demand for a new Python-based tool to do this.

To be clear, there are already many excellent tools that do just this, and my previous response was to point users to these tools, and to avoid reinventing the wheel. However, it appears that for one reason or another, these existing tools are not suitable for everyone.

There will soon be someone recruited specifically for this project, but rather than developing this in private and then releasing it to the world, we are following the lead of @haesleinhuepf (and many others) by developing entirely in the open. We are inviting anyone interested to get in touch and help develop core functionality, work on new features, test the software or suggest improvements.

I have set up a GitHub organisation to coordinate the development of this tool, so if you are interested in joining the discussion, send me your GitHub username and I can add you. Alternatively, feel free to message me on here, or via twitter.

As GitHub organisation discussions are private, I’ve copied the initial message below.

If you have any questions, please get in touch.

Thanks,
Adam



Hi all,

Thanks for responding to the call about developing a Python-based 2D section -> 3D atlas registration tool.

Background

The motivation behind this comes from the many requests I have received about adding 2D data support to cellfinder and amap.

To be clear, there are many excellent tools that already do this, including (but not limited to):

However, there appears to be a demand for a new Python-based approach to solving these problems. Benefits could include:

  • A more comfortable development environment for the many biologists moving to Python to take advantage of new tools (such as the excellent DeepLabCut).
  • Easier integration with Python-based tools, such as napari for visualisation and tensorflow for machine learning.
  • Integration with cellfinder and amap. This would allow the use of additional features developed for these tools, and exploration of multi-modal data.
  • Integration with additional tools for visualisation, such as brainrender by @FedeClaudi.
  • Compatibility with any atlas from any species via the brainglobe project. The first version will be released soon, with multiple atlases for mouse, rat, zebrafish and human.

Aims

Although amap works well and has been validated, it is limited to a specific type of data (full 3D images of an entire mouse brain). amap will be extended to other species and atlases via the brainglobe project, and registering smaller 3D volumes should also be possible (see issue).

What is missing is a 2D image to 3D atlas registration package, which will:

  • Be compatible with any existing atlas
  • Support arbitrary planes of section
  • Deal with sample imperfections (tilts, rotations, damage)
  • Support many sample preparation and staining techniques
  • Support multiple imaging modalities
  • Support the transformation of detected features (cells, fibre tracts) into standard space
  • Allow manual curation
  • Have batch functionality (for processing many sections from one organ)

I’m sure many of these aims have been achieved by existing software. We in no way want to reinvent the wheel, but as it stands, it seems that the available tools are not appropriate for all users. With this in mind, if you know of any existing tools (or have developed them), we would love to work together to make them compatible.

Why does this organisation exist?

Although this project aims for compatibility with cellfinder, amap and brainrender, it will be a standalone tool, and the development will be led by @jonnykohl and his lab.

Rather than develop a tool for one application in a single lab, we want to make a tool with and for the community. Too much software in academia is released with little attention to how easy it is to use, and with no plan for longer-term support.

With this in mind, we would like your help to develop this software collaboratively. This could include:

  • Suggesting features we may not have thought of, and which could help your research
  • Testing early versions of the software and submitting bug reports
  • Contributing to the project:
    • Developing core functionality
    • Adding additional functionality
    • Writing and improving documentation and tutorials

What’s in it for me?

The reason we are developing all this in the open is to prevent duplication of effort. By contributing to this project, you can help develop the perfect tool for your research, without having to develop the whole thing.

How can I help?

Initially you can help in three ways:

  • Get the ball rolling, and submit a pull request. Take a look at the (slightly out of date) roadmap, which will give some hints on how to get started.
  • Comment here, or raise issues on the slicereg repository about features you would like to see.
  • Spread the word, and tell anyone else you think might be interested.

N.B. Initial development may be a bit slow until the person with the main responsibility for this work starts.



If anyone wants to be paid to work on this, Johannes Kohl’s lab at the Francis Crick Institute in London (UK) is hiring for a 12-month (potentially remote) position. More details to follow, but get in touch with me if you’re interested.

job_description_sliceReg.pdf (72.8 KB)


Application now live here: bit.ly/slicereg-job. Deadline July 26th.


Any update here? A collaborator is very interested in such a tool.

Is there also a potential workaround? Say, could we run aMap but specify some ridiculously large z-spacing?

Hi Bryant,

No update yet I’m afraid.

You can certainly use amap (or the new version, brainreg) with a large z spacing, but the assumption is that each image plane will be registered to the others. This is usually not the case with “traditional” 2D histology data.

Let me know if I can help in any way.

Adam

Hi Adam,

Thanks for starting this effort. I’m thinking about trying another approach: we have 6 slices at random intervals through the brain. Perhaps we could copy some of them so that the “interval” is consistent, then choose the slice that properly corresponds to our data.
For example:
idx = atlas slice position, value = sample position
our data: [4, 6, 7, 9, 15, 20, 21]
new data: [4, 4, 4, 4, 6, 6, 7, 9, 9, etc…]
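Something like this minimal sketch (Python/NumPy, with hypothetical array names and sizes) is what I have in mind:

```python
# Minimal sketch of the duplication idea above (hypothetical variable names):
# build an evenly spaced pseudo-3D stack by assigning each atlas plane the
# nearest acquired section.
import numpy as np

sample_positions = np.array([4, 6, 7, 9, 15, 20, 21])        # atlas plane index of each section
sections = [np.zeros((512, 512)) for _ in sample_positions]  # stand-ins for the real 2D images

n_atlas_planes = 25  # number of planes in the (downsampled) atlas along this axis
stack = np.empty((n_atlas_planes, *sections[0].shape), dtype=sections[0].dtype)

for plane in range(n_atlas_planes):
    nearest = int(np.argmin(np.abs(sample_positions - plane)))  # closest acquired section
    stack[plane] = sections[nearest]

# `stack` could then be fed to a 3D -> 3D registration tool such as brainreg.
```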

In theory that could work, but your registration might not be that accurate. I suppose it depends what accuracy you need.

Are your 2D planes already aligned to each other?

No, the slices are not aligned. The data is crudely cropped from the tiled set. Probably complicating things is that these are sagittal sections.

Apologies too, I’m not a brain researcher and am very inexperienced with the brain atlases, CCF, etc.
Regarding a more robust tool (that is the topic of this thread): one can explore sagittal sections in the Allen Atlas viewer (example) and retrieve these as .svg files through the Allen API. Can aMAP transformations operate on 2D-to-2D planes? If we remove some level of automation (slice to slice, instead of slice to 3D volume), perhaps this problem is easier?

Unfortunately if the 2D planes are not aligned, then this approach won’t work.

The orientation of the data doesn’t matter, but the current approach (in amap and brainreg) doesn’t work for 2D data, only 3D -> 3D, hence the need for a new 2D -> 2D tool.

So it seems to “work” in that I get predictions. But it’s clearly trying to align coronal atlas slices with our sagittal slices. Perhaps this is where the journey ends; my assumption is that 3D to 3D mapping will not care about (or will automatically determine) orientation. Or, is there a way to define the initial atlas orientation?


Hi @bchhun. I would say that actually looks promising!

The 3D to 3D registration is forgiving of minor orientation changes (e.g. off-axis sectioning), but it will find local minima when the two images are more than about 45 degrees off. If you reorient one of your 3D images to match the other (doesn’t matter which), I would expect reasonably good registration based on these images. If you’re using brainreg, you can specify the orientation of your data, to avoid these issues.

We’re still a way off proper 2D to 3D registration, but maybe it’s easier for us to have a chat sometime to see if I can be more help? Feel free to email me at adam.tyson@ucl.ac.uk.


Hi @adamltyson,

Thanks a lot for your very detailed post and congratulations on your open approach!

For the record, in the Java/Fiji ecosystem, at least two projects are dedicated to this subject (registration of 2D slices in a 3D atlas). Neither of them has been officially released yet:

1 - one is located here (https://github.com/fmeyenhofer/ABA_J); it’s an initiative originally started by the Lamy lab (https://www.unige.ch/medecine/Lamylab/en/)

2 - another one was initiated in our own imaging facility (https://www.epfl.ch/research/facilities/ptbiop/). The source code is located here (https://github.com/BIOP/ijp-imagetoatlas/tree/sourceandconverter); we’ve done this openly (all the work and dependencies are accessible on GitHub) but the announcement was rather … confidential!

While these projects are redundant with yours, I believe there’s a need both in Python and Java for such tools, and while there is an ever-growing user base in Python, some people are more comfortable staying on the Java side (the Fiji and QuPath software).

On the technical side, what still has no equivalent outside of Java is the handling of multiple file formats thanks to Bio-Formats and its multiresolution support. This is part of the reason why QuPath has such a huge user base.

That being said, it would be awesome to have good interoperability between all tools, and to discuss how to make sure that the results of all software can be easily shared.

I was thinking that maybe making a comparison table like the one done by @romainGuiet for 3D annotation tools could be great and help us learn from each other (Comparaison of some tools for 3D dense ground truth annotations).

I’ve done a draft here: (https://docs.google.com/spreadsheets/d/1K8hGOXA6HuamGQwQ_lmwX5robOQ7q6nKeZ3WJWWTCjA)

Anybody interested in contributing / modifying the criteria, please send me a message and I’ll give the editing rights.

@adamltyson, @FeliXM, @bchhun,

Best,

Nicolas


Hi @NicoKiaru, thanks for your reply.

I assumed there were some options in the Java world (other than our old 3D registration package). Thanks for sharing!

I believe there’s a need both in Python and Java for such tools

I totally agree. Our motivation for building a tool in Python is convenience: we develop in Python, and we have an existing ecosystem of Python tools.

That being said, it would be awesome to have good interoperability between all tools, and to discuss how to make sure that the results of all software can be easily shared.

I also agree. I’ve had a few chats with developers of these kinds of tools, and the general consensus is that everyone would like common formats for intermediate and output data (detected cell positions, transforms between coordinate spaces etc.), but so far there hasn’t been much work on this. I’ve done some work to try and standardise things in our new 3D -> 3D registration tool (brainreg), such as saving deformation fields as a series of 3D tiff files that can be processed by any other software.
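As a rough sketch of what I mean (this isn’t brainreg’s actual code, and the file names, axis order and units below are assumptions, so please check the brainreg documentation for the real output conventions), downstream software could read and use such deformation fields along these lines:

```python
# Hypothetical sketch: read per-axis deformation fields saved as 3D TIFFs and
# use them to map a voxel in the registered sample image into atlas space.
# File names, axis order and units are assumptions, not a documented API.
import numpy as np
import tifffile

# one 3D volume per atlas axis, each the same shape as the registered sample image
fields = [tifffile.imread(f"deformation_field_{i}.tiff") for i in range(3)]


def sample_voxel_to_atlas(z, y, x):
    """Return the atlas-space coordinate of a voxel in the registered sample."""
    return np.array([field[z, y, x] for field in fields])


print(sample_voxel_to_atlas(100, 200, 150))
```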

Some kind of public discussion would be good for this. I think neuroimaging is one of the few areas of microscopy analysis that could benefit from some more standards. I’d love to be corrected, but I don’t know of other areas of microscopy analysis where users regularly need to process varied multi-modal data in a common anatomical reference space. These standards however could be very simple (i.e. no new file formats, just an agreed layout).

Another area where interoperability would be helpful is with the atlases. Most registration packages only work with one atlas, whereas brainreg uses the BrainGlobe atlas API to support a (growing) selection of standardised atlases for different species and at different resolutions. A Java implementation of this to interface with these atlases could be helpful (but outside the expertise of the BrainGlobe team).
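For anyone who hasn’t seen it, fetching an atlas through the BrainGlobe atlas API looks roughly like this (names follow the bg-atlasapi documentation as I understand it, so double-check against the current docs):

```python
# Rough example of the BrainGlobe atlas API (bg-atlasapi); check the BrainGlobe
# documentation for the current package and attribute names.
from bg_atlasapi import BrainGlobeAtlas

atlas = BrainGlobeAtlas("allen_mouse_25um")  # downloaded and cached on first use

reference = atlas.reference     # reference image as a numpy array
annotation = atlas.annotation   # region ID for every voxel
print(atlas.resolution)         # voxel size in microns
print(atlas.structures["CTX"])  # metadata for a region, looked up by acronym
```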

On the technical side, what still has no equivalent outside of Java is the handling of multiple file formats thanks to Bio-Formats and its multiresolution support.

This is certainly an issue on the Python side for 2D -> 3D analysis. For 3D whole-brain data, users tend to have a series of tiffs, but this is certainly not the case for 2D. My long-term plan for this is to leverage napari, and the growing ecosystem of plugins, but bioformats in FIJI will always win here, for the number of supported formats, and ease of use.
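As a minimal illustration of that plan (standard napari usage; the file name is hypothetical, and which formats actually open depends on the reader plugins you have installed):

```python
# Minimal sketch: open an image in napari, delegating format support to
# whichever reader plugins are installed.
import napari

viewer = napari.Viewer()
viewer.open("my_section.czi")  # hypothetical file; reader chosen from installed plugins
napari.run()
```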

I’d love to add to the comparison table (adam.tyson@ucl.ac.uk). I know many of the developers of the other tools, so I can invite them too if that’s ok?

Best,
Adam


Thanks for this answer, I’ll reply to the rest in more detail, but for this:

Of course! That’s the idea. I’ll add you right now, and you (and any other interested person) can PM me to be added.


Thanks a lot @adamltyson for contacting the developers. And thank you everybody for filling in the document (I don’t know who’s on the forum and who’s not, but it would be great to know!).

At this point, what do you think about:

1 - opening another thread related to this table in the forum? If somebody would like to add another tool, it would be easier to find on the forum.

2 - regarding the questions of interoperability, what do you think of setting up a chat to discuss this? A Gitter channel linked with the slicereg or brainreg organisation?


I don’t know who’s on the forum and who’s not

I don’t think that anyone is on the forum, but maybe I’m wrong.

1 - opening another thread related to this table in the forum? If somebody would like to add another tool, it would be easier to find on the forum.

Yep, I think discoverability is important, and we can save this thread for any slicereg questions.

2 - regarding the questions of interoperability, what do you think of setting up a chat to discuss this? A Gitter channel linked with the slicereg or brainreg organisation?

I like Gitter, but maybe the forum is a better place for it? I think developers are more likely to see it here, and it seems a bit more permanent.


I agree a new thread in this forum is a good spot for this topic. @NicoKiaru do closed-source applications/tools like HistoloZee also belong in the table?


My vote would be to move the table discussion to a new thread in this forum. Please link it here too.


The new thread is here for the comparisons:
