Announcing ELEPHANT: Tracking cell lineages in 3D by incremental deep learning

We are releasing ELEPHANT, a platform that integrates cell annotation, deep learning and proofreading of cell lineages in 3D+time image datasets.

Incremental learning in ELEPHANT allows users to train deep learning models efficiently.

ELEPHANT extends Mastodon (@tpietzsch @tinevez @tomancak), which provides an excellent tracking interface.

ELEPHANT implements a client-server architecture, allowing flexible communication between the Java-based user interface and the Python-based backend. Furthermore, once the server is set up, end-users do not need a powerful computer equipped with a high-end GPU.
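The client-server split can be illustrated with a minimal sketch in Python: a thin client only needs to speak HTTP/JSON, while the heavy computation stays on the server. Note that the `/predict` endpoint and the payload shape below are hypothetical, for illustration only, and are not ELEPHANT's actual API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Minimal server: receives an annotation payload, returns a mock "prediction".
# The /predict endpoint and payload shape are hypothetical, for illustration.
class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        # A real backend would run GPU inference here; we just echo a summary.
        result = {"n_spots_received": len(payload["spots"]), "status": "ok"}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), InferenceHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: POST spot coordinates as JSON, read back the server's reply.
req = Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"spots": [[10, 20, 5], [11, 21, 5]]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    reply = json.loads(resp.read())
print(reply)  # {'n_spots_received': 2, 'status': 'ok'}
server.shutdown()
```

Because the interface is plain HTTP, the client machine can be a modest laptop while the server runs wherever a GPU is available, on the same machine or elsewhere on the network.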

The user manual is available here, where you can find the download links for the client and server applications.
A detailed description of the method can be found on bioRxiv.
We would appreciate your feedback.

Research supported by ERC.

Twitter: @ELEPHANT_track

@ksugawara, @Cagri Cevrim and @Michalis Averof


Very interesting application. I'm looking forward to diving into it some more.

My company produces an augmented reality attachment for microscopes, and we currently have a number of features implemented. I would be very interested in utilizing ELEPHANT in a real-time pathology workflow. However, due to network restrictions at labs using our device, integration as it currently stands would be cumbersome.

Since our system uses the microscope as a second screen alongside the PC display, do you have any plans or current methods to make this extendable as a plugin for another client, with either a locally running server or a container that can be deployed on a local network for processing images in real time?

Hi @heyJonBray, thank you for your interest in ELEPHANT!

Currently, we support only Mastodon as a client, but ELEPHANT can be extended to other platforms.
The ELEPHANT server can be set up either on the same computer as the client or on another computer in your local network. We currently support a Docker-based setup on Linux-based systems (see the detailed system requirements). Please see this section in the user manual about connecting to a remote server.

Because your workflow is for pathology, I assume that you are interested in the detection part rather than the linking part. May I ask whether your images are 2D or 3D?
Although it depends on the image size, the processing speed is generally fast enough for real-time interaction (Supplementary Table 1).
Support for 2D images is still a work in progress, but 2D processing should generally be faster than 3D.
