MultiViewReconstruction GPU processing

Hello, @StephanPreibisch @Tobias @hoerldavid

Could someone help us get the MultiViewReconstruction plugin to run on the GPU?

To get this thread started, I am quoting my colleague Raul from EMBL Spain:

To enable GPU hardware-accelerated processing, you have to use JNA to load system libraries containing the CUDA bindings and code. We are testing the separable convolution library:

https://github.com/StephanPreibisch/SeparableConvolutionCUDALib
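For context, loading such a library from Java via JNA looks roughly like this. This is a hypothetical sketch (using JNA 5.x); the interface name and the exported function are placeholders for illustration, not the actual exports of SeparableConvolutionCUDALib (check its headers for the real signatures):

```java
import com.sun.jna.Library;
import com.sun.jna.Native;

// Hypothetical JNA binding -- the function name and signature below are
// placeholders, not the real exports of SeparableConvolutionCUDALib.
public interface SeparableConvolutionCUDA extends Library {

    // JNA resolves the platform-specific file name, e.g.
    // SeparableConvolutionCUDALib.dll on Windows or
    // libSeparableConvolutionCUDALib.so on Linux.
    SeparableConvolutionCUDA INSTANCE =
            Native.load("SeparableConvolutionCUDALib", SeparableConvolutionCUDA.class);

    // Assumed entry point: convolve a 3D float image with a 1D kernel
    // along one axis on the given CUDA device.
    int convolve(float[] image, float[] kernel,
                 int width, int height, int depth, int axis, int deviceId);
}
```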

The GitHub page provides a binary release as well as the source code. We have tried both the binary and compiling the source ourselves with VS2019 and CUDA 10.1, with the same result: the process tries to allocate GPU memory and, in the best case, ImageJ crashes, even though the program correctly detects that the dataset is larger than the free GPU memory and splits it into blocks for processing.
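In case it helps with debugging, a quick way to check what the GPU side actually reports is to query the CUDA runtime directly via JNA. `cudaMemGetInfo` is a real CUDA runtime call; the DLL name, the safety margin, and the block arithmetic below are our own assumptions for illustration, not what the plugin does internally:

```java
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.ptr.LongByReference;

public class GpuMemCheck {

    public interface CudaRuntime extends Library {
        // On Windows the runtime DLL carries the version in its name,
        // e.g. "cudart64_101" for CUDA 10.1; on Linux "cudart" usually resolves.
        CudaRuntime INSTANCE = Native.load("cudart64_101", CudaRuntime.class);

        // size_t* maps to LongByReference on 64-bit platforms
        int cudaMemGetInfo(LongByReference free, LongByReference total);
    }

    public static void main(String[] args) {
        LongByReference free = new LongByReference();
        LongByReference total = new LongByReference();
        int status = CudaRuntime.INSTANCE.cudaMemGetInfo(free, total);
        if (status != 0)
            throw new RuntimeException("cudaMemGetInfo failed with CUDA error " + status);

        // Example dataset: 2048 x 2048 x 500 voxels as 32-bit floats (made-up numbers).
        long datasetBytes = 2048L * 2048L * 500L * 4L;
        long usable = (long) (free.getValue() * 0.8);        // keep a safety margin
        long blocks = (datasetBytes + usable - 1) / usable;  // ceiling division

        System.out.printf("free = %d MB, total = %d MB -> would need %d block(s)%n",
                free.getValue() >> 20, total.getValue() >> 20, blocks);
    }
}
```

If this reports far less free memory than `nvidia-smi` shows, another process may already be holding GPU memory, which could explain the allocation failure.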

We would be very thankful for any advice!

@Raul_Alvarez_Tenor @Michael_Wahlers @gopishah

Hi @Christian_Tischer

We have a protocol describing how we compiled it for Windows, here:
https://c4science.ch/w/bioimaging_and_optics_platform_biop/image-processing/deconvolution/cuda-deconvolution/

Note that the repo we used is not the one you pointed to, but this one, which is slightly more recent.

Perhaps this can help you get started? And perhaps @StephanPreibisch can comment on whether there is more recent code for this?

Best

Oli

[EDIT]: As far as I see, the link you have for the separable convolution is there so that you can use the fast Difference-of-Gaussians detector when looking for interest points. For the deconvolution, you need the other repository that I pointed you to.
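For anyone wondering why the interest-point detection benefits from that library: a Difference-of-Gaussians image is just the difference of two Gaussian blurs, and a Gaussian blur separates into cheap 1D passes along each axis. A rough plain-Java 2D illustration of the idea (the CUDA library does the equivalent in 3D on the GPU):

```java
public class DoGSketch {

    // normalized 1D Gaussian kernel with radius 3*sigma
    static float[] gaussian1d(double sigma) {
        int r = (int) Math.ceil(3 * sigma);
        float[] k = new float[2 * r + 1];
        float sum = 0;
        for (int i = -r; i <= r; i++) {
            k[i + r] = (float) Math.exp(-(i * i) / (2 * sigma * sigma));
            sum += k[i + r];
        }
        for (int i = 0; i < k.length; i++)
            k[i] /= sum;
        return k;
    }

    // one 1D convolution pass along x or y, clamping at the image borders
    static float[] pass(float[] src, int w, int h, float[] k, boolean alongX) {
        int r = k.length / 2;
        float[] dst = new float[src.length];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                float s = 0;
                for (int i = -r; i <= r; i++) {
                    int xx = alongX ? Math.max(0, Math.min(w - 1, x + i)) : x;
                    int yy = alongX ? y : Math.max(0, Math.min(h - 1, y + i));
                    s += k[i + r] * src[yy * w + xx];
                }
                dst[y * w + x] = s;
            }
        return dst;
    }

    // separable blur: O(n*k) per axis instead of O(n*k^2) for a full 2D kernel
    static float[] blur(float[] img, int w, int h, double sigma) {
        float[] k = gaussian1d(sigma);
        return pass(pass(img, w, h, k, true), w, h, k, false);
    }

    // DoG = wider blur minus narrower blur; its extrema are interest-point candidates
    static float[] dog(float[] img, int w, int h, double sigma1, double sigma2) {
        float[] narrow = blur(img, w, h, sigma1);
        float[] wide = blur(img, w, h, sigma2);
        for (int i = 0; i < narrow.length; i++)
            narrow[i] = wide[i] - narrow[i];
        return narrow;
    }
}
```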


Hi @oburri

Thank you for the reply; we will try the protocol you mentioned to get the multiview deconvolution working.

The problem @Raul_Alvarez_Tenor and @Christian_Tischer mentioned concerns SeparableConvolution, which speeds up interest-point detection, as you pointed out.

We would like to get both of them working, so if you have a solution for compiling SeparableConvolution, please let us know.

Best
Gopi

Dear @Tischi,
I never used the GPUs, because back then the number of GPUs in the cluster was very limited compared to the huge number of CPUs I could use. Given that the GPUs speed up only a part of the processing, it made more sense to me at the time to process many time-points in parallel (on CPUs). One major bottleneck was also the availability of RAM on the GPU nodes. Best give me a call for more details that are "EMBL cluster specific".

Kind regards,
Tobias


Hi everybody,

@Raul_Alvarez_Tenor compiled the libraries using the protocol suggested by @oburri and we have managed to run the multiview deconvolution successfully on a large dataset. Thank you all for your help.

It seems we can speed up the processing roughly 3x by using the GPU instead of the CPU on our current setup (the run took 25 hours), which is quite a gain when routinely deconvolving large datasets.
