Memory error in most recent version of 3Dscript

Hi forum and @bene.schmid,

I rendered a dataset using the 3Dscript plugin several months ago with no issues. Now, on fully updated versions of Fiji and 3Dscript, I get a memory allocation error on the same dataset, on the same Linux computer. If I crop the dataset to a tiny fraction of its size it still works, but on the previous Fiji/3Dscript version I was able to enable light on a 6 GB version of the dataset. The error only occurs when I toggle Enable Light.

If I have to crop the dataset any further there is really no point, as half of the object would be missing.

System: Ubuntu 18.04 LTS, GPU: RTX 2080 Ti with 11 GB of memory.

The GPU drivers may have changed since it is a shared workstation, but I have no way of knowing, so I can't rule them out as a factor.

And a follow-up question: can I use the combined memory of two GPUs if I link them in the same computer? In Fiji, and in 3Dscript in particular?


Hi @Sverre,
more memory is needed when you enable light, because the gradients required for rendering are precomputed and kept in memory. So I wonder whether you had also enabled light several months ago, when you didn't have any issues.
Regarding 2 GPUs: Unfortunately, 3Dscript currently can only use a single GPU.
I know it may not be satisfying, but would downsampling be an option (instead of cropping your object away)?

Also, could you send me the dimensions of your dataset? How many channels? What bit depth?

Best wishes


Hi @Sverre,
could you try replacing libOpenCLRaycaster.so in Fiji.app/lib/linux64/ with the one from here:

https://faubox.rrze.uni-erlangen.de/dl/fiHGGJaHocBM4WppYy5V7hkx/libOpenCLRaycaster.so
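
In case it saves a step, here is a small Python sketch that backs up the old library and fetches the new one. The ~/Fiji.app location is an assumption; adjust the path to your installation (or simply download the file in a browser and copy it over):

```python
import shutil
import urllib.request
from pathlib import Path

# Adjust this to wherever your Fiji installation lives (assumed here: ~/Fiji.app).
lib = Path.home() / "Fiji.app" / "lib" / "linux64" / "libOpenCLRaycaster.so"

# Keep a backup of the current library so the change is easy to revert.
shutil.copy2(lib, lib.with_name(lib.name + ".bak"))

# Download the patched library over the old one.
url = ("https://faubox.rrze.uni-erlangen.de/dl/"
       "fiHGGJaHocBM4WppYy5V7hkx/libOpenCLRaycaster.so")
urllib.request.urlretrieve(url, lib)
```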

Best


Hi @bene.schmid, thanks for the fast and thorough help with troubleshooting.

This seems to have solved it! I was now able to open a 5.6 GB, 2-channel TIFF of 1870 × 1790 pixels with 992 images in total (496 per channel). I enable light and it keeps running, no errors. I also tested a slightly larger image of 6.6 GB, and it too ran flawlessly; the second TIFF is likewise 1870 × 1790 with 992 images in total. All datasets are 16-bit.


Follow-up question: does enabling light in multiple channels cost more memory than enabling light in a single channel? I assume it does, but better safe than a crashed ImageJ.


From someone who has no idea how much work this would be: is there any chance that dual-GPU support will be developed in the future?


Edit: one more question:

Can I calculate how much additional memory is needed to enable light on a dataset?
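
For what it's worth, here is my naive back-of-envelope attempt in Python. I have no idea how the gradients are actually stored, so the bytes-per-voxel figures below are pure guesses on my part:

```python
# Rough GPU memory estimate with light enabled, using my dataset's
# dimensions. The gradient storage layouts are guesses, NOT documented
# 3Dscript behaviour.
width, height, slices_per_channel, channels = 1870, 1790, 496, 2
voxels = width * height * slices_per_channel * channels

raw_bytes = voxels * 2  # 16-bit intensity data
gib = 1024 ** 3
print(f"raw volume: {raw_bytes / gib:.1f} GiB")

# Try a few plausible gradient layouts: 16-bit, float32, 3 x float32.
for grad_bytes_per_voxel in (2, 4, 12):
    total = raw_bytes + voxels * grad_bytes_per_voxel
    print(f"+ gradients at {grad_bytes_per_voxel} B/voxel: {total / gib:.1f} GiB")
```

Even the smallest of those guesses lands above the 11 GB on my card, yet rendering now works, so I am clearly missing something about how (or at what resolution) the data is actually kept on the GPU.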

Thanks again for the help; everything works now!