Does anyone else get OutOfMemory errors when running DeconvolutionLab2 on multiple files sequentially? I have looked through previous threads on troubleshooting memory errors, but nothing there seems to apply in my case (I'm already calling the garbage collector and have plenty of memory allocated).
I wrote a macro that runs DL2 on each z-stack in a folder, waiting until DL2 has finished processing one stack before starting the run on the next one. I call the garbage collector after each run of DL2, and I am running in batch mode. I can process about 5-10 z-stacks before I get an OutOfMemory error. I have 14 GB of RAM allocated to ImageJ, which is more than enough for any single DL2 run: in the middle of a run it can use up to 10 GB storing temporary images, FFTs, etc. (I should say the z-stacks are quite large, dimensions 1024x1024x134).
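For reference, the loop looks roughly like this (a minimal sketch, not my exact macro; the paths and the DL2 command string are placeholders):

```
// Sketch of the batch loop. The "..." options string for
// DeconvolutionLab2 Run (image, PSF, algorithm) is a placeholder.
setBatchMode(true);
dir = getDirectory("Choose a folder of z-stacks");
list = getFileList(dir);
for (i = 0; i < list.length; i++) {
    open(dir + list[i]);
    run("DeconvolutionLab2 Run", "...");  // macro waits here until DL2 finishes
    close("*");                           // close all image windows
    run("Collect Garbage");               // Plugins > Utilities > Collect Garbage
    call("java.lang.System.gc");
}
setBatchMode(false);
```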
When I look at the JVisualVM profiler, I see that something is not being cleared by either garbage collection or the plugin's internal memory management. Below is a screenshot of the heap space for ImageJ: you can see the rise and fall of each DL2 run, but each time something is left over on the heap.
I've also taken heap dumps between runs to see what is left over. After each round of DL2, an extra image object of about 568 MB remains in the Fiji heap, and these leftover images accumulate until ImageJ crashes. That amount of memory corresponds almost exactly to a 32-bit copy of the z-stack (1024x1024x134) that is being kept alive and never de-referenced for garbage collection. Notably, this copy accumulates even if I don't specify any outputs from the DL2 run.
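As a sanity check on the numbers (my own back-of-the-envelope arithmetic, nothing from the plugin itself):

```python
# Size of one 32-bit (4 bytes/voxel) copy of a 1024 x 1024 x 134 z-stack.
width, height, slices = 1024, 1024, 134
bytes_per_voxel = 4  # 32-bit float pixels

leak_per_run = width * height * slices * bytes_per_voxel
print(leak_per_run, "bytes")             # 562036736 bytes, ~536 MiB / ~562 MB

# With 14 GB allocated and peaks of ~10 GB during a run, the headroom
# the leak can consume before an OutOfMemory error is roughly:
headroom = 14 * 1024**3 - 10 * 1024**3
print(headroom // leak_per_run, "runs")  # 7
```

So one retained 32-bit copy per run (~562 MB raw, close to the ~568 MB object in the heap dump once per-object overhead is counted) eats the ~4 GB of headroom in about 7 runs, which lines up with the 5-10 stacks I can process before the crash.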
Has anyone else had an issue like this? I have a lot of 4D datasets I want to deconvolve, and if I have to keep restarting ImageJ it is going to put a huge cramp in my workflow!
I’m happy to provide a copy of my macro, though I think that the problem is in the plugin and not my macro.