AnalyzeSkeleton memory usage concerns

fiji
bonej
imagej
skeleton-analysis

#1

Dear AnalyzeSkeleton development team… This includes @iarganda (I believe).

First of all, I would like to thank you for the software. It’s a valuable and useful tool.

I am posting this thread here because I am experiencing some issues regarding the RAM demands of the plug-in, both when used via the GUI and from a script (Jython, in my case).

I need to work with relatively large images, like the one attached below for reproducibility purposes.

The full analysis (no pruning, all outputs ticked) yields the desired results, but it ties up a huge portion of RAM: roughly 6-7 GB… which is already a concern. Worse, when the analysis is complete, that RAM never gets freed. Thus, two or three runs of my code lead to a memory overflow on the workstation I’m using.

Is there perhaps something wrong with my installation? Can you reproduce the issue?

Thank you very much in advance,
Fernando

skela.tif (19.5 MB)


#2

This sounds like a memory leak, which occurs when code creates objects that remain reachable (and so cannot be reclaimed) after the code has finished executing. They can be diagnosed definitively with a Java profiler like VisualVM. BoneJ uses AnalyzeSkeleton extensively; @rimadoma and @alessandrofelder have both looked at the code recently.

Have you tried a manual garbage collect, by clicking on the ImageJ status bar?

EDIT: I’ve tried but failed to reproduce this issue. Garbage collection works fine here. I suspect that there is a problem with your GC, which I believe you can set to be more aggressive (so that it clears stale objects from the JVM earlier, rather than running out of memory).
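For reference, HotSpot GC behaviour can be tuned with standard JVM flags; how you pass them depends on your launcher, so treat the command below as a sketch. The flag names (`-Xmx`, `-XX:MinHeapFreeRatio`, `-XX:MaxHeapFreeRatio`) are standard HotSpot options that cap the heap and make the JVM return unused heap to the OS more eagerly; the launcher syntax (Java options before a `--` separator) is my assumption — check your launcher's `--help` if it differs:

```shell
# Sketch: start Fiji with a heap cap and a more eager shrink policy.
# Java options go before "--" (launcher-specific assumption; verify locally).
./ImageJ-linux64 -Xmx8g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 --
```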


#3

Thank you, Michael.

I first suspected my own code: as you say, creating objects carelessly. But then I quit Fiji, started it again and launched AnalyzeSkeleton’s GUI alone… And the same thing happened, 6-7 GB.

I didn’t know about the manual garbage collection via the status bar, but I had indeed tried something similar: a simple script just to run IJ.freeMemory(). No success.

I assumed that this is a widely used plug-in, and that’s why I was skeptical about it being my installation. Given that you couldn’t reproduce the issue, on Monday I’ll investigate my GC set-up further. Thank you!

EDIT1: I’ve installed Fiji here on my laptop. I’ve been able to reproduce a major increase in RAM usage due to AnalyzeSkeleton, but this time “only” around 2.5 GB. Anyhow, I’ve solved it with your method of the manual GC via the status bar, and/or by programmatically running java.lang.System.gc().
Neither very elegant nor convenient, IMHO… But it does the trick!

EDIT2: Just to clarify – the RAM didn’t get freed automatically at all, or at least not within a time reasonable for my patience, hehe. I had to force it via java.lang.System.gc(). According to the IJ library documentation, IJ.freeMemory() should in theory have triggered GC too… But it didn’t do the job.
I’m on GNU/Linux.


#4

That GC works suggests that there isn’t a memory leak; rather, AnalyzeSkeleton is a memory hog. That has implications for how much RAM you need to be able to process large images, or images with complex skeletons.


#5

In my experience, Java’s and Python’s GC sometimes do not work well, especially when we have references pointing to each other, or use a hash/dictionary. (And such images are not large any more: a 2048^3 image stack with 1e5 nodes and edges is very common.)


#6

Thank you all for your assistance, I really appreciate it!

Back on my job’s workstation, I’ve observed a different behaviour compared to my laptop. It now seems as problematic as the memory leak @mdoube suspected:

  • The programmatic GC java.lang.System.gc() frees only a tiny part of the RAM, in the order of 0.2 GB or so.
  • If I later do the manual GC via the status bar, clicking once or twice (which, btw, is not practical, because my future code will require several skeleton analyses in a single run), then a major release of RAM takes place, 3-4 GB…
  • But regrettably some 3 GB of RAM remain in use beyond what was used before calling the AnalyzeSkeleton plug-in.
  • No GC succeeds in releasing those; they only get freed when I quit Fiji.

What @yxdragon points out may also be part of the explanation, but I’d rather avoid migrating all my code from Python/Jython to Java, if possible…


#7

I have encountered this situation, and I am sure I had lost all references — I had even set all references to null and then called System.gc(), but the memory was not freed. I think Java’s GC is not smart enough for complex topological structures. And in Python, references that form a cycle cannot be freed by reference counting alone (the ref counter stays at one forever; maybe that is why weakref exists?).
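The reference-cycle point can be illustrated in plain CPython, where reference counting alone never frees a cycle and the cycle collector (or an explicit `gc.collect()`) is needed; a minimal sketch, with a made-up `Node` class standing in for graph vertices:

```python
import gc
import weakref

class Node:
    """Toy graph node; two of them will reference each other."""
    def __init__(self):
        self.peer = None

a, b = Node(), Node()
a.peer, b.peer = b, a        # reference cycle: refcounts can never drop to zero
probe = weakref.ref(a)       # weak reference observes 'a' without keeping it alive

del a, b                     # the names are gone, but the cycle keeps both alive
gc.collect()                 # the cycle collector breaks and frees the cycle
print(probe() is None)       # True: the objects are really gone now
```

This is also why `weakref` exists: a weak reference lets you observe or cache an object without contributing to its reference count.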

By the way, if you use Python, you could have a look at this project, sknw. But it is a pure Python package, not a Fiji package, so I don’t know if it helps.


#8

Thank you.

Let me clarify, because perhaps it’s helpful:

Now that the problem seems to be identified and isolated, I’m using ImageJ/Fiji’s GUI to launch the AnalyzeSkeleton plug-in. I track RAM usage with System Monitor (i.e. Linux’s Task Manager), and the only Jython/Python code is a tiny script with a single call to the programmatic GC, java.lang.System.gc().

Thus, I’ve been able to rule out any failure of my own code regarding references, etc.
The fact that the behaviour differs between my laptop and the workstation (in the amount of RAM used or released) makes me wonder whether there might indeed be a memory leak inside the plug-in.
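As a scripted alternative to eyeballing System Monitor, per-process memory can also be checked from Python itself; a sketch using the standard `resource` module (Unix-only; on Linux `ru_maxrss` is the peak resident set size in kilobytes, so it only ever grows):

```python
import resource

def peak_rss_mb():
    """Peak resident set size of this process, in MB (Linux reports KB)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

before = peak_rss_mb()
data = bytearray(50 * 1024 * 1024)   # stand-in for an analysis allocating ~50 MB
after = peak_rss_mb()
print("peak RSS grew by about %.0f MB" % (after - before))
```

Logging this before and after each AnalyzeSkeleton run would give reproducible numbers to attach to a bug report, instead of readings from a task manager.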

I’m going to take a look at the library you suggest. If other solutions fail, I could try to integrate it into my Fiji Jython processing pipeline, running it as an external command.


#9

I’ve been rather busy with other things lately, but I’ve kept running some tests to constrain the RAM usage of AnalyzeSkeleton.

It seems that the most promising work-around is autocropping the skeleton mask before analysis, then restoring it to its original size and location… plus heavy use of programmatic garbage collection.
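The autocrop-then-restore idea can be sketched in plain Python/NumPy (the function names are my own; in Fiji you would do the equivalent with a bounding-box ROI before handing the stack to AnalyzeSkeleton):

```python
import numpy as np

def autocrop(mask):
    """Crop a binary mask to the bounding box of its foreground pixels/voxels."""
    coords = np.argwhere(mask)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return mask[slices], slices

def restore(cropped, slices, shape):
    """Paste the cropped mask back at its original size and location."""
    full = np.zeros(shape, dtype=cropped.dtype)
    full[slices] = cropped
    return full

# Tiny 2-D demo; the same code works unchanged on a 3-D stack.
mask = np.zeros((512, 512), dtype=bool)
mask[100:140, 200:260] = True            # a small "skeleton" in a large frame
crop, sl = autocrop(mask)
print(crop.shape)                        # (40, 60): only the occupied region
full = restore(crop, sl, mask.shape)
print(np.array_equal(full, mask))        # True: lossless round trip
```

Since a skeleton typically occupies a small fraction of the stack, analysing only the bounding box should cut the working set roughly in proportion to the cropped volume.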

Thanks to the contributors.