Changing memory allocation has no effect




This is my first time posting to the forums, so I apologize if I am missing information or am unclear. I am having a problem with Fiji memory allocation. I am running Fiji on a Windows machine that has 16GB of RAM available to use (and a graphics card with 8GB). My processor is an Intel Core i7-6700 CPU @ 3.40GHz. I’m not sure how to attach screenshots (or if that is possible), but my version says 2.0.0-rc-61/1.51n, Java 1.8.0_66 [64-bit].

I have tried setting the memory in Fiji higher, but the Memory monitor consistently shows it using only 5-10% of whatever I set, and 0-3 threads (even though the default it set for me was 8). I’m trying to use the Bio-Formats importer to create hyperstacks, and it is painfully slow when I have anything over 80 stacks. My movies can range anywhere from 30 to well over 200 stacks (the latter are usually 1-2GB in size). I think the computer I’m using should be able to handle these images just fine, but I just can’t seem to get Fiji to take advantage of the full processing power, and I’m not sure why.

I checked the ImageJ.cfg and it reflects the changes I make in the GUI. For instance, this is the most recent version of it, when I tried setting it to 10GB.

-Xmx10000m -cp ij.jar ij.ImageJ

Are there any other ways to figure out what is causing the memory discrepancy?
Or am I just missing some important point?

Thank you,


Welcome to the forum, @anivarj!

Some thoughts in random order:

  • When you click the status bar, how much memory does ImageJ say is being used?

  • Have you read over this troubleshooting section about memory?

  • Have you tried checking the “Use virtual stack” option in the Bio-Formats import dialog?

  • Have you tried pressing shift+\ to generate a thread dump to see what is taking ImageJ so long? More details in this troubleshooting section about freezes/hangs.

  • What file format are you using? Some proprietary file formats are very challenging for Bio-Formats to handle well; you could try converting your data first using the acquisition software (although the downside of that is that you may lose metadata).
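If thread dumps are new to you: a dump is just a snapshot of every live thread’s state and call stack at one moment. As a minimal, JDK-only sketch of the same kind of information (nothing ImageJ-specific here; the class name is mine):

```java
import java.util.Map;

public class MiniThreadDump {
    public static void main(String[] args) {
        // Snapshot the state and top stack frame of every live thread --
        // in miniature, the same information shift+\ dumps inside ImageJ.
        Map<Thread, StackTraceElement[]> dump = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> e : dump.entrySet()) {
            Thread t = e.getKey();
            StackTraceElement[] frames = e.getValue();
            System.out.print("\"" + t.getName() + "\" state=" + t.getState());
            System.out.println(frames.length > 0 ? " at " + frames[0] : "");
        }
    }
}
```

A thread stuck in the same RUNNABLE frame across repeated dumps is usually where the time is going, which is what makes this useful for diagnosing hangs.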


Hello @ctrueden!
Thank you for the reply, I have answers to your thoughts below:

When you click the status bar, how much memory does ImageJ say is being used?
It says whatever I have input into the Memory and Threads box.

Have you read over this troubleshooting section about memory?
I had not seen that, but I will try checking the Java environment variable when I get home. It’s interesting that ImageJ’s guess (which should be 75% of my physical RAM according to that page) was so low. I wonder why ImageJ would not be able to “see” my computer’s resources?

Have you tried checking the “Use virtual stack” option in the Bio-Formats import dialog?
I know of virtual stack but usually do not need to use this option. I tried the same dataset on an iMac and saw that it was using 1-2GB generally to do the same tasks, which should be well within the limits of the Windows computer. I’m just not sure why the Windows computer will not utilize that RAM.

Have you tried pressing shift+\ to generate a thread dump to see what is taking ImageJ so long? More details in this troubleshooting section about freezes/hangs.
I have not tried that yet but will check it out and report back!

What file format are you using? Some proprietary file formats are very challenging for Bio-Formats to handle well; you could try converting your data first using the acquisition software (although the downside of that is that you may lose metadata).
I am using .tif files, so they should be fine, right?


Thanks for the additional information.

You mean it reports the maximum available memory as matching what you entered? But I assume it also says you are only using a small fraction of that?

Are you actually receiving OutOfMemoryError stack traces? If not: what makes you think this is a memory issue specifically, rather than simply a hang or slowdown?

Absolutely—TIFF is the most battle-tested format for ImageJ, and will not have the performance problems I alluded to. Although you may want to be aware of a couple of subtleties regarding ImageJ 1.x vs. Bio-Formats’s handling of TIFFs. But regardless, you shouldn’t be seeing hangs/slowdowns/freezes like you describe.

If you aren’t seeing OutOfMemoryError messages, then this may not be a memory-related issue, but rather some other bug. In that case, the shift+\ after the system grinds to a halt may shed some light on things.
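One way to sanity-check that the -Xmx value is actually taking effect, independent of the status bar: any code running in the same JVM can ask for the heap cap. A minimal standalone sketch (the class name is mine; run it with the same -Xmx flag to compare):

```java
public class MaxHeapCheck {
    public static void main(String[] args) {
        // -Xmx caps the heap; maxMemory() reports (approximately) that cap.
        Runtime rt = Runtime.getRuntime();
        long maxMb = rt.maxMemory() / (1024 * 1024);
        // Currently used = allocated heap minus the free part of it.
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB, used: " + usedMb + " MB");
    }
}
```

Note that a JVM only grows the heap as it actually needs it, so low usage by itself is not a symptom of anything: it just means nothing has demanded more memory yet.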


I do not get the OutOfMemoryError you described. It reports the maximum available memory, yes, but when I am running things it uses far less than that. The files still open; it just takes significantly longer. I thought it was memory related because the Memory monitor constantly reports usage under 1GB (mostly around 500MB) even though I set it to use more, and on the iMac it typically uses 1-2GB to open the same files and does so in about 30s (as opposed to minutes on the Windows machine). I just figured that on Windows, for some reason, it was not really taking the memory it needs to open the files quickly, hence the slowness. But I will run the command you suggested and see what I get!


Hi again @ctrueden,

Sorry it has been a few days, but I have tried your suggestions described above. I could not find a _JAVA_OPTIONS variable on my system, but I did manage to do a thread dump while the program was attempting to open a file. I have attached the contents of the dump along with a screenshot of the memory monitor in case that helps illustrate what is happening. Again, this same file at work uses about 1-2GB of memory to open, so I’m not sure why my computer at home does not use the same amount.

I’m not sure how to read a thread dump. Could you help me understand what it is saying? I see a few threads listed as timed waiting. What is that all about?


Hi Ani:
I’d like to ask a silly question. We have a machine that will boot into either 32- or 64-bit Windows. Is it possible you’re booting into 32-bit Windows?

If that’s not the problem, maybe try the lifeline version of ImageJ with Java 1.6?


Hi @Guido,

I checked the system preferences and it says it is using the 64-bit operating system.
I downloaded the lifeline version from May 30th (the last lifeline using Java 6) and still have the same problems. Actually, this version guesses my memory correctly (around 12GB out of the 16 total) but still uses only 45MB or so, opening files very slowly.

Interestingly… I tried the 32-bit version just for fun, which also guessed and displayed 12GB of memory, and when I tried to open the file it gave me the attached error message. It seems that for some reason it thinks it only has a small amount of memory to work with. The 64-bit version did not give me this error; it just started trying to slowly open the file. But I wonder if it is related to the same issue?

I attached a screenshot of the error and also of the toolbar to show the discrepancy. Again, not sure if this is all stemming back from the same issue, but maybe?



Have you tried to open your files on another computer? Can you test this same thing on a different machine just to make sure it’s not something peculiar to your machine?

Also, I’m a little confused about what files you’re trying to open. Are they multi-image TIFF files that you are trying to open as a stack (not a hyperstack)? The virtual stack option suggested above should technically speed up the opening, but then you may hit the same slowness converting to a hyperstack.

Maybe you could say which microscopy software is generating your files, and what format that software thinks it’s saving? If it’s really a problem with ImageJ, maybe somebody knows a workaround.


I have tried opening the files on an iMac (see above) and it is very fast, using about 1.5-2GB to open the same images. I am trying to open a series of .tif files from a Bruker confocal running Prairie View software. We get the .tif files and then open them through Bio-Formats as a hyperstack. This works fine on the Mac computers but seems to be a problem on my Windows computer. I could import as a virtual stack, but then it seems it would be slower in the long run to access the images, and changes to slices are lost. It just seems unreasonably slow on the Windows computer, where it can take ~20s to load a single 10-slice stack, and my movies are usually in the 200-300 stack range (so it adds up). The iMac opens the whole thing in about 30s-1min and just seems to be taking advantage of more resources.

If it helps to know more about the processing stream, my pipeline is normally to import the files as a hyperstack, split the channels, do a median filter, max project each channel, and then do some other things like enhance contrast and make difference/subtraction movies. I save everything from the processing pipeline as a TIFF. It seems that on the Windows side, I’m bottlenecking at just importing the file. I don’t think the images are large enough to need a virtual stack, since they seem to require only 2GB and my machine has 16GB. But again, I’m not coming from a computer background, so I don’t really know why it would be so different across operating systems. Any help is appreciated!


Dear @anivarj,

I had a look at the thread dump. The following lines might be related to your issue

"Bio-Formats Importer" prio=4 id=29 group=main
   java.lang.Thread.State: RUNNABLE
	at Method)
	at loci.common.Location.isHidden(

Searching for known issues with Location.isHidden revealed some performance-related hiccups of that method. Hence, some more questions:

  • Which version of Windows are you running?
  • Are you loading your files from an external hard disk or a network share?
  • How many files are located in the folder from which you are loading the files?



I concur with @stelfrich that this is highly likely to be due to the performance of the file system. In particular, Bio-Formats can be agonizingly slow at reading files from network shares, especially on Windows. It can also be a problem if you have many thousands of files in the same folder; the next part of the stack trace reveals that a file listing is being performed. Relatedly: are you using the “Group files with similar names” option? Do you need it, or can you disable that option?
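To make the file-listing cost concrete, here is a rough, JDK-only micro-benchmark sketch (hypothetical file names; absolute timings vary enormously between a local SSD, an external drive, and a network share, but both operations scale with the number of files in the folder):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ListingBench {
    public static void main(String[] args) throws IOException {
        // Create a throwaway folder of small files, then time the two
        // operations the stack trace points at: listing and isHidden().
        Path dir = Files.createTempDirectory("listing-bench");
        int n = 2000; // roughly the file counts reported in this thread
        for (int i = 0; i < n; i++) {
            Files.createFile(dir.resolve("plane_" + i + ".tif"));
        }

        long t0 = System.nanoTime();
        File[] entries = dir.toFile().listFiles();
        long listMs = (System.nanoTime() - t0) / 1_000_000;

        t0 = System.nanoTime();
        for (File f : entries) {
            f.isHidden(); // one file-system query per file
        }
        long hiddenMs = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("listFiles: " + listMs + " ms; "
                + n + " isHidden() calls: " + hiddenMs + " ms");

        // Clean up the throwaway folder.
        for (File f : entries) f.delete();
        Files.delete(dir);
    }
}
```

Running something like this from the external drive versus an internal disk would quickly show whether the file system itself is the bottleneck.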



Thank you so much for taking a look! I am running Windows 10 and am loading files from an external hard drive. The number of files varies but is usually somewhere between 2,000 and 4,000. Normally I acquire about 200-400 z-stacks, each with 5 optical slices and usually 2 channels, so you can see how this adds up quickly! I was using Bio-Formats because I was under the impression that it was the best thing for loading my individual files into a hyperstack, but perhaps there is a better option (at least for Windows)? Also, do you mind if I ask how you searched for that method? Was that on here, or just through Google? I tried searching it through the forums but got no results.

To answer @ctrueden, I usually do not use the “Group files with similar names” option. Does the thread dump show that I am? I tried it again to make sure that the box is unchecked and the speed is still the same.

EDIT: I was fiddling around and tried importing through File-->Import-->Image Sequence, and that was much faster, using around 1.5GB of memory, which is similar to what I see on the iMac. The downside is that importing this way opens my image with everything mashed together (all channels, optical planes and slices one after the other in a single stream), instead of having things separated by channel, optical plane and slice # (like the hyperstack output of Bio-Formats).


Sometimes Bio-Formats will group files together anyway when it detects a multi-file format. In this case, your stack trace tells me that for some reason the Prairie TIFF reader was selected. Do you happen to have a microscope system from Prairie Technologies (now Bruker Nano Surfaces)? Or perhaps your system writes an XML file into the same folder as the TIFFs?

Possible things to try:

  1. Since it sounds like all your TIFFs are stored as individual files, you could try using ImageJ 1.x’s built-in File :arrow_forward: Import :arrow_forward: Image Sequence… command. It might be faster, or it might not work at all, depending on the internal structure of your files.

  2. You could try renaming any XML and/or CFG file(s) in the folder, then using Bio-Formats with the “Group files with similar names” option. I don’t know if this will end up being any faster, but it’s worth testing.

  3. You could move the files to a more performant file system. Perhaps an SSD, or an external disk formatted as something other than NTFS. If you test it, we’d love to hear back about your experiences—it would be a good Bio-Formats optimization tip for the wiki.

Ideas (1) and (2) above may both result in metadata loss during import, so if that matters to you, be aware.


Hi @ctrueden,

Yes, I am using a Prairie scope by Bruker, and it does write the XML files to the same folder.

I actually added a section to my last post about this, but I might have been editing it when you replied! See above for the answer to #1: basically, that was the fastest option I have tried so far, but it did not import as a hyperstack. It’s just one stream of everything all mixed together. Are there options for making a hyperstack from what it gives me? I’m new to image processing, so I don’t really know what you might use the metadata for, but it seems like it is needed in this case to reconstruct my images correctly. Still, the fact that it loads faster is promising: the problem isn’t my machine.

I just tried this, and it was faster and accurate once I set the “switch axis” option and could correctly specify the dimensions. My computer did freeze a few times, and once I had to close and reopen the program, but I guess that is better than before. What impact will losing the metadata have on my image processing? I always assumed you needed it, but I don’t really know what happens without it. Also, is there any way to import the metadata and assign it later on?

Moving to flash storage isn’t really an option for me right now, but my external drive is formatted as exFAT to be compatible with both Macs and Windows computers. Not sure if that makes a difference?


You can edit the image properties (Image :arrow_forward: Properties…) to set the Z, T and C lengths. But in some cases this will not work, depending on the order in which the image planes were imported.
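Whether that reshaping works comes down to simple index arithmetic: the importer hands you one flat stream of planes, and reassigning Z/T/C only makes sense if the stream follows a consistent order. A small sketch, assuming the common XYCZT order (channel varying fastest, then slice, then frame; the helper function is mine for illustration, not an ImageJ API):

```java
public class PlaneOrder {
    // 1-based flat-stack index of plane (c, z, t), assuming XYCZT order:
    // channel varies fastest, then slice, then frame.
    static int flatIndex(int c, int z, int t, int nC, int nZ) {
        return t * nZ * nC + z * nC + c + 1;
    }

    public static void main(String[] args) {
        int nC = 2, nZ = 5; // e.g. 2 channels, 5 optical slices, as in this thread
        System.out.println(flatIndex(0, 0, 0, nC, nZ)); // first plane -> 1
        System.out.println(flatIndex(1, 4, 0, nC, nZ)); // last plane of frame 0 -> 10
        System.out.println(flatIndex(0, 0, 1, nC, nZ)); // first plane of frame 1 -> 11
    }
}
```

If the files actually arrived in a different order (say, slice fastest instead of channel), the same Z/T/C lengths will produce scrambled channels, which is why the “switch axis” style options exist.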

There are many different sorts of metadata. The most important is the structural kind—dimensionality of your images and so forth. Without that, you cannot process them properly. The physical calibrations are also important if you care about real-world measurements. Other than that, things like which detector was used, what magnification, etc., are all nice to have, but not necessarily crippling to be missing, depending on what kinds of analysis you are performing. (As an example: if the PSF were present in your metadata, this is an extremely useful piece of information for algorithms like deconvolution, but not needed for many common things like thresholding.)

In general, no. Bio-Formats does have a “metadata only” mode which you could use to read and display the metadata, but you’d have to write code to attach it (somehow) to an already-imported image.

It would be much better if the performance of Bio-Formats could be improved. You could send a bug report to the Bio-Formats team requesting someone investigate speeding up the Prairie file format import. IIRC, the Prairie format’s XML file enumerates the TIFF files containing the image data, and so doing a file listing in the directory seems superfluous to me. If that step could be cut out of the process, I am guessing the performance would improve dramatically in your case.

Even better than that would be if you could record in a format other than one-TIFF-per-image-plane, which is known to have these problems when the number of planes gets large.

That is unfortunate; the performance of SSDs is dramatically better.

Ah, then I am not sure what else you can do regarding that. It might be related to how Windows handles file locking, which in my experience is slower than how macOS and Linux do it.


Thank you so much for your input. I will definitely send a bug report and look into getting it investigated. I would very much like to keep the metadata if possible, as I’m just starting out and not sure what types of analysis I will need to do further down the road. I can also look into the microscope acquisition side and will ask around about saving in some other format. And yes, I agree that an SSD would be better; I’ll see if I can swing that in the future when I’m up for a new hard drive!