I’m working with large virtual image stacks (8-bit, 4096x4096, ~10000 images). I notice that resizing them down to 1024x1024 takes hours, and that the resizing is not parallelized. Maybe the resizing itself isn’t the problem, but rather reading the images into memory. Still, I’m wondering why this isn’t done in parallel. Is there any reason for that?
This is a little bit nasty, because reading/downsizing the images takes more time than processing them (as the processing is parallelized on 24 cores).
Any suggestions on how to improve the speed without losing the averaging during downsizing?
I’m not sure whether resizing or reading the images into memory takes more time; I’ll have to investigate. However, it seems to me that reading an image sequence is not parallelized either. Is there any general reason that makes parallelizing the reading of image data complicated? If not, I will put some effort into implementing it.
Well, if you use a virtual stack, the images are not read into memory until you call some command that actually needs the pixel data of the whole stack, which in your case might be the Image > Adjust > Size… command.
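As a stopgap outside of ImageJ, you can parallelize the read-and-downscale step yourself in plain Java, keeping true block averaging so no information is thrown away beyond the mean. This is only a minimal sketch under some assumptions: that the sequence is stored as individual files (PNG here for illustration; your stack may be TIFF), that the images are single-channel 8-bit, and that a `parallelStream` over the file list is an acceptable way to use all cores. The class and method names (`ParallelDownscale`, `blockAverage`) are made up for this example.

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import javax.imageio.ImageIO;

public class ParallelDownscale {

    /** Exact area averaging: each output pixel is the rounded mean of a
     *  factor x factor block of input pixels (e.g. 4x4 for 4096 -> 1024). */
    static BufferedImage blockAverage(BufferedImage src, int factor) {
        int w = src.getWidth() / factor, h = src.getHeight() / factor;
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        WritableRaster srcR = src.getRaster();
        WritableRaster dstR = dst.getRaster();
        int area = factor * factor;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int sum = 0;
                for (int dy = 0; dy < factor; dy++)
                    for (int dx = 0; dx < factor; dx++)
                        sum += srcR.getSample(x * factor + dx, y * factor + dy, 0);
                dstR.setSample(x, y, 0, (sum + area / 2) / area); // rounded mean
            }
        }
        return dst;
    }

    public static void main(String[] args) throws Exception {
        Path inDir = Paths.get(args[0]);   // directory containing the sequence
        Path outDir = Paths.get(args[1]);  // where downscaled copies go
        Files.createDirectories(outDir);
        List<Path> files;
        try (Stream<Path> s = Files.list(inDir)) {
            files = s.filter(p -> p.toString().endsWith(".png"))
                     .sorted()
                     .collect(Collectors.toList());
        }
        // parallelStream uses the common ForkJoinPool, i.e. all cores by default,
        // so reading and averaging of different slices overlap.
        files.parallelStream().forEach(p -> {
            try {
                BufferedImage img = ImageIO.read(p.toFile());
                BufferedImage small = blockAverage(img, 4);
                ImageIO.write(small, "png", outDir.resolve(p.getFileName()).toFile());
            } catch (Exception e) {
                throw new RuntimeException("Failed on " + p, e);
            }
        });
    }
}
```

You could then open the downscaled folder as a (much smaller) stack in ImageJ and run the 24-core processing on that. Whether this actually beats the single-threaded path depends on your storage: on a single spinning disk, parallel reads may not help much, while on an SSD or RAID the averaging becomes the part that scales with cores.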
A great part of this is already implemented in ImgLib2; see for example the imglib2-cache project. It’s just not trivial to call this from the legacy ImageJ1 interface, as it is too intermingled with UI classes (java.awt), some of which will always run on the EDT…
If you want to dive deeper into this, I guess the ImgLib2 experts (@tpietzsch, @hanslovsky and others) will be happy to discuss (and to correct me if any of the above was not quite right, sorry…)