CARE out of memory

Hi, I noticed that when applying a trained TensorFlow model for 3D denoising with CARE, I run into memory issues when my stack has about 100 Z slices and an XY dimension of 2048 pixels. I have a Kepler K80 GPU. The problem with setting n_tiles is that beyond n_tiles = 128 the program tries to increase it further, but because that exceeds n_tiles_valid it raises an exception and ends up in an endless loop, which I can only stop by terminating the kernel in the Jupyter notebook.

Is splitting the image into smaller Z’s the only option?
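For context, here is a rough back-of-envelope memory estimate (my own numbers, not from the thread) showing why a stack this size struggles on a single K80, which has about 12 GB per GPU:

```python
import numpy as np

# Rough estimate: a single 100 x 2048 x 2048 float32 volume,
# before any network feature maps are allocated.
z, y, x = 100, 2048, 2048
bytes_per_voxel = np.dtype(np.float32).itemsize  # 4 bytes
volume_gb = z * y * x * bytes_per_voxel / 1024**3
print(f"input volume alone: {volume_gb:.2f} GB")  # ~1.56 GB

# Intermediate feature maps multiply this: e.g. a hypothetical
# 32-channel layer at full resolution already needs far more
# than the ~12 GB available on one K80 GPU.
feature_maps_gb = volume_gb * 32
print(f"one 32-channel layer: {feature_maps_gb:.1f} GB")  # ~50 GB
```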

@fjug @frauzufall

Unlikely the only option… any change to use less memory could help… more memory could help too.
But in all seriousness, I think @uweschmitt would be a better person to ask. If this mention here does not activate him, maybe try opening a GitHub issue? (Or email, if you want to go all old school… :wink: )

He is currently not in Dresden, otherwise I would quickly run over to his desk…



If I remember correctly, the Python code indeed just tiles in one direction, though we wanted to improve that. Have you tried the CSBDeep Fiji plugin for applying the trained network? It should be able to tile in all spatial dimensions.


@frauzufall I tried the plugin only very quickly, on my laptop which only has a CPU. It threw a bunch of errors, so I did not look into it further; I assumed the errors came from trying to run it on a CPU-only machine. Tomorrow I can try running the Fiji plugin on a GPU-enabled server and see what that gives me.

@fjug I could open a GitHub issue for @uweschmitt before I go really old school with the email (which I would have to find somewhere first). Until then, I can just cap the tiles in the Z direction at something like 25 slices in my notebook and then stitch the resulting chunks back together, so that the final output looks the same as the 100-Z input stack.
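A minimal NumPy sketch of that stopgap: process the stack in Z-chunks of 25 and concatenate the results. `denoise_chunk` is a placeholder for the actual model prediction, and this deliberately ignores the overlap/halo a real tiled prediction would need to avoid seams at chunk borders.

```python
import numpy as np

def denoise_chunk(chunk):
    # Placeholder for the model prediction on one Z-chunk;
    # identity here so the sketch runs stand-alone.
    return chunk

def predict_in_z_chunks(stack, chunk_size=25):
    # Split along Z, run each chunk through the model, stitch back.
    # NOTE: no overlap between chunks, so a network with a large
    # receptive field may show seams at the chunk borders.
    outputs = [denoise_chunk(stack[z0:z0 + chunk_size])
               for z0 in range(0, stack.shape[0], chunk_size)]
    return np.concatenate(outputs, axis=0)

stack = np.random.rand(100, 64, 64).astype(np.float32)
restored = predict_in_z_chunks(stack, chunk_size=25)
assert restored.shape == stack.shape  # output matches the 100-slice input
```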

I just remembered that Uwe already commented on a similar case on github, see this issue:

However, you need to use a currently undocumented (and likely to change) feature.
Instead of providing an integer for n_tiles, you can choose a tuple, such as (32,16).
In this example, the largest image dimension would be split into 32 tiles, and the second largest into 16.
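To illustrate what that tuple does, here is my own sketch of the split counts (not CSBDeep's internal tiling code, which also pads and overlaps tiles): tile counts are assigned to axes sorted by size, largest first.

```python
import numpy as np

def split_by_tuple(img, n_tiles):
    # Assign tile counts to axes sorted by size, largest first,
    # mirroring the (32, 16) example from the quoted issue.
    axes_by_size = sorted(range(img.ndim),
                          key=lambda a: img.shape[a], reverse=True)
    tiles = [img]
    for ax, n in zip(axes_by_size, n_tiles):
        tiles = [t for part in tiles
                 for t in np.array_split(part, n, axis=ax)]
    return tiles

# A (Z, Y, X) = (100, 2048, 2048) stack as in the original question:
img = np.zeros((100, 2048, 2048), dtype=np.float32)
tiles = split_by_tuple(img, (32, 16))
print(len(tiles))  # 32 * 16 = 512 tiles, each 100 x 64 x 128
```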


That could work; the largest dimension in my case would be X or Y, and the second would be Z. I will try this in the morning after the training is complete and will let you know whether passing a tuple resolved the issue :slight_smile: Thanks a lot for the instant user support :slight_smile:


@frauzufall Yeah, the n_tiles = (32,8) style of input works, just slow on a single GPU card. Thanks a lot for your help.
