Overlapping tiles in CARE

Hi, I was wondering how the overlapping tile function of CARE works. If I input only n_tiles = (1,3,4) and specify no overlap_percent between tiles, the program still makes 12 overlapping tiles somehow, but what is the default overlap between tiles? I tried to dig into the code of predict_tiled, but it is too dense for me to see how an input image is broken into overlapping tiles.

Can you explain the code a bit to me? I input an image and a number of tiles; what happens to that in the code after that?

@uschmidt83 @fjug


Uwe is the best person to help you with the programmatic details. (Also @mweigert or @tibuch are quite likely able to help you right away without having to figure it out themselves first…)

Still, from an abstract point of view, the required overlap is dictated by the network architecture: the receptive field size of the innermost nodes determines how large the overlap has to be in order to be able to 'glue' the overlap-reduced (cropped) tiles next to each other without creating artifacts.
Does this make sense?


Ah right, thanks. It is kind of like how the number of strides/max-pooling operations in the network architecture dictates how much overlap there is in a sliding-window classification setup, which is also an architecture-dependent parameter. So for a U-Net it is the receptive field size. OK, thanks, I will read the code again keeping this in mind.


Hi @kapoorlab,

In addition to what @fjug already said: For the tiled prediction to yield the same result as a full (without tiling) prediction, two things have to be appropriately chosen:

  1. The overlap between tiles has to be at least half the receptive field of the network
  2. The tile start/end coordinates have to be multiples of a common blocksize for which the network is translation invariant. For a U-Net this is simply 2^depth, with depth the number of max-pooling layers in the network (e.g. blocksize = 8 for a network of depth 3)
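As a toy illustration of the two conditions above (my own sketch, not the actual csbdeep code; `tile_starts` is a hypothetical helper), here is how one could choose tile start coordinates along a single axis so that both hold:

```python
def tile_starts(size, tile_size, overlap, blocksize):
    """Hypothetical helper: compute tile start coordinates along one
    axis such that (1) consecutive tiles overlap by at least `overlap`
    pixels and (2) every start is a multiple of `blocksize`."""
    # round overlap and tile size up to multiples of the blocksize,
    # so the stride (and hence every start) is a blocksize multiple
    overlap   = -(-overlap   // blocksize) * blocksize
    tile_size = -(-tile_size // blocksize) * blocksize
    stride = tile_size - overlap
    starts = list(range(0, size - tile_size + 1, stride))
    # in practice the image is padded beforehand so the tiles cover it
    # exactly; here we assume `size` was chosen/padded accordingly
    return starts, tile_size

# axis of 88 px, ~40 px tiles, receptive field 32 -> overlap >= 16,
# U-Net of depth 3 -> blocksize = 2**3 = 8
starts, ts = tile_starts(88, 40, 16, 8)
# starts == [0, 24, 48]: each a multiple of 8, adjacent tiles overlap by 16 px
```

With a stride of 24 px and 40 px tiles, adjacent tiles share 16 px, i.e. at least half the (assumed) receptive field, and every start lands on a blocksize boundary.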

The model.predict function automatically chooses both overlap and blocksize such that these conditions are fulfilled (e.g. here).


Thanks a lot @mweigert. That helps.

@mweigert I followed you advice for tiling a U-Net in 3d but I keep having problems in the z direction.
I trained a U-Net with a (16,64,64) input size. Using the CPU I can predict a mask for a (31x1024x1024) image stack (by padding symmetrically to (32x1024x1024). To use the GPU I tile the image so that tiles overlap by 8 pixels or more in each direction (I actually used (16,32,32) overlaps) and I pad symmetrically the tiles at the borders of the stack. The tiling in x,y looks fine, but I get artifacts at the bottom of the stack. I’ve seen these artifacts also show up using CARE, so this doesn’t seem to be a problem with my tiling algorithm, but something else. Do you have any advice?


Hi @Adrian_Jacobo,

Could you post the prediction code you use and show an image of the artifact you observe?



You can find the code here: https://github.com/a-jacobo/MultiResUnet3D
The file 3_Predict.py tiles, predicts and then reassembles the image. I’ve also uploaded two sample images to the same repository:

  • myo6b_bactn_gfp_2dpf_w1iSIM488-525_s1_t30_label_CPU.TIF is predicted in the CPU with no tiling.

  • myo6b_bactn_gfp_2dpf_w1iSIM488-525_s1_t30_label.TIF is tiled and processed on the GPU using the script mentioned above. You can see at the bottom of the stack (slices 29-31) there are some artifacts and a “halo” of labels that is not present in the untiled image. I’ve tried different block and padding sizes, always following your recommendations, but I keep getting this halo.

The weights of the network are too big to upload to GitHub but let me know if you need them and I can put them in some other repository.


I would not pad in z in your case. Other than that, it’s hard to tell what’s going on…

Thanks! I tried not padding in z, but I still get the same type of artifacts. At least I know I’m not making some silly mistake.

I just had a quick look at the prediction notebook and there are a couple of issues, e.g. array-bound problems (e.g. image[z-p:z+b+p] will return an empty array for z=0, since the negative lower bound wraps around) and strange padding (one should pad the patches with values from the neighbouring patches, not by reflecting each patch itself).
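To spell out the array-bound issue (a minimal sketch; `b` and `p` are stand-ins for the tile size and padding used in the notebook): in NumPy, a negative lower slice bound counts from the end of the array, so the naive slice silently comes back empty for the first tile.

```python
import numpy as np

image = np.arange(32)               # stand-in for the z axis of a stack
b, p = 8, 4                         # hypothetical tile size and padding

z = 0
# Naive slice: the lower bound z - p == -4 is interpreted by NumPy as
# "4 from the end", so start (28) > stop (12) and the result is empty.
naive = image[z - p : z + b + p]    # len(naive) == 0

# Clamping the lower bound at 0 avoids the wrap-around:
safe = image[max(z - p, 0) : z + b + p]   # len(safe) == 12
```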

I would have a look at the tile_iterator from csbdeep, as outlined here.
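For reference, the bookkeeping such a tile iterator performs can be sketched in 1-D roughly like this (my own simplified version, not csbdeep’s actual API or signature): each tile is read with its overlap, and after prediction only the central part is written back, so cropping the overlap and gluing leaves no seams.

```python
import numpy as np

def simple_tile_iterator(x, tile_size, overlap):
    """Yield overlapping tiles of a 1-D array together with the slices
    needed to write each tile's central part back into the output.
    Simplified sketch; the real iterator also handles n-D arrays,
    blocksize constraints, and fixed tile shapes at the borders."""
    n = len(x)
    stride = tile_size - 2 * overlap             # central, non-overlapping part
    padded = np.pad(x, overlap, mode='reflect')  # reflect only at true image borders
    for start in range(0, n, stride):
        tile = padded[start : start + tile_size]
        dst = slice(start, min(start + stride, n))              # where the centre goes
        src = slice(overlap, overlap + (dst.stop - dst.start))  # which part to keep
        yield tile, src, dst

x = np.arange(20, dtype=float)
out = np.empty_like(x)
for tile, src, dst in simple_tile_iterator(x, tile_size=12, overlap=2):
    out[dst] = tile[src]     # identity "network" for the demo
# out is identical to x: the cropped centres tile the axis exactly
```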

Thanks so much! I tried to use your tiling function before but I couldn’t figure out how. I was able to do it now thanks to the example you pointed out, and it’s working very nicely! No need to reinvent the wheel!

I was using an out-of-bounds padding strategy I read about in a paper about U-Nets, but that doesn’t seem to be the right way to do it.
