I am interested in predicting the position of nuclei from single-plane bright-field images. However, according to the documentation:

> The patch sizes of Label-free prediction (fnet) are hard-coded within the network itself. Currently the patches are 64x64x32 (x,y,z). So the dataset used for training needs to be at the very minimum these sizes, otherwise the notebook will crash.
Is there a workaround that would allow using single-plane images? For example, duplicating the image 32 times in a stack, or filling the stack with blank images?
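To illustrate the first idea, here is a minimal sketch (assuming NumPy; the function name `plane_to_stack` is just for illustration) that replicates a single 2D plane along a new z-axis to meet a minimum depth, which could then be saved as a TIFF stack for the notebook:

```python
import numpy as np

def plane_to_stack(plane: np.ndarray, depth: int = 32) -> np.ndarray:
    """Replicate a single 2D plane `depth` times along a new z-axis.

    Returns an array of shape (depth, height, width).
    """
    # Add a leading z-axis of size 1, then repeat it `depth` times.
    return np.repeat(plane[np.newaxis, ...], depth, axis=0)

# Example: a 64x64 single-plane image becomes a 32x64x64 stack.
plane = np.random.rand(64, 64).astype(np.float32)
stack = plane_to_stack(plane, depth=32)
print(stack.shape)  # → (32, 64, 64)
```

Whether fnet actually learns anything useful from artificially replicated (or zero-padded) z-slices is a separate question, since the 3D convolutions would see no real axial structure.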