Hi, may I know how I can denoise TIFF files with multiple frames in one go on N2V? It seems really pointless to denoise the frames one by one when there are thousands of them. I have the individual frames saved as TIFF files and I can very easily convert them into an image sequence or stacks. However, once I have done so, I can’t seem to denoise them with the model that I have trained, I can only denoise them frame by frame. May I know how I can overcome this problem?
I assume you have trained a 2D N2V model and would like to apply it to a couple of thousand 2D frames. The recommended way to do this is to apply the model to each frame individually. The reason is that a 2D N2V model can only handle 2D inputs, so stacking the frames into a 3D volume will not work.
I would also advise against stitching them together into one large 2D frame. This could lead to strange artefacts at the frame borders, and during prediction the large frame would most likely be tiled into smaller chunks anyway because of the GPU's memory limit.
But you should be able to automate this. Here is a Python example:
```python
from n2v.models import N2V

# Load the trained model from its base directory
model = N2V(None, basedir='/path/to/basedir', name='model_name')

# Apply the 2D model to each frame individually
predictions = []
for img in images:
    pred = model.predict(img, axes='YX')
    predictions.append(pred)
```
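If your frames are stored in one multi-page TIFF, you could build the `images` list for the loop like this (a sketch assuming the `tifffile` package is available; the file name and stack shape here are made up):

```python
import numpy as np
import tifffile

# Write a small example stack to disk (5 frames of 64x64), then read it back.
stack = np.random.rand(5, 64, 64).astype(np.float32)
tifffile.imwrite('stack.tif', stack)

images = tifffile.imread('stack.tif')  # shape (5, 64, 64): (frame, Y, X)
frames = list(images)                  # list of 2D (Y, X) arrays to loop over
```

Each element of `frames` is then a plain 2D array you can pass to `model.predict(img, axes='YX')`.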
I am facing this error when trying to predict with the model: “ResizePreprocessor: Input “input” dimension 2 should have size 1 but is 80” (my frame count is 80). How can I overcome this?
I have tried changing the batch size and the number of tiles, but neither seems to work. Is it a problem with the model that I’ve trained? I only used a single image file to train the model, so does that mean I need to train a model with 80 frames?
What is the shape of the input where the error mentions 80? Did you check that the axes are set as expected? If you provide `axes='YX'`, the model expects a 2D image with dimensions Y and X. Could it be that it is actually a CYX image?
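One quick way to check, assuming your frames are numpy arrays (the shape below is made up): print the shape before predicting and make sure it really is 2D.

```python
import numpy as np

# Hypothetical input: a whole stack accidentally passed as one image
img = np.random.rand(80, 256, 256)
print(img.shape, img.ndim)  # three axes, so axes='YX' cannot match

# A model called with axes='YX' needs a 2D array; select a single frame
# (or squeeze a singleton channel axis) before predicting:
frame = img[0]
assert frame.ndim == 2
```

If the leading axis has size 80, that matches the number in the error message and suggests the whole stack is being fed to the model as one image.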