Batch prediction and Sanity Check for N2V / DenoiSeg for Fiji

I just updated the CSBDeep update site, bringing a couple of new features to the model archive display for N2V and DenoiSeg for Fiji.


File actions

  • save a model somewhere else by pressing File Actions > Save to...
  • change the demo image of a model archive via File Actions - you can choose an open image or a file, and the prediction will run on it. Press Save changes to save both images to the archive

Batch prediction

  • run prediction on a whole folder of images by pressing Predict.. > Folder of images or stacks
  • a reminder that you can also run prediction on a stack:
    • open the stack and the model
    • press Predict.. > Single image or stack
    • when setting the axes in the command options, set B for the batch axis - e.g. XYB
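The plugin handles the folder iteration for you, but as a rough sketch of what batch prediction amounts to (in Python with NumPy, using a hypothetical placeholder `predict` function and `.npy` files instead of the plugin's actual model and image I/O):

```python
from pathlib import Path

import numpy as np


def predict(image: np.ndarray) -> np.ndarray:
    """Placeholder for the model prediction (hypothetical stand-in)."""
    return image.astype(np.float32)


def batch_predict(in_dir: str, out_dir: str) -> int:
    """Run prediction on every .npy image in in_dir, write results to out_dir.

    Returns the number of images processed.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in sorted(Path(in_dir).glob("*.npy")):
        image = np.load(path)
        result = predict(image)
        np.save(out / path.name, result)
        count += 1
    return count
```

The Fiji command does the same loop over TIFFs (or stacks) and applies the loaded model to each file.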

Sanity Check

Since it is smart to be cautious when reusing pretrained models, there is now a first version of a Sanity Check for comparing basic statistics between a potential input image and a model.

You have to…

  • provide an input image and a ground truth image - what the model should ideally predict

The Sanity Check will…

  • run the prediction
  • collect and display basic image characteristics
  • what else should it do? Let us know!
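The exact statistics the check prints may differ, but a minimal sketch of the kind of comparison involved (in Python with NumPy, with `sanity_report` and its mean-squared-error measure as illustrative choices, not the plugin's actual output) looks like this:

```python
import numpy as np


def image_stats(image: np.ndarray) -> dict:
    """Basic image characteristics, similar in spirit to what the check prints."""
    return {
        "min": float(image.min()),
        "max": float(image.max()),
        "mean": float(image.mean()),
        "std": float(image.std()),
    }


def sanity_report(input_img, prediction, ground_truth) -> dict:
    """Collect stats per image plus a simple error measure for the prediction."""
    return {
        "input": image_stats(input_img),
        "prediction": image_stats(prediction),
        "ground_truth": image_stats(ground_truth),
        # mean squared error between prediction and ground truth
        "mse": float(np.mean((prediction - ground_truth) ** 2)),
    }
```

Comparing these numbers side by side quickly reveals, for example, a prediction whose intensity range drifted far from the ground truth.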

The displayed statistics are useful for denoising tasks like N2V, but less so for segmentation tasks - the sanity check for DenoiSeg is therefore disabled until we add a more suitable check for segmentation.

Please look carefully at what the check is printing and give me feedback for improvements.

Please keep doing your own sanity checks by comparing the results of the prediction with a method you trust.

Warning: Ignore or replace demo images of existing models

  • there was a bug in the model export during the N2V / DenoiSeg training in Fiji
  • the demo images saved with the model were stored normalized, which is incorrect
  • this does not affect model prediction; the stored images are just not suitable as demo images
  • you are advised to use the File Actions button to replace old demo images with an image representative of the data the model was trained on

Sorry for the inconvenience.

Please answer here if you have suggestions or issues.




Would it be possible to add an option to convert the output back to the original bit depth? With 40 GB raw datasets, it becomes incredibly disk- and time-consuming to work with 32-bit.
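Until such an option exists, a workaround is to convert outside the plugin. A minimal sketch (Python with NumPy, assuming a min/max rescale to 16-bit is acceptable for your data; `to_original_depth` is a hypothetical helper, not part of any plugin):

```python
import numpy as np


def to_original_depth(prediction: np.ndarray, dtype=np.uint16) -> np.ndarray:
    """Rescale a float32 prediction to the full range of an integer dtype.

    The prediction's own min/max are mapped to the target range, which
    halves the file size compared to float32 when converting to 16-bit.
    """
    info = np.iinfo(dtype)
    lo, hi = float(prediction.min()), float(prediction.max())
    if hi == lo:  # constant image: avoid division by zero
        return np.full(prediction.shape, info.min, dtype=dtype)
    scaled = (prediction - lo) / (hi - lo) * (info.max - info.min) + info.min
    return scaled.astype(dtype)
```

Note that this rescaling loses the absolute intensity values; if those matter for downstream analysis, clip to the original range instead of min/max stretching.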