BigStitcher Multiview Deconvolution kernel size error

Hi,
I have successfully used BigStitcher to perform Multiview Deconvolution of data from my light sheet microscope, using a sample seeded with beads, and it works very nicely.

I thought I would try Multiview Deconvolution on some data that is not seeded with beads by using a theoretical PSF. I managed to generate the PSF (Width: 128 pixels, Height: 128 pixels, Depth: 101 pixels) and associated this with the views in BigStitcher.
However, when I try to run Multiview Deconvolution in BigStitcher I get a message saying that the kernel size (901) is bigger than my block size (512) and that this will result in a negative effective size - Quitting.
I increased the block size to 1024 x 1024 x 1024, as this is bigger than the kernel of 901.
But I get a similar message. Can someone please help?
See below for the log output:

Wed Jul 24 11:48:40 BST 2019: Selected (MultiView)Deconvolution Parameters: 
Bounding Box: Bounding Box 'My Bounding Box' (-407, -940, -447) >>> (431, 837, 391)
Downsampling: None
Downsampled Bounding Box: Bounding Box 'My Bounding Box' (-407, -940, -447) >>> (431, 837, 391)
Input Image Cache Type: Cached
Weight Cache Type: Cached
Adjust intensities: false
Multiplicative iterations: false
PSF Type: Efficient Bayesian - Optimization I (fast, precise)
Psi Init: Blurred, fused image (suggested, higher compute effort)
OSEMSpeedup: 1.0
Num Iterations: 10
Debug Mode: true
DebugInterval: 1
use Tikhonov: true
Tikhonov Lambda: 0.006
Compute block size: (1024, 1024, 1024)
Test for empty blocks: true
Cache block size: 32
Cache max num blocks: 10000
Deconvolved/Copy block size: 64
Compute on: GPU (Nvidia CUDA via JNA)
ComputeBlockThread Factory: ComputeBlockSeqThreadCUDAFactory: CUDA based using 2 devices: [GeForce GTX 1080 (id=0, mem=8116MB (8116MB free), CUDA capability 6.1)] [GeForce GTX 1080 (id=1, mem=8119MB (8119MB free), CUDA capability 6.1)]
Blending range: 12.0
Blending border: -8.0
Additional smooth blending: false
Group tiles: true
Group illums: true
Split by: Each timepoint & channel
Image Export: Display using ImageJ
ImgLoader.isVirtual(): true
ImgLoader.isMultiResolution(): true
(Wed Jul 24 11:48:40 BST 2019): Deconvolving group 1/1 (group=0-0 >-> 0-4)
Wed Jul 24 11:48:40 BST 2019: Approximate pixel size of fused image (without downsampling): 0.28669257079360155 µm
(Wed Jul 24 11:48:40 BST 2019): This group contains the following 'virtual views':
(Wed Jul 24 11:48:40 BST 2019): 0-0
(Wed Jul 24 11:48:40 BST 2019): 0-1
(Wed Jul 24 11:48:40 BST 2019): 0-2
(Wed Jul 24 11:48:40 BST 2019): 0-3
(Wed Jul 24 11:48:40 BST 2019): 0-4
(Wed Jul 24 11:48:40 BST 2019): Remaining groups: 5
(Wed Jul 24 11:48:40 BST 2019): Fusion of 'virtual views' 
(Wed Jul 24 11:48:40 BST 2019): Transforming group 1 of 5 (group=0-0)
(Wed Jul 24 11:48:40 BST 2019): Transforming group 2 of 5 (group=0-1)
(Wed Jul 24 11:48:40 BST 2019): Transforming group 3 of 5 (group=0-2)
(Wed Jul 24 11:48:40 BST 2019): Transforming group 4 of 5 (group=0-3)
(Wed Jul 24 11:48:40 BST 2019): Transforming group 5 of 5 (group=0-4)
(Wed Jul 24 11:48:40 BST 2019): Normalizing weights ... 
(Wed Jul 24 11:48:40 BST 2019): Caching fused input images ... 
(Wed Jul 24 11:48:40 BST 2019): Caching weight images ... 
(Wed Jul 24 11:48:40 BST 2019): Grouping, and transforming PSF's 
(Wed Jul 24 11:49:21 BST 2019): Setting up blocks for deconvolution and testing for empty ones that can be dropped.
Blocksize in dimension 2 (1024) is smaller than the kernel (1345) which results in an negative effective size: -320. Quitting.
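
If I read the message correctly, the effective size seems to be the compute block size minus (kernel size - 1), which is consistent with the -320 reported above. Here is that arithmetic as a minimal sketch (the formula is my own inference from the two error messages, not taken from the BigStitcher code):

```python
# My own sanity check of the numbers in the error messages
# (assumes: effective size = block size - (kernel size - 1)).

def effective_size(block, kernel):
    return block - (kernel - 1)

print(effective_size(512, 901))    # negative, as in the first error
print(effective_size(1024, 1345))  # -320, as reported in the log above

# By this reasoning, the block in dimension 2 would need to be at least
# 1345 (plus some margin) to give a positive effective size.
```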

I look forward to any help.
Thanks,
Ben

Hi,

It is best if you also open an issue on the BigStitcher GitHub page: https://github.com/PreibischLab/BigStitcher

I usually got this error when the block size was too small. I guess you still need an even larger block size.
Deconvolution is really compute intensive. My recommendation is to crop everything down to just the necessary information using the bounding box, and also to use downsampling. Depending on the task, resolving single cells is usually sufficient with light sheet data, so the full acquired resolution is not really needed anyway. That means downsampling by a factor of 2 might still be fine, especially if it is only for visualization.
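
To put rough numbers on that, here is a quick back-of-the-envelope estimate from the bounding box in your log (not something BigStitcher prints; the 4 bytes per voxel is my assumption for a 32-bit float image):

```python
# Bounding box from the log: (-407, -940, -447) >>> (431, 837, 391)
bb_min = (-407, -940, -447)
bb_max = (431, 837, 391)

def fused_size(bb_min, bb_max, downsample=1):
    dims = [(hi - lo + 1) // downsample for lo, hi in zip(bb_min, bb_max)]
    voxels = dims[0] * dims[1] * dims[2]
    return dims, voxels

for ds in (1, 2):
    dims, voxels = fused_size(bb_min, bb_max, ds)
    # assume ~4 bytes per voxel (32-bit float) per fused/deconvolved copy
    print("downsample %d: %s -> %.1f GB per copy" % (ds, dims, voxels * 4 / 1e9))

# Downsampling by 2 cuts the voxel count by roughly 8x, and the transformed
# PSF should shrink accordingly, so the kernel is more likely to fit a block.
```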

Hi,

Thanks for this, I'll open an issue on the GitHub page.
I have already used a bounding box, but I will try downsampling and see how I get on.