What is the definition of 'image sigma' in BigStitcher > Interest points detection?

fiji
imagej
bigstitcher

#1

Hi, I’m looking for a clear definition of ‘image sigma’ (x,y,z) in the Interest Points detection section of BigStitcher. The wiki description (https://imagej.net/BigStitcher_Interest_points) seems ambiguous:

If you chose to Define anisotropy for segmentation in the previous dialog, you will be asked for Image sigmas in X, Y and Z here. If you acquired your images with pixel sizes and z-spacing of \approx \frac{d}{2} (optimal sampling) with d being the resolution of your microscope (d_{xy} = \frac{\lambda}{2NA} and d_{z} = \frac{2\lambda}{NA^2}), you can leave the default value of 0.5 here. Otherwise, increase the image sigma when you have oversampling (smaller pixel distances) or decrease it for undersampling (larger pixels).

Since the default values in the fields are 0.5, one would interpret it as the ratio of pixel size to microscope resolution (the Nyquist criterion being a ratio <= 0.5). However, this interpretation seems to conflict with the two examples given in the wiki, where sigma increases with oversampling and decreases with undersampling. By the ratio interpretation it would be the opposite: oversampling implies a smaller pixel size and thus a smaller sigma.


#2

Hi @lollopus,

@hoerldavid and @StephanPreibisch should weigh in, but as I understand it, the sigma there is the width of the blurring kernel that BigStitcher uses for the Difference-of-Gaussian (DoG) interest-point detector.
Basically:

blur(img, sigma) - blur(img, k * sigma)    (with k > 1)

I think of it as the “scale” of the features that it can find.

The idea is that if your image is oversampled, then the highest frequencies that are reconstructible (by Nyquist) are higher than the highest frequencies the microscope can actually resolve. In that case it is wise to suppress (low-pass filter) the frequencies that are representable in the sampling but not resolvable by the hardware, because they must be noise. Increasing sigma effectively does this. That’s why oversampling -> higher sigma. (At least that’s what I think.)
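To make that concrete, here is a minimal sketch of an anisotropic DoG filter in Python with scipy. This is not BigStitcher's actual implementation; the per-axis sigmas, the factor k, and the toy volume are assumptions for the example. The point is that sigma is specified per axis, so an oversampled axis can be smoothed more than an undersampled one:

```python
# Minimal sketch of a Difference-of-Gaussians (DoG) interest-point response,
# NOT BigStitcher's actual code. sigma is given per axis (z, y, x) so an
# anisotropic volume can be smoothed more along its oversampled axes.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(img, sigma, k=1.6):
    """blur(img, sigma) - blur(img, k*sigma), with k > 1 (k=1.6 is a common choice)."""
    sigma = np.asarray(sigma, dtype=float)
    return gaussian_filter(img, sigma) - gaussian_filter(img, k * sigma)

# Toy 3D volume: background noise plus a single bright blob.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.05, size=(16, 32, 32))
img[8, 16, 16] = 1.0

# Hypothetical sigmas: x and y smoothed more than z, as if x/y were oversampled.
response = dog(img, sigma=(0.5, 1.0, 1.0))
zyx = tuple(int(i) for i in np.unravel_index(np.argmax(response), response.shape))
print(zyx)  # the DoG response peaks at the blob
```

Interest-point detectors then take local maxima of this response; a larger sigma suppresses finer (higher-frequency) structure, which is why I think of it as the "scale" of the detectable features.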

Hope that’s helpful,
John


#3

Hi John, thanks for shedding light on this question with the Difference-of-Gaussians interpretation. It does make sense that oversampling would require greater smoothing. What is still unclear to me is why 0.5 is the default if sigma is not what I thought it was, and, more importantly, how one should calculate these sigmas.

For instance, in my current image stacks I sampled at 1.5 µm in x and y (microscope d_xy = 2.5 µm) and at 10 µm in z (d_z = 100 µm). So according to my (probably wrong) definition, sigma in x and y = 0.6 and sigma in z = 0.1. With your interpretation I should use very different values, especially in z!
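Just to spell out the arithmetic behind my (possibly wrong) ratio interpretation for this stack, with all lengths in micrometres:

```python
# Ratio-of-sampling-to-resolution interpretation (the one questioned above),
# computed for the stack described; all values in micrometres.
pixel = {"x": 1.5, "y": 1.5, "z": 10.0}   # sampling distances
d     = {"x": 2.5, "y": 2.5, "z": 100.0}  # microscope resolution per axis
sigma = {ax: pixel[ax] / d[ax] for ax in pixel}
print(sigma)  # {'x': 0.6, 'y': 0.6, 'z': 0.1}
```

Under this reading, Nyquist-optimal sampling (pixel size = d/2) gives exactly 0.5 on every axis, which is what made the 0.5 default look like a ratio to me in the first place.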

On this I would second your call to @StephanPreibisch and @hoerldavid 🙂