Nyquist sampling and equispaced pixels

subsampling
signal-processing
#1

Why do we collect equispaced pixels when trying to sample an image at higher than the Nyquist rate?

The Whittaker–Nyquist–Kotelnikov–Shannon theorem states that a bandlimited signal of bandwidth B can be perfectly reconstructed from its samples if the sampling rate satisfies f_s > 2B.
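
For concreteness, here is a minimal sketch (a toy 1D signal, NumPy only; the signal and rates are made up for illustration) of the sinc reconstruction the theorem guarantees from equispaced samples taken above the Nyquist rate:

```python
import numpy as np

# Toy bandlimited signal: two sinusoids, both below B = 5 Hz.
def signal(t):
    return np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.cos(2 * np.pi * 4.5 * t)

B = 5.0                    # bandwidth (Hz)
fs = 2.5 * B               # sampling rate above the Nyquist rate 2B
T = 1.0 / fs
n = np.arange(-200, 201)   # equispaced sample indices
samples = signal(n * T)

# Whittaker–Shannon interpolation: x(t) = sum_n x(nT) * sinc((t - nT) / T)
t = np.linspace(-1.0, 1.0, 1000)
recon = samples @ np.sinc((t[None, :] - n[:, None] * T) / T)

print(np.max(np.abs(recon - signal(t))))   # tiny, up to truncation of the sinc sum
```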

However, it does not specify that the samples (pixels) need to be equispaced. While some alternative sampling schemes have been suggested, such as a hexagonal grid, I have not seen a justification for uniformity. While regularly spaced samples make sense for periodic signals, it’s not clear to me why they make sense for bounded signals such as an image from a microscope.

Because the image is acquired within a boundary, would it not make sense to increase sampling towards the boundary in order to have uniform rather than decaying image resolution? While we could retain resolution using regular sampling by oversampling the center of the image, this seems wasteful.

The impression I get from the link between uniform sampling in image acquisition and Fourier analysis is that the field has fallen into the habit of thinking of bounded images as periodic. Am I wrong?

#2

An alternative sampling scheme would be something like a 2D Chebyshev tensor grid:

Source: https://math.stackexchange.com/questions/813831/what-is-a-tensor-product-chebyshev-grid

Further discussion on this sampling approach can be found here:
https://epubs.siam.org/doi/10.1137/130908002
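
For illustration, a minimal sketch (assuming NumPy; the point counts are arbitrary) of constructing a 2D Chebyshev tensor-product grid, which clusters samples towards the boundary:

```python
import numpy as np

def chebyshev_points(n):
    # Chebyshev points of the second kind on [-1, 1]:
    # x_k = cos(k * pi / (n - 1)), k = 0 .. n-1, clustered near the endpoints.
    k = np.arange(n)
    return np.cos(k * np.pi / (n - 1))

# Tensor-product grid: the Cartesian product of two 1D Chebyshev point sets.
nx, ny = 16, 16
x = chebyshev_points(nx)
y = chebyshev_points(ny)
X, Y = np.meshgrid(x, y)   # sample locations are densest near the image boundary
```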

Are there examples of image analysis being done using this approach?

#3

Hi @markkitt,

The non-uniform spacing that I’m most familiar with (and it may be the most prevalent) is the quadtree (2D) or octree (3D) grid.
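
As a rough sketch of the idea (my own toy example, assuming NumPy), a quadtree keeps subdividing a block only where the content warrants it:

```python
import numpy as np

def quadtree(image, r0, c0, size, max_var=0.01, min_size=8, leaves=None):
    """Recursively split a square block into four until its variance is small."""
    if leaves is None:
        leaves = []
    block = image[r0:r0 + size, c0:c0 + size]
    if size <= min_size or block.var() <= max_var:
        leaves.append((r0, c0, size))   # keep this block as a leaf
        return leaves
    half = size // 2
    for dr in (0, half):
        for dc in (0, half):
            quadtree(image, r0 + dr, c0 + dc, half, max_var, min_size, leaves)
    return leaves

# Toy image: detail only near the centre, so leaves are small there and large elsewhere.
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
image = np.sin(20 * x * y) * np.exp(-8 * (x ** 2 + y ** 2))
print(len(quadtree(image, 0, 0, 256)))
```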

Lots of work has been done with these data structures, e.g.

If the un-boundedness “assumption” / shorthand bugs you, you could also look into using a wavelet basis for signal processing; wavelets are kind of like bounded, decaying periodic signals.
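
As a concrete illustration (assuming PyWavelets is installed; the image and wavelet choice are arbitrary), a 2D wavelet decomposition looks like this:

```python
import numpy as np
import pywt

# Toy "image": a smooth bump plus a sharp edge.
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
image = np.exp(-8 * (x ** 2 + y ** 2)) + (x > 0.5).astype(float)

# Two-level 2D discrete wavelet transform with Daubechies-2 wavelets.
coeffs = pywt.wavedec2(image, 'db2', level=2)
approx, details = coeffs[0], coeffs[1:]
print(approx.shape, [tuple(d.shape for d in lvl) for lvl in details])
```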

Obviously I’m not giving an answer really, just continuing the discussion, and pointing you to resources.

Happy to continue the talk,
John

#4

Alternative sampling can also be based on the content of the image. Here is an example of that:
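
(Separately from that example, a toy sketch of the idea, assuming NumPy: place more samples where the local gradient is large.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy image with a bright disc; content-based sampling concentrates on the edge.
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
image = ((x ** 2 + y ** 2) < 0.25).astype(float)

# Importance weights from the local gradient magnitude, with a small floor everywhere.
gy, gx = np.gradient(image)
weights = np.hypot(gx, gy) + 1e-3
prob = (weights / weights.sum()).ravel()

# Draw 5000 sample locations with probability proportional to the weights.
idx = rng.choice(image.size, size=5000, replace=False, p=prob)
rows, cols = np.unravel_index(idx, image.shape)
```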

For microscopy, I guess a lot is determined by the technical equipment, the cameras, and the scanners used. I guess some irregular sampling, although theoretically better, is just impractical or hard to implement.

Telling people not only that their pixels are not square but also that they are not equidistant could just be too much … image anarchy …

#5

The problem originates with acquisition. Storage and processing also interact with this issue, but they cannot improve the situation if the data was not acquired with the knowledge that the effective resolution decays towards the boundary of the image.

To have the entire field of view at full resolution, one would need to sample at a rate near the boundary such that linear interpolation would produce accurate subsamples, but it would be wasteful to continue at this rate towards the middle of the image. If using a camera with a regular array of pixels, there is an inherent trade-off between the total field of view and the area captured at the resolution limit of the optical system.

There are two common circumstances where microscopists are not restricted to a regular grid for sampling:

  1. Point scanning confocal microscopy
  2. Z-stacks

I have yet to find image acquisition software that would allow me to collect samples on something like a Chebyshev tensor grid, or to sample adaptively.

Once acquired, a number of formats, as you have mentioned, could be used to store the data and to render or rasterize it as necessary. The user display and interface should be based on equispaced pixels, but those should be rasterized from the adaptively sampled acquisition.
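
For example, a minimal sketch (assuming SciPy and NumPy; the grid sizes are arbitrary) of rasterizing non-uniformly spaced samples onto an equispaced display grid:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical non-uniform sample locations (a Chebyshev tensor grid) and intensities.
def cheb(n):
    return np.cos(np.arange(n) * np.pi / (n - 1))

X, Y = np.meshgrid(cheb(64), cheb(64))
values = np.exp(-4 * (X ** 2 + Y ** 2))   # stand-in for the acquired intensities

# Rasterize onto a regular 256 x 256 pixel grid for display.
gx, gy = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
pixels = griddata(
    points=np.column_stack([X.ravel(), Y.ravel()]),
    values=values.ravel(),
    xi=(gx, gy),
    method='linear',
)
```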