Different number of cells detected with ImgLib2's difference of Gaussian

Below is a script to detect cells using the difference of Gaussian in the imglib2-algorithms package.

There are 3 approaches:

  1. Directly on the loaded image, specifying the calibration.
  2. On an isotropic view of the image.
  3. On an isotropic copy (scaled down) of the image.

The results are the same for 2 and 3, but differ by one for 1.


  1. Why the difference?
  2. Is there a simpler way to “reify” a view into an ArrayImg than having to use a Cursor to copy pixel by pixel?
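
As background, the difference-of-Gaussian detection used below can be sketched in pure Python on a 1D toy signal. The data, sigmas, and threshold here are made up purely to illustrate the roles of sigmaSmaller, sigmaLarger, and the minimum peak value; DogDetection does the same in N dimensions, taking the calibration into account:

```python
import math

def gaussian_kernel(sigma):
    # Discrete, normalized 1D Gaussian kernel with radius 3*sigma.
    radius = int(math.ceil(3 * sigma))
    k = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    # Gaussian smoothing with clamped borders (in the spirit of extendMirrorSingle).
    kernel = gaussian_kernel(sigma)
    r = len(kernel) // 2
    n = len(signal)
    return [sum(w * signal[min(max(i + j - r, 0), n - 1)]
                for j, w in enumerate(kernel))
            for i in range(n)]

def dog_peaks(signal, sigma_smaller, sigma_larger, min_peak):
    # DoG = blur(sigma_smaller) - blur(sigma_larger);
    # report local maxima at or above min_peak.
    dog = [a - b for a, b in zip(smooth(signal, sigma_smaller),
                                 smooth(signal, sigma_larger))]
    return [i for i in range(1, len(dog) - 1)
            if dog[i - 1] < dog[i] > dog[i + 1] and dog[i] >= min_peak]

# Toy signal: two Gaussian "cells" of sigma 2 on a flat background.
signal = [0.0] * 40
for center in (10, 30):
    signal = [v + 100.0 * math.exp(-((i - center) ** 2) / 8.0)
              for i, v in enumerate(signal)]

print(dog_peaks(signal, 1.5, 3.0, 1.0))  # one peak near each bump
```

The threshold also discards the weak local maximum that can appear between two nearby blobs, where their negative DoG lobes overlap.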


# A script to find cells by difference of Gaussian using imglib2.
# Uses as an example the "first-instar-brain.tif" RGB stack available
# from Fiji's "Open Samples" menu.

from ij import IJ
from net.imglib2.img.display.imagej import ImageJFunctions as IJF
from net.imglib2.view import Views
from net.imglib2.realtransform import RealViews, Scale3D
from net.imglib2.interpolation.randomaccess import NLinearInterpolatorFactory
from net.imglib2.converter import Converters
from net.imglib2 import FinalInterval, Cursor
from net.imglib2.algorithm.dog import DogDetection
from net.imglib2.type.numeric.real import DoubleType
from net.imglib2.type.numeric.integer import UnsignedByteType
from net.imglib2.img.array import ArrayImgs
from fiji.scripting import Weaver

# Fetch the "first-instar-brain.tif" file
#imp = IJ.getImage()
imp = IJ.openImage("http://downloads.imagej.net/fiji/snapshots/samples/first-instar-brain.zip")

cal = imp.getCalibration()
img = IJF.wrap(imp)

# Extract the red channel
red = Converters.argbChannel(img, 1)

# Characteristics of the cells we are looking for
cell_radius = 5.0 # in microns
min_peak = 40.0 # min intensity for a peak to be considered

# Approach 1: Run the difference of Gaussian on the raw red channel
dog = DogDetection(Views.extendMirrorSingle(red), img,
                   [cal.pixelWidth, cal.pixelHeight, cal.pixelDepth],
                   cell_radius / 2, cell_radius, # sigmaSmaller, sigmaLarger
                   DogDetection.ExtremaType.MAXIMA,
                   min_peak, False)
peaks = dog.getPeaks()

print "Raw:", len(peaks)

# Approach 2: Run the difference of Gaussian on an isotropic view of the image

# Compute scaling for isotropy
largest = max(cal.pixelWidth, cal.pixelHeight, cal.pixelDepth)
sx = cal.pixelWidth / largest
sy = cal.pixelHeight / largest
sz = cal.pixelDepth / largest

# View of the red channel as isotropic
iso = Views.raster(
          RealViews.affine(
              Views.interpolate(Views.extendMirrorSingle(red),
                                NLinearInterpolatorFactory()),
              Scale3D(sx, sy, sz)))

# Finite view
isof = Views.interval(iso, FinalInterval([int(img.dimension(0) * sx),
                                          int(img.dimension(1) * sy),
                                          int(img.dimension(2) * sz)]))


# Run difference of gaussian on the isotropic view of the red channel
dog = DogDetection(Views.extendMirrorSingle(isof), isof,
                   [1.0, 1.0, 1.0], # already isotropic: no calibration needed
                   cell_radius / (largest * 2), cell_radius / largest, # sigmas in the new pixel size
                   DogDetection.ExtremaType.MAXIMA,
                   min_peak, False)

peaks = dog.getPeaks()

print "Isotropic view:", len(peaks)

# Approach 3: Run the difference of Gaussian on a scaled-down isotropic version of the image

# Copy into a new image, much smaller
small = ArrayImgs.unsignedBytes([int(img.dimension(0) * sx),
                                 int(img.dimension(1) * sy),
                                 int(img.dimension(2) * sz)])

o = Weaver.method("""
  public static void copy(final Cursor<UnsignedByteType> c1,
                          final Cursor<UnsignedByteType> c2) {
    while (c1.hasNext()) {
      c1.fwd();
      c2.fwd();
      c2.get().set(c1.get());
    }
  }
  """, [Cursor, UnsignedByteType])

o.copy(isof.cursor(), small.cursor())

dog = DogDetection(Views.extendMirrorSingle(small), small,
                   [1.0, 1.0, 1.0],
                   cell_radius / (largest * 2), cell_radius / largest,
                   DogDetection.ExtremaType.MAXIMA,
                   min_peak, False)

peaks = dog.getPeaks()

print "Scaled down to isotropy:", len(peaks)


# Results: approach 1 differs from approaches 2 and 3, which agree with each other.


Raw: 35
Isotropic view: 36
Scaled down to isotropy: 36

If I’m not misunderstanding, then 2. and 3. are identical: DogDetection does not “see through” the Views.raster or anything like that. Since 2. and 3. have the same size and identical pixel values, it is clear that they produce the same result.

  1. runs on an image of a different size, and I think that explains the different result.
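
To make the size argument concrete, here is a sketch with made-up numbers (the calibration and stack dimensions below are hypothetical, not those of first-instar-brain.tif):

```python
# Hypothetical anisotropic calibration (microns) and stack dimensions.
pixelWidth, pixelHeight, pixelDepth = 0.5, 0.5, 1.0
dims = (400, 300, 50)

largest = max(pixelWidth, pixelHeight, pixelDepth)
sx = pixelWidth / largest
sy = pixelHeight / largest
sz = pixelDepth / largest

# The isotropic versions (approaches 2 and 3) sample far fewer voxels in x
# and y, and int() truncates, so extrema sitting right at the detection
# threshold can appear or vanish relative to the full-resolution run.
iso_dims = tuple(int(d * s) for d, s in zip(dims, (sx, sy, sz)))
print(iso_dims)  # (200, 150, 50)
```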

No, this is the way you need to do it.

best regards,


Have you tried

net.imagej.ImageJ().op().copy().rai(out, in)


Here are the existing copy ops

and here are the static access methods


That looks like the first op I will have used!


A question: why not use:

net.imagej.ImageJ().op().copy().iterableInterval(out, in)

Any (dis)advantage over the rai() method?


I don’t think that this would have disadvantages other than you have to guarantee that the iteration order of both inputs is the same. For your use case, this works because both the output ArrayImg and the input View have flat iteration order. You can make an RAI iterate in flat order but you cannot make an IterableInterval iterate in flat order (at reasonable cost) if it doesn’t already do so.
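
The iteration-order caveat can be illustrated in plain Python (a toy model of paired cursors, not ImgLib2 API): copying element by element only lands pixels in the right place when both sides traverse in the same order.

```python
def flat_order(rows, cols):
    # Flat (row-major) iteration order, as an ArrayImg uses.
    for r in range(rows):
        for c in range(cols):
            yield (r, c)

def column_major(rows, cols):
    # A different iteration order, standing in for a non-flat image.
    for c in range(cols):
        for r in range(rows):
            yield (r, c)

rows, cols = 2, 3
src = [[10 * r + c for c in range(cols)] for r in range(rows)]

# Paired-cursor copy with matching orders: every value lands correctly.
dst_ok = [[None] * cols for _ in range(rows)]
for (r1, c1), (r2, c2) in zip(flat_order(rows, cols), flat_order(rows, cols)):
    dst_ok[r2][c2] = src[r1][c1]

# Paired-cursor copy with mismatched orders: values are scrambled.
dst_bad = [[None] * cols for _ in range(rows)]
for (r1, c1), (r2, c2) in zip(flat_order(rows, cols), column_major(rows, cols)):
    dst_bad[r2][c2] = src[r1][c1]

print(dst_ok == src)   # True: orders match
print(dst_bad == src)  # False: pixels landed in the wrong positions
```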