Error on coordinates of blob centers by ImageJ

Dear all,
I am using ImageJ 2.0.0 with the TrackMate plugin to analyze an image of a bunch of blobs (see the attachment). I use the LoG detector to locate the X and Y coordinates of the center of each blob (see the attached screenshot).

Given that there is an intrinsic resolution limit on each pixel due to the finite pixel size, there will be an error on the coordinates X and Y that I obtain from ImageJ. Is there a built-in function that would allow me to get such an error on X, Y?

If there is no such function, would you know a simple way to extract an estimate of this error?

[screenshot attached]

Hi @cm89,

Welcome to the forum.

Algorithms are described exactly by code. Here’s the code implementing TrackMate’s LoG detector:

And here’s what (I believe) does the local maxima finding:

Algorithms are described approximately (but more succinctly) by math; here’s the math that describes the LoG filter.

Of course, that’s just the math for the first part. As @tinevez writes in that window, there’s also the maxima-finding and non-maximal suppression.

Happy code-reading,

@bogovicj Thank you for your reply and for the codes. I realize that there is a much simpler way to formulate my question.

Indeed, I am simply interested in the errors, or confidence intervals, on the coordinates X and Y that I obtain from ImageJ. I was planning to go through the exact way in which these are computed to come up with my own estimate of this error, but maybe there is already a built-in function that would give me the error on X and Y?

If there is no such function, would you know a simple way to extract an estimate of this error?

Thank you


There is no built-in method to estimate error that I’m aware of.

The process of finding centers is deterministic with respect to the image it’s given, so a “simple” (but computationally expensive) way to estimate error is to generate / simulate images with a given set of blob centers, run them through the detector, and estimate the distribution of the results.
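A minimal sketch of that simulation idea, assuming a Gaussian blob plus Poisson (shot) noise as the forward model. An intensity-weighted centroid stands in for the actual LoG detector here, purely for illustration; the point is the Monte-Carlo loop, not the specific localizer:

```python
import numpy as np

def simulate_blob(cx, cy, sigma=2.0, size=21, amp=100.0):
    """Render a Gaussian blob at sub-pixel center (cx, cy) on a pixel grid."""
    y, x = np.mgrid[0:size, 0:size]
    return amp * np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))

def centroid(img):
    """Intensity-weighted centroid (a simple stand-in for the LoG detector)."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

rng = np.random.default_rng(0)
true_cx, true_cy = 10.3, 10.7          # known ground-truth center
estimates = []
for _ in range(500):
    clean = simulate_blob(true_cx, true_cy)
    noisy = rng.poisson(clean).astype(float)   # shot noise = the forward model
    estimates.append(centroid(noisy))
estimates = np.asarray(estimates)
err_x, err_y = estimates.std(axis=0)   # spread of detections = error estimate
print(f"std(x) = {err_x:.4f} px, std(y) = {err_y:.4f} px")
```

Swapping the centroid for a call to the real detector (or a noisier forward model) is the same loop, just slower.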

This means you’ll need a “forward model”, though. Also not trivial.

If your “noise” / source of variability is simple, you may be able to get a closed-form, since LoG is “just” convolution. But any noise simple enough to do closed-form is probably not so realistic. :man_shrugging:


I am not sure what you mean by ‘forward model’, but your answer made me think that one may estimate the uncertainty resulting from the finiteness of the pixels with the following resampling method.

Consider the simple case where there is only one red pixel with a given intensity I, surrounded by black pixels. One could imagine dividing the pixel into four equal parts and randomly drawing the intensities of the four parts, with the constraint that the average intensity over the four parts equals I. For each sample, compute the center with ImageJ; the standard deviation across many samples then gives an estimate of the error resulting from the finiteness of the pixel.
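The subdivision scheme described above can be sketched as follows. This is only an illustration of the proposed resampling, with an arbitrary jitter amplitude and a centroid in place of the real detector; each pixel is split into a 2x2 block whose average is constrained to the original value:

```python
import numpy as np

rng = np.random.default_rng(1)

def subdivide(img, jitter=0.2):
    """Split each pixel into 2x2 sub-pixels whose mean equals the pixel value."""
    h, w = img.shape
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1).astype(float)
    # zero-mean perturbation within each 2x2 block keeps the block average fixed
    noise = rng.normal(0.0, jitter, (h, w, 2, 2))
    noise -= noise.mean(axis=(2, 3), keepdims=True)
    up += noise.transpose(0, 2, 1, 3).reshape(2 * h, 2 * w)
    return up

def centroid(img):
    """Intensity-weighted centroid of the image."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    s = img.sum()
    return (x * img).sum() / s, (y * img).sum() / s

# single bright pixel in a dark frame, as in the example above
img = np.zeros((9, 9))
img[4, 4] = 100.0
centers = np.array([centroid(subdivide(img)) for _ in range(300)])
# divide by 2 to convert sub-pixel coordinates back to original pixel units
print("std(x), std(y) =", centers.std(axis=0) / 2)
```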

Sorry Sir,

but you can’t divide a pixel (not even metaphorically), because a pixel has no spatial extension: it is a point that bears a number, the gray value (three numbers in the case of RGB color). This number is stored in the computer memory …




These images have been taken with a microscope, which averages the intensity within an area of (1 micron)^2 and stores the resulting gray value. So the area certainly has a size and is not a point.

The integration area used before the sampling is something completely different. The number stored in memory is the value of a point, and this point has no spatial extension. The integration area amounts to a slight low-pass filtering of the object before the sampling. After the sampling, a pixel has no extension; it is a single value at a spatial point.



Well, then the quantity that I am interested in is the ‘integration area’ before the sampling. I am interested in knowing the error resulting from the finite size of this integration area.

I am interested in knowing the error resulting from the finite size of this integration area.

As I’ve written, the integration area (with a digital camera) is mainly defined by the area of a single sensor element, which is normally slightly smaller than the pixel spacing, and one has to know the sensitivity over this area, which usually isn’t perfectly constant. If you have this spatial sensitivity function, its Fourier transform gives you the low-pass filter function. If you prefer convolutions, you may use the spatial sensitivity function directly as the convolution kernel.

A very coarse approximation of the kernel is a square area with a side length of about the pixel spacing.
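To make the square-kernel approximation concrete, here is a small sketch (my own illustration, not anyone’s actual pipeline): a finely sampled array stands in for the analog image in the sensor plane, and each physical pixel averages an n x n block of it, which is exactly convolution with a box kernel followed by sampling at the pixel spacing:

```python
import numpy as np

def box_kernel_sample(analog, n):
    """Average the 'analog' image over n x n blocks (square sensor-element
    kernel with side = pixel spacing), keeping one sample per block."""
    h, w = analog.shape
    trimmed = analog[:h - h % n, :w - w % n]
    return trimmed.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

# fine grid standing in for the continuous image before the sensor
yy, xx = np.mgrid[0:80, 0:80]
analog = np.exp(-((xx - 41.5)**2 + (yy - 38.0)**2) / (2 * 12.0**2))

digital = box_kernel_sample(analog, 8)   # 8 fine cells per physical pixel
print(digital.shape)                     # 80/8 = 10 pixels per side
```

A measured (non-constant) sensitivity function would replace the uniform block average with a weighted one.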

The low-pass filtering is to be applied to the analog image in the sensor plane, i.e. immediately before the sensor.

Good luck


Additional errors occur if the sampling distance (pixel spacing) is too large: it must be equal to or smaller than half the inverse of the frequency limit of your optics (the diffraction limit). If this condition isn’t met, the captured image will suffer from aliasing artifacts that can’t be removed post hoc.

Hi all

The sub-pixel localization in the TrackMate detectors is based on parabolic interpolation that I cherry-picked from ImgLib2.

Several years ago we discussed and tested the expected systematic error on the inferred position.

This technique has a bias, which limits its accuracy to no better than about 1/6 of a pixel.

I think you can find the discussion with the metrics on this forum. Around 4 years ago I think.
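For reference, the parabolic-interpolation step mentioned above amounts to fitting a parabola through the peak sample and its two neighbors along each axis; the vertex of that parabola gives the sub-pixel offset. A minimal 1-D sketch (this is the standard three-point vertex formula, not the actual ImgLib2 code, which works in N dimensions with extra safeguards):

```python
import numpy as np

def parabolic_offset(fm, f0, fp):
    """Sub-pixel offset of a peak from three samples: fit a parabola through
    (-1, fm), (0, f0), (+1, fp); its vertex is 0.5*(fm - fp)/(fm - 2*f0 + fp)."""
    return 0.5 * (fm - fp) / (fm - 2.0 * f0 + fp)

# sample a smooth peak whose true maximum sits at x = 10.3
x = np.arange(21)
f = np.exp(-(x - 10.3)**2 / (2 * 2.0**2))

i = int(np.argmax(f))                          # integer-pixel maximum
dx = parabolic_offset(f[i - 1], f[i], f[i + 1])
print(f"estimated center: {i + dx:.3f}")       # near 10.3, with a small bias
```

Because the underlying profile is Gaussian rather than parabolic, the estimate lands close to, but not exactly at, the true center, which is the systematic bias discussed above.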