# Error on coordinates of blob centers by ImageJ

Dear all,
I am using ImageJ 2.0.0 with the TrackMate plugin to analyze an image of a bunch of blobs (see attached). I use the LogDetector to locate the X and Y coordinates of the center of each blob (see the attached screenshot).

Given that there is an intrinsic resolution limit due to the finite pixel size, there will be an error on the X and Y coordinates that I obtain from ImageJ. Is there a built-in function that would give me this error on X and Y?

If there is no such function, would you know a simple way to estimate this error?

Hi @cm89,

Welcome to the forum.

Algorithms are described exactly by code. Here's the code implementing TrackMate's LoG detector:

And hereās what (I believe) does the local maxima finding:

Algorithms are described approximately (but more succinctly) by math; here's the math that describes the LoG filter.

Of course, thatās just the math for the first part. As @tinevez writes in that window, thereās also the maxima-finding and non-maximal suppression.

John

@bogovicj Thank you for your reply and for the code. I realize that there is a much simpler way to formulate my question.

Indeed, I am simply interested in the errors, or confidence intervals, on the X and Y coordinates that I obtain from ImageJ. I was planning to go through the exact way these are computed to derive my own estimate, but maybe there is already a built-in function that gives the error on X and Y?

If there is no such function, would you know a simple way to estimate this error?

Thank you

There is no built-in method to estimate error that I'm aware of.

The process of finding centers is deterministic with respect to the image it's given, so a "simple" (but computationally expensive) way to estimate error is to generate / simulate images with a given set of blob centers, run them through the detector, and estimate the distribution of the results.

This means youāll need a āfoward modelā though. Also not trivial.

If your ānoiseā / source of variability is simple, you may be able to get a closed-form, since LoG is ājustā convolution. But any noise simple enough to do closed-form is probably not so realistic.

John

I am not sure what you mean by "forward model", but your answer made me think that one may estimate the uncertainty resulting from the finite pixel size with the following resampling method.

Consider the simple case where there is only one bright pixel with a given intensity I, surrounded by black pixels. One could imagine dividing the pixel into four equal parts and drawing the intensities of the four parts at random, with the constraint that their average equals I. For each sample, compute the center with ImageJ; the standard deviation across many samples then gives an estimate of the error resulting from the finite pixel size.
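For what it's worth, the proposed resampling scheme can be sketched in a few lines of Python (a hypothetical illustration: I draw the four sub-pixel intensities from a Dirichlet distribution so that they are non-negative and average to the original value I):

```python
import numpy as np

rng = np.random.default_rng(1)

def subdivide(img):
    """Split each pixel into 2x2 sub-pixels whose mean equals the original value."""
    h, w = img.shape
    out = np.empty((2 * h, 2 * w))
    for i in range(h):
        for j in range(w):
            # four random non-negative weights that average to 1
            w4 = rng.dirichlet(np.ones(4)) * 4.0
            out[2*i:2*i+2, 2*j:2*j+2] = img[i, j] * w4.reshape(2, 2)
    return out

def centroid(img):
    """Intensity-weighted centroid, standing in for the detector."""
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    total = img.sum()
    return (yy * img).sum() / total, (xx * img).sum() / total

# single bright pixel at (5, 5) on a dark background
img = np.zeros((11, 11))
img[5, 5] = 1.0

# back to original pixel units; note a constant quarter-pixel offset from the
# sub-pixel grid geometry, which cancels when comparing samples to each other
samples = np.array([centroid(subdivide(img)) for _ in range(500)]) / 2.0
spread = samples.std(axis=0)   # resampling estimate of the positional error
```

The spread comes out as a fraction of a pixel, which is the order of magnitude one would expect for an uncertainty driven purely by pixel discretization.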

Sorry Sir,

but you can't divide a pixel (not even metaphorically), because a pixel has no spatial extension: it is a point that bears a number, the gray value (three numbers in the case of RGB color). This number is stored in computer memory …

Regards

Herbie

These images have been taken with a microscope, which averages the intensity within an area of (1 micron)^2 and stores the resulting gray value. So the area certainly has a size and is not a point.

The integration area used before the sampling is something completely different. What is stored in memory is the value of a point, and this point has no spatial extension. The integration area amounts to a slight low-pass filtering of the object before the sampling. After the sampling, a pixel has no extension; it is a single value at a spatial point.

Herbie


Well, then the quantity that I am interested in is the "integration area" before the sampling. I am interested in knowing the error resulting from the finite size of this integration area.

> I am interested in knowing the error resulting from the finite size of this integration area.

As Iāve written, the integration area (using a digital camera) is mainly defined by the area of a single sensor element that normally is slightly smaller than the pixel spacing and one has to know the sensitivity over this area that usually isnāt perfectly constant. If you have this spatial sensitivity function, then its Fourier-transform gives you the low-pass filter function. If you prefer convolutions instead, you may use the spatial sensitivity function as the convolution kernel.

A very coarse approximation of the kernel is a square area with a side length of about the pixel spacing.

The low-pass filtering is to be applied to the analog image in the sensor plane, i.e. immediately before the sensor.
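As a sketch of this recipe (my own Python illustration with made-up sizes, using the coarse square-kernel approximation): model the "analog" image on a fine grid, low-pass it with a pixel-sized box kernel, then sample one value per pixel.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Fine grid standing in for the analog image in the sensor plane
# (10 fine-grid samples per sensor pixel, chosen arbitrarily)
fine = np.zeros((200, 200))
fine[95:105, 95:105] = 1.0   # small bright feature

# Coarse approximation of the sensor kernel: a square box the size of one pixel
integrated = uniform_filter(fine, size=10)

# Sampling: keep one value per pixel after the low-pass step
sampled = integrated[5::10, 5::10]   # 20x20 "pixel" image
```

Replacing `uniform_filter` with a convolution against the measured spatial sensitivity function would give the more accurate version Herbie describes.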

Good luck

Herbie

PS:
Additional errors occur if the sampling distance (pixel spacing) is too large. It must be equal to or smaller than half the inverse of the frequency limit of your optics (the diffraction limit). If this condition isn't met, the captured image will suffer from aliasing artifacts that can't be removed post hoc.

Hi all

The sub-pixel localization in the TrackMate detectors is based on parabolic interpolation that I cherry-picked from ImgLib2.

Several years ago we discussed and tested the expected systematic error on the inferred position.

This technique has a bias that makes its accuracy no better than about 1/6 of a pixel.
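For reference, here is the general 1D form of parabolic (quadratic) interpolation around the integer maximum (a sketch of the technique, not the actual ImgLib2 code):

```python
import numpy as np

def parabolic_offset(fm, f0, fp):
    """Sub-pixel offset of the peak of a parabola through (-1, fm), (0, f0), (+1, fp)."""
    denom = fm - 2.0 * f0 + fp
    return 0.5 * (fm - fp) / denom if denom != 0 else 0.0

# Sample a Gaussian peak whose true maximum sits at x = 10.3
x = np.arange(21)
f = np.exp(-(x - 10.3) ** 2 / (2 * 2.0 ** 2))

i = int(np.argmax(f))                              # integer-pixel maximum
x_hat = i + parabolic_offset(f[i - 1], f[i], f[i + 1])
bias = x_hat - 10.3   # systematic error of the parabolic fit at this offset
```

The residual `bias` is the systematic error Jean-Yves mentions: a parabola is only an approximation of the true peak shape, and the mismatch depends on where the true maximum falls within the pixel.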

I think you can find the discussion with the metrics on this forum, from around 4 years ago.

Best
Jy
