Threshold strategy when you can't use an algorithm

I’m working on images that need to be thresholded, and I’ve tried many things, but it seems I can’t use any of the automated algorithms. Any idea what I should do?

In order to treat every image the same way, I was thinking of determining the ideal threshold for each image and then using the average of all the previously determined values to actually process my images. Recently, several researchers told me they had no problem with using a different threshold value for each image. So I would be very glad to have your opinion and perhaps a solution to this question.

(Sorry if I’m the thousandth person to ask this question, but everywhere I look, people with this problem seem to find the perfect algorithm for their images.)

Often you cannot fix a problem at the image analysis stage. The images have to be taken in such a way as to support automated analysis (the requirements for this are much more stringent), which means the samples have to be prepared evenly, etc. If you see significant variation, it may come from the way the images were acquired.

That said, you can try various methods of background subtraction or normalization on your images prior to analysis, which can help. Though normalizing your channel of interest in a fluorescent experiment can cause major problems if, for example, you normalize your negative control to the rest.
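As a rough illustration of per-image background subtraction (a Python/NumPy sketch rather than an ImageJ command, using a hypothetical low-percentile heuristic for the background level, which is my own stand-in and not any standard method):

```python
import numpy as np

def subtract_flat_background(img, percentile=10):
    # Hypothetical heuristic: estimate a flat background level as a low
    # percentile of the pixel intensities, then subtract and clip at zero.
    bg = np.percentile(img, percentile)
    return np.clip(img - bg, 0, None)

# toy 3x3 "image": mostly dim background with a few bright pixels
img = np.array([[10., 12., 50.],
                [11., 60., 13.],
                [55., 12., 11.]])
corrected = subtract_flat_background(img)
```

This only handles a flat offset; uneven illumination needs a spatially varying background estimate instead.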

I’m not sure any intensity values are meaningful after significant processing, but you can often do some thresholding. It would probably help to provide some images or more details if you want further help as “images that need to be thresholded” is very vague.


Hello @MmePourquoi ,

fortunately, you’re not the first who has to segment images. There is useful information and there are guidelines here:

In the first document, you’ll find the exact answer to your question: no, the threshold value doesn’t have to be the same across a series of images. But your problem may be a little broader than that.
As @Research_Associate suggested, a few examples will help.


Thank you for your answers.

Basically, I need to threshold my images because I have to run Analyze Particles in order to measure the “white area” in the zone I selected in the images (10x). These are fixed organs (Drosophila wing imaginal discs) with variability in shape and thickness, and I don’t really see how I could improve the imaging step.
I’ve often been told that background subtraction methods (such as “Subtract Background”) produce good-looking images but introduce bias, so the only pre-treatment I do is a median filter with a radius of 1.
The thing is that every time there is a perfect algorithm, but it’s not always the same one…

What I’ve been told is that applying the same threshold to every image dilutes the experimenter’s bias created when choosing each image’s ideal threshold, whereas applying a different threshold adds experimenter’s bias for each image (I don’t know if that’s really clear…).

@Nicolas I briefly looked at the PDF you shared (thank you). The point against using the same threshold for every image is “fluctuation in intensity across images”. Indeed, I was recently told that the light source might be unstable, but the person in charge of the microscopy platform told me their lasers were stable, and that if such a fluctuation happened, it would be large enough to be easily noticed…

Tell me if you need more details.

How did you define the yellow shape (I’m absolutely not in biology)? If it has to be done automatically, I wish you good luck…
You’re right, choosing a manual threshold always introduces bias from the experimenter. That’s why automatic thresholding methods have been developed, like Otsu, Huang, etc. They will give different values depending on the image, but always following a sound mathematical formulation.
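For reference, Otsu’s method is simple enough to sketch from scratch. A minimal NumPy version, just to show the mathematical idea (maximize between-class variance over the histogram), not the exact ImageJ implementation:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu: pick the histogram bin centre that maximizes
    the between-class variance of the two resulting classes."""
    hist, edges = np.histogram(img, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)              # pixels at or below each candidate
    w1 = w0[-1] - w0                  # pixels above it
    csum = np.cumsum(hist * centers)
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = csum / w0               # mean of the lower class
        mu1 = (csum[-1] - csum) / w1  # mean of the upper class
        between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.nanargmax(between)]

# a toy bimodal "image": dark background plus bright foreground
bimodal = np.concatenate([np.full(100, 10.0), np.full(100, 200.0)])
t = otsu_threshold(bimodal)
```

The key property is exactly what was said above: the value varies per image, but the rule for choosing it does not.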
The white zones look quite easy to segment. Unfortunately, I have very limited time, so I cannot write a macro right now. But it should probably contain:

  • background correction (may be optional)
  • denoising (a median filter with radius 1 or 2 is fine for that)
  • automatic thresholding (Max Entropy seems to give good results)
  • cleaning out the small particles

As far as I remember, all of this can be applied to a stack of images.

If that doesn’t make much sense to you, and you don’t want to spend too much time learning everything, I suggest you give the Weka segmentation tool a try. It should work great on these examples. Just feed it various examples of your spots and of the background, train the classifier, and apply it to other images.


I define the zone of interest thanks to a staining on another channel.

I did a macro to try on every image: Max Entropy, Yen, Intermodes and Moments separately, plus the mean of the four values. Unfortunately, most of the time the result was very different from what I had defined as the ideal threshold, because the algorithms were frequently perturbed by something I would call false signal…

However, in doing so, the only background correction I applied was a median filter with a radius of one. So if you have a robust background correction technique, I’m all ears.


Actually, the median filter is not for removing background but noise (i.e. small-scale random variations, whereas background refers to large-scale variations).
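A tiny NumPy illustration of that distinction: a median filter flattens a single-pixel spike (noise) but leaves a slow intensity ramp (background) essentially untouched:

```python
import numpy as np

# slowly varying "background": an intensity ramp across an 8x8 image
ramp = np.tile(np.linspace(0.0, 50.0, 8), (8, 1))
img = ramp.copy()
img[4, 4] += 100.0   # "noise": one single-pixel outlier

# value a radius-1 median filter would assign at the noisy pixel:
median_at_spike = np.median(img[3:6, 3:6])
# it matches the underlying ramp value there, i.e. the spike is gone,
# while the ramp itself would pass through the filter almost unchanged
ramp_value_there = ramp[4, 4]
```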
I don’t think background is a problem in your images, but if you want to try to correct it, I usually use the Pseudo flat field correction from the BioVoxxel toolbox.
For the threshold, remember you are not looking for a particular value, but rather a robust algorithm. The exact value will vary more or less from image to image, even if they were acquired under the same conditions.
But give Weka a try; you may be surprised at how easy and robust it is.

Wouldn’t it be more logical to post those images (too), then?

If there is logical reasoning behind ‘the ideal threshold’, such as the number of particles recognised, then you could try all methods in a macro and decide on the chosen one based on that reasoning. So, what is your ‘ideal threshold’ reasoning, given the other channel image?
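That idea can be scripted. A hedged sketch in Python (simple stand-in threshold methods rather than ImageJ’s real Max Entropy/Yen/etc., and a selection criterion, matching the foreground fraction expected from the other channel, that is only one example of such ‘reasoning’):

```python
import numpy as np

def mean_threshold(img):
    return img.mean()

def median_threshold(img):
    return float(np.median(img))

def isodata_threshold(img):
    # simplified iterative intermeans (IsoData-style)
    t = img.mean()
    for _ in range(100):
        lo, hi = img[img <= t], img[img > t]
        if lo.size == 0 or hi.size == 0:
            break
        new_t = (lo.mean() + hi.mean()) / 2
        if abs(new_t - t) < 1e-6:
            break
        t = new_t
    return t

def pick_threshold(img, expected_fraction):
    """Try every method and keep the one whose foreground fraction is
    closest to the fraction expected from the reference channel
    (a hypothetical criterion, just to illustrate the approach)."""
    methods = {"mean": mean_threshold,
               "median": median_threshold,
               "isodata": isodata_threshold}
    name, fn = min(methods.items(),
                   key=lambda kv: abs((img > kv[1](img)).mean()
                                      - expected_fraction))
    return name, fn(img)

# toy data: 80% dim background, 20% bright signal
img = np.concatenate([np.full(80, 10.0), np.full(20, 200.0)])
name, t = pick_threshold(img, expected_fraction=0.2)
```

The same loop structure works in an ImageJ macro with `setAutoThreshold` and the measured particle count as the score.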