Segmentation of ambiguous grey area - similar looking areas

Hello everyone. I am a new user of ImageJ/Fiji, so sorry if this question sounds silly. I am having trouble trying to segment and detect the edges of the area marked in blue and was wondering if anybody had any advice. The problem I keep having is that the orange area keeps being included when I segment the blue area, which is what I want to detect the edges of. I have tried the Trainable Weka Segmentation plugin with the default settings, but the classifier it generates only works for some images of this type and not for others. Is there a way for ImageJ to segment images of this type accurately? Any help would be appreciated!

It looks like features that detect blur would pick up both regions.

I suspect the easiest way to deal with this would be to change the input data. In other words, if the orange region is always in the lower 1/4 of the image, and the blue region does not reach the orange region, cut off the lower 1/4 of the image before analyzing.

If you have too much variation in your images, this won’t be an option, but controlling the inputs is often necessary for clean automation downstream.
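
If cropping is an option, here is a minimal macro sketch of that idea (the 0.75 fraction is just an assumption; adjust it to where the orange region starts in your images):

// Keep only the upper three quarters of the image before analysis
// (the 0.75 fraction is an assumption, adjust to your data)
w = getWidth();
h = getHeight();
makeRectangle(0, 0, w, floor(h * 0.75));
run("Crop");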

Without an original file, others will not be able to test anything on your image. That is why the initial text recommends including an original image.

Sample image and/or code

  • Upload an original image file here directly or share via a link to a file-sharing site (such as Dropbox) – (make sure however that you are allowed to share the image data publicly under the conditions of this forum).
  • Share a minimal working example of your macro code.

@Rachel1
As indicated by @Research_Associate, it seems to be possible, but only with an original image.

Uploading: Bain de fusion-2.jpg…
Let us know if the red zone is exactly what you want.


Hello MicroscopyRA,
Thank you so much for your response and help. I am quite new here, so I am very sorry for not including the original images; I have included them below now. I do not think changing the input data is possible, because in several of my images the two regions overlap, and I have included those images below as well.

I have hundreds of images like these and I am looking to do the same thing to all of them. Any further help would be appreciated!
Thank you,
Rachel


Hello Matthew,
Thank you for your response and help. The red zone is exactly what I am looking for. Since I am new here, I forgot to add the original images, and I am sorry for that. I have replied to MicroscopyRA with some of the original images, and any further help would be really appreciated.
Many Thanks,
Rachel

Not exactly on topic, but my initial thought is that these look like frames from a movie. Is there any chance you could subtract the first frame of the movie (without a melt) from all subsequent images? Were they taken with the same settings?

Or perhaps you could subtract subsequent frames from each other.
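
A minimal macro sketch of the subtraction idea, assuming the movie is open as the active stack (the "background" title is just a placeholder):

// Keep frame 1 (pre-melt) as a background image, then subtract it from every frame
stack = getTitle();
run("Duplicate...", "title=background duplicate range=1-1");
imageCalculator("Subtract create 32-bit stack", stack, "background");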

If the bottom of these images were more in focus, it might be fairly easy to segment out the blurry area where the melt is. But that would require taking the initial image entirely in focus.

Hi MicroscopyRA,
Thanks so much for your help. The images are indeed frames from a movie, and I have not tried subtracting them from each other. Subtracting from the initial frame might work well, but I am not sure if it has good focus. Would the initial frame below work well if I used it to subtract from the other images? Sorry if I am being silly!

Many thanks
Rachel


@Rachel1

If not …

Uploading: Bain de fusion_1-2.jpg…

run("Duplicate...", " ");
run("Invert");
run("Multiply...", "value=1.650");
run("Statistical Region Merging", "q=25 showaverages");
setAutoThreshold("Percentile");
//run("Threshold...");
setOption("BlackBackground", false);
run("Convert to Mask");
run("Analyze Particles...", "size=50000-Infinity display add");

It may be necessary to play with the parameters of
run("Statistical Region Merging", "q=25 showaverages");
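
If you want to compare settings quickly, a hypothetical sketch that runs the plugin with several q values on fresh duplicates:

// Try a few q values for Statistical Region Merging (values chosen arbitrarily)
qValues = newArray(15, 25, 50, 100);
original = getTitle();
for (i = 0; i < qValues.length; i++) {
    selectWindow(original);
    run("Duplicate...", "title=q" + qValues[i]);
    run("Statistical Region Merging", "q=" + qValues[i] + " showaverages");
}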


I would say… definitely maybe.


Quick subtraction got that result, which looks like it might be easier to segment. Of course, all of the new bright spots also show up (the vertical lines on the left), so it might be better to use the last frame instead.

How effective either of those options is depends on how consistent the imaging conditions are, and on the encoding. I think the posted images are JPEG, which is bad for a variety of reasons. In the end, whether it is sufficient will depend on your experiment.

And @Mathew’s post looks about as good as you might expect for an individual image. If it generalizes to your whole data set, that is probably faster.


Hi Matthew,
Thank you so much for your help. I am going to try this out and see how it goes. I really appreciate the help and hope it works for my other images.

Many Thanks
Rachel


Hi MicroscopyRA.
I am trying to run Matthew’s code at the moment to see how it goes. Thank you for the help so far.

Many Thanks,
Rachel
