Detect a polygon shape in a satellite image given its center coordinates?

Hi guys,

I have a somewhat difficult satellite-image processing task, but I hope you guys have a solution for me.

  1. My task is to find a polygon shape in the given satellite image and fill it with a solid color.
  2. The given image is below:
  3. If I could find the coordinates traced by the red line, I could fill the polygon easily:
  4. I got the image below by using ImageJ and running the commands below:

setAutoThreshold("Default dark");
setAutoThreshold("Minimum dark");

This is where I am stuck.

Do you have any suggestions for my case?

Thank you.

What do you mean in the title by “by giving center coordinates”?

You won't get much from thresholding here, as the pixel intensity is pretty much uniform over the image.
I would advise something that relies on edge detection, like Sobel or Canny.
In ImageJ, a quick test: Process>Find Edges

You can also split the RGB channels first and perform the edge detection on each channel separately.
But then you would get a lot of extra edges… You can also threshold the edge map.
See Extracting window of facade with watershed algorithm?
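As a rough illustration of that pipeline (split channels, per-channel edge detection, threshold the edge map), here is a plain-NumPy sketch, not ImageJ code; the threshold value is made up, and ImageJ's Find Edges uses its own Sobel implementation:

```python
import numpy as np

# 3x3 Sobel kernels for horizontal and vertical gradients
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def conv2(ch, k):
    # naive 'valid' 3x3 correlation, good enough for a demo
    h, w = ch.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * ch[i:i + h - 2, j:j + w - 2]
    return out

def edge_map(img, thresh=100):
    # img: (H, W, 3) uint8 RGB array; per-channel Sobel magnitude,
    # thresholded and OR-combined into one binary edge map
    h, w, _ = img.shape
    edges = np.zeros((h - 2, w - 2), dtype=bool)
    for c in range(3):
        ch = img[:, :, c].astype(float)
        mag = np.hypot(conv2(ch, KX), conv2(ch, KY))
        edges |= mag > thresh
    return edges
```

Thresholding the magnitude map afterwards is exactly where the "extra edges" trade-off shows up: a low threshold keeps the faint boundary but also the clutter.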

Eventually a machine learning based segmentation like Ilastik, but I am not fully convinced that you can have a fully automated solution.

As a semi-automated solution, active contours could maybe work: you click in the center and the algorithm finds the most homogeneous region around it within a tolerance.
You can play with this tool in ImageJ (right-click the icon to set the tolerance).

As an addition to finding the most homogeneous regions that @LThomas mentions, a shot in the dark: is the particular area available in other images too, differing in some other aspect?
I'm thinking of satellite images of the same spot with a different modality: other wavelengths, maybe a height map (as this image gives the impression of a terraced landscape), water content, vegetation, etc. The suggestion to use different “colours” has come up more than once on this forum…

1 Like

With the given screenshot there is not much to work with. If you have an original-sized example, we will certainly get better results.

If you have more layers available, e.g. spectral signatures (@eljonco already mentioned this: height map, Landsat data, terrain profile), you could classify the data more accurately.

With the given image data you could also play with the ‘Convolve’ function (after splitting the image into its RGB components) to work out the shape:

run("Split Channels");
selectWindow("Clipboard (blue)");
run("Convolve...", "text1=[-1 -1 -1 -1 -1\n-1 -1 -1 -1 -1\n-1 -1 27 -1 -1\n-1 -1 -1 -1 -1\n-1 -1 -1 -1 -1\n] normalize");



From there you can use more filters to clean up and separate the result.
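For reference, here is what that Convolve kernel amounts to, sketched in NumPy (illustrative only): 27 at the centre, -1 everywhere else in a 5x5 window, so the kernel sums to 27 - 24 = 3, and ImageJ's "normalize" option divides by that sum. Flat regions then pass through unchanged while local contrast is strongly amplified.

```python
import numpy as np

# Build the 5x5 kernel used in the Convolve... call above
K = -np.ones((5, 5))
K[2, 2] = 27.0
K /= K.sum()  # "normalize": divide by the kernel sum (3)

def sharpen(img):
    # naive 'valid' 5x5 convolution on a 2-D float array, for illustration
    h, w = img.shape
    out = np.zeros((h - 4, w - 4))
    for i in range(5):
        for j in range(5):
            out += K[i, j] * img[i:i + h - 4, j:j + w - 4]
    return out
```

Because the normalized kernel sums to 1, a uniform image comes back unchanged; only intensity differences against the local neighbourhood get boosted.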

Thank you so much @eljonco, @LThomas and @Bio7.

By “by giving center coordinates” I mean: the center-of-gravity coordinate of the polygon shape is given. In other words, the polygon shape is located at the center of the image. In this case, is it possible to create a method that fills the polygon shape using a small pattern starting from the center coordinate?
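A minimal sketch of what such a "grow from the center" method could look like, in NumPy rather than an ImageJ macro (the tolerance value is a made-up assumption, and real satellite data would need preprocessing first):

```python
import numpy as np
from collections import deque

def grow_from_center(img, seed, tol=20):
    # Seeded region growing: flood-fill every 4-connected neighbour
    # whose intensity is within `tol` of the seed pixel's value.
    # img: 2-D greyscale array; seed: (row, col) centre coordinate.
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# usage: filled = img.copy(); filled[grow_from_center(img, (h // 2, w // 2))] = 255
```

This is essentially what a wand/tolerance tool does; whether it stops at the faint red-line boundary depends entirely on how uniform the region really is.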

Thanks, I will keep posting my results here.

Thanks. I will try it.

Plugins>Filters>Colour Deconvolution, Vectors From ROI, Show Matrices, selecting the ‘high ground’, the ‘sandpit’ and the ‘tree’ in the top left, then thresholding [220,255] on colour 2, got me this contour, which might not be perfect but is a start. The Edit>Selection>Convex Hull is just a bit too large.

Thank you so much. @eljonco
I will try this.

That is a reflectance image; if colour deconvolution returns some “result”, it is by pure chance and cannot be scientifically explained. Colour deconvolution expects subtractive-colour images. The colours of the sandpit, tree and grass do not mix subtractively.

Interesting, @gabriel. Sincerely: does your comment also hold when oat is grown in the ‘top field’ and rice in the surrounding terraces? I mean, the image might reflect (boom, boom) properties of the crop grown.

If one crop reflects ‘more blue’ and another crop reflects ‘more red’, I thought you could span a 2D plane, as the colour vectors of the crops have different coordinates, albeit with their main component along the green axis. Likewise, the sandpit has an entirely different vector in colour space.
Am I missing the (scientific) point of colour deconvolution?

It still applies because the image is not a subtractive-colour image. Colour deconvolution unmixes “mixed subtractive colours”. Reflected light does not mix subtractively, so no pixel in that image is the result of subtractively mixing other colours.
See here:

If you were analysing a printed image, a watercolour or a stained slide, then CD could be used to unmix the inks (if they behaved subtractively), but in the image above that is not the case, so I can't see that it is the appropriate method to use; it cannot be logically explained.

Hi @gabriel, not begging to differ per se, I would like to fathom the subtle differences in the theory behind this. I am aware of the subtractive (ink on paper filtering out wavelengths when reflecting, dyes in tissue filtering out wavelengths from the incident light) vs. additive (pixels on a screen adding certain wavelengths; maybe reflected light is also considered additive?) colour systems.

Am I correct in assuming that you disqualify colour deconvolution as a means to classify crops because you classify a reflectance image as additive instead of subtractive?

As a crop illuminated by white (sun)light absorbs certain wavelengths and reflects others, a crop also acts as a filter, imho, and does not fundamentally differ from a dye in a tissue or ink on paper. It is therefore hard to grasp for me that unmixing can only be done in images that are originating from subtractive and not from an additive(?) colour system, or that crops can’t be described in a subtractive colour system.

And just to get my nomenclature correct: what is the name for the (ImageJ) method/command where vectors in one coordinate system (RGB) can be rewritten into an (orthogonal) different coordinate system (e.g. crops, stained tissues), if each crop or tissue has its distinct RGB properties?

My comment was on what was written before… CD will assume that the image pixels are made by a subtractive mix of high ground, sandpit and tree. To me that does not sound quite right, and I bet others see the problem as well. The Beer-Lambert law, and hence subtractive mixing, does not apply to the way the crop image was originally generated: it was not illuminated from behind and the coloured objects are not “transparent”.
A crop does not act as a filter in the sense of the transmittance implied by the Beer-Lambert law mentioned above.
It might be useful to read the original paper by Ruifrok and Johnston to clarify this.
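For what it's worth, the linearity that colour deconvolution relies on can be shown with a tiny numeric example (the absorbance values are hypothetical): for light transmitted through stacked absorbing layers, the Beer-Lambert absorbances add, which is exactly the linear mixing that CD inverts; reflectance images have no such additive optical density.

```python
import numpy as np

# Beer-Lambert: transmitted intensity I = I0 * 10**(-A), where the
# absorbance A = eps * l * c. For light passing through two stacked
# transmissive layers the absorbances ADD; that linearity is what
# colour deconvolution unmixes. The numbers below are hypothetical.
I0 = 1.0
A1, A2 = 0.3, 0.5                       # absorbances of two "stains"
I_both = I0 * 10 ** (-(A1 + A2))        # one pass through both layers
I_seq = (I0 * 10 ** -A1) * 10 ** -A2    # filtering one after the other
OD = -np.log10(I_both / I0)             # optical density of the "mix"
```

The two intensities are identical, and the optical density of the stack is simply A1 + A2; reflected sunlight off crops gives no comparable quantity to decompose.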

Thanks @gabriel, I'll deduce from your arguments, and wholeheartedly agree, that it will never be possible to obtain a quantitative result from the image as meant in the Beer-Lambert sense. Therefore I'm the first to drop ‘quantitative’ from the exchange of views, if ever it was part of it.

Let’s see if others can chip in on the vector-rewriting part of the (im)possibility to qualitatively classify crops/parts of the image.

At a quick first glance, I think you should look into machine learning / deep learning approaches.

I think it is a bit of a “mission impossible” but who knows. Maybe infrared imaging provides more contrast.

1 Like

This looks pretty good…and simple


Hi @rondespain

Thank you so much for your solution. Actually, I am new to ImageJ.
Can you list the commands step by step?

It looks awesome!

I would not say it is “pretty good”; it does not look like the target region in the OP above with the red outline. The original is a smooth outline which follows a brighter (and discontinuous) line.

run("Duplicate...", " ");
call("Versatile_Wand_Tool.doWand", 305, 249, 25.0, 0.0, 1.0, "8-connected include");
run("Restore Selection");

The ridge detection plugin also detects the line around the target pretty well.