Detect polygon shape from satellite image by giving center coordinates?

There is not much to make of the given screenshot. If you have an original-sized example, we can certainly get better results.

If you have more layers available, such as spectral signatures (@eljonco already mentioned a height map, Landsat data and a terrain profile), you could classify the data more accurately.

With the given image data you could also play with the ‘Convolve’ function (after splitting the image into its RGB components) to work out the shape:

run("Split Channels");
selectWindow("Clipboard (blue)");
run("Convolve...", "text1=[-1 -1 -1 -1 -1\n-1 -1 -1 -1 -1\n-1 -1 27 -1 -1\n-1 -1 -1 -1 -1\n-1 -1 -1 -1 -1\n] normalize");

Result:

[result image: convolved blue channel]

From there you can use more filters to clean up and separate the result.
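For example, one possible cleanup chain (not from the post above; the filter radius, threshold and minimum particle size are placeholder values to adjust for your image) could be:

run("Median...", "radius=2");                        // suppress speckle left by the convolution
setAutoThreshold("Default dark");                    // or setThreshold(min, max) with manual limits
run("Convert to Mask");
run("Analyze Particles...", "size=500-Infinity show=Outlines clear");  // keep only the large connected regions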

Thank you so much @eljonco, @LThomas and @Bio7.

@LThomas
By “giving center coordinates” I mean that the center-of-gravity coordinate of the polygon shape is given; in other words, the polygon shape is located at the center of the image. In this case, is it possible to create a method that fills (grows) the polygon shape outward from that center coordinate, using a small pattern? (See the sketch at the end of this post for the kind of thing I have in mind.)

@eljonco
Thanks, I will keep posting results here.

@Bio7
Thanks. I will try it.
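To illustrate what I mean, a rough, untested sketch (cx, cy stand for the known center coordinates and the tolerance of 20 is only a placeholder):

cx = 100; cy = 100;                 // replace with the given center-of-gravity coordinates
doWand(cx, cy, 20, "8-connected");  // grow a selection outward from the center until the gray values differ too much
run("Measure");                     // area, centroid, etc. of the resulting region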

Plugins>Filters>Colour Deconvolution with ‘Vectors From ROI’ and ‘Show Matrices’, selecting the ‘high ground’, the ‘sandpit’ and the ‘tree’ in the top left, then thresholding [220, 255] on colour 2, got me this contour, which might not be perfect but is a start. The Edit>Selection>Convex Hull is just a bit too large.
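As a macro it would look roughly like the lines below. This is only a sketch: the exact option string of the Colour Deconvolution plugin and the names of its output windows differ between versions, so record the real call with Plugins>Macros>Record; “sat.png” is a placeholder title.

// draw an ROI on each reference region first (high ground, sandpit, tree), as described above
run("Colour Deconvolution", "vectors=[From ROI]");   // option string is an assumption; check the recorder output
selectWindow("sat.png-(Colour_2)");                  // assumed name of the second unmixed channel
setThreshold(220, 255);
run("Create Selection");
run("Convex Hull");                                  // Edit>Selection>Convex Hull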

Thank you so much. @eljonco
I will try this.

That is a reflectance image; if colour deconvolution returns some “result” it is by pure chance, and it cannot be scientifically explained. Colour deconvolution expects images of subtractively mixed colours. The colours of the sandpit, tree and grass do not mix subtractively.

Interesting @gabriel. Sincerely: does your comment also hold when in the ‘top field’ oat is grown and in the surrounding terraces rice is grown? I mean, the image might reflect (boom, boom) properties of the crop grown.

If one crop reflects ‘more blue’ and another crop reflects ‘more red’, I thought you could span a 2D plane as the colour vectors of the crops have different coordinates, albeit their main component is along the green axis. Likewise the sand pit has an entirely different vector in colour space.
Am I missing the (scientific) point of colour deconvolution?

It still applies, because the image is not in subtractive colour. Colour deconvolution unmixes “mixed subtractive colours”. Reflected light does not mix subtractively, so no pixel in that image is the result of subtractively mixing other colours.
See here: https://en.wikipedia.org/wiki/Subtractive_color

If you were analysing a printed image, a watercolour or a stained slide, then CD could be used to unmix the inks (if they behaved subtractively), but in the image above that is not the case, so I can’t see that it is the appropriate method to use; it cannot be logically explained.

Hi @gabriel, not begging to differ per se, I would like to fathom the subtle differences in the theory behind this. I am aware of the subtractive (ink on paper filtering out wavelengths when reflecting, dyes in tissue filtering out wavelengths from the incident light) vs additive (pixels on a screen adding certain wavelengths; maybe reflected light is also considered additive?) colour systems.

Am I correct in assuming that you disqualify colour deconvolution as a means to classify crops because you classify a reflectance image as additive instead of subtractive?

As a crop illuminated by white (sun)light absorbs certain wavelengths and reflects others, a crop also acts as a filter, imho, and does not fundamentally differ from a dye in a tissue or ink on paper. It is therefore hard for me to grasp that unmixing can only be done on images originating from a subtractive and not from an additive(?) colour system, or that crops can’t be described in a subtractive colour system.

And just to get my nomenclature correct: what is the name for the (ImageJ) method/command where vectors in one coordinate system (RGB) can be rewritten into an (orthogonal) different coordinate system (e.g. crops, stained tissues), if each crop or tissue has its distinct RGB properties?

My comment was on what was written before… CD will assume that the image pixels are made by a subtractive mix of high ground, sandpit and tree. To me that does not sound quite right, and I bet others can see the problem as well. The Beer-Lambert law https://en.wikipedia.org/wiki/Beer–Lambert_law, and hence subtractive mixing, does not apply to the way the crop image was originally generated. It was not illuminated from the background and the colours (objects) are not “transparent”.
A crop does not act as a filter in the sense of the transmittance implied in the Beer-Lambert law mentioned above.
It might be useful to read the original paper by Ruifrok and Johnston (https://www.researchgate.net/publication/319879820_Quantification_of_histochemical_staining_by_color_deconvolution ) to clarify this.
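To spell out the maths from that paper in a simplified notation (my paraphrase): Beer-Lambert says the transmitted intensity in detection channel c is I_c = I_0,c · 10^(−Σ_s A_s·ε_s,c), so the optical density OD_c = −log10(I_c / I_0,c) = Σ_s A_s·ε_s,c is linear in the stain amounts A_s, and inverting the 3×3 matrix of stain vectors recovers the A_s from the three OD channels. In a reflectance photograph of crops there is no such log-linear relation between pixel value and “amount of crop”, so the unmixed channels have no physical meaning there.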

Thanks @gabriel, I’ll just deduce from your arguments, and wholeheartedly agree, that it will never be possible to obtain a quantitative result from the image as meant in the Beer-Lambert sense. Therefore I’m the first to drop ‘quantitative’ from the exchange of views, if ever it was part of it.

Let’s see if others can chip in on the vector-rewriting part of the (im)possibility to qualitatively classify crops/parts of the image.

At a quick first sight, I think you should look into Machine Learning/Deep Learning approaches.

I think it is a bit of a “mission impossible” but who knows. Maybe infrared imaging provides more contrast.
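For instance, if a near-infrared band were available, even a simple NDVI image, (NIR − Red)/(NIR + Red), often separates vegetation from bare soil far better than RGB alone. A rough ImageJ sketch, assuming the two bands are already open as 32-bit images named “NIR” and “Red” (hypothetical titles):

imageCalculator("Subtract create 32-bit", "NIR", "Red");               // NIR - Red
rename("numerator");
imageCalculator("Add create 32-bit", "NIR", "Red");                    // NIR + Red
rename("denominator");
imageCalculator("Divide create 32-bit", "numerator", "denominator");   // NDVI, in the range [-1, 1]
rename("NDVI");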


This looks pretty good…and simple

Ron

Hi @rondespain

Thank you so much for your solution. Actually I am new to ImageJ.
Can you list the commands step by step?

It looks awesome!

I would not say it is “pretty good”; it does not look like the target region outlined in red in the OP above. The original is a smooth outline which follows a brighter (and discontinuous) line.

open("sat.png");
selectWindow("sat.png");
run("Duplicate...", " ");                 // keep an untouched copy ("sat-1.png") for showing the selection
selectWindow("sat.png");
run("8-bit");                             // the wand works on the grayscale version
// seed the Versatile Wand at the given center coordinates (305, 249); the following numbers are its tolerance settings
call("Versatile_Wand_Tool.doWand", 305, 249, 25.0, 0.0, 1.0, "8-connected include");
selectWindow("sat-1.png");
run("Restore Selection");                 // transfer the selection back onto the original image

The ridge detection plugin also detects the line around the target pretty well.

Ron

@rondespain you are awesome.
This is it.

Thank you so much.

Which solution did you like? The ridge detector, or the magic wand?

Ron

@rondespain
Both solutions are useful for my case.
Thank you so much.