Hi, I am trying to extract the windows of a facade (asnad.jpg). To explain a bit more about what I am trying to do: I want something like the watershed-segmented pictures I have sent you. I mean, I want the image to be over-segmented everywhere except those regions (the windows) that I want to extract. (I cropped out a single window and wrote a code to extract it, which worked, as I sent you, but when I apply that code to the entire facade it does not work.) In the next step I want to extract every region by connected component labeling (8-connected) or with a snake contour. Then I will keep the biggest regions, which will be my windows, count them, and calculate their areas. My challenge is achieving a segmentation somewhat like the pictures I have sent. Thank you.
I think WEKA segmentation would be a better fit for you: https://imagej.net/Trainable_Weka_Segmentation
Or segmentation using ilastik: https://www.ilastik.org/
Your different images mostly differ in texture. I think it should be possible to select a good feature set that is as general as possible: https://imagej.net/Trainable_Weka_Segmentation#Training_features_.282D.29
Otherwise you need to go the Deep Learning Route…
Maybe there are already pretrained networks for your problem…
“Otherwise you need to go the Deep Learning Route…
Maybe there are already pretrained networks for your problem…”
If you are looking for a combination of various colors of windows, and partially/fully filled windows (blinds etc.), then I suspect deep learning will really be the only way to go without a massive data set.
If it were only blank windows, I would say an edge filter combined with a standard deviation or other variation-type filter might work well… but you would need enough context to tie those together. And avoiding windows seen in reflections would require special effort.
Thank you so much for your help. Actually, I would prefer to try morphological methods, because I am new to this field and training would be too complicated.
Thank you. As I said, deep learning would be too complicated, and also more than one or two pictures would be needed.
Wait, there are other things to try first! Even though I agree that with such variability between images it might be tough to find a generic method.
But you can try those:
- Edge detection (Sobel, Canny)
- Hough lines https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/hough_lines/hough_lines.html
First, you can split the RGB channels and try some thresholding on the separate channels, using different thresholding methods.
You can also run an edge detection like Sobel (or Canny) to get an edge map. You can run the vertical and horizontal components of the Sobel separately to reduce the response of non-vertical/non-horizontal edges, and then threshold the edge map to recover the strongest edges (possibly your windows).
From that you can try to find lines with the Hough line transform.
You should also give Maximally Stable Extremal Regions (MSER) a try, like illustrated here.
It is available freely in OpenCV if you can do some Python; it will return the areas that are most conserved over a range of thresholding levels.
People have used it to find road signs, or input fields in a form.
The parameters of the function include the stability of the region, but also the min and max size of the region.
Thank you so much. Actually, my field of study is architecture and I am not really familiar with these methods, but I will try to learn them. I want something like the image that I have uploaded; I mean, I want the places of the windows to be determined after the segmentation. I have wasted a lot of time on the watershed algorithm and it did not work, so this time I want to be sure to reach a conclusion. So, are you sure that if I follow these methods I will finally reach a conclusion?
Well, watershed is not really the best for that, indeed.
I can't guarantee it would work; this type of development can take several weeks to get something robust.
Here is an example in Fiji with the single window:
Process > Filters > Mean to reduce noise
Image > Type > 8-bit to convert to a grayscale image
Image > Adjust > Threshold, then pick Moments in the drop-down menu
As a last step you could run Hough line detection to keep only the lines in the image
MSER could work too
Thank you, Laurent. So you recommend the edge approach. But what should I do to remove redundant lines? I mean, when my facade has lots of lines that are not necessary, what should I do to avoid counting them? After reaching this structure, what should I do to extract the windows? Should I label them and then extract each label by its number? Sorry for asking so many questions.
From a binary image with the edges in white and the background in black, you can run a connected component analysis to find individual edges or groups of touching edges; since they are all isolated now, you can loop over each item and remove the smaller ones.
I can't help much more; if you need more advice, the best would be to find someone to discuss it with directly.
Thank you so much for taking the time.