Specifying the number of superpixels in an object

Hi, I am looking for a way to divide simple shapes, such as a circle or a rectangle, into a specified number of superpixels.
I understand we can specify the number of superpixels for an image, but is it possible to specify the number of superpixels in an object within the image? For example, the attached image has an oval-shaped object. I am looking for a way to divide that area into, let's say, 30 superpixels of similar shape and equal area.
Thank you.
[attached image: an oval-shaped object]

Hi Stephen,

welcome to the forum!

I made an attempt at solving your problem: I estimated the area of the object and the expected superpixel size, and then split the object using a regular grid of seeds and a watershed; a rough sketch of the idea is shown below.
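
Something along these lines (a condensed sketch, not the exact notebook code; the file name and the image > 0 mask are placeholder assumptions):

import numpy as np
from skimage import io, measure, segmentation

image = io.imread('object.png', as_gray=True)   # placeholder file name
mask = image > 0                                # binary mask of the object

n_superpixels = 30
area = mask.sum()                               # object area in pixels
step = int(np.sqrt(area / n_superpixels))       # grid spacing giving roughly 30 seeds

# regular grid of seeds, kept only where the mask is True
seeds = np.zeros_like(mask, dtype=int)
seeds[step // 2::step, step // 2::step] = 1
seeds = seeds * mask
markers = measure.label(seeds)                  # one integer label per seed

# flood from the seeds on a flat image, constrained to the object mask
superpixels = segmentation.watershed(
    np.zeros_like(image, dtype=float), markers, mask=mask)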

You can find the entire code at this gist: split_object_superpixels.ipynb · GitHub

I hope that helps,

Guillaume

3 Likes

Dear Guillaume,
Thank you very much for the quick reply, and for the code!
Your answer will work, and going through it gave me some other ideas that will help me move forward. Thank you very much.

2 Likes

One more suggestion: we also added masked-slic to scikit-image in the latest version (0.18). It takes the number of superpixels and a mask as input:

https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_mask_slic.html
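
A minimal example of the call, in case it helps (the example image and the threshold used to build the mask are just placeholders):

from skimage import data, segmentation

image = data.coins()                     # any 2D grayscale example image
mask = image > 100                       # placeholder mask of the region of interest

# superpixels are generated only inside the mask
m_slic = segmentation.slic(image, n_segments=30, mask=mask, start_label=1)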

3 Likes

Thanks @jni, this is much simpler; I hadn't realized one could use SLIC without colors too! @Stephen, I added a section using that approach to the gist notebook.

1 Like

Thank you for sharing the code. I have been trying for some time but could not figure it out, because I only just started (I installed Anaconda less than a week ago).
I am getting a ValueError that says this:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
      3
      4 #image = io.imread("E:/temp/sample02.jpg")
----> 5 m_slic = segmentation.slic(image, n_segments=10, mask=image>0, start_label=1)
      6
      7 fig, ax = plt.subplots(figsize=(10, 6))

C:\ProgramData\Anaconda3\lib\site-packages\skimage\segmentation\slic_superpixels.py in slic(image, n_segments, compactness, max_iter, sigma, spacing, multichannel, convert2lab, enforce_connectivity, min_size_factor, max_size_factor, slic_zero, start_label, mask)
    249         mask = np.ascontiguousarray(mask[np.newaxis, ...])
    250         if mask.shape != image.shape[:3]:
--> 251             raise ValueError("image and mask should have the same shape.")
    252         centroids, steps = _get_mask_centroids(mask, n_segments)
    253         update_centroids = True

ValueError: image and mask should have the same shape.
Below is my code.
from skimage import io, img_as_float, segmentation
import matplotlib.pyplot as plt

# Load the image and convert to a floating-point data type
image = img_as_float(io.imread("E:/temp/sample02.jpg"))

m_slic = segmentation.slic(image, n_segments=10, mask=image>0, start_label=1)

fig, ax = plt.subplots(1, 2, figsize=(10, 6))
ax[0].imshow(image)
ax[1].imshow(m_slic)
plt.tight_layout()

I am trying to figure out where/how to define the mask. I will keep looking, but if you can take a look, please let me know. Thank you in advance.

I changed my image to black and white to reduce complexity while I am learning…
[attached image: sample02]

Here you create a mask in order to generate superpixels only in the area that is interesting to you. In my example the image is just a simple black-and-white 2D array, so image>0 creates the appropriate 2D mask. However, you are importing a JPG file, which gives an RGB image, i.e. a 3D array. So you first need to segment your image to obtain the necessary 2D mask.
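
For example, something along these lines might work (the rgb2gray conversion and the 0.5 threshold are just assumptions about your image; adapt them as needed):

from skimage import io, img_as_float, color, segmentation

image = img_as_float(io.imread("E:/temp/sample02.jpg"))   # RGB, shape (rows, cols, 3)
gray = color.rgb2gray(image)                              # 2D, shape (rows, cols)
mask = gray > 0.5                                         # 2D boolean mask of the object

m_slic = segmentation.slic(image, n_segments=10, mask=mask, start_label=1)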

2 Likes

I got it. Your comment was just what I needed.
Thank you again.


Hi Juan,
I tried several shapes with various numbers of superpixels (the n_segments parameter in slic), and it is not generating the exact number of superpixels in many cases. For example, when I specify n_segments = 672, slic generates only 660 superpixels.
Is there a way to have the exact number of superpixels generated, as specified by n_segments?

Not currently, no. The logic for where the initial points are laid out is specified here:

There could be a future where, instead of initialising on a regular grid (which imposes constraints that mean you can never request a prime number of segments :wink: ), we use e.g. Poisson disk sampling for the initial segment locations. But currently that's not possible.
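
A quick way to see what you actually get is to count the labels produced; the requested n_segments is only a target (the coins image and the number 97 below are arbitrary):

import numpy as np
from skimage import data, segmentation

image = data.coins()
labels = segmentation.slic(image, n_segments=97, start_label=1)   # 97 is prime
print(len(np.unique(labels)))   # often differs from the requested 97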

I tried some different numbers and then tried 672 again, and it is generating 672 superpixels now.
I wonder if there were any changes to the code between yesterday and now; otherwise I am not sure why it created only 660 superpixels half a day ago and is creating 672 superpixels now.
What could be the reason? And will I be able to rely on the result I am seeing now (meaning, can I trust that it will generate exactly n_segments in the future)?
If not, I will need to keep the current code. Do you know if there are instructions for downloading it so that I can keep the current version?

I can only imagine. :joy: Did you update your scikit-image version between yesterday and today? Are you using masked SLIC? Did you change the shape of the input image, or the shape of the mask, or…?

Yes, you should write down the current version, using e.g. pip list | grep scikit-image. You can always install specific versions of packages, for example with pip install scikit-image==0.18.1.

Generally, we follow a two-version deprecation path, meaning that if we change the meaning/output of a particular function (unless it needed to be changed because of a bug), we will raise a warning for two versions and then change the function. So, you should definitely write down your current version and, most importantly, write tests so that you know when something has changed. I wrote a series on how to write tests long ago that you can find here, and more recently Jacob Tomlinson has done a series on creating a scientific Python project from scratch, which includes testing and which you can find here.
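
As a very rough illustration of what such a test could look like (the file path, the way the mask is built, and the expected count of 672 are placeholders to replace with values you have verified on your own data):

import numpy as np
from skimage import io, img_as_float, color, segmentation

def test_superpixel_count():
    image = img_as_float(io.imread("E:/temp/sample02.jpg"))
    mask = color.rgb2gray(image) > 0.5          # however you build your mask
    labels = segmentation.slic(image, n_segments=672, mask=mask, start_label=1)
    # fails loudly if a library update changes the number of superpixels
    assert len(np.unique(labels)) == 672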

Once we release scikit-image 1.0 (should happen later this year), we will guarantee that inputs and outputs won’t change except in the case of bug fixes.

2 Likes

Thank you for posting this interesting question, Stephen. I spent a bit too much time thinking about it last night, and implemented Lloyd’s algorithm for generating approximately equally spaced points across a given area.

[plot: Lloyd-relaxed seed points and their Voronoi tessellation inside the mask]

(I used a slightly different mask here, to see if it worked on concave shapes.)

You can use these seed points as inputs to watershed to generate the superpixels themselves, or you can grab the Voronoi tessellation directly (as shown in the plot). One advantage of this approach is that you can specify an exact number of points.

The code is on GitHub.
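
For reference, a rough sketch of Lloyd relaxation restricted to a mask looks something like this (it is not the actual code from the repository; the random initialisation and the fixed number of iterations are simplifications):

import numpy as np
from skimage import segmentation

def lloyd_points(mask, n_points, n_iter=20, seed=0):
    """Spread n_points approximately evenly over the True pixels of mask."""
    rng = np.random.default_rng(seed)
    coords = np.argwhere(mask)                    # (row, col) of every mask pixel
    points = coords[rng.choice(len(coords), n_points, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign every mask pixel to its nearest point (a discrete Voronoi diagram)
        d2 = ((coords[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
        nearest = np.argmin(d2, axis=1)
        # move each point to the centroid of its cell
        for i in range(n_points):
            cell = coords[nearest == i]
            if len(cell):
                points[i] = cell.mean(axis=0)
    return points

# e.g. turn the relaxed points into watershed markers to get the superpixels:
# points = lloyd_points(mask, 30)
# markers = np.zeros(mask.shape, dtype=int)
# rr, cc = np.round(points).astype(int).T
# markers[rr, cc] = np.arange(1, len(points) + 1)
# superpixels = segmentation.watershed(np.zeros(mask.shape), markers, mask=mask)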

1 Like