IDR Python API for ROIs?

from idr import connection
conn = connection('idr.openmicroscopy.org', 'public', 'public')

# %matplotlib inline
imageId = 1229801
# Pixels and Channels will be loaded automatically as needed
image = conn.getObject("Image", imageId)
roi_service = conn.getRoiService()
result = roi_service.findByImage(imageId, None)

roi = result.rois[0]
roi_image = roi.getImage() # UnloadedEntityException: Object unloaded:object #0 (::omero::model::Image)
roi_shape = roi.getPrimaryShape()
roi_points = roi_shape.getPoints()

string_points = roi_points.getValue()

Hi all,

Using the idr-py package in Python I have been extracting images from the IDR successfully. As far as I understand, idr-py is a wrapper around the OMERO BlitzGateway pointed at the IDR (please correct me if I’m wrong). Now, firstly, I’m struggling to find much decent documentation on using the idr-py API, and the BlitzGateway documentation I either can’t find or find rather slim (if anyone could help with that I’d be grateful).

With this in mind, I can’t seem to pull ROIs off the IDR in a sensible format (see the code above). Following the obvious trail of get functions, getImage() on an ROI object fails with UnloadedEntityException: Object unloaded:object #0 (::omero::model::Image).
Using getPrimaryShape() leads down a trail where you end up with a string of coordinate pairs. I’m happy to parse this and make my own ROI masks from it, but it feels like I’ve gone the wrong way if that is what I’ll end up doing.

Any advice welcome on how to properly extract ROIs from IDR images.

Best

Craig R

Hi Craig,

You can get coordinates for ROIs as shown at https://docs.openmicroscopy.org/latest/omero/developers/Python.html#rois
(look under the heading “Retrieve ROIs linked to an Image”).
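For example, reusing your conn from above, here is a rough (untested) sketch that walks the Shapes on each ROI and reports what kind they are:

import omero

roi_service = conn.getRoiService()
result = roi_service.findByImage(1229801, None)
for roi in result.rois:
    for shape in roi.copyShapes():
        if isinstance(shape, omero.model.PolygonI):
            print("Polygon:", shape.getPoints().getValue())
        elif isinstance(shape, omero.model.MaskI):
            print("Mask of", len(shape.getBytes()), "packed bytes")
        else:
            print("Other shape type:", shape.__class__.__name__)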

If you’re using Masks, you can get the PNG for a particular Shape ID using
http://idr.openmicroscopy.org/webgateway/render_shape_mask/1301072/
which comes from this image: http://idr.openmicroscopy.org/webclient/?show=image-4496763
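You could fetch and view that PNG in Python with something like this (a quick sketch using the requests library):

import requests
from io import BytesIO
from PIL import Image

url = "http://idr.openmicroscopy.org/webgateway/render_shape_mask/1301072/"
mask_png = Image.open(BytesIO(requests.get(url).content))
mask_png.show()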

The code that generates this mask can be found at https://github.com/ome/omero-web/blob/71bd342b9a554fb23e465b19a86aa132cf1a5bdd/omeroweb/webgateway/views.py#L708 (Python 3).

Hope that helps?

Will

That is helpful, thank you. Is this functionality documented somewhere and I’ve missed it?

Is there no way to retrieve ROI image masks using the Python ROI service?

https://docs.openmicroscopy.org/latest/omero/developers/Python.html#rois
The examples on this page for interacting with ROIs only seem to return strings of coordinates rather than arrays, which seems odd.


I found this notebook which does what I’m trying to do.

These lines convert the string into an array, so I’m guessing that is the expected behaviour:

pts = [int(xx) for x in pts.split(' ') for xx in x.split(',') ]
pts = np.reshape(pts, (len(pts)/2, 2))

These lines need to be:

pts = [int(xx) for x in pts.split(' ') for xx in x.split(',') ]
pts = np.reshape(pts, (int(len(pts)/2), 2))

Hi,

The ROI service doesn’t give you a mask directly. You have to use shape.getBytes(). This gives you the binary mask as a stream of bytes, with each byte packing 8 pixels (1 bit per pixel), so you need to do a bit more work to get the mask.

The link above has some example code https://github.com/ome/omero-web/blob/71bd342b9a554fb23e465b19a86aa132cf1a5bdd/omeroweb/webgateway/views.py#L708 which creates a PIL Image, but you could adapt it to produce a numpy array instead if preferred (see the sketch after the code below).

    import numpy
    from PIL import Image

    fill = (255, 255, 0, 255)  # RGBA colour painted for "on" pixels

    mask_packed = shape.getBytes()
    width = int(shape.getWidth().getValue())
    height = int(shape.getHeight().getValue())
    # convert the packed bytearray into a flat array of bits (one per pixel)
    intarray = numpy.frombuffer(mask_packed, dtype=numpy.uint8)
    binarray = numpy.unpackbits(intarray)
    img = Image.new("RGBA", size=(width, height), color=(0, 0, 0, 0))
    # walk the bits row by row, painting the "on" pixels
    x = 0
    y = 0
    for pix in binarray:
        if pix == 1:
            img.putpixel((x, y), fill)
        x += 1
        if x > width - 1:
            x = 0
            y += 1
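If you’d rather go straight to a numpy array, the same unpacking can be done like this (a rough sketch, untested; unpackbits pads up to a whole number of bytes, hence the slice):

    import numpy

    mask_packed = shape.getBytes()
    width = int(shape.getWidth().getValue())
    height = int(shape.getHeight().getValue())
    bits = numpy.unpackbits(numpy.frombuffer(mask_packed, dtype=numpy.uint8))
    # trim the padding bits, then reshape into rows
    mask = bits[:width * height].reshape(height, width)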

The notebook you linked to does something a bit different. It’s working with Polygons where points are adjacent pixels, and converting to a skimage label https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.label.

But you are correct that len(pts)/2 needs to be cast to an integer in Python 3. The notebook hasn’t been updated from Python 2 yet, where len(pts)/2 used integer division.

The Python page would certainly benefit from having this example code (or we could port the code to the BlitzGateway itself).

Hope that helps,
Will.

A getMask() function on the BlitzGateway would be exceptionally useful. Having the option to return an image of all the masks, where each integer value refers to a different segmentation, would be wonderful too.

This is my solution so far for getting masks:

from idr import connection
import numpy as np
from PIL import Image, ImageDraw
import matplotlib.pyplot as plt

conn = connection('idr.openmicroscopy.org', 'public', 'public')

# %matplotlib inline
imageId = 1229801
# Pixels and Channels will be loaded automatically as needed
image = conn.getObject("Image", imageId)
width = image.getSizeX()
height = image.getSizeY()
roi_service = conn.getRoiService()
result = roi_service.findByImage(imageId, None)

roi = result.rois[0]
roi_shape = roi.getPrimaryShape()
roi_points = roi_shape.getPoints()

# parse the "x,y x,y ..." string into an (N, 2) array of coordinates
pts = roi_points.getValue()
pts = [int(xx) for x in pts.split(' ') for xx in x.split(',')]
pts = np.reshape(pts, (int(len(pts)/2), 2))

plt.scatter(pts[:, 0], pts[:, 1])
plt.show()

# rasterise the polygon into a binary mask
polygon = tuple(map(tuple, pts))
img = Image.new('L', (width, height), 0)
ImageDraw.Draw(img).polygon(polygon, outline=1, fill=1)
mask = np.array(img)
plt.imshow(mask)
plt.show()
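As an aside, here is a rough (untested) sketch of the integer-label idea from above, assuming every ROI on this image is a single Polygon: draw each polygon into one 32-bit integer ('I' mode) PIL image with a different fill value, so pixel value n marks ROI n.

label_img = Image.new('I', (width, height), 0)
label_draw = ImageDraw.Draw(label_img)
for idx, roi in enumerate(result.rois, start=1):
    pts = roi.getPrimaryShape().getPoints().getValue()
    poly = [tuple(int(p) for p in pair.split(',')) for pair in pts.split(' ')]
    label_draw.polygon(poly, outline=idx, fill=idx)
labels = np.array(label_img)  # 0 = background, n = pixels belonging to ROI n
plt.imshow(labels)
plt.show()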

Ah, OK. So that image in IDR doesn’t actually have any Masks (omero.model.Mask objects). They are Polygons. For example, you can see that in the ROI/Shapes JSON for that image: http://idr.openmicroscopy.org/api/v0/m/rois/?image=1229801
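For example, a quick sketch with the requests library (assuming the usual OMERO JSON API layout, where ROIs sit under "data" and each shape carries an "@type" field):

import requests

rois_url = "http://idr.openmicroscopy.org/api/v0/m/rois/?image=1229801"
rois_json = requests.get(rois_url).json()
for roi in rois_json.get("data", []):
    for shape in roi.get("shapes", []):
        print(roi["@id"], shape["@type"])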

An example Image that has Masks is http://idr.openmicroscopy.org/webclient/img_detail/4496763/

So, your code is really just custom Polygon processing which doesn’t really belong in the BlitzGateway.

The simplified code below (no numpy, matplotlib etc) gives you the attached image:

from omero.gateway import BlitzGateway
from PIL import Image, ImageDraw
from random import random
import omero

c = omero.client(host='idr.openmicroscopy.org', port=4064)
c.enableKeepAlive(300)
c.createSession('public', 'public')
conn = BlitzGateway(client_obj=c)

imageId = 1229801
image = conn.getObject("Image", imageId)
width = image.getSizeX()
height = image.getSizeY()
roi_service = conn.getRoiService()
result = roi_service.findByImage(imageId, None)

img = Image.new('RGB', (width, height), (0,0,0))
draw = ImageDraw.Draw(img)

def r():
    return int(random() * 256)

# draw each ROI's primary shape (a Polygon for this image) in a random colour
for roi in result.rois:
    roi_shape = roi.getPrimaryShape()
    pts = roi_shape.getPoints().getValue()
    points = []
    for point in pts.split(' '):
        points.append(tuple([int(p) for p in point.split(',')]))
    color = (r(), r(), r())
    draw.polygon(points, outline=color, fill=color)

img.show()

Will.