OMERO: default queue size

Hi Guys/Girls,

Your website is down and I can’t reference your documentation. Could you please look into it?

Thanks.

Paul

Hi @paulkorir,

many OME services are currently unavailable due to a major power disruption that hit the whole University of Dundee on Saturday. Unfortunately, this is entirely outside our control and affects many internal and external OME services, most notably:

  • the OME website
  • the OME demo server
  • the downloads and documentation for the software
  • the OME artifactory

University IT is working on a resolution and is updating the status at https://www.uodit.info/ periodically. We will also post to this thread as soon as our services are back up and running.

Update: thread now migrated to https://forum.image.sc/t/ome-resources-down-due-to-uod-outage/43957n

I have one question regarding OMERO. What is the server’s default queue size, and how do I change it?

Thanks.

P

Hi @paulkorir.

Since @s.besson has now posted OME Resources Down due to UoD outage, I’ve updated the title of this topic to focus on your question.

By queue size, do you mean the number of threads that generate pixel data in the background? If so, the property is “omero.pixeldata.threads” and it defaults to 2:

omero-model $ grep -B 4 pixeldata.threads src/main/resources/omero-model.properties
# How many pixel pyramids will be generated
# at a single time. The value should typically
# not be set to higher than the number of
# cores on the server machine.
omero.pixeldata.threads=2

though experiences with raising that value have been mixed. What version of OMERO are you using? Also, what data type are you looking to import?
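
If that is the setting you are after, changing it is a one-line configuration change (a sketch using the standard omero config CLI; the value 4 below is only an illustration, so pick a number suited to the cores on your server):

# inspect the current value (empty output means the built-in default of 2 is in use)
omero config get omero.pixeldata.threads

# set a new value, then restart so it takes effect
omero config set omero.pixeldata.threads 4
omero admin restart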

All the best,
~Josh

Hi Josh,

Thanks as always for your quick replies.

By queue size I’m referring to the number of requests that can be kept waiting for the OMERO server. I’m wondering whether this is something you expose or whether it’s assigned dynamically.

I’m using 5.6.2.

Thanks.

Paul

Hi @paulkorir,

I’m still a bit confused about what you are looking for. Are you referring to raw web requests, i.e. how many HTTP GETs or POSTs can be performed against OMERO.web at one time? If so, there are two levels that may require configuration. One is nginx. Can you share your current configuration with us? The other is the number of gunicorn processes that OMERO.web starts. In general, it’s better to start with the former. You can also run multiple OMERO.web instances all talking to the same nginx if that turns out to be the bottleneck.
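
For reference, here is a rough sketch of where those two knobs live on a typical OMERO.web deployment (the worker count and the nginx output path below are only illustrations; adjust them for your setup):

# number of gunicorn worker processes started by OMERO.web
omero config set omero.web.wsgi_workers 9

# regenerate the nginx site configuration, then reload both services
omero web config nginx > /etc/nginx/conf.d/omero-web.conf   # path depends on your distribution
sudo nginx -s reload
omero web restart

The connection limits on the nginx side itself (worker_connections, proxy timeouts, etc.) live in nginx.conf rather than in the OMERO configuration.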

~Josh

Hi Josh,

Sorry to bother you with this.

Here’s my reasoning: OMERO is serving requests for whatever it holds, but I imagine there would be a limit on the number of requests it can handle at any given moment. Underneath OMERO I’ve seen that Ice does have queue limits for TCP buffers, but I thought that at the application layer OMERO might have such limits too. Basically, if I flood OMERO with N requests (where N is quite large), what is the value M < N beyond which requests are dropped (gracefully or not)? That’s what I mean.

What you’re referring to in OMERO.web’s case is a different stack, where the application is nginx (which has its own queue limits), which hands over to Django, which hands over to OMERO.

I don’t know if I’m making any sense…

Regards,

Hi Paul,

Ice handles all the TCP connections for the OMERO backend, so those are one and the same. Within the JVM, the thread pool sizes are likely more critical for what you’re looking at.

It depends on what the tasks are doing and which of a few parameters (N, P, T, etc.) is the smallest. For example, if all of the queries are DB-bound, then the application will hang while trying to acquire a PostgreSQL connection. If there are a large number of short queries, the Ice thread pool is likely the limiting factor.

OMERO tends not to simply drop connections, but to block them until they can be handled. Various timeouts define how long they will wait under each of these conditions.
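
As a quick way to see where a DB-bound workload would saturate, you could compare OMERO’s own connection pool with what PostgreSQL will accept (a sketch; omero.db.poolsize is the server-side pool property, and the psql call assumes a database and role both named omero):

# OMERO's database connection pool (empty output means the default is in use)
omero config get omero.db.poolsize

# what PostgreSQL itself allows concurrently
psql -U omero -d omero -c "SHOW max_connections;"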

You are, and now I understand: you’re thinking about the whole kaboodle. Sounds great, but that’s obviously a longish conversation. So as a starter, here are the limits that I know of (or can remember offhand) from the client to the backend, with a small configuration sketch after the list:

  • nginx connections (along with timeouts, etc.) in nginx.conf
  • django processes (along with timeouts, etc.) in etc/grid
  • icegridnode (etc/grid)
    • thread pools (client & server)
    • max message size in etc/grid
    • session timeout
  • OMERO
    • ice thread pool (client & server)
    • max message size in etc/grid
    • omero config …
      • session timeout
      • max heap size
      • maximum servant usage
      • java thread pools (system, user, background)
  • Postgres (see https://postgresqlco.nf/ for more)
    • max connections (in postgresql.conf)
    • buffer size
    • query timeout
  • operating system open file limit (usually in system defaults or systemd)
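
To make the list above a bit more concrete, here is a rough sketch of where a few of those limits are set (the property names come from the standard OMERO configuration, but the values shown are placeholders rather than recommendations; the Ice thread pool sizes and message sizes mentioned above live in the etc/grid descriptors):

# OMERO server-side properties (set via omero config, then restart the server)
omero config set omero.sessions.timeout 600000             # session timeout, in milliseconds
omero config set omero.throttling.servants_per_session 10000
omero config set omero.jvmcfg.heap_size.blitz 8G           # Blitz JVM heap; see the memory configuration docs for the exact format

# PostgreSQL limits (postgresql.conf)
# max_connections   = 100
# shared_buffers    = 2GB
# statement_timeout = 0

# operating system open file limit, e.g. in the systemd unit for the server
# LimitNOFILE=8192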

~Josh

Thanks, Josh. Those are very helpful to be aware of. I’ll poke around.
