PYME camera ROI changes via protocol and camera maps

Hi PYME developers,

I have a question motivated by the desire to make acquiring camera maps near-trivial.
For that purpose I would like the ability to change the camera ROI via protocol during an acquisition. Is this currently possible?

How is this related to camera maps?

In my experience, taking camera maps is currently too hard. They need to be acquired as full-chip frames, which is either not possible or slows the system down unacceptably at the frame rates we want to characterise. So nobody does it…

The thought is to make a protocol that tiles over the chip by taking frames in small ROIs which are advanced over the whole chip in one sequence. The analysis routine then uses the events/metadata in the sequence to reconstruct the full-chip maps.
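To make the idea concrete, here is a rough sketch (plain Python, not PYME API - the function name and sizes are purely illustrative) of how fixed-size, overlapping ROIs could be laid out over the chip:

```python
def tile_rois(chip_w, chip_h, roi_w, roi_h, overlap=0):
    """Generate fixed-size ROIs [x0, y0, x1, y1] covering the chip.

    ROIs advance by (size - overlap); a final, shifted tile is added
    at each edge so that every ROI has identical dimensions (which the
    spoolers require) while still covering the whole chip.
    """
    def starts(chip, size, step):
        s = list(range(0, max(chip - size, 0) + 1, step))
        if s[-1] + size < chip:  # chip not evenly divisible: shift last tile back
            s.append(chip - size)
        return s

    step_x, step_y = roi_w - overlap, roi_h - overlap
    return [[x, y, x + roi_w, y + roi_h]
            for y in starts(chip_h, roi_h, step_y)
            for x in starts(chip_w, roi_w, step_x)]

# e.g. 256x256 ROIs with 32 px overlap on a 2048x2048 chip
rois = tile_rois(2048, 2048, 256, 256, overlap=32)
```

Each entry could then be fed to `scope.state.update({'Camera.ROI': roi})` in turn, with the overlap later used to discard edge rows in the analysis.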

If there is a better solution I’d be keen to hear about it.

Many thanks,

You can, using

scope.state.update({'Camera.ROI' : [x0, y0, x1, y1]})

in a protocol task. This is, however, very much a case of here be dragons (on multiple fronts).

If you do this you have to ensure that the size of the ROI doesn’t change (the spoolers assume that frame size remains constant throughout the series). Moving frames back where they belong is also going to be super hacky due to the (general) lack of hard synchronisation in protocols (you can mitigate this somewhat on the sCMOS cameras by using a triggered acquisition - see the tile_triggered protocol for inspiration). You will also potentially run into edge effects with the ROIs (the edge rows of an ROI tend to have aberrant noise behaviour). To date, the advice has been to settle on a fixed ROI for high speed imaging and pad the remaining region of the sensor.

Many thanks. I see the issues, but I can see ways to work around them (overlapping tiles of identical size; leaving an edge area that is not evaluated, taken care of by sufficient overlap; ignoring frames within a suitable time window around the ROI switch; etc.).
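For the edge-area workaround, something like the following (an illustrative numpy sketch, not an existing PYME routine) could give each ROI a weight mask that zeroes out the suspect border rows/columns, relying on tile overlap to cover the excluded pixels:

```python
import numpy as np

def roi_weight_mask(roi_w, roi_h, edge=4):
    """Weight mask for one ROI: 0 in an `edge`-pixel border (where rows
    tend to show aberrant noise behaviour), 1 in the interior. With
    sufficient tile overlap every chip pixel is covered by at least
    one ROI interior."""
    m = np.zeros((roi_h, roi_w), dtype=float)
    m[edge:roi_h - edge, edge:roi_w - edge] = 1.0
    return m
```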

But is there a better way, i.e. a way to run something akin to a macro, that takes several series and does the tiling in an “outer loop” that iterates over individual sequences? This would take several of the issues you describe out of the picture. I seem to recall there was some notion of tasks or similar that may be able to accomplish this?

Whatever the answer, I’d maintain that the current ways of taking maps (e.g. just for a ROI) are too awkward; there should be a better way if we are serious about it. This is not a complaint, just an observation, and I would like to help get to something better than the status quo.

Hi Christian,

I would love the camera maps to be one-click to acquire/install.

Following up on David’s comment, the protocols are sufficiently flexible that you can do pretty much anything in them, but a large protocol is maybe not the cleanest solution.

On the protocol side, it would probably require software triggering for the camera (see `PointScanner` and the `tile_triggered` protocol for an example), some interaction with the spool controller/frame wrangler if the ROI size changes and you need to save to different files (/table nodes), and it will require untangling the events on the backend, as you mention.

In line with your ‘outer-loop’ thought, this is something I’ve been working on lately. I just posted this PR so you can have a look if you like - I’ll add some description/maybe some screenshots to it at some point.

We have a recipe module for creating dark/variance maps (and could add one for flatfields). Soon you’ll be able to launch arbitrarily long chains of localization/recipe tasks to the cluster from PYMEAcquire (e.g. run a recipe, then localize, then run another recipe). While there’s a ‘default’ analysis chain, you can also associate a protocol with an analysis chain. My thought would be: add a PYMEAcquire menu option which queues tasks to the action manager, interweaving camera-setting changes with acquisitions of dark/variance and flatfield protocols. Those protocols could be paired with chained recipes that generate the respective maps and save them (using the ImageOutput module). A user could then click the menu option and everything would be taken care of using only simple recipes / acquisition protocols.

Many thanks @DavidBaddeley and @barentine for the clear info! I am pleased to see how well the forum with the pyme tag seems to work for these kinds of questions.

Following up on Andrew’s comment, you could use scope.actions.QueueAction() to line up interleaved ROI changes and acquisitions. On the downstream side there are still a couple of things missing in the recipe modules - namely a way to output a mask of the valid region within the camera map, and an easy way to combine the maps from the individual ROIs into something useful. This is probably just a weighted sum of the individual maps - with some trickery you might be able to use one of the aggregate recipe endpoints to drop all the maps as frames into one HDF file, giving you a couple of pseudo z-stacks (for the map and its validity) which you could multiply and sum-project.
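As an illustration of the weighted-sum idea (plain numpy, not an actual recipe module - names are made up), combining per-ROI maps with edge exclusion and a validity mask could look like:

```python
import numpy as np

def combine_maps(chip_shape, maps, rois, edge=4):
    """Combine per-ROI maps into a full-chip map by weighted averaging.

    maps: list of 2D arrays; rois: matching [x0, y0, x1, y1] entries.
    Pixels within `edge` of an ROI border get zero weight; the final
    map is sum(weight * map) / sum(weight) over all covering ROIs, and
    `valid` marks pixels covered by at least one ROI interior.
    """
    acc = np.zeros(chip_shape, dtype=float)
    wsum = np.zeros(chip_shape, dtype=float)
    for m, (x0, y0, x1, y1) in zip(maps, rois):
        w = np.zeros_like(m, dtype=float)
        w[edge:m.shape[0] - edge, edge:m.shape[1] - edge] = 1.0
        acc[y0:y1, x0:x1] += w * m
        wsum[y0:y1, x0:x1] += w
    valid = wsum > 0
    out = np.where(valid, acc / np.where(valid, wsum, 1.0), np.nan)
    return out, valid
```

This is essentially the multiply-and-sum-project step on the pseudo z-stacks, done explicitly.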

Good points David.

I guess I mostly have in mind the CMOS readout case, where you can just queue an acquisition for each feasible readout speed and each possible vertical (or horizontal, whichever) ROI, with full-chip in the other direction. For flatfield maps I suppose that assumes you can illuminate the whole chip, and for CCDs… maybe you need to crop around.