Fast Time Series Acquisition for 10GigE Camera?

Hi Everyone,

We are looking into acquiring a fast time series (>500 fps) over the course of about 10 seconds with a FLIR Oryx 10GigE Camera. We have demonstrated on our system that this is possible without losing frames in the native Spinnaker software. However, we wish to pair this with a few other types of acquisitions (z-stacks at wider FOV) through custom acquisition scripts with pycromanager/MM core commands. I was wondering if anyone has successfully done an acquisition at this speed through micromanager? Here are the issues I have been running into thus far:

To test, I would run an MDA with 10,000 frames at a 0 ms interval, so as to acquire the data as fast as possible (a 10 s time series at 1 ms exposure). Micro-Manager is seemingly able to capture about 6,000 frames, but for the remaining frames it is unable to grab the images from the camera; MM then times out and the acquisition fails.

I am able to do this fine on a camera at the same fps but with a USB 3.0 connection. It does not seem to be an issue with the buffer size, as this acquisition is only ~11 GB in total and the buffer size was set to 50 GB (on a 64 GB RAM computer). I also raised the Java memory under ImageJ > Edit > Options > Memory & Threads to 40 GB. Monitoring during the run, images were never sent to the sequence buffer, and usage peaked at 23% in the memory monitor. I tried this acquisition in pycromanager and ran into similar issues, where the acquisition would freeze and never complete. It is not an issue with where the data is being saved, as there is over 1 TB of free space on the disk.
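As a rough sanity check on where the bottleneck might be, the numbers above (~11 GB over 10,000 frames, targeting >500 fps) imply a data rate well under what 10GigE can carry. A back-of-envelope sketch (the 1250 MB/s line-rate figure is approximate):

```python
# Back-of-envelope check: is the failure plausibly a link-bandwidth limit,
# or a host-side (driver/buffer) problem? Numbers come from the post:
# ~11 GB for 10,000 frames, targeting >500 fps.

def required_throughput_mb_s(frame_bytes: float, fps: float) -> float:
    """Sustained data rate the host must absorb, in MB/s."""
    return frame_bytes * fps / 1e6

TOTAL_BYTES = 11e9   # ~11 GB for the whole series (from the post)
NUM_FRAMES = 10_000
FPS = 500

frame_bytes = TOTAL_BYTES / NUM_FRAMES             # ~1.1 MB per frame
rate = required_throughput_mb_s(frame_bytes, FPS)  # ~550 MB/s

# 10GigE tops out around 1250 MB/s raw, so the camera link itself has
# plenty of headroom, which points at the host side (driver buffers,
# per-frame allocations, display) as the likely bottleneck.
print(f"frame size ~{frame_bytes/1e6:.1f} MB, required rate ~{rate:.0f} MB/s")
```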

For smaller acquisitions (say 500 frames) the acquisition will sometimes complete successfully, though unreliably, and in those cases it does start sending images to the sequence buffer. It is not an issue with the camera or the 10GigE connection, as running the same acquisition in the native software works just fine. I have attached some images of what it looks like when the acquisition fails, and below is the core log:

2020-10-22T11:04:27.574758 tid19272 [IFO,dev:Oryx ORX-10G-51S5M] Spinnaker: Failed waiting for EventData on NEW_BUFFER_DATA event. [-1011]

2020-10-22T11:04:27.574758 tid19272 [IFO,dev:Oryx ORX-10G-51S5M] SeqAcquisition interrupted by the user

2020-10-22T11:04:28.290828 tid19272 [IFO,dev:Oryx ORX-10G-51S5M] Sequence thread exiting

2020-10-22T11:04:28.440560 tid17256 [IFO,App] Acquisition failed.
[ ] java.lang.Exception: Timed out waiting for image to arrive from camera. in Thread[clojure-agent-send-off-pool-0,6,main]
[ ] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
[ ] at sun.reflect.NativeConstructorAccessorImpl.newInstance(
[ ] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
[ ] at java.lang.reflect.Constructor.newInstance(
[ ] at clojure.lang.Reflector.invokeConstructor(
[ ] at org.micromanager.acq_engine$throw_exception.invoke(acq_engine.clj:77)
[ ] at org.micromanager.acq_engine$pop_tagged_image_timeout.invoke(acq_engine.clj:380)
[ ] at org.micromanager.acq_engine$pop_burst_image.invoke(acq_engine.clj:390)
[ ] at org.micromanager.acq_engine$pop_burst_images$fn__1034.invoke(acq_engine.clj:416)
[ ] at org.micromanager.acq_engine$queuify$fn__1027$fn__1028.invoke(acq_engine.clj:407)
[ ] at org.micromanager.acq_engine$queuify$fn__1027.invoke(acq_engine.clj:407)
[ ] at clojure.core$binding_conveyor_fn$fn__3713.invoke(core.clj:1817)
[ ] at
[ ] at
[ ] at java.util.concurrent.ThreadPoolExecutor.runWorker(
[ ] at java.util.concurrent.ThreadPoolExecutor$
[ ] at

2020-10-22T11:04:28.768601 tid17184 [IFO,App] 6883 images stored in 83245 ms.

2020-10-22T11:08:35.377993 tid13912 [IFO,App] EDTHangLogger: Stopping monitoring of EDT hangs

2020-10-22T11:08:35.940350 tid13912 [IFO,dev:COM4] TERM_TIMEOUT error occured!

2020-10-22T11:08:35.940350 tid13912 [ERR,Core] Error occurred in device COM4: Error in device "COM4": (Error message unavailable) (107)

2020-10-22T11:08:36.932530 tid13912 [ERR,Core] Attempt to log message from unregistered device: Destructing MonitoringThread


Thanks for this summary @cfoltz.

I am looping in @henrypinkard, @nicost.
Hi Henry, Nico, the experiment is to acquire a fast time-series, do a z-stack in two channels, move to next position, and repeat.

We would appreciate your input in diagnosing where we are losing data throughput. Our interim idea is to use the camera API (PySpin) for acquiring the data at high speed, and pycro-manager/micro-manager for the rest of the coordination.
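A minimal sketch of the acquisition half of that interim plan, assuming PySpin-style camera calls (`BeginAcquisition`/`GetNextImage`/`Release`/`EndAcquisition`); the `FakeCamera` stub below stands in for a real PySpin camera handle so the loop can be exercised without hardware:

```python
# Sketch: drive the fast time series directly through the vendor SDK's
# Python wrapper while Micro-Manager handles everything else. Method names
# follow the PySpin camera API; FakeCamera is a stand-in for a real camera.

class FakeCamera:
    """Minimal stand-in mimicking the handful of PySpin calls used below."""
    def __init__(self, num_frames):
        self._remaining = num_frames
    def BeginAcquisition(self): pass
    def EndAcquisition(self): pass
    def GetNextImage(self, timeout_ms):
        if self._remaining == 0:
            raise TimeoutError("no more frames")
        self._remaining -= 1
        class _Img:
            def IsIncomplete(self): return False
            def GetNDArray(self): return b"\x00" * 16  # dummy pixel data
            def Release(self): pass
        return _Img()

def acquire_series(cam, num_frames, timeout_ms=1000):
    """Pull num_frames as fast as possible, copying each out before Release."""
    frames = []
    cam.BeginAcquisition()
    try:
        for _ in range(num_frames):
            img = cam.GetNextImage(timeout_ms)
            if not img.IsIncomplete():
                frames.append(img.GetNDArray())  # copy out of the driver buffer
            img.Release()  # hand the buffer back to the driver promptly
    finally:
        cam.EndAcquisition()
    return frames

frames = acquire_series(FakeCamera(100), 100)
print(len(frames))  # 100
```

The key point is releasing each image back to the driver as quickly as possible; anything slow in this loop is what overflows the driver's buffers.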


There is a simultaneous discussion on GitHub, where @marktsuchida provided some ideas/things to test.

Where to best conduct these kind of discussions is often a bit nebulous. I suspect that will get more eyes, and if it is not directly a bug/clear issue in the MM code, this forum may be the better place.

If you think this may be a better avenue for discussion, I am happy to remove the github issue and continue the conversation here.

You choose (it is mainly confusing to have the same issue in different places).

Replying to a point made by @marktsuchida on Github:

This sounds like a tough problem and I only have vague suggestions.

The fact that it works with USB3, together with the error being generated by Spinnaker, suggests that it is something happening at the driver or OS level, whereas the fact that it works with the vendor software suggests otherwise. So one possibility is that there is some long-range interaction going on due to Micro-Manager using more resources than the vendor software.

It might be worth trying with both the sequence buffer and the ImageJ memory set to smaller values (just enough to not run out, and in total less than your physical RAM). These settings, when huge, can affect the layout and management of memory by the operating system in significant ways. (Although very unlikely to be related, it also happens that the Java Virtual Machine tends to perform slightly better when its memory (ImageJ memory) is set to 32 GiB or less.)

Another possibility is that Micro-Manager is using the CPU (or, more likely, memory bandwidth) too much, or simply not retrieving images from the driver fast enough because of doing too much work on the same thread in between, such that the driver’s buffer is overflowing. Maybe disable display of MDA images (Tools > Options) or try running the acquisition from a script using only MMCore methods. This would have to be a subtle effect, though, because Micro-Manager should be doing the same things when using USB3. Nonetheless, it might help to get an idea of the time scale. If you decrease the frame rate instead of the frame count, at what point does the acquisition start to run reliably?
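A bare-bones version of the "only MMCore methods" test might look like the following. The method names match the MMCore API (as exposed by, e.g., pymmcore), but the core object here is a small stub so the drain loop can run anywhere, without a microscope:

```python
# Start a hardware-timed burst and drain the circular buffer as fast as
# possible, with no display or acquisition-engine overhead. The calls
# (startSequenceAcquisition, getRemainingImageCount, popNextImage, ...)
# are MMCore API names; FakeCore is a stub for illustration.

import time

class FakeCore:
    """Stub exposing the few MMCore calls the loop uses."""
    def __init__(self, total):
        self._queued = 0
        self._left = total
    def startSequenceAcquisition(self, n, interval_ms, stop_on_overflow):
        self._queued, self._left = self._left, 0
    def getRemainingImageCount(self):
        return self._queued
    def isSequenceRunning(self):
        return self._queued > 0
    def popNextImage(self):
        self._queued -= 1
        return bytes(8)  # dummy frame
    def stopSequenceAcquisition(self): pass

def run_burst(core, num_frames):
    images = []
    core.startSequenceAcquisition(num_frames, 0, True)
    while core.isSequenceRunning() or core.getRemainingImageCount() > 0:
        if core.getRemainingImageCount() > 0:
            images.append(core.popNextImage())  # drain promptly
        else:
            time.sleep(0.001)  # don't spin the CPU while waiting
    core.stopSequenceAcquisition()
    return images

print(len(run_burst(FakeCore(500), 500)))  # 500
```

If this pattern against the real core still drops frames, the problem is below the acquisition engine (adapter or driver), not in the MDA machinery.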

I have retried the acquisition, lowering the Java-side memory to 20 GB and the sequence buffer to 40 GB, at a variety of frame rates. I also disabled the option that displays MDA images. The acquisition completes at 150 fps with 500 images at 1 ms exposure (~4 GB total). The acquisition fails at 170 fps with 500 images at 1 ms exposure (dropping the pixel depth to 8-bit with the same ROI). I can explore using straight core commands, but this acquisition also failed using the pycromanager MDA.

The reproducibility of this error varies from attempt to attempt: sometimes it can acquire a time series at 300 fps, other attempts only 100 fps. I was able to work around this issue for the time being by using a Python wrapper for the vendor software.

@Elliot_Steele, any ideas?

Hi all,

Not too surprised this isn’t working; the adapters were written for much slower cameras than the 10GigE models.

I’ve had a quick look over the source code, and it’s currently doing some unnecessary allocations and deallocations when frames come in. That could definitely slow it down enough to drop frames, particularly with large images. I’ve also spotted a bug in that area of the code, so I’ll try making a few tweaks. Unfortunately, I don’t have a Point Grey camera at the moment, so I’m going to need someone to test any changes before we merge them.

In the meantime, the only other thing I can think of off the top of my head is to avoid the 12-bit modes (if they’re available). They require some extra processing of images before they can be handed off to micromanager, which could also easily cause missed frames.
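To illustrate the extra per-frame work, here is a sketch of what 12-bit unpacking involves: two 12-bit pixels arrive packed into three bytes and must be expanded to 16-bit before hand-off, which means touching every byte of every frame. The exact byte layout below (low pixel first, Mono12p style) is an assumption for illustration; the pack/unpack pair is at least self-consistent:

```python
# Assumed layout (Mono12p-style, for illustration only):
#   byte0 = a[7:0]
#   byte1 = a[11:8] | b[3:0] << 4
#   byte2 = b[11:4]

def pack12(pixels):
    """Pack pairs of 12-bit values into 3-byte groups (what the camera sends)."""
    out = bytearray()
    for a, b in zip(pixels[0::2], pixels[1::2]):
        out.append(a & 0xFF)
        out.append((a >> 8) | ((b & 0x0F) << 4))
        out.append(b >> 4)
    return bytes(out)

def unpack12(data):
    """Expand 3-byte groups back to plain ints (the per-frame CPU cost)."""
    out = []
    for i in range(0, len(data), 3):
        b0, b1, b2 = data[i:i+3]
        out.append(b0 | ((b1 & 0x0F) << 8))
        out.append((b1 >> 4) | (b2 << 4))
    return out

pixels = [0, 1, 2047, 4095, 123, 3210]
assert unpack12(pack12(pixels)) == pixels
```

At 500+ fps this unpacking pass alone can eat a meaningful slice of the frame interval, which is why the unpacked 8-bit or 16-bit modes are safer for speed.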

Hi Elliot,

I would be happy to test any changes that are made to the device adapter and report back. Thanks for looking into this! We can correspond over email if that’s easier for you.

@Elliot_Steele. We currently work with a modified version of the Spinnaker adapter courtesy of @ieivanov to which he added a few important functions. You can find the code that we work with here:

Ah, that’s awesome. I’ll use the modified version as a starting point for any changes I make. I’ll let you know once I’ve had a chance to play with it

@ieivanov, are you happy to have your changes merged back into the main micromanager branch? They look like really useful additions even if we don’t manage to fix the missing-frame issues.


I would try with a 6GiB sequence buffer if testing with ~4GiB images, just in case. Chances are not high that this will actually help, but it’s good to eliminate the possibility that it has to do with the overhead of allocating and accessing a large block of memory. (It is generally counterproductive to make the buffer larger than necessary, because the operating system then needs to shuffle around memory pages (4KiB blocks) upon first access. If the sequence buffer is larger than the available physical RAM, it may even need to swap to disk, or swap some other important memory to disk, which makes behavior even more unpredictable. The MMCore sequence buffer uses the frame buffers in cyclic order (reusing the most recently used buffers would have been better, at least for the case where the total buffer size is large), so this can have a surprisingly large impact under certain conditions.)

Running straight core commands is perhaps still worth trying, because pycro-manager will have much higher overhead. Also, I would compare the results from identical settings (for things like pixel bit depth) with only exposure changed, although you’ve perhaps already tried that.

@Elliot_Steele I’ll work on merging my changes into the main micromanager branch. In the meantime, you’re welcome to work off of my fork.


@Elliot_Steele I was going to mention that to make the Oryx camera work we’d need to update the Spinnaker SDK version that the device adapter relies on. The instructions on the micro-manager website call for a specific version, which is available through Cairn Research (I’m not sure what their connection here is). Through the FLIR website we can access archived versions, and the version I have been using is there (maybe you asked them to add it to the archive?), but I’m happy to update to a more current version of the SDK if FLIR will keep an archived copy of it for us.

What do you think? I’m happy to test the latest version of the SDK to make sure there are no problems, and then we can ask FLIR to add it to their archive. We’d then update the instructions for the device adapter on the micro-manager website. Does that sound like a good plan?

@Elliot_Steele Did you ever get a chance to take a look at the device adapter? We just finished a round of experiments where we used a combination of the SpinView (FLIR) software + MicroManager, which inevitably created a lot of potential failure points on the user side of the acquisition. It would very much unblock our experiment to be able to do this solely in MicroManager. I have a feeling that this may be a device adapter issue; let me know if you have any more insights. Thanks for helping out!

Hi @cfoltz Sorry for taking a while to get back about this; things started to get a bit hectic in the run-up to Christmas. I finally managed to put aside some time to take a crack at it over the past couple of days.

My updated version is here: it includes the changes from @ieivanov and removes the allocations and a couple of copies that were happening in the acquisition loop, hopefully improving performance. I kept the setup pointing to v1.20 of the SDK to make it easier to integrate back into the main build, but I tested with v2.0 and a Blackfly and everything seemed to work as expected.

As for the SDK versions @ieivanov, SDK updates caused a lot of pain for me when I was originally developing the adapters. When I started, Spinnaker was relatively new, version 1.8 or something like that. At the time they would remove links to the old SDKs as soon as new ones were released (which was happening every month or so), and the updates would contain subtle breaking changes for some but not all cameras (e.g., “AcquisitionFrameRateEnabled” getting changed to “AcquisitionFrameRateEnable”, which you spotted). The reason we settled on v1.20 was that Cairn managed to get in contact with them and convinced them to keep an archived copy of v1.20 available, although it seems that they forgot that conversation… (Incidentally, Cairn’s involvement was twofold: they were sponsoring my Master’s project at the time and were also interested in integrating the Point Grey cameras into some of their systems.) Since all of this happened they’ve published this on their website, which seems to imply that it would be possible to ship some extra DLLs with micromanager and avoid the need for end users to have the SDK installed, although I didn’t have much luck with that when I tried it today. If anyone can work that out, that would be awesome; it would allow the micromanager build to use whatever version it wants and update whenever. As it stands, it’s probably worth updating to v2.0 now anyway, since it seems to work and is actually available (for now…).

@nicost is it easiest for me to submit a pull request on github or are you guys still preferring SVN?


Hi @Elliot_Steele,

It is still svn, then to github. I had a quick look at the FLIR SDK redistribution page and read the legal agreement linked from there. One could argue that MM is an OEM, but even then it would subject itself to several obligations it cannot fulfill. Also, the number of files needed is very large, so I am not sure I would want to include all of these (which also brings with it the chance of DLL clashes with other device adapters, even when the end user does not use a FLIR camera).

I noted on the FLIR website that things are a bit more organized. If we build against the current 2.0 SDK, will that code work with newer 2.0 versions installed on users’ systems? If I understand things correctly, as long as you do not use the C++ SDK that should be the case, but let me know whether or not that would be a path forward.

Hi @nicost,

Ah, that’s a shame, I’d assumed that the license for the redistributables would have been a bit more lenient. I also agree that the number of DLLs you’d need to ship is bizarre.

Unfortunately, it’s using the C++ SDK at the moment, and I imagine it would be non-trivial to change it over to the C one. If I’m remembering correctly, they guarantee compatibility between versions with the same major and minor versions (i.e., the C++ 2.0.x versions are all compatible with one another). The most recent version seems to be 2.2, however, so I doubt that would be compatible. I guess the question is: which do we think they’ll keep archived longest, 2.0 or 2.2?


Hi @Elliot_Steele,

Thanks a ton for looking into this and making the necessary changes! I have built your version successfully using 2.0, but I am running into an error when loading my Oryx camera in MM. It seems to have to do with setting the “AcquisitionFrameRate” property. Can you point me somewhere in the code for a quick fix on this? I have attached the MM error here.

I have been unable to locate this “FloatT.h” file to look further into this issue.

Edit: I get a similar error when trying with our Blackfly Camera:

I was also wondering if this section of code could be explained:

SpinnakerAcquisitionThread::Start(long numImages, double intervalMs)
	MMThreadGuard g1(this->m_stopLock);
	MMThreadGuard g2(this->m_suspendLock);
	m_numImages = numImages;
	m_intervalMs = intervalMs;
	m_imageCounter = 0;
	m_stop = false;
	m_suspend = false;
	m_actualDuration = 0;
	m_startTime = m_spkrCam->GetCurrentMMTime();
	m_lastFrameTime = 0;
	m_spkrCam->allocateImageBuffer(m_spkrCam->GetImageWidth() * m_spkrCam->GetImageHeight() * m_spkrCam->GetImageBytesPerPixel());

	if (numImages == -1)

It is line 1636 of SpinnakerCamera.cpp

In my experience using PySpin for fast time-series acquisitions, the camera AcquisitionMode needs to be set to “Continuous” in order to perform correctly. While this is not directly set by the user in the current MM adapter, it does seem to get changed depending on the numImages variable. Is there a chance that, when I take a time series through an MM MDA acquisition, the camera is not actually in “Continuous” AcquisitionMode? What exactly is this numImages variable?
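For what it’s worth, here is a guess at the kind of mapping the question is probing: -1 treated as “run until stopped” (how MM’s Live view calls the sequence thread) and a positive count as a fixed-length burst (an MDA time series). This is an illustration of the suspected behavior, not a quote of the adapter’s code:

```python
# Hypothetical mapping from MM's numImages argument to a camera
# AcquisitionMode, sketching the behavior the question asks about.
# "MultiFrame" as the fixed-count mode is an assumption; the adapter
# may well use "Continuous" with host-side counting instead.

def acquisition_mode_for(num_images: int) -> str:
    if num_images == -1:
        return "Continuous"   # open-ended: stop only when asked (Live view)
    return "MultiFrame"       # camera stops itself after num_images frames

assert acquisition_mode_for(-1) == "Continuous"
assert acquisition_mode_for(10_000) == "MultiFrame"
```

If the adapter does pick a fixed-count mode for MDA bursts, and that mode caps the frame rate on this camera, forcing Continuous and counting frames on the host side would be the natural fix, and would match the PySpin experience described above.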