Device adapter communication through pipe

Hello everyone!

We are considering using Micro-Manager for controlling our microscope. I'm in the process of learning how to write the device adapter. So far I've read all the guides and set up Visual Studio 2010 per the instructions for writing device adapters. I've also looked through dozens of existing device adapters. From this I've come to understand that all of them communicate through a serial port using the built-in MMCore functions.

We, however, have a situation where our microscope control board's FPGA receives its commands through a terminal application. We thought it would be straightforward to write another application that writes directly to that terminal application through a pipe. The device adapter could then pipe commands to this application as strings. For example, snap_image from the MM_Device_Adapter would be converted into a set of commands for the terminal to write to the FPGA.

I'm at a very novice level with C++, so I was hoping that by looking at the examples I could easily implement this kind of functionality in the device adapter.

My question is: does this sound like something that is at all easy to implement in a device adapter? We also thought that, for the image data, the binaries could be converted to an image and stored at some file location, and MM could then check that location and load the image from there when it appears.

I would greatly appreciate any input and opinions on whether this sounds reasonable at all.
Thanks a lot if someone can help!

Can you describe the FPGA communication pipeline a bit more? How is the FPGA physically connected to the computer? How do the terminal commands you type end up in the FPGA?

I assume that you are doing this on Linux (please correct me if I am wrong). The FPGA may appear as a device (i.e. /dev/fpga1 or something like that) that you can write to and read from. If so, you can do that directly from your device adapter (there is no need to use the MMCore functions to communicate through a port; those are there to make things easier for you and to help make your code cross-platform). If that is indeed how things work, I would also grab the images directly rather than going through the file system, but I am making a lot of unwarranted assumptions about your system.
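Just to illustrate what I mean (purely hypothetical: the device path, the command, and the assumption of a simple line-based text protocol are all made up):

#include <fstream>
#include <string>

// Hypothetical: open the device node the driver exposes and talk to it directly
bool SendCommand(const std::string& command, std::string& reply)
{
    std::fstream fpga("/dev/fpga1", std::ios::in | std::ios::out);
    if (!fpga.is_open())
        return false;
    fpga << command << "\n";       // send a command string
    fpga.flush();
    std::getline(fpga, reply);     // read the device's response
    return true;
}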

We are trying to make this work in Windows. The connection goes through an Opal Kelly FPGA integration module sitting between the computer and the microscope, with the microscope having its own FPGA. The Opal Kelly module is used in between so that the microscope's internal design can be kept simpler and more compact, and it handles some data conversion. The Opal Kelly is connected to the computer over USB. The intended workflow is that you enter parameters and find the area of interest, then disconnect the device from the computer and leave the microscope making its measurements autonomously. After it's done, the PC is reconnected and the images are downloaded for further analysis; hopefully this can be done in Micro-Manager.

Analysis is not really Micro-Manager’s strength, it is used to acquire images and to communicate the desires of the user to the hardware. If your FPGA acquires images autonomously, why not open them directly in your analysis pipeline?

Also, if I understand this correctly, the terminal is provided by the company that designed the FPGA software, and that terminal communicates with the FPGA through a driver, that sends its commands through the USB bus. The company may provide an API that you can use to communicate from your own application, and you would want to use that API. It is possible that the company exposes the FPGA through an emulated serial port (which is how many microscope devices are configured) to avoid the need to work with a special driver and API. As far as I know, Windows does not have facilities to communicate between terminal windows (the concept of a terminal window is a bit mysterious, Windows currently calls the DOS shell “Command Prompt”, git for windows provides a very nice bash shell, and there are probably many other things that could be called “terminal”).

Communicating the desires of the user to the hardware is really what we need the most, and by analysis I mainly meant that with MM the images would be easily accessible to an end user who only has to deal with the MM window. Once they have the images in MM they could export them and analyse them wherever they like.

The FPGA software design is actually only just being started; we only recently got the board ready. The terminal application I'm talking about just takes prepared text files with series of register commands and, for example, moves the xy-table by a certain amount, takes a picture, or changes the LED selection. It has only been tested with the camera on another test board. MM could just pass the arguments and the desired action command through the pipeline. And yes, in Windows I guess we are talking about the cmd prompt.

Maybe this https://docs.microsoft.com/en-gb/windows/win32/procthread/creating-a-child-process-with-redirected-input-and-output?redirectedfrom=MSDN could be adapted so that the MM device adapter would be the parent, able to pass commands as strings to our FPGA control program, where they would be mapped to the right sets of commands to send to the hardware.
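Something roughly along these lines, maybe (a rough, untested sketch based on that MSDN example; the class name is made up and the real FPGA control program would be started with its own command line):

#include <windows.h>
#include <string>

// Hypothetical helper: start the control program as a child process and keep
// the write end of a pipe connected to its stdin, so the device adapter can
// pass command strings to it.
class ChildPipe
{
public:
    ChildPipe() : childStdinRead_(NULL), childStdinWrite_(NULL), process_(NULL) {}

    bool Start(const std::wstring& commandLine)
    {
        SECURITY_ATTRIBUTES sa = { sizeof(SECURITY_ATTRIBUTES), NULL, TRUE };

        // Pipe whose read end becomes the child's stdin
        if (!CreatePipe(&childStdinRead_, &childStdinWrite_, &sa, 0))
            return false;
        // Keep the write end on the parent's side only
        SetHandleInformation(childStdinWrite_, HANDLE_FLAG_INHERIT, 0);

        STARTUPINFOW si = { sizeof(STARTUPINFOW) };
        si.dwFlags = STARTF_USESTDHANDLES;
        si.hStdInput = childStdinRead_;
        si.hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
        si.hStdError = GetStdHandle(STD_ERROR_HANDLE);

        PROCESS_INFORMATION pi = { 0 };
        std::wstring cmd = commandLine;   // CreateProcessW may modify this buffer
        if (!CreateProcessW(NULL, &cmd[0], NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
            return false;

        CloseHandle(pi.hThread);
        process_ = pi.hProcess;
        return true;
    }

    // Send one command line (e.g. a string of register commands) to the child
    bool Send(const std::string& command)
    {
        std::string line = command + "\r\n";
        DWORD written = 0;
        return WriteFile(childStdinWrite_, line.c_str(),
                         (DWORD)line.size(), &written, NULL) != 0;
    }

private:
    HANDLE childStdinRead_;
    HANDLE childStdinWrite_;
    HANDLE process_;
};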

I've managed to get the pipeline working. Now you can set values for properties; for example, move the LED power preset slider to 1000 and something like the following will be written to the driver program: "-w -i2c -a 34 -d 36536". The parameters to be sent to the device can now be conveniently collected into configuration setting groups and presets.
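In case it helps someone else, the basic pattern is roughly this (a simplified sketch: the class name is made up, the conversion from property value to register value is only indicated, and WriteToPipe() stands for whatever actually writes the string to the child process):

#include <sstream>

// In Initialize(): declare the property and attach an action handler
int OurDevice::Initialize()
{
    CPropertyAction* pAct = new CPropertyAction(this, &OurDevice::OnLEDPower);
    CreateProperty("LED power", "0", MM::Integer, false, pAct);
    SetPropertyLimits("LED power", 0, 1000);
    return DEVICE_OK;
}

// Called by MMCore whenever the property is set, e.g. from a preset or the slider
int OurDevice::OnLEDPower(MM::PropertyBase* pProp, MM::ActionType eAct)
{
    if (eAct == MM::AfterSet)
    {
        long power;
        pProp->Get(power);
        std::ostringstream cmd;
        // conversion from the property value to the register value goes here
        cmd << "-w -i2c -a 34 -d " << power;
        WriteToPipe(cmd.str());   // hypothetical helper writing to the child process
    }
    return DEVICE_OK;
}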

Another question has come up:
For now, the image coming from the device consists only of 12-bit-depth greyscale pixel data.
Can Micro-Manager easily be made to receive this data, for example just as an array with the 12-bit values as elements in row order from the top-left corner to the bottom-right? (Or with the elements as unsigned ints in the range 0-4095.) Are there some ready-made utilities in the code for this? I've found the writeCompactTiffRGB method, but it seemed that it cannot easily be modified for this purpose. I've also tried to look into how the Micro-Manager core uses the device adapter's implementation of MM::CameraDevice, to figure out how the image data is supposed to come in and be handled, but I haven't got the hang of it quite yet.

In reply to my own question: I managed to implement the CameraDevice class in the device adapter and, using the ImgBuffer class, succeeded in building a test image. I also got a test image that came from the microscope to show in Micro-Manager.

But the problem of setting pixels that contain incoming data with a bit depth of 12 is still a mystery. The GetBitDepth comments say that it gives the client application a guideline on how to interpret pixel values. But GetImageBytesPerPixel() can only return integer values. Wouldn't it be the case that with 12 bits per pixel the bytes per pixel would be 1.5? Or does Micro-Manager handle this somehow under the hood?

Right now the test image has 12 bits per pixel and is of size 2592x1944. GetBitDepth() returns 12, but GetImageBytesPerPixel() returns the unsigned int 2.

The resulting picture is a 2592x1944 pixel image with the real image shown twice side by side, and about 1/5 of the area below those two copies filled with black, since the image was initialized with all values set to 0.

Is there a way to set the parameters so that Micro-Manager shows the 12-bit-depth image correctly, or do I need to manually shift bits to reduce the images to 8-bit depth so that 1 pixel fits in 1 byte, reducing the greyscale range from 0-4095 to 0-255?

Sorry, I meant to reply but it slipped through the cracks. Happy to see that you got quite far along!

Micro-Manager has two types of grey scale images: 1 byte per pixel and 2 bytes per pixel. Bit depth is an indication to the application of what the actual bit depth of the camera is (i.e., the highest pixel value that can be expected). So, if your raw data are somehow packed 12-bit images, you would need to unpack them into 2 bytes per pixel, with the 4 highest bits set to zero.
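Concretely, for a 12-bit camera the relevant camera getters usually look something like this (a sketch with assumed names, not your actual code):

// Pixels live in 16-bit containers, but only the lowest 12 bits carry signal
unsigned OurCamera::GetImageBytesPerPixel() const { return 2; }
unsigned OurCamera::GetBitDepth() const { return 12; }

long OurCamera::GetImageBufferSize() const
{
    return GetImageWidth() * GetImageHeight() * GetImageBytesPerPixel();
}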

I am still a bit puzzled why your image was shown twice. Just to be sure, is this grey scale or RGB?

We are dealing with greyscale images. I'll try to do the unpacking tomorrow; maybe that will fix everything. :slight_smile: Here's the duplicate picture I was talking about:

Also, if you zoom in or out it turns into a complete blurry mess.

And here's the code inside the SnapImage() function that I'm currently using to test how to load the test image, and to figure out how it has to be modified so that Micro-Manager receives it correctly.

// Determine the size of the raw file
FILE* file = fopen("raw_takka.bin", "rb");
if (file == NULL) return ERR_SOMETHING_WENT_WRONG;
fseek(file, 0, SEEK_END);
const long int size = ftell(file);
fclose(file);

std::cout << "Raw size in bytes: " << size << std::endl;
std::cout << "ImgBuffer size in bytes: " << GetImageBufferSize() << std::endl;

// Read the raw data into an array of unsigned chars
file = fopen("raw_takka.bin", "rb");
if (file == NULL) return ERR_SOMETHING_WENT_WRONG;
unsigned char* in = (unsigned char*)malloc(size);
int bytes_read = fread(in, sizeof(unsigned char), size, file);

std::cout << "Bytes read from fread: " << bytes_read << std::endl;

fclose(file);

// Copy the raw bytes straight into the ImgBuffer's pixel buffer
unsigned char* pBuf = const_cast<unsigned char*>(img_.GetPixels());

memcpy(pBuf, in, size);

Yes, first try the unpacking. It could be that rows and columns are swapped, but that will become more clear after unpacking. Good luck!

Okay, I've now unpacked the raw data into a 2-bytes-per-pixel format with the high bits padded with zeros. So now the array goes something like this:

  1st byte: padding bits + pixel 1 high nibble = 0000 1011
  2nd byte: pixel 1 middle nibble + pixel 1 low nibble = 0101 1010

making the first pixel, in 2-byte, 16-bit format: 0000 1011 0101 1010 (= grayscale value 2906)

and so on…

Is this the format Micro-Manager is expecting? Because the image is still not showing correctly.

This is the code where I unpack the raw data from the form shown below into the form described above.

  1st byte: P1H P1M
  2nd byte: P2H P2M
  3rd byte: P1L P2L

unsigned char* pix = (unsigned char*)malloc(GetImageBufferSize());
unsigned int i, j;
// i walks the packed input in 3-byte steps (2 pixels),
// j walks the unpacked output in 4-byte steps (2 pixels at 2 bytes each)
for (i = 0, j = 0; j < GetImageBufferSize(); i += 3, j += 4)
{
    pix[j]   = (in[i] >> 4) & 0x0F;                              // pixel 1, high byte
    pix[j+1] = ((in[i] << 4) & 0xF0) + ((in[i+2] >> 4) & 0x0F);  // pixel 1, low byte
    pix[j+2] = (in[i+1] >> 4) & 0x0F;                            // pixel 2, high byte
    pix[j+3] = ((in[i+1] << 4) & 0xF0) + (in[i+2] & 0x0F);       // pixel 2, low byte
}

Here's what the image looks like now:

The image buffer passed from the camera device adapter to MMCore via InsertImage() needs to be in native endian, i.e., little endian, so you probably need to byteswap – or more simply, construct the pixel value (0 to 4095) as an unsigned short (or uint16_t) before storing it at the appropriate 2-byte array element.
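In other words, something like this (same raw layout and variable names as in your snippet; numPixels would be width times height, and I have not run this against your data):

#include <stdint.h>

uint16_t* pix16 = reinterpret_cast<uint16_t*>(pix);
unsigned int i, p;
for (i = 0, p = 0; p < numPixels; i += 3, p += 2)
{
    // raw layout per 3 bytes: [P1H P1M] [P2H P2M] [P1L P2L]
    pix16[p]   = (uint16_t)((in[i]   << 4) | ((in[i+2] >> 4) & 0x0F));   // pixel 1, 0-4095
    pix16[p+1] = (uint16_t)((in[i+1] << 4) | ( in[i+2]       & 0x0F));   // pixel 2, 0-4095
}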

A thousand thanks to you both! A simple byte swap did the trick and the image is finally showing as it should. :smiley:

I only needed to change the index numbers to be like this:

pix[j+1] = (in[i] >> 4) & 0x0F;                              // pixel 1, high byte
pix[j]   = ((in[i] << 4) & 0xF0) + ((in[i+2] >> 4) & 0x0F);  // pixel 1, low byte
pix[j+3] = (in[i+1] >> 4) & 0x0F;                            // pixel 2, high byte
pix[j+2] = ((in[i+1] << 4) & 0xF0) + (in[i+2] & 0x0F);       // pixel 2, low byte