Projector Plugin Calibration

Hi all,

I am working on a device adaptor that can image and FRAP sequentially. I have noticed there is a Projector plugin that I can utilise. However, as I understand it, the calibration of the Projector plugin requires the illuminated spot from FRAP and imaging to happen simultaneously. Is there a way of using the same plugin to calibrate FRAP without imaging at the same time? For example, by detecting where the dark spots are after photobleaching, rather than looking for bright spots from FRAP?

Also, once the calibration is complete, where are the coordinates stored? Is there a way to alter the calibration coordinates?

Kind Regards

Can you explain your device in a little more detail? The projector plugin indeed makes the assumption that spot illumination and imaging can happen at the same time, which seems a reasonable assumption (until it is not;)

The plugin stores the calibration as affine transforms. Since a single affine transform did not perform satisfactorily, the image is divided into squares (I believe 49 squares), and there is an affine transform for each one of them. These are stored in the user profile. There is a class “MappingStorage” in the projector plugin code that exclusively handles storage and retrieval of the mapping functions. You could write your own code to interface with it, or possibly even override it.
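Conceptually, the stored mapping behaves something like the following (a simplified sketch of the piecewise-affine idea, not the plugin’s actual code; the class and field names are made up for illustration):

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

// Sketch of a piecewise-affine mapping: the camera field is split into a
// 7x7 grid (49 cells) and each cell carries its own camera->galvo transform.
public class PiecewiseAffineMap {
   private final AffineTransform[][] cells_; // [row][col]
   private final int imageWidth_;
   private final int imageHeight_;

   public PiecewiseAffineMap(AffineTransform[][] cells, int width, int height) {
      cells_ = cells;
      imageWidth_ = width;
      imageHeight_ = height;
   }

   /** Map a camera pixel to galvo coordinates using the cell it falls in. */
   public Point2D.Double cameraToGalvo(double x, double y) {
      int rows = cells_.length;
      int cols = cells_[0].length;
      int row = Math.min((int) (y * rows / imageHeight_), rows - 1);
      int col = Math.min((int) (x * cols / imageWidth_), cols - 1);
      Point2D.Double galvo = new Point2D.Double();
      cells_[row][col].transform(new Point2D.Double(x, y), galvo);
      return galvo;
   }
}
```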

Hi Nico,

The device that I am working on is a multi-point array scanning confocal with built-in FRAP functionality. It uses the same galvos (but a slightly different light path) for imaging as it does for phototargeting. Hence, we are looking for a solution to calibrate FRAP without having to image the FRAP spot simultaneously in µManager.

The calibration process detects the bright spot from FRAP and maps the phototargeter coordinates to the camera’s pixel coordinates. Is there a way for the calibration to detect the bleached points (i.e. dark spots) on a sample of uniform fluorescence and use these bleached points for calibration rather than the bright spot?

It would be much appreciated if you have any ideas on how to achieve FRAP calibration with devices that can only perform FRAP and imaging sequentially.

In theory that is certainly possible. However, it requires modifying the Projector Plugin code. This should probably be an option at the start of the calibration (i.e., detect a spot during the bleach or after, and detect a bright spot or a dark spot). Almost all the needed code is already present. The main issue is that I found timing to be critical during these types of experiments, and to debug the code it is essential to have access to the hardware. Do you have anyone interested in modifying the projector plugin to do this? I’d be happy to assist and push the modified code into the MM distribution.

Hi,
Not normally one to add my two cents, but I have spent some time in the past getting the Projector plugin to work with the Andor FRAPPA (in MM1.4, but Nico’s redo in MM2g looks equivalent).
The projector plugin works exactly as you want it to. It snaps an image of a uniformly fluorescent slide, then bleaches a spot, sleeps for a bit and then takes another image.
The difference between these two images is calculated (the black ‘hole’) and then the centre of this region is found (findPeak).
Take a look at the measureSpotOnCamera function in Calibrator.java.
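In outline, the sequence is roughly the following (a sketch using Core calls, not the actual plugin code; the galvo label and delay are placeholders):

```java
import mmcorej.CMMCore;

// Sketch of the snap / bleach / wait / snap sequence. The real logic lives
// in measureSpotOnCamera in Calibrator.java; this just shows the shape of it.
public class BleachPairAcquirer {
   /** Returns {preBleachPixels, postBleachPixels}. */
   public static Object[] acquirePair(CMMCore core, String galvoLabel,
         double x, double y, long settleMs) throws Exception {
      core.snapImage();
      Object pre = core.getImage();                     // before the bleach
      core.setGalvoPosition(galvoLabel, x, y);          // aim the phototargeter
      core.setGalvoIlluminationState(galvoLabel, true);
      Thread.sleep(settleMs);                           // let the bleach finish
      core.setGalvoIlluminationState(galvoLabel, false);
      core.snapImage();
      Object post = core.getImage();                    // after the bleach
      return new Object[] {pre, post};
   }
}
```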

I did have to play around with the timings slightly for the FRAPPA as, like for you, the galvo mirrors sit in the imaging path, but the plugin functioned as described above. I did need to hack the calibration routine a fair bit, though, as the FRAPPA needs the actual calibration point pairs internally to trace the shapes correctly.

Best wishes,
John

Hey John / @mert0739, great to hear from you here, and your monetary contributions are highly appreciated ;)
I may be guilty of having changed that behavior in MM2g, I don’t fully remember… In any case, I think that access to both methods would be nice. Sometimes the illumination spot is easily visible in the image and can be localized better than the bleach spot. Other times (such as here), it is simply not possible. I also messed quite a bit with the timing in the plugin, and ended up letting the user input one delay variable that is used everywhere a wait is in order (not the fastest solution, but the easiest to explain).

@mert0739 Thanks for your reply. This is exactly the idea I have in mind as well.

In my understanding of the current calibration process, it captures a background image first, followed by a second capture with the FRAP beam on. Then it subtracts the two images to remove the background noise before calling findPeak().

But for the scenario here, if we reverse the order of subtraction, essentially we subtract the white image containing a bleached point from the plain white image. The resultant image should show a white spot where the bleach mark is, on a black background.
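As a rough sketch of what I mean (illustrative only, assuming unsigned 16-bit pixels; not the plugin’s actual code):

```java
// Reversed subtraction: pre-bleach minus post-bleach, clamped at zero, turns
// the bleached "hole" into the brightest pixel on a black background.
public class BleachSpotFinder {
   /** Returns {x, y} of the brightest pixel in (pre - post). */
   public static int[] findBleachSpot(short[] pre, short[] post, int width) {
      int bestIndex = 0;
      int bestValue = -1;
      for (int i = 0; i < pre.length; i++) {
         int diff = (pre[i] & 0xFFFF) - (post[i] & 0xFFFF);
         if (diff < 0) {
            diff = 0; // clamp: unbleached pixels go to zero, not negative
         }
         if (diff > bestValue) {
            bestValue = diff;
            bestIndex = i;
         }
      }
      return new int[] {bestIndex % width, bestIndex / width};
   }
}
```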

Is the plugin that you modified available in a standard installation of MM1.4? Ideally, we want to work with MM2g, but if I could get a copy of the jar file and code for initial testing on MM1.4, that would be great.

@nicost it would be great if you could assist. Even though I have a rough idea of what I want to do, I am new to Java, and currently I am struggling to set up the Java environment. I was able to follow https://micro-manager.org/wiki/Version_2.0_Plugins to add the “example” plugin in IntelliJ. But when I add the “projector” plugin instead of “example” for modification, I get a bunch of import errors.

Hi, have you actually tried the plugin yet with your device? I had a good look through the MM2g calibration code just now, and it’s really the same as in 1.4.
As long as you set a decent delay in the plugin window to make sure your device finishes the bleaching before the second image, it will work as you want already. I didn’t change this part of the code and it found the centre of my bleached spot with no issues at all.
It’s the same routine that is used for the affine transform calibration too. We are still in lockdown, so I can’t get on the scope to reconfirm!
Cheers,
John

@mert0739 This is interesting because, as I understand the code, it does the subtraction to remove the background noise, not to detect where the bleach point is. But I will certainly give it a test.

@nicost would it be possible for you to provide some guidance/procedures on how to modify and compile some of the existing µManager plugins in NetBeans / IntelliJ?

Kind Regards

I am currently using IntelliJ for Java development and like it a lot. It takes time to get used to (as does every development environment). There is a bit of a guide for setting it up (https://micro-manager.org/wiki/Version_2.0_Plugins#Building_a_Plugin_jar_file_with_IntelliJ), but IntelliJ can do things in many different ways. Last time I set it up, I pointed IntelliJ to the top-level directory of the Micro-Manager source (after running ant -f buildscripts/fetchdeps.xml), and it figured more or less everything out by itself. There will still likely be issues, especially with respect to dependencies and whatnot. It is difficult to provide exact guidance, since things are different for everyone. Maybe the best approach is to follow the instructions in that link and keep notes so that you can update the instructions (request an account on the micro-manager.org website, and send me an email once you have done so).

@nicost we have had some success compiling the code with the reversed order of subtraction as described above. Below is an image taken after calibration; it shows a uniform point pattern bleached onto the sample. Is this something that you would expect to see post-calibration?

Also, we seem to have an issue with performing FRAP on an ROI. When I look in the Core log, the x and y positions sent to the galvo are outside the range that I have set in the device adaptor; some of the x and y positions are even negative. The system that we are using has dual cameras and utilises the Image Flipper plugin to flip the image. Are you aware of any bugs where the Image Flipper causes the wrong x and y positions to be sent to the galvo?

Yes, that is the expected pattern, and I expect the code to find the bleach spots as it compares images before and after the bleach. However, it looks like your field of view is much smaller than what the camera shows. The projector code will try to find bleach spots in the black areas there as well, and will do strange things by interpreting noise.
The way around this is to set an ROI slightly smaller than your field of view before calibrating. The projector code is aware of the ROI, and when you change the ROI (or binning) later on, the code will do the right thing (I hope!).
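For example, something along these lines before starting the calibration (the coordinates are placeholders; use values matching your illuminated region):

```java
import mmcorej.CMMCore;

// Restrict the camera ROI to the illuminated field of view before calibrating,
// so the projector code never hunts for bleach spots in the dark borders.
public class CalibrationRoi {
   public static void restrictToFieldOfView(CMMCore core) throws Exception {
      int x = 64;       // left edge of the illuminated region (example)
      int y = 64;       // top edge (example)
      int width = 384;  // example size
      int height = 384;
      core.setROI(x, y, width, height);
   }
}
```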
It looks like the Projector plugin code does not work as expected with the Image Processing Pipeline (which includes the Image Flipper). It would be good to open a ticket about this on github, but for the time being I would use it without the Image Flipper.

Hi @nicost,
Just want to provide a further update on the projector plugin. It seems that even without the Image Flipper, the coordinates go outside the range that I have set whenever I perform FRAP on an ROI.

At a high level, I would assume two possibilities can cause this issue: either the device adaptor misinterprets the coordinates from the projector plugin, or the coordinates are somehow corrupted before being passed to the device adaptor. To test this, I used mmc_.addGalvoPolygonVertex(this.galvo_, roiCount_, x_, y_); to pass hardcoded x and y values to the device adaptor, and the correct x and y values came through. This makes me think that the coordinates are corrupted before being passed to the device adaptor.
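For reference, the test harness looked roughly like this (the vertex values are just examples; the point is that they bypass the plugin’s coordinate mapping entirely):

```java
import mmcorej.CMMCore;

// Feed hardcoded galvo coordinates straight to the device adaptor. If the
// device traces this triangle correctly, the adaptor handles coordinates
// fine and the corruption happens upstream in the plugin.
public class GalvoVertexTest {
   public static void runFixedTriangle(CMMCore core, String galvoLabel) throws Exception {
      core.deleteGalvoPolygons(galvoLabel);
      core.addGalvoPolygonVertex(galvoLabel, 0, 1000.0, 1000.0);
      core.addGalvoPolygonVertex(galvoLabel, 0, 2000.0, 1000.0);
      core.addGalvoPolygonVertex(galvoLabel, 0, 1500.0, 2000.0);
      core.loadGalvoPolygons(galvoLabel);
      core.setGalvoPolygonRepetitions(galvoLabel, 1);
      core.runGalvoPolygons(galvoLabel);
   }
}
```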

I think we are very close to modifying the projector plugin for FRAP devices that can’t perform imaging and FRAP simultaneously. Is there any way that you can assist in resolving the issue above?

Kind Regards

Most likely, the calibration failed. The usual test is to point a spot to a specific location after calibration. If the galvo device does not point to the correct spot in the image, the calibration did not work as expected and all bets are off. Test a point first, then proceed to polygons.
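Something along these lines (a sketch using the Core galvo calls; position and pulse length are placeholders):

```java
import mmcorej.CMMCore;

// Point test after calibration: fire a brief pulse at a known galvo position,
// then snap an image to check that the spot lands where the mapping predicts.
public class PointTest {
   public static Object fireAndSnap(CMMCore core, String galvoLabel) throws Exception {
      double x = 1500.0;        // example galvo x position
      double y = 1500.0;        // example galvo y position
      double pulseUs = 50000.0; // 50 ms pulse (example)
      core.pointGalvoAndFire(galvoLabel, x, y, pulseUs);
      core.snapImage();
      return core.getImage();   // inspect where the spot/bleach ended up
   }
}
```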

@nicost OK, I will test the Point and Shoot function and see how it works out. Also, you mentioned that the mapping is stored in the user profile. Where in the user profile is the mapping stored, and what file format is it in? I am just wondering if I can transfer the mapping from the imaging workstation to another PC, so that I can perform some benchtop testing when I don’t have access to the imaging workstation.

Kind Regards