I’m trying to align microscopy images by using “Feature Extraction”. I’m getting bit by the MOPS bug (it crashes with an out-of-bounds error), so MOPS doesn’t work, and SIFT doesn’t seem to be able to find landmarks: I get very sparse marks, many of them outside the tissue section in blank areas. The only method that works somewhat is “Extract Block Matching Correspondences”. However, I really don’t know what these parameters are or what the method is, and I can’t find any documentation.
Does anyone have anything written on what method is being used and how to set the parameters?
Thanks for replying, but I really shouldn’t post images at this time. They’re phase contrast white light images of brain tissue.
My immediate problem is that the documentation online seems to be for a previous version of the software, and parameters like “resolution” and “layer scale” don’t seem to be defined anywhere. So, I’ve been flying blind trying to control the plugin. I went looking for the code, but the Feature Extraction plugin appears to have a large number of files that don’t have any apparent order.
What is “scale factor” and what are the effects of setting it to something other than 1? Why is it there? How do I choose the optimum value? I’ve noticed that setting it to “2” requires a lot more compute time, and for some reason it improves the feature finding in my data.
“resolution” is said to be “the number of vertices in a long row of the spring mesh”. How do I see an overlay of the spring mesh? What does this number of vertices actually control for feature finding? Are features supposed to span a certain number of spring nodes?
What does PMCC stand for?
There seems to be a model for PMCC that has an r factor. What is that model? What purpose does this model serve? We are being asked to select minimal r, max curvature ratio (of what to what?) and max “second best” r. What are these things, and what is the effect of raising or lowering from defaults?
The parameters for local smoothness filter (sigma, max displacement (absolute), max displacement (relative)) are not defined anywhere. I sort-of have an idea what they should do, but what do they actually do?
I see the option to export point correspondence, but I never get the option to save. To where are these points being exported?
Can this plugin find features in an image stack? The documentation discusses stacks, but I’ve only been able to get it to work between two non-stack images.
That’s pretty much the entire dialog that is undocumented or underdocumented.
It is exactly what it says: the factor by which your data is scaled before the computation runs. A scale of 2 means the plugin will work on an image 2x larger than your original. The idea is that if your features are very large, you might not need to do the block matching on your full-resolution data; you could downsample it by 2 or more (scale factor 0.5 or less). The fact that oversampling helps on your data would suggest that the other settings are not set properly for your needs.
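To make the effect concrete, here is a minimal NumPy sketch of what a scale factor does conceptually. This is illustration only, not the plugin’s actual resampling code (which presumably uses proper interpolation rather than nearest-neighbour sampling):

```python
import numpy as np

def apply_scale_factor(image, scale):
    """Resample a 2D image by `scale` with nearest-neighbour sampling.

    scale > 1 upsamples (more pixels, more compute time);
    scale < 1 downsamples (cheaper, but fine detail is lost).
    """
    h, w = image.shape
    new_h = max(1, int(round(h * scale)))
    new_w = max(1, int(round(w * scale)))
    # Map each output pixel back to its source pixel.
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    return image[np.ix_(rows, cols)]
```

So a 1024x1024 image at scale factor 2 becomes 2048x2048 (four times the pixels, hence the extra compute time you saw), while 0.5 shrinks it to 512x512.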
Did you try clicking on “Export Colorized Displacement Vectors” and seeing the differences when you change the resolution?
This number does not control anything for the features. It simply controls how many interconnected transformation models will be applied to your image (or, equivalently, how large an area of your image is affected by a single transformation). The higher the resolution, the more mini-transforms.
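As a rough illustration of what resolution controls (a hypothetical helper, not the plugin’s code): laying out a regular grid of mesh vertices over the image, where each vertex anchors one local transform, shows how vertex count grows with resolution.

```python
import numpy as np

def mesh_vertices(width, height, resolution):
    """Lay out a regular grid of mesh vertices over a width x height image.

    `resolution` is the number of vertices along the longer side; the
    shorter side gets proportionally fewer, keeping the spacing roughly
    square. Each vertex anchors one local transformation model.
    """
    long_side = max(width, height)
    spacing = long_side / (resolution - 1)
    nx = int(round(width / spacing)) + 1
    ny = int(round(height / spacing)) + 1
    xs = np.linspace(0, width, nx)
    ys = np.linspace(0, height, ny)
    return [(x, y) for y in ys for x in xs]
```

For a square image, resolution 5 gives a 5x5 grid (25 local transforms); doubling the resolution roughly quadruples the number of transforms, which is why higher resolution follows the data more closely but costs more.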
There is a misunderstanding on your side here… PMCC stands for Pearson product-moment correlation coefficient, and this ‘r factor’ is the PMCC value for each block pair match. The PMCC computation outputs a single value per block pair, and if it is above the value set in the plugin, the pair is considered a valid correspondence.
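For reference, the r value of a block pair is just the Pearson correlation of the two pixel blocks. A minimal sketch (not the plugin’s optimized code):

```python
import numpy as np

def pmcc(block_a, block_b):
    """Pearson product-moment correlation coefficient (PMCC), r, of two
    equally sized pixel blocks. r = 1 is a perfect linear match, r = 0 is
    no correlation, r = -1 is perfect inverse correlation."""
    a = block_a.ravel().astype(float)
    b = block_b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)
```

Because r is normalized for brightness and contrast (the means are subtracted and the product is divided by the standard deviations), a block still matches well even if one image is globally brighter than the other.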
The max curvature ratio is explained in enough detail under the “correlation filters” section…
Second best r is just what it says. You compute an r value for each candidate pairing of your blocks, so sometimes one block pair (say block 1 from image 1 and block 10 from image 2) will have an r value above the threshold (say 0.3), but another pair (block 1 from image 1 and block 32 from image 2) will have a relatively high r value as well (say 0.2); their ratio is then ~0.66 in this case. If you set the max second best r / best r to 1.0, the detection will be accepted: 1.0 basically means that as long as the second best r is lower than the best, the match is kept. If you get a lot of detections and feel you could filter them out, set this value lower.
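The logic described above can be sketched in a few lines (hypothetical function name; the plugin applies this test internally):

```python
def accept_match(r_values, min_r=0.3, max_ratio=1.0):
    """Keep a block match only if the best PMCC value clears `min_r` and
    the second-best candidate is not too close to it, i.e.
    second_best / best <= max_ratio.

    `r_values` holds the r value of every candidate block in the other
    image for one block in the first image.
    """
    ranked = sorted(r_values, reverse=True)
    best = ranked[0]
    if best < min_r:
        return False  # no candidate is a convincing match at all
    if len(ranked) > 1 and ranked[1] / best > max_ratio:
        return False  # the match is ambiguous: two candidates look alike
    return True
```

With the numbers from the example, best r = 0.3 and second best r = 0.2: at max_ratio = 1.0 the match is kept, but at max_ratio = 0.5 the ratio of ~0.66 makes it count as ambiguous and it is rejected.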
Again, they are on that page I sent you, under “Local Smoothness Filter”. What more would you need?
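Roughly, that filter compares each match’s displacement against a Gaussian-weighted (by distance, with the given sigma) average of its neighbours’ displacements, and rejects matches that deviate too much. Here is a simplified sketch covering only the absolute threshold; the relative variant scales the tolerance by the local deviations instead of using a fixed pixel count. This is my interpretation of the behaviour, not the plugin’s exact code:

```python
import math

def smoothness_outliers(points, displacements, sigma, max_abs):
    """Flag matches whose displacement deviates by more than `max_abs`
    pixels from the Gaussian-weighted average displacement of their
    neighbours (weight = exp(-d^2 / (2 sigma^2)), d = distance between
    match locations). Returns the indices of the rejected matches."""
    outliers = []
    for i, (p, v) in enumerate(zip(points, displacements)):
        wsum, ex, ey = 0.0, 0.0, 0.0
        for j, (q, u) in enumerate(zip(points, displacements)):
            if i == j:
                continue
            d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
            w = math.exp(-d2 / (2.0 * sigma * sigma))
            wsum += w
            ex += w * u[0]
            ey += w * u[1]
        if wsum == 0.0:
            continue  # no neighbours to compare against
        ex /= wsum
        ey /= wsum
        if math.hypot(v[0] - ex, v[1] - ey) > max_abs:
            outliers.append(i)
    return outliers
```

So sigma sets how far “local” reaches (small sigma means only close neighbours vote), and the max displacement thresholds set how far a match may disagree with its neighbours before it is thrown away.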
The points are placed as a point ROI on each of the two images. If you save the images as TIFF, these ROIs are saved as well. Another way to save them is to add them to the ROI Manager with proper names, once for each image. Are you scripting something or just using the plugin by hand?
I am unsure if it works with stacks; however, stacks and registration tend to scream TrakEM2 in my opinion. It makes use of these very plugins in a nice graphical interface, and there is also a large number of video tutorials available on how to register and tile 3D data.
I’ve answered pretty much all your questions by reading through the page that I had sent you. I believe that this is a fairly well documented plugin, with figures, example data and information on what happens when you change the values.
For further details, did you bother reading the publication on which this plugin is based, which is located at the bottom of the wiki page?
Finally if after reading the full paper you still have questions, I am sure that the main author @axtimwalde will be happy to answer them and correct me if some of the things I said here are incorrect.