Register Virtual Stack Slices: 0 data points in Model error

I am trying to register a stack of ~11,000 images using Register Virtual Stack Slices (RVSS).

I have “Shrinkage constrain” and “save transforms” checked. I am using “Translation” for the feature extraction model and the registration model.

After it finishes “Extracting features”, which takes a few hours, I get a “NotEnoughDataPointsException”:

mpicbg.models.NotEnoughDataPointsException: 0 data points are not enough to solve the Model, at least 1 data points required.

Does anyone know what causes this error? Is it a single picture, or multiple ones? If it’s just a couple images causing the problem, is it possible to handle the error by skipping the “bad” image instead of throwing an uncaught exception?

If there is a more appropriate way to register such a large stack of images, please let me know.

Hello ajw,

As far as I know, a single image with 0 features will stop the registration, so I can't tell whether more than one image is affected. I am not sure if you can skip it, but are you sure you want to? To answer this we need some more information.

What are your images like? Are they very similar? Do they have a lot of visible features or just a few specks? Are some of the images just empty, or are they just slightly dimmer than the others?

How sensitive are your SIFT feature detector settings? Have you played around with a few images to see how many features are detected? (Plugins >> Feature Extraction >> Extract SIFT correspondences)


Hi @Sverre,

Thanks for your help.

The issue is that I’m trying to register a bunch of webcam-quality images of an outdoor location, so in general there are a lot of features. However, because it’s outdoors, the image quality varies a lot (glare from the morning sun, snowstorms that dramatically reduce contrast, etc.).

Because of the variability and the large number of images, I’m not sure how helpful it would be to play around with a subset. I’ve tried removing the images that were taken during the snowstorm, but I’m still getting the error.

This is why it would be useful to at least know which images are causing the problem. Then maybe I could remove them.

I think in my case I do want to skip the problematic images, since they would only be a few out of ~11,000, which I plan to play back at 60 fps or so. But again, knowing which ones are causing the problem would allow me to just remove them in the first place.

Ok, I agree that it makes sense to remove them in your case. How can you do this? Perhaps there is a way to tell the RVSS plugin to ignore images with no features, but I don’t know. Is there an easy way to do this @iarganda?

An alternative strategy would be to iterate over all your images, extract SIFT features, and simply discard any image with very few or 0 features… The catch is I don’t know the best way to do this. You can write a macro or script for the feature extraction plugin, but AFAIK it does not do the shrinkage constrain; I don’t know if that is a big issue for you. But if, as you say, there are many features, then an image that has 0 features in common with the source image should be discarded, no?
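The pre-screening idea above can be sketched outside of Fiji. Since counting real SIFT features per image requires the ImageJ libraries, this sketch uses a cheap proxy instead: foggy, low-contrast frames that yield 0 SIFT features also have very low intensity variance, so a standard-deviation threshold can flag them. The threshold value and the file names here are assumptions for illustration, not part of the plugin.

```python
import numpy as np

def contrast_score(img):
    """Standard deviation of pixel intensities; a cheap stand-in for a
    real SIFT feature count (foggy, low-contrast frames score low)."""
    return float(np.std(img))

def split_frames(frames, threshold=5.0):
    """Partition (name, image) pairs into (keep, discard) by contrast.
    The threshold is a guess and must be tuned on real data."""
    keep, discard = [], []
    for name, img in frames:
        (keep if contrast_score(img) >= threshold else discard).append(name)
    return keep, discard

# Synthetic demo: one high-contrast frame, one nearly flat "foggy" frame.
rng = np.random.default_rng(0)
frames = [
    ("clear.png", rng.integers(0, 256, (64, 64))),
    ("foggy.png", np.full((64, 64), 128) + rng.integers(0, 2, (64, 64))),
]
keep, discard = split_frames(frames)
```

Running this on the synthetic frames keeps `clear.png` and discards `foggy.png`; on real webcam frames you would tune the threshold against a few known-good and known-foggy images first.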


There is no option for that in the plugin. RVSS assumes all images contain relevant information. Maybe I should change the code so at least it shows a list of problematic images.

SIFT is very sensitive to contrast. What you can do is pre-process all your images with something like CLAHE, run the registration on the contrast-enhanced images while saving the transforms, and finally apply the saved transforms to the original images.



This would be perfect!

I figured out that on some days there was fog in the morning, which removes basically any discernible features. To find these images, I have to wait 2–3 hours for the plugin to extract features, then wait through the “Matching features” step until it crashes, look up which image caused the problem, delete the offending images, and start over. On each repeat it gets a little farther and stops at the next problematic image.

It does print out a list of “model not found” errors to the log window, but these don’t seem to prevent it from continuing, and these images are not the ones that cause it to throw fatal errors.


I have identified and removed the problematic images, but now I am noticing a different problem. The plugin seems to throw that error at apparently random places: if I run it multiple times on the same stack, it fails at different images (judging by where in the stack the plugin stopped).

Is this behavior you have seen before? Or perhaps the “Extracting features” progress bar is misleading?

That’s not expected. With the same input parameters and images, the plugin should always hit that problem at the same images.