Looking for a Program, Model, or Architecture to predict Intensity Slope

Hello, I hope this is the right place to post this.

I am currently working with a program that attempts to predict intermediate frames from a source movie in order to increase temporal resolution. The program works rather well and the inserted frames look accurate to the human eye, but I am experiencing one major problem. The predictions are created by “warping and linearly fusing” two key frames. As a result, all of the points that decrease in brightness between the key frames do so at the same time, instead of decreasing at different times over the interval between the key frames. This leads to some bizarre measurements when I look at intensity slope, which I would like to fix. I know what the intensity slope should be at all points in the frames between the key frames, but I’m unsure how I can apply this to my existing predictions.
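To illustrate what I mean, here is a toy sketch (not our actual interpolation code) of what a purely linear fusion does to a single object's intensity between two key frames:

```python
import numpy as np

# Toy sketch (not our actual interpolation code): with a purely linear fusion
# of two key frames, every changing object ramps between its two key-frame
# values at a constant rate over the whole interval.
I0, I1 = 1.0, 0.2          # intensity of one object in key frame 0 and key frame 1
n_inserted = 9             # number of predicted frames between the key frames

for k in range(1, n_inserted + 1):
    t = k / (n_inserted + 1)            # fractional position between key frames
    intensity = (1 - t) * I0 + t * I1   # linear fusion: same slope at every step
    print(f"frame {k}: intensity = {intensity:.3f}")
```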

I thought of trying a U-Net image restoration model, such as a CARE model from the CSBDeep package, but CARE models are designed primarily to denoise images with poor spatial resolution, whereas I need a model that predicts the correct intensity of a spatially well-defined object over the course of several frames, so I’m unsure how well this would work out.
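For reference, a standard CARE setup pairs degraded inputs with ground-truth targets; a minimal sketch of that workflow (with placeholder data, names, and settings, and without any guarantee it addresses the temporal problem) would look roughly like this:

```python
import numpy as np
from csbdeep.models import Config, CARE

# Rough sketch of a standard CARE training setup (shapes, names and settings
# here are placeholders, not our real data). In practice X would be patches
# from the predicted (flawed) movies and Y the matching original frames.
X = np.random.rand(128, 64, 64, 1).astype(np.float32)
Y = np.random.rand(128, 64, 64, 1).astype(np.float32)

config = Config("YX", n_channel_in=1, n_channel_out=1, train_epochs=50)
model = CARE(config, "intensity_correction", basedir="models")
model.train(X, Y, validation_data=(X[:16], Y[:16]))

# Apply the trained model to one predicted frame.
restored = model.predict(X[0, ..., 0], axes="YX")
```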

Does anyone have suggestions for programs that could help, models I could train to handle this problem, or architectures best suited to this kind of issue if I have to build my own solution?

Thanks and best regards

The slope should be linear then.

If you want a different slope (a different brightness transition), what should it look like? And why?
What is your expectation, your assumption, and your model?

Hi there,

Sorry for not being more specific. Our software measures the intensity slope of each object and finds the percentage of slopes in each bin (± 0–0.05, 0.05–0.1, 0.1–0.15). From our original movies we see a near-constant frequency of slopes in each bin, but in my prediction movies we see a huge peak in the percentage of large slopes about halfway between the key frames.
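For concreteness, the measurement is essentially this kind of per-frame histogram (a simplified sketch with placeholder data, not our actual analysis code):

```python
import numpy as np

# Simplified sketch of the measurement (not our actual analysis code).
# intensities: per-object intensity traces, shape (n_objects, n_frames).
rng = np.random.default_rng(0)
intensities = rng.random((100, 20))          # placeholder data

slopes = np.diff(intensities, axis=1)        # intensity slope per object per frame
bins = [0.0, 0.05, 0.10, 0.15]               # same bin edges as above (absolute slope)

for frame in range(slopes.shape[1]):
    counts, _ = np.histogram(np.abs(slopes[:, frame]), bins=bins)
    percent = 100 * counts / slopes.shape[0]
    print(f"frame {frame}: {percent.round(1)} % in each bin")
```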

I hope this makes things clearer.

What is right and what is wrong?
What are you looking for?
Are the constant frequencies of slopes in each bin from the original movie correct? Then the measurement is correct, yes? And the prediction movies are wrong?
Or, if the prediction movies are correct, is the measurement wrong? Or is something wrong with the original movies?

Sorry, I don’t understand it.
Maybe you should post some example images (frames) and describe your measurement and your problem in detail.

Hello Again,

Sorry again for being unclear. The whole point of this project is to predict the original movies from a few key frames, so the originals are correct. We want to do this so that we can image our sample less often while retaining the same temporal resolution, which means our measurements do less damage to the sample over the same amount of time (we are doing laser-induced fluorescence imaging).

In our sample, fluorescent spots (CCSs in this case) increase and then decrease in brightness over time according to their own lifetime distributions. Between any two frames some of these objects are increasing in brightness and some are decreasing, so the frequencies of intensity slope remain approximately constant. The problem with our prediction movies is this: the prediction decreases the intensity of every object that decreased in brightness between the key frames at the same time, instead of distributing the changes in brightness across the frames between them. It makes sense that our prediction software would make this error given how it works, but I would like to fix it. I would like to create or train a model such that, when fed the prediction movies, it predicts how the intensity of these objects actually changed in the original movies. This should be possible because in reality our fluorescent objects follow predictable lifetime distributions for a known maximum brightness and approximate lifetime length.
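To make the difference concrete, here is a toy simulation (hypothetical numbers, not our real data) comparing objects that switch off at staggered times with a prediction that concentrates all the decreases at the same point between the key frames:

```python
import numpy as np

# Toy simulation (hypothetical numbers, not our real data).
n_objects, n_frames = 50, 11   # 11 frames = two key frames plus 9 predictions
rng = np.random.default_rng(1)

# "Original": each object stays bright, then drops at its own random frame.
drop_frame = rng.integers(1, n_frames, size=n_objects)
original = np.ones((n_objects, n_frames))
for i, d in enumerate(drop_frame):
    original[i, d:] = 0.2

# "Prediction": every decreasing object drops around the midpoint instead.
prediction = np.ones((n_objects, n_frames))
prediction[:, n_frames // 2:] = 0.2

# Per-frame fraction of large (absolute) slopes: roughly flat for the
# original, but a single spike at the midpoint for the prediction.
for name, movie in [("original", original), ("prediction", prediction)]:
    large = np.abs(np.diff(movie, axis=1)) > 0.15
    print(name, (100 * large.mean(axis=0)).round(1))
```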

This is more of a general question about your approach: you seem to already know how the intensity changes between two frames, and you also know the distribution of lifetimes. Therefore I’m not sure I understand what you gain by generating the intermediate frames. It sounds a bit like you have a model with which you simulate data that you then analyse again, which seems like circular logic. Maybe what you are trying to achieve is still not clear to me, so maybe you can explain why you need the intermediate frames? In any case, you’d probably need some generative deep-learning approach, and as you have a time component you’d probably have to use a “video prediction” approach like this: https://github.com/NVIDIA/vid2vid. But I would be very, very careful in trying to get quantitative information from a generative approach, as this could easily lead to artefacts.

Guillaume

Hello,

So we have a data set at our desired temporal resolution. In this data set we know the intensity change between each frame and the lifetime distribution of our objects. From this data set we choose key frames that are, say, ten frames apart. We then attempt to predict the intermediate frames with our program and analyze those. You are right that this is rather backwards, but we are doing it so that we can check the accuracy of our predictions. If we can confirm that our model produces accurate results on our known data set, we will move on to testing it in different conditions. Eventually we wish to collect data at a lower temporal resolution than we need and apply our programs to it, so that we end up with data at the desired temporal resolution. In this new data we would not know the actual intensity changes or the exact lifetime distribution.
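In other words, the validation step is essentially the loop below (a rough sketch on placeholder data; `predict_intermediate` is a hypothetical stand-in for our interpolation program):

```python
import numpy as np

def predict_intermediate(key_a, key_b, n_between):
    """Hypothetical stand-in for our interpolation program:
    here it just linearly fuses the two key frames."""
    ts = np.linspace(0, 1, n_between + 2)[1:-1]
    return np.stack([(1 - t) * key_a + t * key_b for t in ts])

# full_movie: the known high-temporal-resolution data, shape (n_frames, h, w).
full_movie = np.random.rand(101, 64, 64)   # placeholder data
step = 10                                  # key frames every 10th frame

errors = []
for start in range(0, full_movie.shape[0] - step, step):
    key_a, key_b = full_movie[start], full_movie[start + step]
    predicted = predict_intermediate(key_a, key_b, step - 1)
    truth = full_movie[start + 1:start + step]
    errors.append(np.mean(np.abs(predicted - truth)))

print("mean absolute error per gap:", np.round(errors, 4))
```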

Thank you for your suggestion; I’ll look into it, as it sounds like it could be helpful. Thank you also for your concern about artifacts. I am very concerned about biasing our models or introducing unusual artifacts, but we are comparing our current results against a known data set to watch for these, and we are collecting data on the distributions of objects (namely the frequencies of lifetimes and of intensity slopes), so I hope that small artifacts will have a minimal effect.

Thanks, and Best Regards

OK, I see better what you want to do, but I think there’s a major problem with this. Essentially you are fitting a model (even deep learning, in the end, is just a very complicated fit) to a complete dataset and then using that model to “interpolate” missing frames in another, sparser dataset. If I understand correctly, that fit (however you create it: analytical, deep learning…) gives you the right parameters for the slope and lifetime. While you can then use that trained model to predict frames in new data acquired under the same conditions (which, as you wrote, is not very useful), I really don’t think you can use it to predict frames acquired under different conditions, for which the slopes and distributions are also different. If you want to predict missing frames under different conditions, you have to train (fit) your model for each condition separately, which of course defeats the purpose of doing all this.

Maybe I’m still missing some important point, but I hope the discussion is at least useful…

The discussion is very helpful. Thank you.

The objects we are trying to analyze follow a plateau lifetime distribution, meaning that they rise to a certain brightness, remain at roughly that brightness, and then disappear. If one knows the plateau brightness and rough plateau lifetime of an object, one should be able to fit these to a full lifetime distribution for the object (or at least find the most likely lifetime distribution for that object). Different environments change the frequency of lifetimes and of intensity slopes, but (if I’m remembering correctly) this is because they change the frequency at which certain types of objects are created, rather than changing the behavior of existing objects. I will also note that our training data set includes samples from a wide variety of environments. We hope to avoid bias toward one type of idealized condition by including samples from several conditions in our training movie. So, since the behavior of the individual objects themselves does not change across conditions and we are training on a variety of conditions, we hope that our model will be able to accurately predict new data sets.
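As an illustration of what I mean by a plateau profile (a toy parameterization fitted to synthetic data, not our actual model), the intensity of a single object could be described and fitted roughly like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def plateau_profile(t, t_on, t_off, peak, rise=1.0):
    """Toy parameterization of a plateau lifetime: rise to `peak`,
    stay there, then drop off. Not our actual model."""
    up = np.clip((t - t_on) / rise, 0, 1)
    down = np.clip((t_off - t) / rise, 0, 1)
    return peak * np.minimum(up, down)

# Fit the toy profile to one noisy, synthetic intensity trace.
t = np.arange(30, dtype=float)
truth = plateau_profile(t, t_on=5, t_off=22, peak=1.0)
trace = truth + 0.05 * np.random.default_rng(2).normal(size=t.size)

params, _ = curve_fit(plateau_profile, t, trace, p0=(3, 25, 0.8))
print("fitted t_on, t_off, peak:", np.round(params, 2))
```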

The only thing we cannot account for when training our model is that the new samples we eventually want it to predict from will have less cellular damage from laser fluorescence imaging, since they will be exposed to the laser far less often. However, we don’t yet have any reason to suspect that this will change the lifetime distributions of individual objects, and as such we hope that our model will still be accurate.

Best regards