@Larisa acquired these gorgeous multi-channel 3D time-series. We developed a pipeline to pre-process them before analysis.
Recently (a couple of weeks ago) we noticed unusual and unexpected results after correcting the time drift with the “Correct 3D drift” Python script.
We usually use the options “Multi time scale computation…” and “Sub pixel drift correction …” and compute the drift on channel 2.
After the computation the intensity values are changed proportionally, and the stack looks as if there was a lot of bleed-through between the channels. In other words, the signals from the two channels seem to get mixed up.
I checked the change history of the script, but the most recent changes date back to March of this year:
This suggests that it is not a bug in the script itself but rather in one of its dependencies.
There are two cases where it still works more or less:
- the trivial one being to work only on a single-channel stack. But then it’s cumbersome to transform the other channel separately
- without the “Sub pixel drift correction …” option, the result more closely resembles the original stack (in terms of image intensities), but the correction does not correct much.
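To illustrate the single-channel workaround (estimate the drift on one channel, then apply the identical shift to the other channel), here is a minimal pure-Python sketch. This is not the algorithm used by Correct_3D_Drift.py (which works on full 3D stacks and uses phase correlation with sub-pixel refinement); it is just an integer-shift, 2D, single-time-point toy version, and all function names are my own:

```python
def estimate_shift(ref, moving, max_shift=3):
    """Brute-force search for the integer (dy, dx) that best aligns
    `moving` onto `ref` by maximizing the cross-correlation score."""
    h, w = len(ref), len(ref[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        score += ref[y][x] * moving[yy][xx]
            if best is None or score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

def apply_shift(img, dy, dx, fill=0.0):
    """Apply the previously estimated shift: out[y][x] = img[y+dy][x+dx],
    so the shifted image lines up with the reference."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                out[y][x] = img[yy][xx]
    return out

# Toy data: a reference frame, a drifted registration channel (ch1),
# and a second channel (ch2) that drifted by the same amount.
ref = [[0.0] * 5 for _ in range(5)]; ref[2][2] = 1.0
ch1 = [[0.0] * 5 for _ in range(5)]; ch1[3][3] = 1.0
ch2 = [[0.0] * 5 for _ in range(5)]; ch2[3][3] = 5.0

dy, dx = estimate_shift(ref, ch1)   # estimated on channel 1 only
ch1_corr = apply_shift(ch1, dy, dx)
ch2_corr = apply_shift(ch2, dy, dx)  # same shift reused, no re-estimation
```

The point of the sketch is the last line: the second channel is never used for estimation, only transformed with the shift computed from the registration channel, so its intensities pass through untouched.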
I put an example stack on my FTP server:
Could one of the developers maybe try to reproduce the problem?