The idea is to generate training data points on many patches that all feed into a single classifier. Both of your proposed methods (time series, and separate images) will achieve the same thing :). One limitation of the “time series” approach is that all “frames” need to be the same size.
But all in all it revolves around the same concept: you limit the visible area so that training and prediction are faster. You could also zoom in and achieve the same thing.
So in ilastik the general idea is to train a classifier and then export the result for further analysis.
Furthermore, the trained classifier can be run on unseen images in various ways: via the batch processing applet (usually the last step in the workflow), via the Fiji plugin, or via headless mode.
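For the headless route, a minimal invocation looks roughly like the sketch below. The project and input filenames here are made-up placeholders; the exact flags you need depend on your workflow and export settings, so please check the headless documentation for your ilastik version.

```shell
# Hedged sketch of headless batch processing (filenames are hypothetical).
# run_ilastik.sh ships with the Linux/Mac ilastik download.
./run_ilastik.sh --headless \
    --project=MyPixelClassification.ilp \
    --output_format=hdf5 \
    my_unseen_image.h5
```

This applies the classifier trained in `MyPixelClassification.ilp` to `my_unseen_image.h5` without opening the GUI, which is handy for scripting over many files.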
If you give ilastik an image with multiple channels, the features will be calculated on all of them, so all channels are taken into account by the classifier. Note that the same channels have to be present in all images you train on (and also in those you want to process with the trained project).
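As a small illustration (shapes, values, and axis order here are made up, not ilastik requirements), separate single-channel images can be stacked into one multi-channel array, so that ilastik sees them as a single input image and computes features on every channel:

```python
import numpy as np

# Hedged sketch: combine two single-channel images into one
# multi-channel array before loading it into ilastik.
channel_a = np.zeros((256, 256), dtype=np.float32)  # e.g. a DAPI image
channel_b = np.ones((256, 256), dtype=np.float32)   # e.g. a GFP image

# Stack along a new last axis -> shape (y, x, c). All training and
# prediction images would need this same channel layout.
stacked = np.stack([channel_a, channel_b], axis=-1)
print(stacked.shape)  # (256, 256, 2)
```

You could then save `stacked` to HDF5 (for example with h5py) and point ilastik at it as one multi-channel dataset.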
By the way, did any of the measures discussed earlier in this thread result in any speedup?