Method to append ImageStacks (or opposite of .sel)?

In some of my image processing, I break up ImageStacks using .sel() and process the resulting stacks separately. When I have finished processing, I want to concatenate these stacks back together along an axis, but as far as I can tell the ImageStack class does not include a method for this.

Right now I work around this by making a synthetic_stack with the final dimensions of the full stack and filling in the values with set_slice. Although this does produce the final ImageStack I am looking for, I worry about losing some of the metadata or logs associated with the stack. Is there a better way to go about this?
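For concreteness, the workaround looks roughly like this (a minimal sketch: the dimensions are placeholders, and processed_planes stands in for my list of per-plane stacks):

```python
from starfish import ImageStack
from starfish.types import Axes

# Build an empty stack with the final dimensions (placeholder sizes),
# then copy each processed plane back in with set_slice.
full = ImageStack.synthetic_stack(
    num_round=4, num_ch=4, num_z=12, tile_height=512, tile_width=512
)

for z, plane_stack in enumerate(processed_planes):  # one single-ZPlane ImageStack each
    data, axes = plane_stack.get_slice({Axes.ZPLANE: 0})
    full.set_slice({Axes.ZPLANE: z}, data, axes)
```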

Can you provide more specifics about what you’re trying to do, such as along which dimension you’re breaking up the ImageStack and how you’re processing the pieces? There may be a way to process a slice of the ImageStack without having to create a separate instance.

This came up because I’ve found that I run into memory issues if I try to run full images through spot detection. In short, to get around this I split the ImageStack into one stack per ZPlane and pickle all but the one plane that I’m working with.
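Roughly like this (just a sketch: stack is the full ImageStack, and the file naming is illustrative):

```python
import pickle
from starfish.types import Axes

# Split into one single-plane ImageStack per ZPlane and pickle each
# to disk so only the plane being processed stays in memory.
for z in range(stack.num_zplanes):
    plane = stack.sel({Axes.ZPLANE: z})
    with open(f"plane_{z}.pkl", "wb") as f:
        pickle.dump(plane, f)
```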

This shouldn’t compromise the quality of the spot detection, as it compares images from different rounds and channels (and not z-planes), correct?

More specifically, when running starfish.spots.FindSpots.BlobDetector on a 15 GB dataset, Python crashed with an out-of-memory error on a machine with 128 GB of RAM. To get around this, I load one ImageStack containing a single ZPlane into memory, run BlobDetector on that ImageStack, then save the result to disk and clear memory before moving on to the next ZPlane. With this method BlobDetector runs to completion, using around 58–63 GB of RAM the whole time.
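The loop looks roughly like this (a sketch: the BlobDetector parameters are placeholders rather than my real values, and num_zplanes is the plane count):

```python
import pickle
import starfish

# Placeholder detection parameters, not the values from my pipeline.
bd = starfish.spots.FindSpots.BlobDetector(
    min_sigma=1, max_sigma=10, num_sigma=30, threshold=0.01
)

for z in range(num_zplanes):
    with open(f"plane_{z}.pkl", "rb") as f:  # one single-ZPlane ImageStack
        plane = pickle.load(f)
    spots = bd.run(image_stack=plane)
    with open(f"spots_{z}.pkl", "wb") as f:  # save results to disk
        pickle.dump(spots, f)
    del plane, spots  # clear memory before the next plane
```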

Just to clarify, you are setting is_volume to True when you run BlobDetector and get a memory error? Are you using a reference image (i.e. only find spots in one round and then measure intensities at those locations in all other rounds)?

Your workaround seems similar to running BlobDetector with is_volume=False, which looks for spots in each 2D ZPlane independently. If you’re using BlobDetector with a reference image, you can give that a try. If you need to find spots in every round, then we need to fix issue #1870.
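Something along these lines, where imgs stands in for your full ImageStack (a sketch: using a max projection over rounds and channels as the reference is an assumption on my part, and the detector parameters are placeholders):

```python
import starfish
from starfish.types import Axes

# Reference image: max projection over rounds and channels.
reference = imgs.reduce({Axes.ROUND, Axes.CH}, func="max")

# 2D spot finding against the reference (placeholder parameters).
bd = starfish.spots.FindSpots.BlobDetector(
    min_sigma=1, max_sigma=10, num_sigma=30, threshold=0.01, is_volume=False
)
spots = bd.run(image_stack=imgs, reference_image=reference)
```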

Spot detection doesn’t compare images at all. If you meant “spot decoding” methods like starfish.spots.DecodeSpots.PerRoundMaxChannel, then yes, it only compares spots found across rounds and channels. But there could be a compromise in quality with your method if your z-resolution is smaller than the PSF of an RNA spot: you might double-count one spot as two if it is detected in two adjacent z-planes.
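For reference, the decoding step would look something like this (a sketch: codebook comes from your experiment and spots from the spot-finding step):

```python
import starfish

# Decode found spots by matching traces across rounds and channels.
decoder = starfish.spots.DecodeSpots.PerRoundMaxChannel(codebook=codebook)
decoded = decoder.run(spots=spots)
```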

I hadn’t set is_volume on my first runs of FindSpots.BlobDetector, but it does still crash if I set it to True. I am not currently using a reference image.

(Additionally, if I try the alternative spot detection method, starfish.spots.DetectPixels.PixelSpotDecoder, I also get an out-of-memory error on this machine.)

And yes, that is what I meant. I hadn’t thought about accidentally double-counting hits with this method; I will have to think about how to work around it. It might work to project into one plane, but I don’t think my data is sparse enough for that to really make sense.

By default, FindSpots.BlobDetector uses is_volume=True, so it’s not surprising it still crashes. If it’s possible, maybe you can try setting is_volume=False and using a reference image.

Alternatively, you could try LocalMaxPeakFinder with is_volume=True.
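For example (a sketch: the parameter values are placeholders you’d need to tune, and imgs stands in for your ImageStack):

```python
import starfish

# Alternative spot finder; parameter values are illustrative only.
lmp = starfish.spots.FindSpots.LocalMaxPeakFinder(
    min_distance=4,
    stringency=0,
    min_obj_area=6,
    max_obj_area=600,
    is_volume=True,
)
spots = lmp.run(image_stack=imgs)
```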
