Lightsheet microscopy: distance between two types of cells

Dear @haesleinhuepf,
I would like your opinion on one of my projects/analyses. I haven't started using CLIJ yet; I preferred to ask you first whether it's possible.

I acquired large images (mouse lungs) on a Lightsheet Z.1 microscope and stitched them with Arivis.

  • File sizes: 200 GB - 400 GB
  • 8000 x 15000 x 700 voxels (X, Y, Z)
  • Pixel size: X & Y = 0.6 µm, Z = 1.52 µm

I will have 3 stainings and 2 populations of cells (one double-positive and one single-positive).

  1. Could I process files of this size with CLIJ? (I have a Quadro M6000)
    https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/quadro-product-literature/NV-DS-Quadro-M6000-24GB-US-NV-fnl-HR.pdf

  2. Could I measure the distance between two types of cells with CLIJ? And what would the strategy be?
    I suppose I could use this workflow twice. But how can I measure the distance between two spots on two different channels / images?

  3. Could I create a distance map as you explained in your NEUBIAS video?

Best regards,


Hey @Alex.h,

sounds like a very interesting project!

I recently did some projects on big image data counting cell nuclei marked in different channels. That’s definitely doable on the GPU.

In order to manage that amount of data, I’d recommend two strategies (do both! :wink: ):

  • Downsampling: A voxel size of 0.6x0.6x1.5 microns is likely not necessary to detect cells (correct me if I'm wrong, or feel free to provide an example image). Furthermore, downstream analysis and visualisation are simplified if your voxels are isotropic. In my workflows I often resample to isotropic voxels of 1x1x1 microns: enough to see nuclei, and trivial to compute distances / areas / volumes in physical units later on.
  • Process your data set in tiles. Tiling is implemented in CLIJx; just saying, it's a bit experimental, so handle it with care.
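To make the downsampling advice concrete, here is a small back-of-the-envelope sketch (plain Python, not CLIJ API) computing the per-axis scaling factors to reach isotropic 1x1x1 micron voxels from the voxel sizes and stack dimensions quoted in this thread, plus a rough memory estimate:

```python
# Sketch: scaling factors to resample the stack described above
# (0.6 x 0.6 x 1.52 um voxels, 8000 x 15000 x 700 voxels) to
# isotropic 1 x 1 x 1 um voxels. Pure-Python illustration only;
# in CLIJ you would hand such factors to a resampling operation.

voxel_xyz = (0.6, 0.6, 1.52)   # current voxel size in um (X, Y, Z)
target = 1.0                   # desired isotropic voxel size in um
factors = tuple(v / target for v in voxel_xyz)  # scale factor per axis

shape_xyz = (8000, 15000, 700)  # stack size from the post
new_shape = tuple(round(s * f) for s, f in zip(shape_xyz, factors))

print(factors)    # (0.6, 0.6, 1.52): shrink X/Y, stretch Z
print(new_shape)  # (4800, 9000, 1064)

# Rough memory for one 16-bit copy of the resampled stack:
gb = new_shape[0] * new_shape[1] * new_shape[2] * 2 / 1e9
print(f"{gb:.0f} GB")  # ~92 GB -- still big, hence the tiling advice
```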

Sure, see above. I haven't tested this card in particular, but CLIJ is compatible with most NVIDIA graphics cards I've seen.

Yes, if you manage to make spot lists out of your detected cells in both channels, you can use generateDistanceMatrix.
Alternatively: If you have cell segmentations as label maps (it’s much harder to get those, I know) you could use generateJaccardIndexMatrix to find out which cells are overlapping between the two channels as demonstrated here.
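To illustrate what such an overlap comparison computes, here is a tiny numpy sketch of Jaccard indices (intersection over union) between all label pairs of two toy label maps; the label values and arrays are made up for illustration, and CLIJ's GPU operation is the real tool for large images:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

# Two tiny label maps (0 = background), e.g. cells in channel 1 and 2:
labels_c1 = np.array([[1, 1, 0],
                      [1, 2, 2],
                      [0, 2, 2]])
labels_c2 = np.array([[1, 0, 0],
                      [1, 2, 2],
                      [0, 2, 2]])

# Jaccard index for every label pair across the two channels:
n1, n2 = labels_c1.max(), labels_c2.max()
matrix = np.array([[jaccard(labels_c1 == i, labels_c2 == j)
                    for j in range(1, n2 + 1)]
                   for i in range(1, n1 + 1)])

print(matrix)
# A high entry (i, j) means label i in channel 1 overlaps label j in
# channel 2 -- e.g. a double-positive cell.
```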

Yes, the only difference is that in the NEUBIAS video I show distances of all spots in one image to each other, while you're measuring distances between spots in two images. But as you will see, generateDistanceMatrix takes two pointlists as input, so it should be straightforward. One little warning though: if you have a distance matrix of 20000x10000 cells, you can process it on the GPU, but you cannot pull it into ImageJ, which limits the maximum size of 2D planar images.
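The math behind such a distance matrix can be sketched on the CPU with numpy; the centroid coordinates below are invented, and the GPU operation replaces the pairwise step for real data:

```python
import numpy as np

# Two point lists: cell centroids from the two channels, in physical
# units (e.g. um after resampling to isotropic voxels).
points_a = np.array([[0.0, 0.0, 0.0],    # channel 1 centroids (x, y, z)
                     [10.0, 0.0, 0.0]])
points_b = np.array([[0.0, 3.0, 4.0],    # channel 2 centroids
                     [10.0, 0.0, 1.0]])

# Pairwise Euclidean distances, shape (len(points_a), len(points_b)):
diff = points_a[:, None, :] - points_b[None, :, :]
dist_matrix = np.sqrt((diff ** 2).sum(axis=-1))

# Per channel-1 cell, the distance to its nearest channel-2 cell:
nearest = dist_matrix.min(axis=1)
print(nearest)  # [5. 1.]
```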

Last but not least, a general hint independent of CLIJ. You may know this one already, but for others reading here: when processing big data or 3D data, develop your workflow on a small, representative 2D crop of your data. When you're happy with the performance of the workflow, translate it to 3D. As many image processing operations take ages in 3D, it may make sense to translate the workflow to CLIJ in the same step.

Let me know how it goes and if you need further hints.

Cheers,
Robert


Thank you Robert,
I will start learning how to use CLIJ!
Sorry, I found a way with Arivis to decrease the Z spacing to 1; I will try with X, Y, Z = 1, 1, 1.


The downsample3D method does have three factor parameters for sampling in X, Y and Z; in your case it will upsample in Z. Alternatively, you could try resample. The difference between the two: downsample uses nearest-neighbour interpolation, while resample does linear interpolation. I could imagine the difference doesn't matter if you do background subtraction, blurring and spot detection in the following steps.
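A 1D numpy sketch of the difference between the two interpolation flavours, applied to upsampling a (made-up) intensity profile along Z from 1.52 um to 1.0 um spacing:

```python
import numpy as np

z_profile = np.array([0.0, 10.0, 0.0])    # intensity along Z, 1.52 um apart
old_z = np.arange(len(z_profile)) * 1.52  # physical positions of samples
new_z = np.arange(0.0, old_z[-1] + 1e-9, 1.0)  # resample every 1.0 um

# Linear interpolation (resample-style): values between samples are
# blended smoothly.
linear = np.interp(new_z, old_z, z_profile)

# Nearest-neighbour (downsample-style): each new position snaps to the
# closest original sample, so values are duplicated, not blended.
idx = np.clip(np.round(new_z / 1.52).astype(int), 0, len(z_profile) - 1)
nearest = z_profile[idx]

print(linear)   # smooth ramp up and back down
print(nearest)  # blocky copies of the original samples
```

As noted above, after blurring and spot detection the two usually give very similar results.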

Hard to answer without seeing any image. We downsample right at the light sheet microscope with this method because it’s fast, reduces noise and spares hard-drive space. I haven’t stored images in original resolution/pixel-size for two years now and I’m happy with it. Your IT-department may also be happy if you store 75 GB instead of 300 GB;-) Again: It depends on your image data.


Dear @haesleinhuepf,
Did you try deconvolving light-sheet datasets? Could you share some advice?
If I remember correctly what you told me in Porto (NEUBIAS school 2019), we can rotate the object in the light sheet and fuse the views. But that's impossible with my samples (they're too big for agarose embedding).

kind regards,


Hey @Alex.h,

I usually don’t deconvolve my data because I can solve my scientific riddles without it.

I would be happy to give you appropriate hints, but this is very hard without knowing what your data looks like and what you’re trying to achieve. Can you share some details?

Best,
Robert