3D segmentation of blurred line-scanning confocal images (deconvolution?)

Hi, this is my first post, I apologize in advance if I make a mistake in the formatting or content…

Sample image and/or code

Example images from the ImageJ ortho view

(note voxel dimensions are 0.108x0.108x1 microns).

Background

I have 3D z-stacks of organoids generated using an INCELL 6000 analyzer (INCELL 6000 overview), which uses a line scanning method to generate confocal images (Line scanning confocal overview).

I have a nuclei channel and a cell membrane channel. However, there is a substantial amount of blur in the nuclei channel, and I am having difficulty segmenting the nuclei into separate objects in 3D.

In the past I have used Huygens deconvolution software on laser scanning confocal images to correct for the PSF.
I am wondering whether deconvolution can be applied to line-scanning confocal images. I read a post on image.sc recommending that people upload example images to a public forum and ask whether deconvolution could help (here), so here I am!

Would this help to reduce the blur so that I can segment the nuclei?
If so, how would I calculate or measure the PSF of the INCELL 6000?

Are there any other techniques I could try?

Thank you for your help

Analysis goals

  • I would like to accurately segment individual nuclei in 3D from line-scanning confocal images.

Challenges

  • I am unable to accurately segment the nuclei into separate objects due to blur.
  • I have tried segmenting in 2D using CellProfiler's IdentifyPrimaryObjects module, and in 3D using its Threshold module followed by watershed. I have also tried training an ilastik model to distinguish in-focus from out-of-focus nuclei pixels in the z-stack slices. These attempts were unsuccessful.

Hi @NachoDave

The best way to get a PSF is to measure it from sub-resolution beads. Do you have such an image? If so, are you able to share it? There is a script in the Fiji script editor, under Templates->Deconvolution->Tools, that extracts the PSF. Here is a recent thread about extracting the PSF, with a couple of workarounds for common problems.

If you cannot get a bead image, we could try to create a theoretical PSF. After briefly reading the documentation about the line-scanning confocal, it seems the level of “confocality” can be adjusted. So depending on the level of confocality, the PSF is somewhere between widefield and confocal.

If you have the metadata (NA of the system, emission wavelength, voxel spacings), we can try to generate a widefield, a widefield-squared (an approximation of confocal), and a Gaussian PSF.

The generated PSF would be an approximation of the true PSF, however it may still result in an image which can be segmented. It’s worth a try.
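To illustrate what such a theoretical PSF might look like, here is a hypothetical sketch (not the actual imagej-ops implementation) that builds a 3D Gaussian approximation of a widefield PSF from NA, emission wavelength, and voxel spacing, with the squared version as a rough confocal stand-in. The sigma formulas are the common paraxial approximations, and the function name is made up for this example:

```python
import numpy as np

def gaussian_psf(shape, voxel_nm, na, wavelength_nm, ri=1.0, confocal=False):
    """Gaussian approximation of a widefield PSF.

    Uses the common paraxial estimates sigma_xy ~ 0.21*lambda/NA and
    sigma_z ~ 0.66*lambda*n/NA^2 (rough approximations only).
    Setting confocal=True squares the widefield PSF, a standard
    approximation of a confocal PSF.
    shape is (z, y, x); voxel_nm is the (z, y, x) voxel size in nm.
    """
    sigma_xy = 0.21 * wavelength_nm / na             # lateral sigma, nm
    sigma_z = 0.66 * wavelength_nm * ri / na ** 2    # axial sigma, nm
    zc, yc, xc = [(s - 1) / 2.0 for s in shape]
    z, y, x = np.indices(shape).astype(float)
    # physical distance of each voxel from the PSF centre, in nm
    dz = (z - zc) * voxel_nm[0]
    dy = (y - yc) * voxel_nm[1]
    dx = (x - xc) * voxel_nm[2]
    psf = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_xy ** 2)
                 - dz ** 2 / (2 * sigma_z ** 2))
    if confocal:
        psf = psf ** 2  # squared widefield ~ confocal
    return psf / psf.sum()  # normalize so the PSF sums to 1

# Parameters from this thread: NA 0.95, emission 461 nm, 108.3x108.3x1000 nm voxels
psf = gaussian_psf((9, 33, 33), (1000.0, 108.3, 108.3), 0.95, 461.0)
```

Note that with a 1 micron z-step the axial sigma is well under one voxel, which is itself a hint that the z-sampling is coarse relative to the optics.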

Are you able to share the original image?

Brian

Hi @bnorthan,

Thank you for getting back to me. We don’t have images of beads, but we are happy to get some. Are there any specific protocols we should follow?

In the meantime, can we try generating a PSF? The NA is 0.95, the emission wavelength is 461 nm, and the voxel size is 0.1083x0.1083x1 microns. There is no overlap between z-slices.

I can share the image, but it is too large to upload. Can I email you a OneDrive link?

Thanks again

Hi @NachoDave

You can e-mail me a link at bnorthan@gmail.com.

Brian

Hi @bnorthan

I’ve sent you a link. Please let me know if you have received it.

Thanks again

David


Hi @NachoDave

I got the image. It only had 18 z-slices. Is this expected? Is this the same as the image you showed above? It is hard to tell from the screenshot. It looks like there are more z-slices in the example you showed.

Brian

Hi @bnorthan

The original image had about 50 slices, but the organoid was not present in most of them, so I removed them. The image shown in the screenshot might be a different organoid, but it was imaged in the same run with the same settings. However, the image in the screenshot above also only had 17 or 18 slices.

Do you need the complete stack? I can upload both organoids with all z-slices.

Thanks

Hi @NachoDave

Yes more slices is better. In the image you sent it looked as if the objects were too close to the edge laterally to get good deblurring. Ideally you want several slices above and below the objects of interest.

Brian


Hi @bnorthan

I’ve shared two organoid images with you with the full z-stacks. Unfortunately, there are only empty slices above the organoid, as the z-stack is not deep enough to reach the bottom of it.

Thanks

David

Hi @NachoDave

Thank you for sending the images. I wrote a script that uses imagej-ops theoretical PSF, and CLIJ GPU Deconvolution to process your images. The script can be found here. You will need CLIJ and CLIJ2 installed to be able to run the script.

To try different theoretical PSF parameters, change the parameters at the top. There is a flag specifying “confocal”. If that flag is true, the PSF is squared (a squared widefield PSF is an approximation of a confocal PSF). In either case, I don’t think we have the exact PSF of your device, as it is probably only “partially” confocal, so it may still be a good idea to measure beads.
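For anyone curious what such a script does under the hood, the Richardson-Lucy update that CLIJ runs on the GPU can be sketched on the CPU in a few lines. This is a minimal, hypothetical NumPy/SciPy version for illustration only, not the actual CLIJ code, and it omits regularization and boundary handling:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=10, eps=1e-7):
    """Minimal Richardson-Lucy deconvolution (3D).

    Each iteration applies the multiplicative update:
      estimate *= (image / (estimate (x) psf)) (x) psf_flipped
    where (x) is convolution.
    """
    psf = psf / psf.sum()                  # PSF must sum to 1
    psf_mirror = psf[::-1, ::-1, ::-1]     # flipped PSF for the correlation step
    # start from a flat estimate with the image's mean intensity
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(reblurred, eps)  # eps guards division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

On noise-free data this update progressively concentrates blurred intensity back toward the underlying objects; on real, noisy data it amplifies noise as iterations increase, which is why the regularized (total variation) variant mentioned below is useful.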

Here are some XY projections of the results I obtained.

First the original ROI I used

Second deconvolved with widefield mode (confocal=false)

Third deconvolved with confocal = true

Finally confocal = true, with total variation regularization to reduce noise (this option is in development so isn’t currently in Fiji, but will be in a few weeks).

In my mind the last result is the best. I tried an auto-threshold on it, and it seemed to segment the nuclei a bit better than on the original, though still not optimally.

To get optimal deconvolution you probably need a measured PSF. Even with good deconvolution you will need a good segmentation pipeline (with watershed to separate objects) or a deep learning approach to separate the nuclei. Sounds like you already tried this, but maybe it would be worth it to report what you tried and how it failed, and others may have advice on how to improve the segmentation step.
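As a starting point for that segmentation step, a distance-transform watershed of the kind mentioned above might look like the following. This is a hypothetical minimal sketch using scikit-image; the function name, Otsu threshold, and footprint size are illustrative and would need tuning on real data:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def segment_nuclei_3d(volume, footprint_size=7):
    """Separate touching nuclei with a distance-transform watershed.

    1. Threshold the (ideally deconvolved) volume to get a foreground mask.
    2. Compute the Euclidean distance to the background.
    3. Seed one marker per local maximum of the distance map.
    4. Flood the inverted distance map from the seeds.
    """
    mask = volume > threshold_otsu(volume)
    distance = ndi.distance_transform_edt(mask)
    # seeds: voxels that are the maximum of the distance map in their neighborhood
    local_max = (distance == ndi.maximum_filter(distance, size=footprint_size)) & mask
    markers, _ = ndi.label(local_max)
    return watershed(-distance, markers, mask=mask)
```

For anisotropic voxels like these (0.108x0.108x1 microns), `distance_transform_edt` also accepts a `sampling` argument so that distances are computed in physical units rather than voxels, which is usually worth using here.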

Brian


Hi @bnorthan,

Thank you so much for doing this, I will try running the script myself and see if I can segment afterwards.

I have tried segmenting using both 2D and 3D watershed in CellProfiler, and I have also used ilastik to see if I can identify out-of-focus pixels. I'm not sure what deep learning approaches are available. Presumably you need a lot of segmented image data to train a model? Are there any pre-trained models you know of?

I’ll try segmenting again after deconvolution and see if I can get a better result, then ask again for further help!

Thanks again

David

Sorry to be pedantic, but the original ROI looks to be different from the deconvolved ROIs.

Thanks

You are right. It looks like I took a screenshot of the first slice of the ROI for the original, but the 6th slice for all the others. I’ve edited it so that all the ROIs should match now, but let me know if you still see any issues.

Thank you.

We’re going to try imaging beads. Do you have any recommendations on the bead size or any particular product? How closely do we need to match the excitation and emission wavelengths of our sample?

Would the following beads be suitable?

Hi @NachoDave

I work 100% on the digital signal processing side of things and not on the microscopy side, so I’m not qualified to give advice about the type of beads, other than that they should be smaller than the voxel size and produce as strong a signal as possible.

This would be a good question for the Microforum.

Brian


Hi @bnorthan ,

Thank you, I’ll give them a try.

David

Hello again,

I’ve tried running the script, but it doesn’t seem to change the image at all; all it does is convert it to 32-bit (I see you convert it to go on the GPU). There are no errors or warnings reported in the script editor console or the ImageJ console.

It is reporting a deconvolve time of 14161 (which I assume is milliseconds) on an RTX 2080 Ti GPU. Does this seem too short?

I haven’t changed any parameters and I’ve used the same files as I shared. The CLIJ2 test macro seems to run fine.

Any thoughts on what’s going on?

Thanks!

David

The deconvolve time is for 100 iterations.

How much memory is on the card? Does a smaller ROI work?

I recently fixed an issue that occurred when accessing memory locations > 4 GB. You could try copying the files in the link below into your installation (copy from the Windows directory into your Fiji.app, keeping the same folder structure). Also start Fiji from a terminal; there are a bunch of extra diagnostic messages sent to the terminal, and they might provide some clues.

The card has 11 GB.

I just tried with a smaller ROI… and success! Well, at least in that it did something to the image. Now I need to play with the parameters.

Do I need to add the Linux .so and .jar files to my lib and plugins directories, respectively (I’m running on Ubuntu)? Should I remove the old versions?

FYI, the output in the bash shell is as follows (for 10 iterations).

create memory for reblurred 0
create PSF FFT 0
create Object FFT 0
create program 0
build program 0
create KERNEL in GPU 0
create KERNEL in GPU 0
create Divide KERNEL in GPU 0
create Divide KERNEL in GPU 0
clfft setup 0
Create Default Plan 0
clfft precision 0
clfft set layout real hermittian interveaved 0
clfft set result location 0
clfft set instride 0
clfft set out stride 0
Bake 0
Finish Command Queue 0
clfft setup 0
Create Default Plan 0
clfft precision 0
clfft set layout real hermittian interveaved 0
clfft set result location 0
clfft set instride 0
clfft set out stride 0
Bake 0
Finish Command Queue 0
nFreq 140097600 glbalItemSizeFreq 140098624
FFT of PSF 0
correlate 0
Finished iteration 0
correlate 0
Finished iteration 1
correlate 0
Finished iteration 2
correlate 0
Finished iteration 3
correlate 0
Finished iteration 4
correlate 0
Finished iteration 5
correlate 0
Finished iteration 6
correlate 0
Finished iteration 7
correlate 0
Finished iteration 8
correlate 0
Finished iteration 9
Deconvolve time 10945