How do I detect points of interest when reconstructing SPIM data using MVR

Hi everyone,

I’m new to microscopy image reconstruction. I am at the point of defining points of interest on the XML data set, as described here (https://imagej.net/Multiview-Reconstruction), for multiview reconstruction (MVR). However, the link to the information on how to define these points of interest is broken. I was hoping anyone could give me a lead on this?

The acquisition was done without beads.

Thank you.

Gideon

Hi,

I think the link never worked :wink:

But there is a video from Stephan that describes the Multiview Reconstruction on that same page.

Here is also a video protocol for reconstructing multiview stacks of zebrafish: https://www.jove.com/video/53966/using-light-sheet-fluorescence-microscopy-to-image-zebrafish-eye

Thank you so much :raised_hands:

Hi Schmiedc,

So I am working on image reconstruction now using the automated workflow for cluster processing described here: https://imagej.net/Automated_workflow_for_parallel_Multiview_Reconstruction

I’m quite unsure about a few things; sorry if my questions are too basic.

  1. The CUDA libraries whose path I should specify under the software dependencies of processing (in the config.yaml file, section 7 under software directories) do not seem to be in the MVR files I downloaded. Should I get these from the cluster admin?

  2. I understand that the Fiji resources specified in the config.yaml file should be the same as those specified in the cluster.json file. However, when I downloaded the MVR files and looked at the cluster.json file, I had two concerns:

First, it appears there are two possible sets of commands per processing step (defined in the cluster.json file), unlike in the information on the web page above. The HPC I intend to use runs SLURM, so should I stick to just the SLURM commands and delete the LSF commands, or keep both? I have attached screenshots (the contents of the cluster.json file I downloaded, the one specified on the web page above, and the Fiji resource settings in the config.yaml file).

Secondly, when confirming that the number of cores specified in the Fiji resource settings (in the config.yaml file) and in the cluster.json file are the same, should I consider the LSF or the SLURM syntax, or both? The web page above gives this information only for the LSF commands, and for the timelapse registration step the number of cores does not match that in the Fiji resource settings (also on the web page).

  3. On the file name format img_TL{{t}}_Angle{{a}}.tif: can you confirm that a good example of a file name following this format is img_TL1_Angle60.tif, i.e. for timepoint 1 and a 60 degree view? Or should the timepoint be zero padded, i.e. img_TL01_Angle60.tif?

  4. On specifying the channel in the config.yaml file: we used one channel for acquisition (green). The data was acquired as ome.tif (on a home-made SPIM) and I converted it to .tif. Should I specify the channel as ‘green’ or as ‘0’, given that we converted from ome.tif to tif?

Thank you.

Gideon

Fiji resource settings in the config file

Hi,

Thanks for writing me, and welcome to the world of parallel image processing.

For your questions:

1. CUDA libraries

The CUDA libraries for either the Difference-of-Gaussian detection or the deconvolution need to be compiled for your specific OS and GPU hardware. The process for generating the libraries for the deconvolution is described here: https://c4science.ch/w/bioimaging_and_optics_platform_biop/image-processing/deconvolution/cuda-deconvolution/

Help from an IT person would be advisable here.

2. Cluster Resources

You can specify which queuing system you use when executing snakemake. How these settings can be defined is described here: https://snakemake.readthedocs.io/en/stable/snakefiles/configuration.html

What this boils down to is that snakemake uses the correct queuing settings for each job via the flags of your queue system. You do not necessarily need to delete the LSF commands; you just need to specify which settings you want to use in the call to snakemake. For SLURM you would then call:

/path/to/snakemake/snakemake -j 2 -d /path/to/data/ --cluster-config ./cluster.json --cluster "sbatch {cluster.slurm_extra}"

The Fiji resources in the config.yaml need to match the resource settings you ultimately use, so in your case the SLURM settings. The syntax in each is specific to Fiji or SLURM: “20g” in Fiji corresponds to “--mem-per-cpu=20000” in SLURM. If they do not match in the posted examples, that is a mistake. The SLURM queuing settings were only added after I left, since the cluster was upgraded then, and we did not match them to the default config.yaml.
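To illustrate how both queuing systems can live side by side, here is a sketch of what a cluster.json entry might look like. This is an assumption, not the actual file contents: only slurm_extra is confirmed by the snakemake call above, and the lsf_extra key name and resource values are made up for illustration. snakemake only substitutes the keys referenced in the --cluster string, so the unused set is simply ignored.

```json
{
    "__default__": {
        "lsf_extra": "-n 4 -R \"rusage[mem=20000]\"",
        "slurm_extra": "--cpus-per-task=4 --mem-per-cpu=20000"
    }
}
```

With this layout, switching from LSF to SLURM only requires changing the placeholder in the --cluster string, not editing the file itself.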

3. Naming pattern

All patterns should be fine for the workflow as long as you specify the correct pattern in the inputs. So for timepoints with a padding of 3, i.e. img_TL001_Angle60.tif, specify img_TL{{ttt}}_Angle{{a}}.tif, since a padding of 3 = ttt.

The padding is just nice when you want the timepoints ordered in the terminal or file browser; it serves no other purpose.
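As a small illustration of how the pattern maps to concrete file names, here is a hypothetical helper (not part of the MVR workflow; the function name is made up) where the padding argument corresponds to the number of t’s in the pattern:

```python
# Hypothetical helper illustrating the img_TL{ttt}_Angle{a}.tif pattern.
# t_padding is the number of t's in the pattern (3 -> {ttt}, 1 -> {t}).
def spim_filename(timepoint, angle, t_padding=3):
    """Build a zero-padded SPIM file name for one timepoint and angle."""
    return f"img_TL{timepoint:0{t_padding}d}_Angle{angle}.tif"

# Padding of 3, pattern img_TL{ttt}_Angle{a}.tif:
print(spim_filename(1, 60))               # img_TL001_Angle60.tif
# No padding, pattern img_TL{t}_Angle{a}.tif:
print(spim_filename(1, 60, t_padding=1))  # img_TL1_Angle60.tif
```

Whichever padding you pick, it just has to be consistent between the actual file names on disk and the pattern you enter in the config.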

4. Channel name

If your data is single channel, then this setting is not important and you can set it however you like.

If you have any other questions, do not hesitate to write. I can put you in touch with the admin people at the institute where I created this. Since I no longer have access to a cluster, it is a bit hard for me to troubleshoot things.

Hi Schmied,

Thank you very much! I’ll get in touch with you and will let you know how it goes.

Gideon