Hi everyone, I just finished setting up a Jupyter notebook server. I can run the tutorials on it, and I just found out how to load pictures in the Zeiss .czi format. Thing is, I don’t know how to process my confocal Z-stack pictures (3 colors + DIC).
I see a lot of Ops dealing with mathematical processing of binary/RGB pictures and transformation, but I am at a loss about how to do the following operations:
splitting an image into different channels
assigning a LUT to a channel
merging channels together
extracting a substack (several sections, different channels)
making maximum intensity projections (Z project)
displaying picture metadata
combining ops and applying them to an iterable object (list of image files)
Could you please point me to the relevant documentation?
Thanks for your help and your work on those notebooks!
Many of the ops are now documented (with a Jupyter notebook each) in imagej/tutorials, specifically in the notebooks/1-Using-ImageJ/Ops folder (thanks to the great effort of @gselzer, @ctrueden and others):
I’ll try to link to an example, or provide short code snippets, for each of your questions:
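For instance, something along these lines should cover the splitting, substack and projection questions. This is only a sketch: it assumes `img` is an already-opened 4D image with axes ordered X/Y/channel/Z, and the op and method names follow the current tutorials, so they may differ slightly between versions:

```groovy
import net.imglib2.FinalDimensions
import net.imglib2.view.Views
import net.imglib2.type.numeric.integer.UnsignedByteType

// split channels: fix the channel axis (dimension 2 here) at a position
ch0 = Views.hyperSlice(img, 2, 0) // first channel
ch1 = Views.hyperSlice(img, 2, 1) // second channel

// substack: restrict the image to an interval (per-dimension min and max bounds)
sub = Views.interval(img, [0, 0, 0, 2] as long[], [511, 511, 2, 5] as long[])

// maximum intensity projection along Z (dimension 3 here):
// project each Z column through a stats.max computer
zDim = 3
out = ij.op().create().img(
    new FinalDimensions(img.dimension(0), img.dimension(1)),
    new UnsignedByteType())
maxOp = ij.op().op("stats.max", img)
ij.op().transform().project(out, img, maxOp, zDim)
```

The `Views` calls are lazy views (no pixel data is copied), so chaining a crop, a channel split and a projection stays cheap even on large stacks.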
Hi, thanks for your answers. I did take note of the very complete Ops folder, which is very nice to have.
I do still have some issues:
the transform/hyperSliceView.ipynb notebook throws an error (pastebin) when I run it with the Fiji classpath options suggested for Bio-Formats compatibility in 1-Using-ImageJ/1-Fundamentals.ipynb#Bio-Formats, i.e.:
%classpath config resolver imagej.public https://maven.imagej.net/content/groups/public
%classpath add mvn sc.fiji fiji 2.0.0-pre-9
ij = new net.imagej.ImageJ()
This is a problem, but I can avoid it by starting the ImageJ gateway from a locally installed, up-to-date Fiji installation, as described in the commented section of 1-Fundamentals.
You addressed my other questions about Z-projections etc. Thank you! Applying a LUT is now the last big remaining issue for me.
For metadata, I meant some Zeiss .czi-specific details like the image scale, stage position, laser powers used, etc. I will do my own digging with SCIFIO. I did notice that I can call ij.notebook().methods() on my picture to get some useful details… EDIT: I can access a large bunch (~5000 entries) of relevant Zeiss info by using the following code. I wish I could get it in some OME format…
If you want to use a custom LUT, make one with new net.imglib2.display.ColorTable8(myArrayOfBytes).
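As a concrete sketch, here is one way to build a magenta ramp and attach it to the first channel of an open Dataset (`dataset` is assumed to be a net.imagej.Dataset; `initializeColorTables`/`setColorTable` come from the ImgPlus metadata API):

```groovy
import net.imglib2.display.ColorTable8

// build a 256-entry magenta LUT: red and blue ramp up, green stays at zero
byte[] ramp = new byte[256]
byte[] zeros = new byte[256]
for (i in 0..255) ramp[i] = (byte) i
magenta = new ColorTable8(ramp, zeros, ramp)

// attach the LUT to channel 0 of the dataset
dataset.initializeColorTables(1)
dataset.setColorTable(magenta, 0)
```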
If you have an image that is not a net.imagej.Dataset, then you’ll have to wrap it as one first with ij.dataset().create(myImgPlus), where myImgPlus is a net.imagej.ImgPlus. If you have an image that is not an ImgPlus, such as a raw net.imglib2.RandomAccessibleInterval, then you have to wrap it first as an Img, and then as an ImgPlus.
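The whole wrapping chain looks roughly like this (a sketch assuming `rai` is a RandomAccessibleInterval of UnsignedByteType; the typed ArrayImgFactory constructor is from recent ImgLib2 versions):

```groovy
import net.imagej.ImgPlus
import net.imglib2.img.ImgView
import net.imglib2.img.array.ArrayImgFactory
import net.imglib2.type.numeric.integer.UnsignedByteType

// RandomAccessibleInterval -> Img (a view, no copy)
img = ImgView.wrap(rai, new ArrayImgFactory(new UnsignedByteType()))

// Img -> ImgPlus (adds name/axis metadata)
imgPlus = new ImgPlus(img, "wrapped")

// ImgPlus -> Dataset, usable by the rest of the gateway
dataset = ij.dataset().create(imgPlus)
```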
Some small changes to the core libraries could improve a lot of the above rigamarole.
Thanks for all your answers; this will allow me to do most of what I want for now. I am very grateful not to have to apply the same ‘adjust channels / crop / project’ workflow on a couple hundred pictures manually, and to be able to use the Table view to look at many pictures together.
At a later point I would be interested in more tutorials about adjusting brightness/contrast/gamma and dealing with ROIs. ImageJ1 should already allow me to do some of that.