How can I work with channels/zstacks in the ImageJ2 Jupyter notebooks?

Hi everyone, I just finished setting up a Jupyter notebook server, I can run the tutorials on it, and I just found out how to load images in the Zeiss .czi format. Thing is, I don’t know how to process my confocal Z-stack pictures (3 colors + DIC).

I see a lot of Ops dealing with mathematical processing of binary/RGB pictures and transformation, but I am at a loss about how to do the following operations:

  • splitting an image into different channels
  • assigning a LUT to a channel
  • merging channels together
  • extracting a substack (several sections, different channels)
  • making a maximum intensity projection (Z project)
  • displaying picture metadata
  • combining ops and applying them to an iterable object (a list of image files)

Could you please point me to the relevant documentation?
Thanks for your help and your work on those notebooks!


Many of the ops are now documented (with a Jupyter notebook each) in imagej/tutorials, specifically in the notebooks/1-Using-ImageJ/Ops folder (thanks to the great effort of @gselzer, @ctrueden and others):

I’ll try to link to an example, or provide short code snippets, for each of your questions:

This can be done with the transform.hyperSliceView op (cut out one “slice” in the Channel dimension):

output = ij.op().run("hyperSliceView", input, dimension, pos)
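For example, to split every channel out of a multi-channel image, you can look up the channel axis and take one hyperslice view per channel. A minimal sketch, assuming `image` is a `net.imagej.Dataset` with a channel axis and `ij` is an already-created gateway:

```groovy
import net.imagej.axis.Axes

// index of the channel dimension (assumes the image has one)
chIndex = image.dimensionIndex(Axes.CHANNEL)
nChannels = image.dimension(chIndex)

// one view per channel; these are views, so no pixel data is copied
channels = (0..<nChannels).collect { c ->
    ij.op().run("hyperSliceView", image, chIndex, (long) c)
}
```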

… don’t know an easy example here, maybe others can help …


Since channels are just a dimension as any other dimension, you can stack together your single channels into a multi-channel image using the transform.stackView op:

stack = ij.op().run("stackView", [channel1Image, channel2Image])

Use transform.crop with a suitable min and max for each dimension:

import net.imglib2.FinalInterval
interval = FinalInterval.createMinMax(dim1min, dim2min, dim3min, dim1max, dim2max, dim3max)
cropped = ij.op().transform().crop(input, interval, true)

Combine stats.max with transform.project.
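A sketch of a maximum-intensity Z projection along those lines, assuming `image` is a 3-D X/Y/Z image and that Z is the third dimension (adjust `zDim` for your axis order):

```groovy
import net.imglib2.FinalDimensions

// empty 2-D output image matching the X/Y extents of the input
projected = ij.op().create().img(
    new FinalDimensions(image.dimension(0), image.dimension(1)),
    image.firstElement())

// stats.max reduces each column along the projected dimension to its maximum
maxOp = ij.op().op("stats.max", image)
zDim = 2  // assuming Z is dimension 2
ij.op().run("transform.project", projected, image, maxOp, zDim)
```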


What metadata do you mean?

  • ImgLib2 doesn’t carry a lot of metadata with images, you have to keep track of metadata on your own.
  • If you have an ImageJ2 ImgPlus, you can query for its numDimensions(), the axis types (axes()), averageScale() and other getProperties().
  • For other metadata depending on the file format, you might need to turn to SCIFIO (e.g. via DatasetIOService) to directly query for the format-specific metadata.
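For the ImgPlus route, a minimal sketch of querying those properties, assuming `imgPlus` is a `net.imagej.ImgPlus` (e.g. obtained via `dataset.getImgPlus()`):

```groovy
println imgPlus.numDimensions()
(0..<imgPlus.numDimensions()).each { d ->
    // axis type (X, Y, Channel, Z, ...), size, and calibration per dimension
    println "${imgPlus.axis(d).type()}: size ${imgPlus.dimension(d)}, scale ${imgPlus.averageScale(d)}"
}
println imgPlus.getProperties()  // free-form key/value metadata map
```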

That’s also something I’d be interested in. I’ll defer to @ctrueden here: is there some documentation on how to chain ops and apply them in batch?


Hi, thanks for your answers. I did take note of the very complete Ops folder, which is very nice to have.
I do still have some issues:

  • the transform/hyperSliceView.ipynb notebook throws an error (pastebin) when I run it with the Fiji classpath options suggested for Bio-Formats compatibility (1-Using-ImageJ/1-Fundamentals.ipynb#Bio-Formats), i.e.:
%classpath config resolver imagej.public https://maven.imagej.net/content/groups/public
%classpath add mvn sc.fiji fiji 2.0.0-pre-9
ij = new net.imagej.ImageJ()

This is a problem, but I can avoid it by starting the ImageJ gateway from a locally installed, latest Fiji installation as described in the commented section of 1-Fundamentals.

  • you addressed my other questions about Z projections etc. Thank you! Applying a LUT is now the last big remaining issue for me.

  • For metadata, I meant some Zeiss .czi specific details like the image scale, stage position, laser powers used etc… I will do my own digging with SCIFIO. I did notice that I can call ij.notebook().methods() on my picture to get some useful details…
    EDIT: I can access a large batch (~5000 entries) of relevant Zeiss info by using the following code. I wish I could get it in some OME format…

formatBF = ij.scifio().format().getFormat(string_absolute_path)
parser = formatBF.createParser()
metadata = parser.parse(string_absolute_path)
metadata.getTable()

EDIT v2 Found what I wanted with @ctrueden 's gist:

import io.scif.ome.OMEMetadata
globalMeta = ij.scifio().initializer().parseMetadata(string_absolute_path)
// Convert globalMeta above to omeMeta
omeMeta = new OMEMetadata(ij.context())
ij.scifio().translator().translate(globalMeta, omeMeta, true)
omexml = omeMeta.getRoot()
omexml.dumpXML()

Thanks for the prompt replies (even more so on a Sunday)


The situation with LUTs in a BeakerX Groovy notebook is kind of stupid right now, sorry. Here is some code that applies a LUT to an image and displays the result:

image = ij.io().open("/path/to/my/image.tif")
imageView = ij.imageDisplay().createDataView(image)
imageView.rebuild()
imageView.setColorTable(net.imagej.display.ColorTables.FIRE, 0)
imageView.rebuild()
imageView

If you want to use a custom LUT, make one with new net.imglib2.display.ColorTable8(myArrayOfBytes).

If you have an image that is not a net.imagej.Dataset, then you’ll have to wrap it as one first with ij.dataset().create(myImgPlus), where myImgPlus is a net.imagej.ImgPlus. If you have an image that is not an ImgPlus, such as a raw net.imglib2.RandomAccessibleInterval, then you have to wrap it first as an Img, and then an ImgPlus.
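A sketch of that wrapping chain, assuming `rai` is a `RandomAccessibleInterval` of a native type such as `UnsignedByteType` (the name "my image" is an arbitrary placeholder):

```groovy
import net.imglib2.img.ImgView
import net.imglib2.img.array.ArrayImgFactory
import net.imglib2.type.numeric.integer.UnsignedByteType
import net.imagej.ImgPlus

// wrap the RandomAccessibleInterval as an Img (a view; no pixels copied)
img = ImgView.wrap(rai, new ArrayImgFactory(new UnsignedByteType()))
// then as an ImgPlus, then as a Dataset that the display machinery accepts
imgPlus = new ImgPlus(img, "my image")
dataset = ij.dataset().create(imgPlus)
```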

Some small changes to the core libraries could improve a lot of the above rigamarole.


Thanks for all your answers, this will allow me to do most of what I want for now. I am very grateful not to have to apply the same ‘adjust channel/ crop / project’ workflow on a couple hundred pictures manually, and to be able to use the Table view to look at many pictures together.

At a later point I would be interested in more tutorials about adjusting brightness/contrast/gamma and dealing with ROIs. ImageJ1 should already allow me to do some of that.
