The recordings and slides of the two webinar sessions "Introduction to KNIME for Image Processing", held in the scope of the NEUBIAS Academy, are now available here:
Thank you for attending the webinar, and also big thanks to the organizers and panelists (@RoccoDAntuono, @Julien_Colombelli, @romainGuiet, @aklemm and Franka Voigt) at NEUBIAS Academy for having us: Keep up the amazing work for the community!
- Installation / Extensions
- Technical Issues / User Interface
- Getting Help
- Workflow Control
- Data Input / Output
- (Multi-dimensional) Image Manipulation
- Deep Learning
I do not have the Image Reader node… how can I get it?
You have to install KNIME Image Processing. One way: go to KNIME Image Processing – KNIME Hub and drag and drop the icon from the header onto your KNIME window. This will trigger the installation process.
How do I install the ImageJ extension after starting the software?
On the page of the extension on KNIME Hub, you have a node icon in the small grey area below the header: Take this node and drop it onto your KNIME window. You’ll have to restart your KNIME application after the installation of new extensions.
What is in the BigDataViewer tab?
It is an integrated BigDataViewer panel as an alternative image viewer. If you have attended the NEUBIAS Academy seminars in January/February 2021, you should have heard quite a bit about BigDataViewer already, otherwise you can rewatch https://youtu.be/LHI7vXiUUms.
Do you need to install the programs used on your computer (e.g. ImageJ)? And if you then share the workflow, will the other person know which ImageJ you used and which plugins were installed?
Native KNIME Image Processing nodes work out of the box and standalone. In case you are using the ImageJ Integration, a pretty minimal ImageJ installation is shipped. If you stick to this configuration, others should just make sure to run the same KNIME version. You can, however, also point the integration at a local ImageJ/Fiji installation on your machine. In that case, you have to manually make sure that the other person uses the same installation for workflow execution.
Why is there only 1GB RAM available on the bar?
You can configure the maximum amount of memory in the knime.ini file.
In addition, the memory is allocated dynamically, and the max value of the heap bar shown at the bottom will adapt once the memory usage approaches it (within the limit of the preference setting).
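For example, to raise the limit to 8 GB, edit the `-Xmx` line in the `knime.ini` file (located in the KNIME installation directory) and restart KNIME:

```
-Xmx8g
```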
I tried to connect to OMERO using the KNIME OMERO Reader node (5.2), but the login failed. Is this node compatible with the latest version of OMERO.server (5.6.3)?
An OMERO integration is available but is outdated at the moment. You can install a newer (not thoroughly tested) version of the OMERO Integration if you manually activate the Nightly Software Site of the Community Extensions: Community Extensions | KNIME.
Should the conda env be inside the defined KNIME workspace, or outside?
KNIME leverages a local Anaconda installation. That is, the environments created from within KNIME will be regular conda environments and are therefore located in the default conda folder (set in the preferences under KNIME > Python).
How do I change the size of the Row ID header?
You can increase the font size by pressing Ctrl + + (Cmd + + on macOS). Otherwise, you can drag the column borders to change the sizes.
Stupid question - I have KNIME open but all the windows (explorer, console, repository etc.) are all squished to the left half. I can’t seem to resize to take up the full KNIME window. Anyone else have this problem?
If all of your views are squashed into one half of your screen, the quickest solution is to go to File > Switch Workspace and select a new directory as workspace.
I would like to see how to use a slider to adjust node parameters interactively. For example the min segment area in the labeling filter.
That is currently not possible in an interactive manner. You can, however, run an automatic hyperparameter optimization with respect to an optimization goal.
Why is the substring that starts at -3 for a length of 2 still “ab”? Does it start at 0 as negative indices do not exist?
That is an interesting observation: The documentation for the substr() method says “A negative value of start is treated as zero.” – which explains what you are describing.
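The documented behavior can be sketched in a few lines of plain Python (a minimal re-implementation of the stated semantics, not the node's actual code):

```python
# Sketch of the documented substr() semantics in the String Manipulation
# node: a negative start index is treated as zero.
def substr(s, start, length):
    start = max(start, 0)  # "A negative value of start is treated as zero."
    return s[start:start + length]

print(substr("abcdef", -3, 2))  # "ab" -- same as substr("abcdef", 0, 2)
```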
It’s not super important, but is there something like “align nodes” to order the workflow? My inner monk is itching
The two icons with two vertical or horizontal squares (top left of the GUI, close to the zoom control) will help you tidy up your workflow.
What is the best way to contact KNIME developers?
Since KNIME is not only about image processing, they’re not joining the image.sc forum as a partner. Anyhow, for image-related questions about KNIME, we recommend using image.sc with the #knime tag. Several KNIME employees as well as KNIME users also watch that forum, and there’s a better chance to get a reply specific to the image analysis domain.
Questions that touch KNIME core functionality can still be linked/forwarded to the KNIME forum if necessary.
Does the KNIME node zoo encompass functions for communicating data to other machines through a network (e.g. feedback microscopy application)?
KNIME itself doesn’t support direct control of experimental devices. We have, however, recently worked on a proof of concept to integrate the SiLA 2 standard. By integrating this lab automation standard, we were able to show that it’s possible to retrieve data from devices and also send computation results back. You can find additional details in a recently published blog post.
Any chance that I could use my own .ijm macros as nodes in KNIME?
Not for macros directly.
- Option 1: Use the ImageJ1 Macro node, include your macro, and make it a Component node that you can share, e.g. via the KNIME Hub;
- Option 2 (advanced): Convert your macro into an ImageJ2 (Java) plugin that will get auto-converted into a node (see fmi-ij2-plugins for some examples).
How do I make a node from a script, if that’s possible?
In general, you can use for example a Python Script node (see KNIME Python Integration Guide for documentation around it). This is also possible with R and Java (granted, not really a script anymore).
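For illustration, the kind of row-wise logic you would write in such a Python Script node can be sketched in plain Python. The column names are made up, and the commented `knio` lines assume the scripting API of newer KNIME versions:

```python
# Sketch of logic for a KNIME Python Script node. The transformation is
# plain Python, shown here on a list of dicts standing in for a table.
def add_area_ratio(rows):
    """Append a derived column to each row (hypothetical column names)."""
    out = []
    for r in rows:
        r = dict(r)
        r["area_ratio"] = r["nucleus_area"] / r["cell_area"]
        out.append(r)
    return out

rows = [{"nucleus_area": 50.0, "cell_area": 200.0}]
print(add_area_ratio(rows)[0]["area_ratio"])  # 0.25

# Inside the node it would look roughly like this (not executed here;
# API names depend on your KNIME version):
# import knime.scripting.io as knio
# df = knio.input_tables[0].to_pandas()
# df["area_ratio"] = df["nucleus_area"] / df["cell_area"]
# knio.output_tables[0] = knio.Table.from_pandas(df)
```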
Is there a listener node that waits for image data arriving in a given directory and processes it on the fly?
There is a Wait… node that allows you to wait for some time or for an event on the file system (file creation, for instance). But the general idea is that the data is available before processing is triggered.
Can you show an original image with its split channels in the same table? Would it require a duplication or is there a simpler solution?
You can either append the results (which keeps the result around) or you can join the results together afterwards.
To optimize a function like thresholding, is there a way to automatically iterate/loop the function with different values for a particular parameter? For example local thresholding by varying the radius over a range with a certain step.
Good question! There actually is a component that helps you with this task (see https://kni.me/c/A_91QC387NtvJ6g8). But you can also use a loop construct that allows you to optimize the parameters (see https://kni.me/n/UiUFT6HMsaqGweoq).
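The idea behind such a loop can be sketched in plain Python: sweep the parameter over a range with a step and keep the value that maximizes some quality score (the metric below is a toy stand-in, not part of any KNIME node):

```python
# Hedged sketch of a parameter sweep: vary a hypothetical local-threshold
# radius over a range with a step and keep the best-scoring value.
def quality(radius):
    """Stand-in for a real metric, e.g. overlap with a ground-truth mask."""
    return -(radius - 15) ** 2  # toy score peaking at radius = 15

radii = range(5, 31, 5)  # 5, 10, 15, 20, 25, 30
best_radius = max(radii, key=quality)
print(best_radius)  # 15
```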
Was it on purpose that Jan cross-connected the nodes (top/bottom)?
The unique columns of the “upper” input of the Joiner will appear first in the output table.
There is no meaning to "crossing" connections in a workflow. Take care to connect the right tables to the right ports, though: the order of the ports does matter!
Is there a recommended way of doing version control for workflows?
Versioning (or creating snapshots of) workflows is a feature of KNIME Server (requiring a license), since it acts as a central repository. In general, you can also use Git for version control of your workflows, but make sure to check in your workflow in reset state (right-click the workflow in the KNIME Explorer to do this quickly), or configure your repository with a suitable .gitignore file. Otherwise, you will end up with temporary data in your Git repository, which blows up quite quickly in the case of images.
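A starting point for such a .gitignore could look like this (the directory names used for node data are an assumption; inspect a saved, executed workflow to confirm them for your KNIME version):

```
# temporary/executed node data inside workflow folders
**/port_*/
**/internal/
**/filestore/
.metadata/
```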
Can you read multiple .csv files with one node and filter all files based on the same parameter (e.g. range values)? Or do you need multiple CSV Reader nodes?
You can immediately read in multiple CSV files with the CSV Reader node by defining which files you’d like to import with a filter expression. The output table contains all values from the files just like it would if coming from a single file.
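In plain Python, that read-then-filter behavior looks roughly like the sketch below (file names, the `area` column, and the range bounds are all made up for illustration):

```python
# Sketch: read every CSV matching a glob pattern into one table, then
# filter all rows on the same parameter (a numeric range).
import csv
import glob
import os
import tempfile

def read_all(pattern):
    """Read all matching CSV files into one list of dicts."""
    rows = []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            rows.extend(csv.DictReader(f))
    return rows

def in_range(rows, column, lo, hi):
    """Keep rows whose numeric value in `column` lies in [lo, hi]."""
    return [r for r in rows if lo <= float(r[column]) <= hi]

# Tiny demo with two temporary CSV files
with tempfile.TemporaryDirectory() as d:
    for name, vals in [("a.csv", [5, 50]), ("b.csv", [75, 500])]:
        with open(os.path.join(d, name), "w", newline="") as f:
            w = csv.writer(f)
            w.writerow(["area"])
            w.writerows([[v] for v in vals])
    rows = read_all(os.path.join(d, "*.csv"))
    kept = in_range(rows, "area", 10, 100)
    print([r["area"] for r in kept])  # ['50', '75']
```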
Can one filter a CSV based on information from another CSV?
Yes, you can change the configuration of nodes based on the results obtained from other nodes. The concept that we use for this are so-called flow variables (very much like variables in other programming languages). See the KNIME Flow Control Guide for more information.
Is there a node to download images from Zenodo? From OMERO?
There is no dedicated node for downloading from Zenodo but you can use the GET Request node not only to access APIs, but also to download files. This example shows how that can be done if the download URL is already known. In case it’s not known, the Zenodo API would have to be used (which requires an access token AFAIK). An OMERO integration is available (https://kni.me/e/o-GDQ3EBR53N3qyY) but is outdated at the moment. You can install a newer (not thoroughly tested) version of the OMERO Integration if you manually activate the Nightly Software Site of the Community Extensions: Community Extensions | KNIME.
In ImageJ, we always have to duplicate images to keep an unprocessed version of the working image. In the case of KNIME, do the nodes serve the same function as duplicating, or do they alter the initial file?
KNIME Image Processing nodes usually allocate new memory for the results, independent of the choice of 'Creation Mode' (New Table, Append, or Replace). That makes sure that you never inadvertently change image data that's used by other nodes. (There can be exceptions if you use e.g. the Java Snippet node to execute your own image-processing code using ImgLib2 to manipulate data in place.)
An important implication of this is that workflows can get quite large when they contain many nodes and are saved with data (in executed state). Reset your workflow before saving to avoid long saving times (and/or take a look at the Don't Save nodes: https://kni.me/w/mw7mlY9frej5rsw8).
Connected-component analysis is a somewhat strange way to segment nuclei. Is there a standard watershed segmentation node? I could only find a seeded segmentation node.
You can use ImageJ1’s Watershed implementation. Native implementations are available as well: Seeded Watershed (example: https://kni.me/w/bcxWQyRdpnzgvPXd) or the Waehlby Cell Clump Splitter (example: https://kni.me/w/WrUSmzAGeBOzCYSZ).
Can I run CLIJ using the macro command?
I have never tried. But since it’s pretty specific, you might run into some technical challenges.
The ImageJ Macro binarized the image, even though it is binary already. Is that required?
Good point. You don’t actually have to binarize in the ImageJ macro. The ImageJ-legacy layer takes care of the conversions for you in the background.
For the ImageJ Macro, I suppose the image data gets copied, or is ImageJ using a view on the image data?
Yes, the intensity values get copied over into an ImageJ1 image (see implementation).
Is there a headless ilastik node?
We do have an ilastik integration. It does not cover all the project types that ilastik natively supports, though.
I did triple labeling with the borders of the cells, nuclei, and a histone-modification antibody. In some cases the analysis works very well with DRPPP plus other plugins.
That’s a use case worth being discussed on the image.sc forum. We’re happy to follow up here on the forum in a separate thread.
How would you input multiple .tif per image (i.e. each .tif represents one channel of the image)?
You would first read all the .tif files from a folder into a table, and then group the single images such that all channels belonging to the same image are in the same group.
In order to build these groups, you will have to derive the common part of the individual channel images' file names, using e.g. a String Manipulation node.
You can then create multi-channel images using the GroupBy node, in two possible ways:
- As aggregation method, choose Merge Image, and set the new axis label (e.g. Channel);
- Choose a List aggregation (to create a list of the channel images) and subsequently use a Split Collection Column node. This will put the channels in different columns; you can then use the Merger node to generate the multi-channel images.
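The grouping step above can be sketched in plain Python: strip a channel suffix from each file name to get the group key. The file names and the `_ch1`-style suffix pattern below are hypothetical; adapt the regex to your naming scheme:

```python
# Group per-channel files by the common part of their file names.
import re
from collections import defaultdict

files = ["img01_ch1.tif", "img01_ch2.tif", "img02_ch1.tif", "img02_ch2.tif"]

groups = defaultdict(list)
for name in files:
    key = re.sub(r"_ch\d+\.tif$", "", name)  # drop the channel suffix
    groups[key].append(name)

print(dict(groups))
# {'img01': ['img01_ch1.tif', 'img01_ch2.tif'],
#  'img02': ['img02_ch1.tif', 'img02_ch2.tif']}
```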
One question about the result of the assignment workflow: can the individual segments also be represented as "well-known text" polygons instead of those little bitmasks? That would be helpful for portability… So a single segment would look like this: POLYGON ((213.6358642578125000 326.0000000000000000, 213.3648376464843750 326.6693115234375000, 212.9915313720703125 327.2391357421875000, … ))
Segments are not necessarily always polygons (for segmentations other than connected-component analysis, they can be disconnected); the FMI KNIME Plugins update site provides a node (Get Polygon Points from 2D Binary Mask) to convert segment outlines to a list of polygon coordinates.
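Once you have a list of outline coordinates (e.g. from that node), turning them into a WKT POLYGON string is simple formatting. A sketch with a made-up coordinate list:

```python
# Format a list of (x, y) outline points as a WKT POLYGON string.
def to_wkt_polygon(points):
    """points: list of (x, y); WKT requires the ring to be closed."""
    if points[0] != points[-1]:
        points = points + [points[0]]
    ring = ", ".join(f"{x} {y}" for x, y in points)
    return f"POLYGON (({ring}))"

print(to_wkt_polygon([(0, 0), (4, 0), (4, 3)]))
# POLYGON ((0 0, 4 0, 4 3, 0 0))
```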
Is the threshold for 3D(xyz) segmentation calculated from the histogram of the whole stack, or from each slice individually?
This depends on how you configure the dimensions in the node. If you have a 3D image but only select x and y, it will be processed slice-wise. If z is additionally selected, the histogram is taken from the entire stack.
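A toy illustration of the difference, using a simple mean threshold on a two-slice "stack" of 2x2 pixels (nested lists standing in for a real image; the actual nodes use proper thresholding methods):

```python
stack = [
    [[10, 10], [10, 10]],   # dim slice
    [[90, 90], [90, 90]],   # bright slice
]

def mean(vals):
    return sum(vals) / len(vals)

# Per-slice (only x and y selected): each slice gets its own threshold.
per_slice = [mean([p for row in s for p in row]) for s in stack]
print(per_slice)  # [10.0, 90.0]

# Whole stack (x, y and z selected): one threshold for all slices.
whole = mean([p for s in stack for row in s for p in row])
print(whole)      # 50.0
```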
Can you extend the nuclear mask to measure something in the cytoplasm?
I take it that there is an equivalent for exclude on edges?
Yes, the Labeling Filter node provides an option to exclude objects on the image border (on any or all of the dimensions x, y, z, channel, time). Be careful when selecting the time dimension, since all objects in the first and last frame are on edges in this case.
How would you now declump the cells? I tried seeded watershed and Waehlby cell clump splitter, but get an error message (that the input column has the wrong name, or similar), and it refuses to open the configure window.
The error message is very likely because the node requires a labeling image and an intensity image. If you don’t have both around, you will not be able to configure the node. See https://kni.me/w/WrUSmzAGeBOzCYSZ for an example of how to use the nodes.
Is it possible to export the masked parts of the images as individual image files, so that, say, 100 masked nuclei are exported as 100 individual images, each containing one nucleus? Theoretically?
That is also practically possible. The binary masks generated by the (Image) Segment Features node are images themselves. Hence, you can use an Image Writer node to write them to disk. Make sure that you have a unique filename for each mask so that you don’t overwrite files in the process!
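One way to build such unique file names is to combine the source image name with the segment's label, e.g. (all names below are illustrative):

```python
# Build a unique output path per mask from the image name and a label.
import os

def mask_path(out_dir, image_name, label):
    base = os.path.splitext(image_name)[0]
    return os.path.join(out_dir, f"{base}_nucleus_{label:04d}.png")

print(mask_path("masks", "plate1_A01.tif", 7))
# e.g. masks/plate1_A01_nucleus_0007.png (separator depends on the OS)
```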
I assume it is possible to plot mean intensity (or other properties) of class A vs. class B? If yes, can you then select/define subpopulations (as in flow cytometry; gating)?
You can use the interactivity features of composite visualizations. Please take a look at the documentation.
Is there a possibility to generate a violin plot?
Can I use an R script for the visualization?
I missed it: can you use the GPU for training/inference?
This is possible if you have your Conda environments set up properly with the respective GPU-enabled packages. This also works with KNIME's native Deep Learning integrations.
How easy would it be to introduce StarDist at this point for nuclei segmentation?
You can easily replace the segmentation method with pre-built components from the community or your own. Watch the recording of Session 2 to see how to integrate Cellpose as well as StarDist.
The Cellpose integration is great, but also really, really slow. It takes many minutes to segment a few images for me. I suppose this is due to Cellpose itself not being the fastest?
When choosing one of the pre-trained models ‘nuclei’ or ‘cyto’, cellpose by default averages the results of four (!) different networks. When running on the CPU, this can indeed take some time, also dependent on the dimensions of the input images. We plan to update the ‘Cellpose Segmentation’ component to allow more configuration options in the future, but for now you can also Ctrl-click (Cmd-click on a Mac) the component node and adapt the settings directly in the Python script.