This topic is a split.
It originally started in @haesleinhuepf's post here.
Some interesting information came up there.
Then I asked some questions:
@Alex.h: Are you working with built-in color vectors or with your own, but ‘stable’, color vectors?
Or do you use a unique color vector for each image?
The background of my question is that when you always use the same color vectors, you can pre-calculate the color deconvolution, store it as an image and apply it as a lookup table. This is much faster than processing every pixel of an image, and even faster than using a GPU.
The Color Deconvolution plugin works with RGB color images with 8-bit samples per color channel, so there are 256×256×256 = 16,777,216 possible color combinations.
An image of 8500×8000 pixels has 68,000,000 pixels in total.
So pre-calculating the CD values and using them as a LUT is beneficial even for a single image.
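To make the LUT idea concrete, here is a rough NumPy sketch (not the plugin's actual code; the stain vectors, the optical-density formula and all names are illustrative placeholders): the deconvolution is pre-computed once for all 256³ RGB triplets and then applied to any image by pure table lookup.

```python
import numpy as np

# Placeholder stain vectors (H-DAB-like values); in practice use the exact
# vectors your staining / the plugin reports. These numbers are illustrative only.
stain_vectors = np.array([[0.650, 0.704, 0.286],   # stain 1 (e.g. haematoxylin)
                          [0.268, 0.570, 0.776],   # stain 2 (e.g. DAB)
                          [0.711, 0.423, 0.561]])  # stain 3 / residual
deconv = np.linalg.inv(stain_vectors)

# Pre-compute the LUT once: one entry per possible 8-bit RGB triplet
# (256**3 x 3 float32 values, roughly 200 MB).
od_1d = -np.log10((np.arange(256, dtype=np.float32) + 1.0) / 256.0)
od = np.stack(np.meshgrid(od_1d, od_1d, od_1d, indexing='ij'), axis=-1).reshape(-1, 3)
lut = (od @ deconv).astype(np.float32)             # stain "concentrations" per RGB triplet

def apply_lut(rgb):
    """rgb: uint8 array (H, W, 3) -> three stain maps (H, W, 3), by table lookup only."""
    idx = (rgb[..., 0].astype(np.int64) * 65536
           + rgb[..., 1].astype(np.int64) * 256
           + rgb[..., 2].astype(np.int64))
    return lut[idx]
```

The table is roughly 200 MB as float32, so this only pays off if you keep the same color vectors across many images or tiles, exactly as described above.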
If you assume that your 8000×8500 image contains far fewer than 16,777,216 distinct color combinations, then there are additional strategies to speed up Color Deconvolution without using GPU processing.
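A minimal sketch of that second idea, assuming the image really contains only a small fraction of the 16,777,216 possible triplets: compute the deconvolution once per distinct color and map the results back. The function name and details are hypothetical, not taken from the plugin.

```python
import numpy as np

def deconvolve_sparse(rgb, deconv):
    """Deconvolve by computing each distinct RGB triplet only once.

    Pays off when the image contains far fewer distinct colors than 256**3.
    """
    flat = rgb.reshape(-1, 3)
    colors, inverse = np.unique(flat, axis=0, return_inverse=True)
    od = -np.log10((colors.astype(np.float32) + 1.0) / 256.0)
    stains = od @ deconv                 # one matrix product per *distinct* color
    return stains[inverse].reshape(rgb.shape[:2] + (3,))
```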
@phaub: Usually, I build a macro for one staining. So I process all my H-DAB stainings with one macro and all my Masson Trichrome stainings with a second macro.
The macro is made like this:
1. I use NDPI-tools to extract the 10x data from the .ndpi and convert it to .ome.tiff.
2. If the .ome.tiff is too big, I cut the file into 4 or 9 parts (with the Bio-Formats extension), because ImageJ can't handle more than 2 GB of pixels.
3. I correct the white balance, because I often have some color variation in the background of my slice.
4. I measure the total area with a color threshold.
5. I split the .ome.tiff into 3 channels with the Colour Deconvolution plugin (Masson Trichrome).
6. I set the measurements and measure the color 1 area and the color 2 area with a classical approach like an Otsu threshold within the total area (with Analyze Particles and the Image Calculator); a rough sketch of these measurement steps follows below.
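For illustration only, here is a rough Python/scikit-image analogue of the measurement steps (4 to 6). This is not the actual ImageJ macro: the file name is made up, the background cut is a crude stand-in for the color threshold, and skimage's built-in H-E-DAB matrix stands in for the Masson Trichrome vectors, which you would replace with your own stain matrix.

```python
import numpy as np
from skimage import io, color, filters

rgb = io.imread('tile.tif')                     # hypothetical 8-bit RGB tile from the .ndpi

# Stain separation; hed_from_rgb (H-E-DAB) stands in for the Masson Trichrome vectors.
stains = color.separate_stains(rgb, color.hed_from_rgb)

# "Total area": everything that is not near-white background
# (a very crude stand-in for the color-threshold step of the macro).
tissue = rgb.min(axis=-1) < 220
total_area = tissue.sum()

# Otsu threshold on one deconvolved channel, restricted to the tissue,
# then the area fraction of that stain within the total area.
chan = stains[..., 0]
stain_mask = (chan > filters.threshold_otsu(chan[tissue])) & tissue
print('stain 1 area fraction:', stain_mask.sum() / total_area)
```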
So, here we are.