Concatenate processed tiles (TIFF) in QuPath

Hi!
I am trying to make overlays on whole tissue. For the moment, I have managed to export tiles from the original file and process them in ImageJ. I get TIF files (see below) named header_[x=…, y = …, w = 3200, h = 3200]composite.tif

However, at this stage I am stuck when it comes to opening the whole image in QuPath.
I tried to run the macro from the following post:

but it is too slow for whole-tissue analysis. On top of that, I lose the color information from the overlay and all signals are processed the same.

I would thus like to concatenate the TIFF files directly instead of superimposing annotations on the raw image.

I think it is possible in Python or directly from the command line, but I’m not sure whether it is suitable for the TIF format, and I’m afraid of losing resolution and metadata.
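
(Since the positions are already encoded in the filenames, I guess they could be recovered with a simple regex; a quick Groovy sketch, with a made-up filename:)

import java.util.regex.Pattern

// Hypothetical filename following the TileExporter naming scheme above
def name = 'header_[x=3200,y=6400,w=3200,h=3200]composite.tif'
def m = Pattern.compile(/\[x=(\d+),\s*y\s*=?\s*(\d+)/).matcher(name)
if (m.find())
    println "x=${m.group(1)}, y=${m.group(2)}"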

Also, I had a question about how to carry on with the image analysis.
Is it advisable to use classifiers on this kind of overlay image instead of the raw image? Would it decrease the rate of misclassification?

Thank you in advance for your help! I can provide further information if needed.

If that is for the Vectra, have you tried the script specifically intended for Vectra files?

There are a few other posts as well, and the most updated version of the script should be on @petebankhead’s gist page. The previous posting also indicates what can be done if you run into some stitching problems.
Note that if your images are large, this can be time and computationally intensive. Others have run into issues attempting to stitch across networks as well.

Thank you for your answer. No, it’s actually pseudo-fluorescence. The concatenation is not meant to build a pyramid but rather to restore the original positions of the tiles defined with the TileExporter tool.

If your image is of any decent size, the pyramid is probably the way to go anyway for doing anything in QuPath. The image source matters less than whether the tiles are contiguous, though other sources may require editing the code so that the concatenation uses the correct locations.
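
For example, with the merging script the placement boils down to creating an explicit ImageRegion per tile. A minimal sketch (the coordinates and the commented-out tileServerBuilder are placeholders):

import qupath.lib.regions.ImageRegion
import qupath.lib.images.servers.SparseImageServer

// A SparseImageServer is assembled by telling it where each tile belongs;
// the x/y/w/h values below are hypothetical per-tile offsets
def builder = new SparseImageServer.Builder()
def region = ImageRegion.createInstance(0, 3200, 3200, 3200, 0, 0)
// builder.jsonRegion(region, 1.0, tileServerBuilder)  // repeat for every tile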

@Egl Can you explain the end goal of your analysis, and the processing steps you want to apply to get there?

For example, what exactly does your ImageJ processing do – and is it working well enough, or do you need something else anyway?

Thank you for your message.

Here are the processing steps:

  • the whole images (> 350 MB), one for each staining, are split into small tiles;
  • those tiles are processed in ImageJ (alignment, applying colors, removing background, making a composite image of all stainings). Apparently the alignment is more precise with ImageJ, so the overlay is not performed in QuPath directly (a rough sketch of this step follows the list);
  • after that, I would like to rebuild the composite image from the tiles and visualize the whole image in QuPath.
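
Roughly, the per-tile step looks something like this (illustrative only, not my exact macro; plugin calls and paths are placeholders):

import ij.IJ

// One exported tile in, processed composite tile out (paths are hypothetical)
def imp = IJ.openImage('/path/to/tile.tif')
IJ.run(imp, "Subtract Background...", "rolling=50")  // background removal
// ... SIFT-based alignment and merging the stainings into a composite happen here ...
IJ.saveAsTiff(imp, '/path/to/tile_composite.tif')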

So one goal is simply whole pseudo-color image visualization. Another goal would be to measure the area of the purple region in the raw image (the white zone in the processed image), count all cell types (each with a different color in the processed image), measure distances, etc. I wondered whether this should be done on the raw images, or whether the pseudo-color image could be useful for classifiers to discriminate different cells and regions. At the moment, the cell clusters in the white holes are misclassified as stroma, but they are almost invisible in the pseudo-color image, which is why I thought the overlays might be useful for further analysis.

Could you explain what this alignment is doing? If each tile of each channel is being translated/rotated individually, then you cannot simply use the original locations in the filenames to put them back together. You would need to computationally stitch them back together with some sort of global optimization. Or maybe I misunderstand what’s happening here? I’m not really clear on what the different images in your experiment are.

The different images correspond to whole-tissue IHC; each image corresponds to the same tissue with a different staining (this kind of result is obtained with MICSSS).

For the moment, the idea is to perform a global pre-alignment in QuPath (affine) and then alignment with the SIFT algorithm on tiles (ImageJ). I have checked the outputs in ImageJ, and I think the correspondence between tiles from each staining is okay.

I am still not sure whether the data analysis (cell densities etc.) will be performed on tiles, the whole image, or both. The tiles currently have 10% overlap to avoid border effects.

Ah, OK. I understand. SIFT is able to do non-linear alignment, so in small regions it will often be more accurate than QuPath’s affine alignment. Whether you want to use tiles or the whole image for your analysis depends on a lot of downstream factors (computational power, whether you want ImageJ or QuPath algorithms, biological question, etc.). But if you ever want to view the whole slide, you need to stitch the tiles together. I recommend BigStitcher. Since you already know the approximate locations of your tiles, it should be easy to set up. From there you can view the final stitched image directly in ImageJ, or export TIFFs to put into QuPath, though I don’t know if it keeps all the metadata. Either way, make sure you pay attention to where your tiles meet: the (elastic?) alignment applied to each tile may make the overlapping edges incompatible, and you might get odd artifacts when they are stitched together.

There is also a lot of work going on to try to do accurate alignment of large tissues directly in QuPath, without having to break the image into tiles and then recreate it. For some references, please see

  1. alignment of objects, though not images
  2. non-linear alignment of sequential images [still in progress]

Either way, let us know what you come up with!

Hi!
Thank you very much for these suggestions and sorry for my late answer.
I am a bit confused, as several plugins are available, such as Stitching, BigStitcher and MosaicExplorerJ. I am not sure which one would be most suitable, given that the tiles overlap and result from the alignment of raw images. From what I understand, MosaicExplorerJ would not perform any transformation at the borders, while BigStitcher could perform an alignment between neighboring tiles. However, I have approximately 200 tiles of 3000×3000 pixels each, so I am not sure whether BigStitcher could handle it.

I tried Plugins > BigStitcher > BigStitcher > Define a new dataset (automatic loader) > then the default parameters in the dialog box, but I really do not understand the error message I get (I’m not sure it’s a memory problem, as I also tried on just a few tiles).
Headless mode doesn’t work either. Is it due to the properties of the files?

Here is the error message I get:

[ERROR] Module threw error
java.lang.NoClassDefFoundError: bdv/util/BehaviourTransformEventHandlerPlanar$BehaviourTransformEventHandlerPlanarFactory
	at net.preibisch.stitcher.gui.popup.BDVPopupStitching.createBDV(BDVPopupStitching.java:298)
	at net.preibisch.stitcher.gui.StitchingExplorerPanel.<init>(StitchingExplorerPanel.java:205)
	at net.preibisch.stitcher.gui.StitchingExplorer.<init>(StitchingExplorer.java:113)
	at net.preibisch.stitcher.plugin.BigStitcher.run(BigStitcher.java:80)
	at org.scijava.command.CommandModule.run(CommandModule.java:196)
	at org.scijava.module.ModuleRunner.run(ModuleRunner.java:165)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:124)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:63)
	at org.scijava.thread.DefaultThreadService.lambda$wrap$2(DefaultThreadService.java:225)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: bdv.util.BehaviourTransformEventHandlerPlanar$BehaviourTransformEventHandlerPlanarFactory
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 13 more

I’m sorry, I don’t know enough about these plugins to help you debug. I recommend you make a new post on the forum with the bigstitcher or ImageJ tags to pull in the people who are experts in this. I do know that it was made for 3D lightsheet data, so it can probably handle your dataset, though obviously your RAM might be limiting.

Sorry I can’t be more help!

Okay, thank you for your help so far !

Hi, and sorry to bother you again. I have tried both BigStitcher and the QuPath-Merge unmixed files to pyramid.groovy script from @petebankhead (which I think would be best for this analysis).
I get an error message when trying to parse the files and build a server. The files are named header_[x=…,y=…,w=3000,h=3000]modified.tif, as exported by the TileExporter tool. I don’t know whether the parsing error comes from possible ambiguity in the file name or from the format.
Do you know how I could change the script or the input to make it work?

The script Pete wrote is specific to the Vectra input file format (OK, not technically specific to Vectra, but it accesses metadata fields expecting the same formatting as Vectra outputs use), as it uses metadata in the TIFF files.


It does not use the file name, though you might be able to rewrite that function to access parts of the file name instead.
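
For instance, a rough sketch of a filename-based replacement, assuming the [x=…,y=…,w=…,h=…] pattern from your exports (the function name is made up):

import qupath.lib.regions.ImageRegion
import java.util.regex.Pattern

// Hypothetical: build the region from the filename rather than from TIFF tags
static ImageRegion parseRegionFromFilename(String name, int z = 0, int t = 0) {
    def m = Pattern.compile(/\[x=(\d+),y=(\d+),w=(\d+),h=(\d+)\]/).matcher(name)
    if (!m.find())
        throw new IOException("Filename does not contain tile position: " + name)
    return ImageRegion.createInstance(
            m.group(1) as int, m.group(2) as int,
            m.group(3) as int, m.group(4) as int, z, t)
}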

And, as @smcardle pointed out, after you have deformably altered your tiles, this stitching would not work even if the metadata were the same as the Vectra .tiff files.

Thank you for your answer!
BigStitcher was too slow and the output was not easy to handle.
At this stage, I would just like to place the tiles back on a mosaic, without aligning the overlapping regions, which I think should be enough for the next analysis.
I tried to rewrite this part of the code, but I’m struggling:

import qupath.imagej.tools.IJTools
import java.util.regex.Matcher
import java.util.regex.Pattern

def parsedXY = parseFilename(GeneralTools.getNameWithoutExtension(path))

int[] parseFilename(String filename) {
    def p = Pattern.compile("\\[x=(.+?),y=(.+?),")
    parsedXY = []
    Matcher m = p.matcher(filename)
    if (!m.find())
        throw new IOException("Filename does not contain tile position")
            
    parsedXY << (m.group(1) as double)
    parsedXY << (m.group(2) as double)
    
    return parsedXY
}
 
static ImageRegion parseRegionFromTIFF(File file, int z = 0, int t = 0) {
    int x, y, width, height
    file.withInputStream {
        def reader = ImageIO.getImageReadersByFormatName("TIFF").next()
        reader.setInput(ImageIO.createImageInputStream(it))
        def metadata = reader.getImageMetadata(0)
        def tiffDir = TIFFDirectory.createFromMetadata(metadata)

        double xRes = getRational(tiffDir, BaselineTIFFTagSet.TAG_X_RESOLUTION)
        double yRes = getRational(tiffDir, BaselineTIFFTagSet.TAG_Y_RESOLUTION)

        double xPos = getRational(tiffDir, BaselineTIFFTagSet.TAG_X_POSITION)
        double yPos = getRational(tiffDir, BaselineTIFFTagSet.TAG_Y_POSITION)

        //width = tiffDir.getTIFFField(BaselineTIFFTagSet.TAG_IMAGE_WIDTH).getAsLong(0) as int
        //height = tiffDir.getTIFFField(BaselineTIFFTagSet.TAG_IMAGE_LENGTH).getAsLong(0) as int

        //x = Math.round(xRes * xPos) as int
        //y = Math.round(yRes * yPos) as int
        x = parsedXY[0]
        y = parsedXY[1]
        width = 3200
        height = 3200
    }
    return ImageRegion.createInstance(x, y, width, height, z, t)
}

Could you explain a little bit more what isn’t working with this code? Is it throwing an error or not giving the results you expect or something else?

I get this error message, which is the same as before I changed the script:

INFO: Parsing regions from 190 files...
INFO: WARN: Could not parse region for mypath/370_[x=0,y=5760,w=3200,h=3200].tif 

etc...

INFO: Building server...
ERROR: NullPointerException at line 81: null

ERROR: qupath.lib.images.servers.ImageServerMetadata$Builder.<init>(ImageServerMetadata.java:158)
    qupath.lib.images.servers.SparseImageServer.<init>(SparseImageServer.java:152)
    qupath.lib.images.servers.SparseImageServer$Builder.build(SparseImageServer.java:333)
    qupath.lib.images.servers.SparseImageServer$Builder$build.call(Unknown Source)
    org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
    org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
    org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:130)
    Script3.run(Script3.groovy:82)
    org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317)
    org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:155)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:926)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:859)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:782)
    qupath.lib.gui.scripting.DefaultScriptEditor$2.run(DefaultScriptEditor.java:1271)
    java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    java.base/java.util.concurrent.FutureTask.run(Unknown Source)
    java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    java.base/java.lang.Thread.run(Unknown Source)

Hi Egl,
Since you only posted part of your code, I’m not sure how parseRegionFromTIFF is being called. But here are a few things I would check:

  1. Are the X and Y positions being set correctly in the parsedXY variable? Specifically, are you calling this for each file one at a time, so that there are only ever two values, or are you accumulating all the X and Y positions into a single list first? Also, check whether the function expects pixels or microns, and whether you are giving it the correct value.

  2. Does the parseRegionFromTIFF function have access to that variable? I tend to get a bit confused by how Groovy handles variable scopes, but I would recommend passing parsedXY as an input to the function (see the sketch after this list).

  3. Are you very sure that the width and height are correct? If you have performed any deformable alignment on any of the tiles, then these will be different for each tile, which could throw an error.

  4. You don’t seem to be using the xRes and xPos variables within parseRegionFromTIFF. If the metadata of those files doesn’t have the information the function is expecting (BaselineTIFFTagSet.TAG_X_RESOLUTION), it could be throwing an error there.
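
For point 2, a minimal sketch of what passing the coordinates explicitly could look like (the names are illustrative):

import qupath.lib.regions.ImageRegion

// Take the parsed coordinates as parameters instead of relying on an
// implicit script-level variable
static ImageRegion regionFromParsedXY(int[] parsedXY, int width, int height, int z = 0, int t = 0) {
    return ImageRegion.createInstance(parsedXY[0], parsedXY[1], width, height, z, t)
}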

Thank you for your answer, it was really helpful!
The script now seems able to parse the files. The problem was coming from parallelization, so parsedXY contained varying numbers of elements (point 1).
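
For reference, the fix was roughly to make the list local to each file, so nothing accumulates across iterations:

import java.util.regex.Pattern
import qupath.lib.common.GeneralTools

// 'files' is the tile file list from the main script
files.each { f ->
    // 'def' keeps parsedXY local to this closure; without it, Groovy puts the
    // variable in the script binding and every file appends to the same list
    def parsedXY = []
    def m = Pattern.compile(/\[x=(\d+),y=(\d+),/).matcher(GeneralTools.getNameWithoutExtension(f))
    if (m.find()) {
        parsedXY << (m.group(1) as int)
        parsedXY << (m.group(2) as int)
    }
}
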
Now the script fails when building the OME pyramid:

INFO: Building server...
INFO: Writing Sparse image (19 regions) to path/output.ome.tif (series 1/1)
INFO: Writing resolution 1 of 3 (downsample=1.0, 20 tiles)
INFO: Writing plane 1/1
ERROR: UnsupportedOperationException at line 124: This method is not supported by this color model

ERROR: java.desktop/java.awt.image.ColorModel.createCompatibleWritableRaster(Unknown Source)
    qupath.lib.images.servers.AbstractTileableImageServer.createEmptyTile(AbstractTileableImageServer.java:128)
    qupath.lib.images.servers.AbstractTileableImageServer.getEmptyTile(AbstractTileableImageServer.java:93)
    qupath.lib.images.servers.SparseImageServer.readTile(SparseImageServer.java:287)
    qupath.lib.images.servers.AbstractTileableImageServer.getTile(AbstractTileableImageServer.java:184)
    qupath.lib.images.servers.AbstractTileableImageServer.readBufferedImage(AbstractTileableImageServer.java:238)
    qupath.lib.images.servers.AbstractTileableImageServer.readBufferedImage(AbstractTileableImageServer.java:56)
    qupath.lib.images.servers.PyramidGeneratingImageServer.readTile(PyramidGeneratingImageServer.java:87)
    qupath.lib.images.servers.AbstractTileableImageServer.getTile(AbstractTileableImageServer.java:184)
    qupath.lib.images.servers.AbstractTileableImageServer.readBufferedImage(AbstractTileableImageServer.java:319)
    qupath.lib.images.servers.AbstractTileableImageServer.readBufferedImage(AbstractTileableImageServer.java:56)
    qupath.lib.images.writers.ome.OMEPyramidWriter$OMEPyramidSeries.writeRegion(OMEPyramidWriter.java:679)
    qupath.lib.images.writers.ome.OMEPyramidWriter$OMEPyramidSeries.writePyramid(OMEPyramidWriter.java:599)
    qupath.lib.images.writers.ome.OMEPyramidWriter.writeImage(OMEPyramidWriter.java:296)
    qupath.lib.images.writers.ome.OMEPyramidWriter$OMEPyramidSeries.writePyramid(OMEPyramidWriter.java:469)
    java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    java.base/java.lang.reflect.Method.invoke(Unknown Source)
    org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43)
    org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoCachedMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:214)
    org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56)
    org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
    org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
    org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:139)
    Script17.run(Script17.groovy:125)
    org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317)
    org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:155)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:926)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:859)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:782)
    qupath.lib.gui.scripting.DefaultScriptEditor$2.run(DefaultScriptEditor.java:1271)
    java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    java.base/java.util.concurrent.FutureTask.run(Unknown Source)
    java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    java.base/java.lang.Thread.run(Unknown Source)

Here is the whole script:

/**
 * Convert TIFF fields of view to a pyramidal OME-TIFF.
 *
 * Locations are parsed from the baseline TIFF tags, therefore these need to be set.
 *
 * One application of this script is to combine spectrally-unmixed images.
 * Be sure to read the script and see where default settings could be changed, e.g.
 *   - Prompting the user to select files (or using the one currently open the viewer)
 *   - Using lossy or lossless compression
 *
 * @author Pete Bankhead
 */

import qupath.lib.common.GeneralTools
import qupath.lib.images.servers.ImageServerProvider
import qupath.lib.images.servers.ImageServers
import qupath.lib.images.servers.SparseImageServer
import qupath.lib.images.writers.ome.OMEPyramidWriter
import qupath.lib.regions.ImageRegion

import javax.imageio.ImageIO
import javax.imageio.plugins.tiff.BaselineTIFFTagSet
import javax.imageio.plugins.tiff.TIFFDirectory
import java.awt.image.BufferedImage

import static qupath.lib.gui.scripting.QPEx.*



// packages script2
import qupath.lib.objects.PathObjects
import qupath.lib.regions.ImagePlane
import ij.IJ
import ij.process.ColorProcessor
import qupath.imagej.processing.RoiLabeling
import qupath.imagej.tools.IJTools
import java.util.regex.Matcher
import java.util.regex.Pattern


// script 1 
boolean promptForFiles = true

File dir
List<File> files
String baseName = 'Merged image'
if (promptForFiles) {
    def qupath = getQuPath()
    files = Dialogs.promptForMultipleFiles("Choose input files", null, "TIFF files", ".tif", ".tiff")
} else {
    // Try to get the URI of the current image that is open
    def currentFile = new File(getCurrentServer().getURIs()[0])
    dir = currentFile.getParentFile()
    // This naming scheme works for me...
    String name = currentFile.getName()
    int ind = name.indexOf("_[")
    if (ind < 0)
        ind = name.toLowerCase().lastIndexOf('.tif')
    if (ind >= 0)
        baseName = currentFile.getName().substring(0, ind)
    // Get all the non-OME TIFF files in the same directory
    files = dir.listFiles().findAll {
        return it.isFile() &&
                !it.getName().endsWith('.ome.tif') &&
                (baseName == null || it.getName().startsWith(baseName)) &&
                (it.getName().endsWith('.tiff') || it.getName().endsWith('.tif') || checkTIFF(it))
    }
}
if (!files) {
    print 'No TIFF files selected'
    return
}

File fileOutput
if (promptForFiles) {
    def qupath = getQuPath()
    fileOutput = Dialogs.promptToSaveFile("Output file", null, null, "OME-TIFF", ".ome.tif")
} else {
    // Ensure we have a unique output name
    fileOutput = new File(dir, baseName+'.ome.tif')
    int count = 1
    while (fileOutput.exists()) {
        fileOutput = new File(dir, baseName+'-'+count+'.ome.tif')
        count++
    }
}
if (fileOutput == null)
    return




// Parse image regions & create a sparse server
print 'Parsing regions from ' + files.size() + ' files...'

def builder = new SparseImageServer.Builder()
files.each { f ->
    def p = Pattern.compile("\\[x=(.+?),y=(.+?),")
    def parsedXY = []   // local to this closure (see the parallelization issue above)
    def filetested = GeneralTools.getNameWithoutExtension(f)
    print f
    Matcher m = p.matcher(filetested)
    if (!m.find())
        throw new IOException("Filename does not contain tile position")
    parsedXY << (m.group(1) as int)
    parsedXY << (m.group(2) as int)
    print parsedXY
    //def parsedXY = parseFilename(GeneralTools.getNameWithoutExtension(f))
    def region = ImageRegion.createInstance(parsedXY[0], parsedXY[1], 3200, 3200, 0, 0)
    if (region == null) {
        print 'WARN: Could not parse region for ' + f
        return
    }
    def serverBuilder = ImageServerProvider.getPreferredUriImageSupport(BufferedImage.class, f.toURI().toString()).getBuilders().get(0)
    builder.jsonRegion(region, 1.0, serverBuilder)
}
print 'Building server...'
def server = builder.build()
server = ImageServers.pyramidalize(server)

long startTime = System.currentTimeMillis()
String pathOutput = fileOutput.getAbsolutePath()
new OMEPyramidWriter.Builder(server)
    .downsamples(server.getPreferredDownsamples()) // Use pyramid levels calculated in the ImageServers.pyramidalize(server) method
    .tileSize(3200)      // Requested tile size
    .channelsInterleaved()      // Because SparseImageServer returns all channels in a BufferedImage, it's more efficient to write them interleaved
    .parallelize()              // Attempt to parallelize requesting tiles (need to write sequentially)
    .losslessCompression()      // Use lossless compression (often best for fluorescence, but lossy compression may be OK for brightfield)
    .build()
    .writePyramid(pathOutput)
long endTime = System.currentTimeMillis()
print('Image written to ' + pathOutput + ' in ' + GeneralTools.formatNumber((endTime - startTime)/1000.0, 1) + ' s')
server.close()


// parse function from script 2



// back to script 1

static ImageRegion parseRegion(File file, int z = 0, int t = 0) {
    if (checkTIFF(file)) {
        try {
            return parseRegionFromTIFF(file, z, t)
        } catch (Exception e) {
            print e.getLocalizedMessage()
        }
    }
}
/**
 * Check for TIFF 'magic number'.
 * @param file
 * @return
 */
static boolean checkTIFF(File file) {
    file.withInputStream {
        def bytes = it.readNBytes(4)
        short byteOrder = toShort(bytes[0], bytes[1])
        int val
        if (byteOrder == 0x4949) {
            // Little-endian
            val = toShort(bytes[3], bytes[2])
        } else if (byteOrder == 0x4d4d) {
            val = toShort(bytes[2], bytes[3])
        } else
            return false
        return val == 42 || val == 43
    }
}

/**
 * Combine two bytes to create a short, in the given order
 * @param b1
 * @param b2
 * @return
 */
static short toShort(byte b1, byte b2) {
    return (b1 << 8) + (b2 << 0)
}

/**
 * Parse an ImageRegion from a TIFF image, using the metadata.
 * @param file image file
 * @param z index of z plane
 * @param t index of timepoint
 * @return
 */


/**
 * Helper for parsing rational from TIFF metadata.
 * @param tiffDir
 * @param tag
 * @return
 */
//static double getRational(TIFFDirectory tiffDir, int tag) {
    //long[] rational = tiffDir.getTIFFField(tag).getAsRational(0);
    //return rational[0] / (double)rational[1]; }



The top-line error is this: “UnsupportedOperationException … This method is not supported by this color model”.

What type of data is in each of your images (i.e., RGB or 16-bit grayscale, etc.)? Is it the same for all images?
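
If helpful, a quick way to check is in QuPath’s script editor, with something like (recent QuPath API):

import static qupath.lib.gui.scripting.QPEx.*

// Print the pixel type and channel layout of the currently open image
def server = getCurrentServer()
println server.getPixelType()   // e.g. UINT8, UINT16, FLOAT32
println server.nChannels()
println server.isRGB()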

2 Likes