Issue merging unmixed files to pyramid

Hello,

I am using @petebankhead's script for this purpose. It works pretty well with my data, but when I try to merge around 50-60 files (about 15 GB), the ome.tif is created, however I am not able to open it with QuPath. I've tested it with the same files reduced to 45, and that works. Is there a memory limit for this? Any potential solution?

Thanks a lot for the help.

In most of these scripts, the OME-TIFF is created regardless of whether or not writing actually finished. Can you confirm that the final success message was generated for the larger files?

Hi Ignacio,
I see similar errors with large files (final stitches of ~15 GB or more). I would suggest using a computer with more RAM and making sure the processor is not being heavily used by anything else; that seems to fix the failed stitches for me, at least. Unfortunately, the true cause of this problem is still a bit beyond me. I'll also note that I've successfully stitched final images of up to 30 GB.


Thanks for the rapid replies.

Yes, the final success message was generated.

I am using 24 GB RAM, giving 20 GB to QuPath in the setup :frowning:
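As a rough sketch only: the maximum memory chosen in the QuPath setup dialog ends up as a Java -Xmx option, which on recent QuPath versions is stored in the QuPath.cfg file next to the launcher and can also be edited by hand (the exact file location and surrounding lines depend on the QuPath version and platform). Giving 20 GB would then look roughly like:

    java-options=-Xmx20G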

I am fairly confident I have managed a 60 GB final image, but on computers with ~200 GB or more of RAM. I have not tested extensively, though.

In case it is of interest: after increasing my RAM to 32 GB, I was able to run the script with a final file of 17 GB uncompressed size.


Hi Again!

Is it possible to use the same script for merging brightfield images? I would like to have a brightfield WSI for each specific marker (6 markers plus DAPI) in order to compare my fluorescence object classification with the staining, so I generated all the "path_view" TIFF images with inForm, but when I run the script it says:

Many thanks!

As far as I know right now, combining multiple brightfield images turns them into fluorescence images. That is absolutely possible, but it does not sound like what you want, since you could do that just as easily with the original fluorescence images.

You might be able to alternate which DAB channel is shown through some trickery, but I'm not sure there is an easy way to show multiple brightfield colors on a white background, due to the RGB nature of the images.

@petebankhead would know best.

Thanks @Research_Associate for your rapid reply and sorry for my bad explanation.

My current panel has CD8, PDL1, PanCK, CD163… with almost 50 MSI regions. My idea would be to merge only my CD8 staining files into one WSI and to repeat this process for each marker (PDL1 staining WSI… CD163 staining WSI…).

My purpose with this is to use it as a quality control check for my positive threshold in the fluorescence, comparing it with the staining data.

Do you know if there would be another/better option for determining your positive threshold for the fluorescence?

Many thanks!


component_data.tif has the location data that Pete's script (for merging Vectra component images) parses to put each tile in the right place, e.g.:

    XPosition: 0.810983180955747
    YPosition: 3.8529770408544985

I was able to modify the script so that the location data is parsed from the component_data.tif and the corresponding path_view.tif is fed to the pyramid writer. This produced a merged path_view image (RGB) for the marker I chose (DAPI + marker as a pseudo H-DAB image). I am not sure what it would take if you want more than one marker in the resulting RGB WSI.
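In case it helps, this is roughly what reading those position tags looks like with the standard javax.imageio TIFF plugin, i.e. a minimal sketch of what the script's parseRegion helper does for TIFF input. The method name readTilePosition is hypothetical, and the tags used are the baseline XPosition/YPosition/XResolution/YResolution tags:

    import javax.imageio.ImageIO
    import javax.imageio.plugins.tiff.BaselineTIFFTagSet
    import javax.imageio.plugins.tiff.TIFFDirectory

    // Hypothetical helper: read a tile's offset (in pixels) from the baseline TIFF tags
    // of a component_data.tif exported by inForm
    int[] readTilePosition(File file) {
        file.withInputStream { stream ->
            def reader = ImageIO.getImageReadersByFormatName('TIFF').next()
            reader.setInput(ImageIO.createImageInputStream(stream))
            def tiffDir = TIFFDirectory.createFromMetadata(reader.getImageMetadata(0))
            // XPosition/YPosition are stored in resolution units; multiplying by
            // XResolution/YResolution (pixels per unit) converts them to pixel offsets
            double xRes = tiffDir.getTIFFField(BaselineTIFFTagSet.TAG_X_RESOLUTION).getAsDouble(0)
            double yRes = tiffDir.getTIFFField(BaselineTIFFTagSet.TAG_Y_RESOLUTION).getAsDouble(0)
            double xPos = tiffDir.getTIFFField(BaselineTIFFTagSet.TAG_X_POSITION).getAsDouble(0)
            double yPos = tiffDir.getTIFFField(BaselineTIFFTagSet.TAG_Y_POSITION).getAsDouble(0)
            [Math.round(xRes * xPos) as int, Math.round(yRes * yPos) as int] as int[]
        }
    }

Together with the tile width and height, these pixel offsets make up the region that gets passed to builder.jsonRegion(...) below.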


This is what I have done.

In this section I added a line so that only files containing the 'component_data.tif' string are kept:

    // Get all the component data files in the same directory
    files = dir.listFiles().findAll {
        return it.isFile() &&
                it.getName().endsWith('component_data.tif') &&
                !it.getName().endsWith('.ome.tif') &&
                (baseName == null || it.getName().startsWith(baseName))
    }

In the following section of the script:

    def builder = new SparseImageServer.Builder()
    files.parallelStream().forEach { f ->
        def region = parseRegion(f)

        if (region == null) {
            print 'WARN: Could not parse region for ' + f
            return
        }
        def serverBuilder = ImageServerProvider.getPreferredUriImageSupport(BufferedImage.class, f.toURI().toString()).getBuilders().get(0)
        builder.jsonRegion(region, 1.0, serverBuilder)
    }
    print 'Building server...'

For my marker of interest, I modified it to:

    def builder = new SparseImageServer.Builder()
    files.parallelStream().forEach { f ->
        def region = parseRegion(f)
        // swap the 18-character 'component_data.tif' suffix for the marker's path_view file
        f = f.toString().substring(0, f.toString().length() - 18) + 'Opal 520_path_view.tif'

        if (region == null) {
            print 'WARN: Could not parse region for ' + f
            return
        }
        def serverBuilder = ImageServerProvider.getPreferredUriImageSupport(BufferedImage.class, f).getBuilders().get(0)
        builder.jsonRegion(region, 1.0, serverBuilder)
    }
    print 'Building server...'
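If it helps for repeating this per marker (PDL1, CD163, etc.), here is a hedged variation of the same loop that keeps the marker-specific suffix in one variable and uses String.replace rather than the fixed-length substring; the suffix names are only examples and must match what inForm exported:

    // Hypothetical: the path_view suffix of the marker to stitch; rerun the script once
    // per marker (e.g. 'Opal 570_path_view.tif', 'Opal 690_path_view.tif', ...)
    String markerSuffix = 'Opal 520_path_view.tif'

    def builder = new SparseImageServer.Builder()
    files.parallelStream().forEach { f ->
        def region = parseRegion(f)
        if (region == null) {
            print 'WARN: Could not parse region for ' + f
            return
        }
        // point the server builder at the matching path_view file for the chosen marker
        def pathViewPath = f.toString().replace('component_data.tif', markerSuffix)
        def serverBuilder = ImageServerProvider.getPreferredUriImageSupport(BufferedImage.class, pathViewPath).getBuilders().get(0)
        builder.jsonRegion(region, 1.0, serverBuilder)
    }
    print 'Building server...'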

Amazing!!
Thanks a lot for sharing

I am testing it with these modifications.

Tested! It works very well on my data!

Many thanks


Thanks! That looks great!
