Quantification and pattern recognition of necrosis


I’m new to ImageJ and image analysis in general, and have spent the last several weeks doing as much reading up on the topic as I can; however, I haven’t been able to sort out a way to create a pattern recognition macro for tissue necrosis (central area of pallor in the attached image). I would like to take multiple images, and get a rough estimate of area of necrosis as compared to the whole tissue and whole lesion without having to manually outline every lesion.

Any help is appreciated. Thanks!

Hi Mike,

I am not too familiar with histology, but judging from your example image I would suggest you have a look at QuPath.
QuPath also has YouTube videos available as documentation to get you started.
Alternatively, you can check out Orbit.
Both of these tools are free and particularly well suited to analyzing images like yours.

Thank you very much, I will look into both of these!


I cannot access your image for some reason… but from what you describe - you can also take a look at the Trainable Weka Segmentation plugin. It is “a Fiji plugin that combines a collection of machine learning algorithms with a set of selected image features to produce pixel-based segmentations.”

You can train it to locate regions of necrosis… and apply that classifier to multiple images to batch process them. 🙂
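To make the idea concrete, here is a minimal sketch of what pixel-based segmentation (the approach Trainable Weka takes) looks like under the hood: compute per-pixel features at several scales, train a classifier on a few labelled pixels, then apply it to every pixel. This uses scikit-image-style tooling (NumPy, SciPy, scikit-learn) on a synthetic image, not Weka itself; the feature set and labels are illustrative assumptions.

```python
# Sketch of pixel-based segmentation (the idea behind Trainable Weka):
# per-pixel features at multiple scales + a random forest classifier.
# Synthetic data; NOT the Weka plugin's actual implementation.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img, sigmas=(1, 2, 4)):
    """Stack the raw image plus Gaussian-smoothed versions as features."""
    feats = [img] + [gaussian_filter(img, s) for s in sigmas]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

# Toy image: a brighter "necrotic" blob on a darker background, plus noise.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[20:44, 20:44] += 0.5

labels = np.zeros((64, 64), dtype=int)
labels[24:40, 24:40] = 1          # a few pixels labelled "necrosis"
labels[:10, :10] = 2              # a few pixels labelled "background"

X = pixel_features(img)
y = labels.ravel()
mask = y > 0                      # train only on the labelled pixels
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[mask], y[mask])

# Apply the trained classifier to every pixel of the image.
pred = clf.predict(X).reshape(img.shape)
frac = (pred == 1).mean()
print(f"predicted necrosis fraction: {frac:.2f}")
```

The same classifier can then be applied image-by-image over a folder, which is what makes this approach batchable.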

Just another option …


@etadobson I was not able to open the image through the link, but saving the link to the desktop worked.

@Mike_Rich If you look into QuPath, I would suggest using Estimate Stain Vectors, then Simple Tissue Detection to get your overall tissue area, SLIC superpixel segmentation to generate sub-areas to classify, and Calculate+Add Intensity Features. You may also want to try Local Binary Patterns. Once you have features added to your segmentation you can create a classifier using a selected training set.

Feel free to let me know, either here or on the QuPath forum, if you are interested in any further details. I have done some similar tissue segmentation before.


I can confirm this behavior, although I had to allow “other” for discourse-cdn-sjc1.com in Chrome’s uMatrix extension - so maybe this is related to an ad/script blocker in your browser…?

Still then only downloading the image worked, it did not get displayed in this thread. However, converting the file to PNG and re-uploading it eventually did the trick, so now it should be visible in the original post…

Thanks for the tips. I spent most of the day learning QuPath and it is exactly what I’m looking for. I’ll have to find my way over to the forums for some more pointed help.

Thanks. I spent a lot of time working with trainable weka and it did not work well for my application - or I simply have no idea what I’m doing haha!

I think that you will struggle to segment a necrotic focus at that resolution and in H&E stained images. Not enough image information.


Good day Mike,

would you mind drawing the boundary between the normal and the necrotic tissue areas?

This would help those who are not in the know!



There are many different types of necrosis and appearances depending on the tissues where this happens. You would need to see features at the cytological level, not at low power, even if the texture is slightly different from the surrounding tissue.


Here is a granuloma with a central cavitary lesion that contains necrosis (outlined in yellow). As others have pointed out, these lesions are difficult to discern from the surrounding tissue because everything is pink/purple and granular. I used QuPath to create that segmentation; however, I would like to sort out a way to get a rough outline done automatically on a batch of .svs files, so that I can then come in and refine the edges. I haven’t been successful in getting Trainable Weka, or other machine learning plugins, to work with such lesions.

These are sections of lung tissue; the surrounding normal tissue also contains inflammatory cells, which makes this even more challenging.


It appears as if the image with the contour is different from your first sample image.

Please post typical raw images in the original TIFF or PNG format. Screenshots are of no help if someone wants to try analyzing your images.

In any case we need the original image of the one with the contour.

If we don’t see typical raw images of the kind you are interested in, we cannot help.

As Gabriel stated, it will be impossible to provide a generally applicable approach for the desired segmentation, but perhaps your images represent a sufficiently compact group that they can be analyzed without requiring images at the cytological level.



Someone has actually just posted something similar to what I think you are working on. The individual steps will differ, but it should give you a good starting point for your code:

Hi, interesting post.

May I ask how to batch process images from a directory with the Trainable Weka Segmentation plugin in Fiji, if I have trained a classifier with two zones (an area with cells, and a free zone)? And how can I add the results to a ROI to calculate the percentage of that area in Fiji (ImageJ)?

Another question: are you slicing big images (pathology specimens)? I get out-of-memory errors for big images…

Thank you in advance for your reply.


The best place to start learning about scripting TWS is its own scripting page:

I always use the BeanShell example they provide here:

Obviously - you can adapt as you need to add further analyses… or write subsequent standalone scripts to process the output.

Hope this helps you start. If you have more specific questions on your own work as you go - post your questions to a new thread on the forum.
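As a language-agnostic illustration of the batch loop being described, here is a small Python sketch: walk a directory, apply a segmentation to each image, and record the percentage of a ROI that the classifier marks as positive. A simple intensity threshold stands in for the trained Weka classifier, and throwaway .npy arrays stand in for real image files; with real data you would load TIFFs and call your classifier instead.

```python
# Batch loop sketch: directory -> per-image segmentation -> % positive area.
# A threshold is a stand-in for the trained classifier; .npy files are
# stand-ins for real images. Assumed names throughout.
import numpy as np
from pathlib import Path
import tempfile

def segment(img):
    """Stand-in classifier: positive where the image is bright."""
    return img > 0.5

def percent_positive(img, roi_mask):
    seg = segment(img) & roi_mask
    return 100.0 * seg.sum() / roi_mask.sum()

# Create a throwaway directory of "images" so the sketch is runnable.
tmp = Path(tempfile.mkdtemp())
rng = np.random.default_rng(2)
for k in range(3):
    img = rng.random((32, 32)) * 0.4        # dim background
    img[8:8 + 4 * k, 8:24] = 0.9            # growing "positive" region
    np.save(tmp / f"sample_{k}.npy", img)

roi = np.ones((32, 32), dtype=bool)         # ROI covering the whole image
results = {}
for f in sorted(tmp.glob("*.npy")):
    results[f.name] = percent_positive(np.load(f), roi)
    print(f"{f.name}: {results[f.name]:.1f}% positive")
```

For the out-of-memory issue with whole-slide images, processing tile-by-tile inside this same loop (rather than loading the full .svs at once) is the usual workaround.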


Thanks for that, I’ll check them out. Stay safe.


Hello, I’m new to QuPath and I’ve been working on a similar project - to achieve a quantitative analysis of pallor in an H&E image. I’ve been having difficulty with enabling the pixel classifier to distinguish these areas from the white background space. Do you have any advice for this? Thank you!

Hard to say much without images indicating what is going wrong, but if there is any stain color at all, you might try a Simple Threshold first. Often necrosis will have a particular texture, but the variation… I am not sure what you are dealing with. See also: The Different Types of Necrosis and Their Histological Identifications — Andréas Astier.

Pixel classifiers can be run in multiple steps. I will frequently run one first, usually called “Tissue” to isolate the area that I am interested in and create an annotation around it (excluding blank space). Then I might use the pixel classifier within that area and add in texture features if necessary - more detail in the official documentation.
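The two-step approach described above can be sketched with plain thresholds standing in for the two pixel classifiers: first isolate "tissue" from the blank background, then look for the pale (necrotic) class only inside that tissue mask, so background white is never counted as necrosis. The intensity values and thresholds below are illustrative assumptions, not QuPath's actual classifiers.

```python
# Two-stage masking sketch: "Tissue" classifier first, then a necrosis
# classifier restricted to the tissue annotation. Thresholds are stand-ins.
import numpy as np

rng = np.random.default_rng(3)
img = np.full((80, 80), 0.95)                               # white background
img[10:70, 10:70] = 0.35 + rng.normal(0, 0.02, (60, 60))    # stained tissue
img[30:50, 30:50] = 0.75                                    # pale necrotic focus

# Step 1: "Tissue" classifier - anything darker than the near-white slide.
tissue = img < 0.9

# Step 2: necrosis classifier, applied only within the tissue annotation.
pale = img > 0.6
necrosis = pale & tissue      # background is also pale, but excluded here

print(f"tissue area:   {tissue.sum()} px")
print(f"necrosis area: {necrosis.sum()} px "
      f"({100 * necrosis.sum() / tissue.sum():.1f}% of tissue)")
```

The key point is the final `&` with the tissue mask: without it, the pale background would be indistinguishable from pallor within the tissue.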


Thank you for the reply! I’m analyzing liquefactive necrosis in brain tissue. I’ve made two different pixel classifiers, one for tissue detection and one to isolate necrotic areas; however, when using the necrosis classifier within the annotation created, it still includes the blank background space.