I can’t comment on HALO, but have experience with both Visiopharm and QuPath.
+Visiopharm in Europe has Oncotopix (or some such), which can be used in clinical studies; that could be a factor.
-It is very expensive.
±If you need deformable whole-slide alignment, it is fairly good at that, though even with re-stains of the same tissue slice, it was not good enough for single-cell overlap.
-Multiplex brightfield is very frustrating, as its Phenomap implementation (multiplex analysis) does not support brightfield.
-You cannot export the aligned images or really export much more than a snapshot of the screen.
-Objects you create can be exported as MLD files which can be imported into… other copies of Visiopharm. (See additional information in post below)
+Data exports to CSV/Excel as you might expect.
-----I find the whole APP design process a bit frustrating: something as simple as finding the % positive of cells of class “A” out of A, B and C involves something like 5 steps, none of which can be coded. Repeating the same processing steps for 4 different labels often requires creating the same process 4 times, rather than “this is the process, apply it to these 4 labels.”
++However, the cell detection options are incredibly complex and powerful, and it has several deep learning options that are far better than QuPath’s pixel classifier at detecting complex objects (glomeruli, etc.). Slight -: the deep learning options are only pixel classifiers and do not do instance segmentation. Post-processing can approximate this, but you still need to be careful about how objects are split.
If you get Visiopharm, try to make sure your entire workflow can be done within Visiopharm, or that you are happy taking a CSV of your results as the output for your next steps.
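On that note, calculations like the “% positive of class A” above are trivial once the per-cell results are out as a CSV. A minimal sketch in plain Python, where the column name `cell_class` and the class labels are assumptions for illustration, not Visiopharm’s actual export schema:

```python
# Hypothetical example: computing "% positive for class A" from a
# per-cell CSV export. The "cell_class" column and class labels A/B/C
# are assumed, not Visiopharm's real export format.
import csv
import io
from collections import Counter

# Stand-in for an exported file; a real export would have more columns.
exported = io.StringIO(
    "cell_id,cell_class\n"
    "1,A\n"
    "2,B\n"
    "3,A\n"
    "4,C\n"
    "5,A\n"
)

# Count cells per class, then take A as a fraction of A+B+C.
counts = Counter(row["cell_class"] for row in csv.DictReader(exported))
total = sum(counts[c] for c in ("A", "B", "C"))
pct_a = 100.0 * counts["A"] / total
print(f"% positive for A: {pct_a:.1f}")  # 3 of 5 cells -> 60.0
```

The same loop applied per label also sidesteps the “build the process 4 times” issue, since the labels are just a list you iterate over.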
I believe @mesencephalon might have tried out Visiopharm recently, and has experience with QuPath as well, and might provide further thoughts…