I’ve attached the pipeline and some sample images I’ve been trying to run it on. I think the pipeline is pretty well set up, but I’m sure there’s always room for improvement.
My main concern with the pipeline itself is that, because the images were taken at 4x and the primary object size setting only allows so fine an adjustment, I can’t find settings that let CP reliably distinguish real nuclei from debris. I’ve also tried, without much success, to count how many nuclei fall into each cell type (neurons versus other, basically).
On the cluster I’m attempting to run headless, submitting the job via an .lsf script. The problem I run into is that if I submit the job as a single run, it doesn’t get split up to take advantage of parallel processing, and CP errors out at image number 168. I would like to use the -f and -l flags to break the images into groups, but then the output filenames overwrite each other. I tried outputting to a database instead, but got an “expected exactly 4 inputs and only received 3” error and ended up giving up. Is there a way to break the jobs up by row, and use that in my metadata to generate unique filenames for each row of each plate?
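For reference, here is the kind of chunked submission I have in mind. This is only a sketch under my assumptions (CellProfiler.py invoked with python, 168 image sets, chunks of 24, an LSF cluster with bsub); the key idea is giving each -f/-l chunk its own -o output directory so the per-chunk outputs can’t overwrite each other. The script just prints the bsub commands so they can be reviewed before piping to sh:

```shell
#!/bin/bash
# Sketch: split a headless CellProfiler run into chunks of image sets,
# each writing to its own output directory to avoid filename collisions.
# Assumed values -- adjust for the real plate layout:
PIPELINE=WntAssayPipe-08192011.cp
TOTAL=168   # total number of image sets
CHUNK=24    # image sets per LSF job

for (( first=1; first<=TOTAL; first+=CHUNK )); do
    last=$(( first + CHUNK - 1 ))
    (( last > TOTAL )) && last=$TOTAL
    outdir="output/sets_${first}_${last}"
    mkdir -p "$outdir"
    # Emit one bsub command per chunk; -f/-l select the image-set range,
    # -o points CellProfiler's default output at the per-chunk directory.
    echo "bsub -o ${outdir}/lsf.log python CellProfiler.py -c -r -b" \
         "-p ${PIPELINE} -f ${first} -l ${last} -o ${outdir}"
done
```

With TOTAL=168 and CHUNK=24 this emits seven jobs (1–24 through 145–168). Breaking by plate row instead would just mean computing first/last from the row’s image-set indices rather than a fixed chunk size.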
There also seem to be more command-line flags available for submitting jobs than I can find documented. I’m not sure whether that’s a gap in the documentation or whether I’m just doing a terrible job of searching.
WT.zip (6.18 MB)
FXS.zip (6.26 MB)
WntAssayPipe-08192011.cp (17.7 KB)