I have dust particles that I want to eliminate from a series of time-lapse images. Once the dust particles are masked, I want to use the mask as a template to mask dust particles in the remaining loaded images. I have achieved this in Photoshop, but it's time-consuming doing it for each and every image.
As far as I know, the only way to calculate the mask once and then apply it to every frame of the timelapse is to create your dust mask separately, save it (either as a binary image or as an objects image), then load it as a “single image” under “Names and Types” in your downstream pipeline’s input modules and run a masking step similar to what you are doing here. Unless the dust is consistent from movie to movie, though, you will need to run each movie separately in your downstream pipeline so that you can give each one its own unique dust mask image. That is still probably faster than doing it in Photoshop, depending on the number of movies you have!
I think I successfully made a mask… the black spots are the dust particles. I used this pipeline to do that: MaskMain.cppipe (15.3 KB). I used the mask as my threshold strategy, but I got an error there (an AssertionError).
I can’t figure out what I am doing wrong. Thanks
It looks to me like you’re trying to mask objects named Nuclei by themselves, which would probably explain your AssertionError. Overall, though, I don’t understand your workflow right now, and since I only have the cppipe and not the cpproj I can’t be sure exactly what the disconnect is.
This is what I suggest you do; let me know if it makes sense.
Do all of this only once:
-Take one of your input images and use ApplyThreshold for masking. Feel free to play with the threshold a bit in test mode; to my eye 0.55 looks a little lenient, so try maybe 0.53 or 0.52. You want this to be good, because you’re going to use it for the rest of your images.
-Optional: use IdentifyPrimaryObjects to identify “dust particle objects” within the thresholded image.
-SaveImages: if you are coming from ApplyThreshold, save it as an Image (not a Mask, which apparently only comes from Crop; we should probably rename that, our bad). If you’re coming from IdentifyPrimaryObjects, the save type is Objects. Save as a tif, not as a jpg (jpg is almost never a good idea).
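Conceptually, the thresholding step above is just a pixelwise brightness cutoff. Here is a minimal numpy sketch of that idea (the `threshold_dust` name, the toy array, and the 0.53 cutoff are illustrative stand-ins, not CellProfiler’s actual API):

```python
import numpy as np

def threshold_dust(gray, threshold=0.53):
    """Return True wherever a pixel is at least as bright as `threshold`,
    i.e. wherever we suspect a dust particle (dust shows up bright)."""
    return gray >= threshold

# Tiny synthetic grayscale frame with one bright "dust" pixel.
frame = np.array([[0.1, 0.2],
                  [0.9, 0.1]])
dust = threshold_dust(frame)
# dust is True only at the bright pixel.
```

The boolean array plays the role of the saved binary image; CellProfiler writes it out as black/white when you save it as a tif.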
For every other pipeline you run:
-Load all of your images as you normally would, then in NamesAndTypes load your mask image using “Load a single image”. Its type will be either “Binary mask” or “Objects” based on what you chose above.
-Downstream of your ColorToGray, use MaskImage. Whether you’re masking with an image or with objects again depends on what you chose above. Repeat on as many channels as needed; you can try doing it just once upstream of ColorToGray, but I’m not sure how the behavior will propagate.
-Now identify your nuclei or whatever else using IdentifyPrimaryObjects on the masked image.
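The masking step above amounts to suppressing the dust pixels in each channel before object identification. A minimal numpy sketch, assuming a hypothetical `mask_channel` helper (this mimics the effect of MaskImage, not its implementation):

```python
import numpy as np

def mask_channel(channel, keep):
    # Zero out every pixel where `keep` is False (the dust locations),
    # leaving the rest of the channel untouched.
    return np.where(keep, channel, 0.0)

red = np.array([[0.5, 0.6],
                [0.7, 0.8]])
keep = np.array([[True, True],
                 [False, True]])   # False marks a dust pixel
clean = mask_channel(red, keep)
# The dust pixel is now zero, so it cannot be picked up as an object.
```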
I did that. I put both the project and the pipeline in the Google Drive link at the end of this post. For my mask, I got a black and white image: the dust was white and the background was black (UsedToCreateMask.cppipe and UsedToCreateMask.cpproj). Do I need to use ImageMath to invert the mask? After inversion, does it still remain a binary image?
Do you mean that if, say, ApplyThreshold is right beneath ColorToGray, the input for ApplyThreshold should be the “Masked Image”?
By channels do you mean RGB channels? If yes, why is it necessary to do that?
I did that; both the project and the pipeline (DownStreamAnalysis.cpproj and DownStreamAnalysis.cppipe) are in the link at the end of this post. But it still picks out my dust particles; in fact, what I get out is basically the same objects as in the mask.
Why should I identify my objects using the masked image as input? The masked image basically contains the dust particles, which is what the module identified.
Or should I be working with mask objects instead?
I thought a masking image is used in the ApplyThreshold module, where the masking image (a binary image) serves as the threshold strategy and the input image to ApplyThreshold is my image converted by ColorToGray. After the ApplyThreshold module, the mask image is no longer used.
You’re right, I missed an inversion step in my mask creation; my apologies. Downstream of your ApplyThreshold step you would in fact use ImageMath to invert the image, and then save your inverted image.
ApplyThreshold shouldn’t be anywhere in your downstream pipeline at all once the mask is made. Simply use MaskImage on whatever channels you’re analyzing with the mask set as the binary image made from the revised mask creation pipeline. Then proceed to do the rest of your analysis on your dust-free masked images.
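The ImageMath inversion described above is just a logical NOT on the binary image: dust pixels flip from white to black, and everything else becomes the region to keep. In numpy terms (toy arrays, not CellProfiler code):

```python
import numpy as np

# ApplyThreshold output: True where dust was found (dust shows up white).
dust = np.array([[False, False],
                 [True,  False]])

# The ImageMath "Invert" step: flip the mask so dust becomes False
# (black) and everything you want to analyze becomes True (white).
keep = ~dust
```

The inverted array is still strictly two-valued, which answers the earlier question: inversion keeps the mask binary.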
Hmm… so in my IdentifyPrimaryObjects, will the mask serve as the input? Or will the output from ColorToGray serve as the input for IdentifyPrimaryObjects, and then under the threshold strategy in IdentifyPrimaryObjects I select “binary image” and use my mask?
By channels are you referring to RGB channels in the ColorToGray Module?
No, the mask is going to be used to make an image without dust so that you can find your objects (cells, nuclei, etc.). The masked image will be your input.
Yup! You’ll split your channels into RGB, and then in whatever channel you’re finding your objects you mask the dust out so that it’s now excluded.
No, the idea is you use ColorToGray to split your channels, put the individual channels through MaskImage to mask out the dust that you wanted excluded, and then IdentifyPrimaryObjects on the masked images to find your objects you were doing timelapse on and do whatever other analysis you like!
As an aside, I noticed that in the ColorToGray module, if the conversion method is set to Combine and a relative weight of zero is assigned to the green channel, the resulting image is essentially just the dust particles. I could put this image through the ApplyThreshold module, convert it to binary, and save it as an image. I figure it’s an alternative way to find the dust particles more accurately.
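If ColorToGray’s Combine method behaves as a weighted average of the channels (my assumption; `combine_channels` below is an illustrative stand-in, not the module’s code), then a zero green weight simply drops the green signal, which is why the dust can pop out:

```python
import numpy as np

def combine_channels(rgb, weights):
    # ColorToGray-style "Combine": weighted average of the RGB channels,
    # normalised by the sum of the weights.
    w = np.asarray(weights, dtype=float)
    return np.tensordot(rgb, w, axes=([-1], [0])) / w.sum()

# A 1x2 RGB image; the second pixel has no green signal at all.
rgb = np.array([[[0.2, 0.8, 0.2],
                 [0.4, 0.0, 0.4]]])
# Green weight zero: only red and blue contribute to the gray value.
gray = combine_channels(rgb, weights=[1.0, 0.0, 1.0])
```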
Looking forward to hearing from you. Thanks a lot Beth!
Your mask is still inverted; that does not seem to have ever gotten fixed. You can certainly create a new mask image if you want and use it in the corrected pipeline I’ve attached; just pay attention to the “Invert the mask?” tick box at the bottom of MaskImage if your new mask image is correctly configured and no longer needs to be inverted.
Thanks a lot!! I see my error. When I create a mask using the ApplyThreshold module and invert it using ImageMath, I need to uncheck the invert-mask option in the MaskImage module in your pipeline. If I didn’t do an inversion with ImageMath, then I need to check the invert-mask option in the MaskImage module.
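That either/or logic can be spelled out with booleans: ImageMath and the MaskImage tick box each flip the mask once, so applying both puts you back at the dust-only version. A toy numpy check (illustrative only):

```python
import numpy as np

# An already-inverted, dust-free mask (True = keep, False = dust).
mask = np.array([[True, True],
                 [False, True]])

# Ticking "Invert the mask?" on an already-inverted mask flips it back
# to the dust-only version -- exactly the bug described above.
wrongly_flipped = ~mask

# A second inversion cancels the first, restoring the correct mask.
restored = ~wrongly_flipped
```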
In the pipeline you attached, do you think it’s a good idea to do some image processing (say, a smoothing operation and a morphological operation) right after ColorToGray? I ask because, although the mask is working, a significant number of crystals are missing. For that image, the pipeline I created, 150Crystals.cppipe (11.4 KB), will get me almost 150 crystals, including the dust particles (about 15 in total), but right now, after the mask, fewer than 40 crystals are identified.
Hi Beth… I think I have got it all working; at least I got what I wanted. But I think there is a much better way to do it; what I did seems very crude to me :grin:. My CP skills aren’t top notch yet. Here are the pipelines I used to make the mask and apply it, along with the image.