Quantification of Pixel Reduction

Hello. This software was recommended to me, but before I go through the installation process, I want to confirm that it will suit my needs.

I am trying to quantify the percent reduction of surface residue on a circuit board based on before-and-after photos of identical dimensions and proportions. I was told there is a way to quantify this based on pixels. Does this software have that function?

Any information you could provide would be helpful.

Thanks!

Hello, the answer is “Maybe”! It’s impossible to say without seeing an example pair of images. Could you upload those? Then hopefully we can help.

Assuming it’s not terribly challenging to detect which areas have residue and which don’t, then the answer is certainly yes.

Anne

Hi Anne,

Thank you for your response.

Attached are ‘before’ and ‘after-cleaning’ photos of a circuit board containing flux residues. As stated above, we would like to find a way to easily quantify the percent reduction of the flux residue based on pixels.

I would also like to point out that I am aware of the significance of consistent lighting and/or sizing when comparing the two photos. This of course would be taken into account when taking similar photos for analysis in the future.

Any input you could provide would be helpful.

Thanks again!
Laura

Interesting images, not something we see every day around here in CellProfiler world!

Can you describe what exactly the residue is? I assume it’s not simply “all white pixels,” because there is white resulting from shininess in the ‘after’ photo that I imagine you’d want to exclude. Also, the white stuff at the bottom of each element: is that residue, or should it be ignored?

Another thing to consider: are you most concerned about how much silver is visible, or how much residue is visible? The former would emphasize mainly white stuff directly ON each element (and ignore white stuff around each element). I ask because it might be easier to train the computer to recognize the silver parts (including the shininess) than to recognize the white parts.

Hi again,

You are correct in assuming that we would like to exclude the shiny glare on the metal elements in the after photo, as well as the white area at the bottom.

I agree with your presumption that it may be easier to train the computer to recognize the metal than the white. The problem, however, is that we are concerned with the OVERALL reduction of the white residue, which includes areas both ON each element and around it.

For example, in the after photo on the right, there is a slight hue of white residue remaining between the elements that we were unable to remove. I would imagine that the presence of this remaining residue would give us a reduction value of somewhere around 90% (versus 100% for complete removal).

Thoughts?

Ok, it’s hard to say how well it will work without trying, but here are two options:

  1. In CellProfiler, mask away the shiny spots and the white area at the bottom based on their super-bright intensity, then total up the pixel intensity of the remaining white pixels. This would be the simplest solution, but it may not achieve sufficient accuracy because it seems the white area at the bottom may be of similar brightness to the real residue signal you care about. If you have trouble putting together a pipeline for this, let us know; you would likely want ColorToGray, then IdentifyPrimaryObjects to find the 6 objects, then MaskImage to get rid of the shiny areas, then MeasureObjectIntensity. (A rough Python sketch of this logic follows after this list.)
  2. Use the software “ilastik” to train a machine-learning classifier to recognize the residue of interest. This allows both texture AND intensity to be used to make the decision, and is thus likely to be more accurate. It’s pretty easy to download that software, scribble over the regions of interest, and see how it does.
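
In case it helps you prototype option 1 before building the CellProfiler pipeline, here is a minimal Python sketch of the same logic using scikit-image. The filenames, the glare cutoff, the residue threshold, and the optional crop are all placeholder assumptions you would need to tune to your own images:

```python
from skimage import io, color

def residue_signal(path, glare_cutoff=0.95, residue_threshold=0.5):
    """Sum the intensity of whitish pixels, ignoring super-bright glare."""
    img = io.imread(path)
    if img.ndim == 3:
        img = color.rgb2gray(img[..., :3])  # drop any alpha channel; result is in [0, 1]
    else:
        img = img / img.max()               # grayscale input: normalize to [0, 1]
    # You could also crop away the white strip at the bottom here, e.g.:
    # img = img[:-200, :]
    whitish = (img >= residue_threshold) & (img < glare_cutoff)
    return img[whitish].sum()

before = residue_signal("before.png")       # hypothetical filenames
after = residue_signal("after.png")
reduction = (before - after) / before * 100
print(f"Residue reduction: {reduction:.1f}%")
```

The number it prints is the percent drop in total summed “white” intensity between the matched before/after pair, which is one reasonable proxy for percent residue removed, provided lighting and framing stay consistent as you mentioned.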

Thank you so much for the input, your guidance is much appreciated.

I will explore both options and let you know how we choose to proceed.

Have a great day!