How to measure an object that is reflective


Hello. I’m Dan.

I’m working on an image analysis study using ImageJ and I’m facing a problem.

I want to extract only the “soybeans” in the image below.

But in the leftmost part of the image, the soybeans are reflected in the frame, so the selection ends up including more than just the soybeans.

Sample image and/or code

Raw Image

Color Threshold

Soybeans reflected in the grid*
Screenshot (1066)

Screenshot (1065)

Analyze Goal

I want to extract only the “soybeans” in the simplest way possible. The ROI file should correspond to the original photo.
(Below I describe some of the things I have tried, but they are time-consuming and it is hard to keep the ROI files in correspondence with the original image.)

What I do to solve this problem

・I run “Color Threshold” several times with different Lab* values to extract only the soybeans. I then check the Measurement Results (“Circ.”, “Height”, and “Width”) and select the objects that appear to be only soybeans.
(The ROI files obtained from the multiple Color Threshold runs were also matched to the analyzed results.)
→ Color Threshold takes time and has to be repeated many times because the soybean reflections are not uniform (the analysis results change significantly with just a small change in the b* value!)

・After running Color Threshold, I pick out the soybeans that were not extracted well, copy them to a new image, split the color channels, and threshold the b* channel.
→ This works but is laborious, and it is hard to map the resulting ROIs back to the coordinates of the base image.
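The multi-pass part of the workflow above can be sketched as combining the mask from each Color Threshold run with a logical OR. A minimal numpy illustration — the b* values and the per-run (min, max) ranges here are hypothetical, invented for the example:

```python
import numpy as np

# Hypothetical b* channel (values are made up for illustration)
b = np.array([[ 5, 12, 22],
              [17,  3, 25],
              [11, 19,  6]])

# Hypothetical (min, max) b* ranges tried in separate Color Threshold runs
passes = [(10, 18), (16, 24)]

# Combine the per-run masks with logical OR into one soybean mask
mask = np.zeros(b.shape, dtype=bool)
for lo, hi in passes:
    mask |= (b >= lo) & (b <= hi)
```

Keeping everything as boolean masks on the full-size image sidesteps the coordinate problem: the combined mask is already registered to the base image, so ROIs derived from it need no translation.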


I am not a native English speaker. If any part of this is unclear, I’d appreciate it if you could let me know.

Hello. Perhaps look at the problem inversely.

It would be very easy to first remove the grid.

After that your protocol for the soy bean extraction should avoid these kinds of issues.

Hope this helps…



Thanks for replying.

I didn’t think about the possibility of erasing the grid.

What methods or functions are usually used to remove the grid?


Hi @11137,

some general issues first.

Are you working with JPEGs? Or did you use the JPEG format only for posting the image here?
If you work with JPEG, then the first option to improve your measurement is to change the image format.
Artefacts introduced by JPEG compression strongly influence your results.

I wonder where the horizontal perspective distortions in your image are coming from.
Perspective distortions ONLY in the horizontal direction?
How have you captured the image(s)?

I would not perform color measurement with this kind of ‘compartment plate’. The side-wall reflections possibly influence the ‘in-box illumination characteristics’.

Regarding your object segmentation:
The b-channel of the Lab* stack shows a bi-modal histogram.

A good object segmentation can be realized (on this image) with the Huang threshold.

Maybe this will give you some ideas for your next steps.


Can you remove the grid before imaging? :upside_down_face:

Some extra comments:

  • I agree with @phaub: converting your image to Lab* yields much better results than using the color threshold, and it is much faster.

  • While you want to get “just the soybean”, and show examples of the grid being reflective, you do not mention the beans that are occluded by the perspective distortion mentioned above. Because of these distortions, removing the grid beforehand will not yield very good results…

Considering the error in surface area is around ±5%, what is the end goal of extracting all the soybeans? Based on that, you may need to first rethink the image acquisition protocol.

  • Completing @haesleinhuepf’s comment, one solution could be to actually paint the grid black using a non-reflective paint.

While I understand that this is not exactly what you wanted to hear (read?), it is always better to improve things on the acquisition side before trying to bend over backwards to overcome them via image analysis.


Sorry for the late reply.

My end goal in extracting all the soybeans is to measure their height (thickness) precisely, so even a small error in surface area will affect the height (thickness) value; that is why I want to extract only the soybeans.

I found that there are many physical problems with this image acquisition method, such as perspective problems and reflections, so I will try to solve them with an image-acquisition approach.

Thanks for the detailed reply.

Sorry for the late reply.
This image acquisition method is provisional, so it is possible to remove the frame. I had wanted to solve this problem with an image analysis approach.


Sorry for the late reply.

I didn’t know about the influence of JPEGs on the results. What file format should I use for image work?

I captured this image with a flatbed scanner, so I think the problem comes from that.

Using the “Huang threshold” on the b* channel greatly improves my measurement.
If you don’t mind, I’d like to ask how you came up with this approach.

Big thanks

Usually flatbed scanners can save raw data as TIFFs. Look for lossless format options in your software (TIFF, BMP, PNG, …).
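A quick way to see the difference (assuming Pillow and numpy are available): a TIFF round-trip returns the pixel values unchanged, while a JPEG round-trip does not:

```python
import os
import tempfile

import numpy as np
from PIL import Image

# Synthetic noisy 8-bit grayscale image (stands in for a real scan)
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

with tempfile.TemporaryDirectory() as d:
    tif_path = os.path.join(d, "scan.tif")
    jpg_path = os.path.join(d, "scan.jpg")
    Image.fromarray(img).save(tif_path)              # lossless
    Image.fromarray(img).save(jpg_path, quality=90)  # lossy
    tif_back = np.asarray(Image.open(tif_path))
    jpg_back = np.asarray(Image.open(jpg_path))

# Maximum per-pixel deviation after each round-trip
tif_error = np.abs(img.astype(int) - tif_back.astype(int)).max()
jpg_error = np.abs(img.astype(int) - jpg_back.astype(int)).max()
```

The JPEG errors look small per pixel, but they concentrate around sharp edges — exactly where a threshold decides which pixels belong to a bean.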


Yes, this seems to be the reason.

This approach was found by comparing the different auto-threshold modes in ImageJ.
‘Huang’ gave the best result ON THIS PARTICULAR IMAGE.
You should verify that this is also true for the majority of your typical images.

If you want to go into details, you have to study the mathematics of the different threshold procedures. Every threshold mode has its strengths and weaknesses and is tuned to a specific image statistic (or a specific histogram distribution).
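To illustrate how different procedures pick different thresholds from the same histogram, here is a small numpy comparison of two classic schemes (neither is Huang — they are just simple, easy-to-read examples) on an unbalanced bi-modal sample:

```python
import numpy as np

def mean_threshold(values):
    """Simplest scheme: threshold at the global mean."""
    return values.mean()

def isodata_threshold(values, tol=1e-3):
    """Iterative intermeans (ISODATA): threshold midway between class means."""
    t = values.mean()
    while True:
        new_t = (values[values <= t].mean() + values[values > t].mean()) / 2
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# Unbalanced bi-modal sample: many background pixels, few object pixels
rng = np.random.default_rng(0)
vals = np.concatenate([rng.normal(5, 2, 9000), rng.normal(25, 2, 1000)])

t_mean = mean_threshold(vals)    # dragged toward the large background class
t_iso = isodata_threshold(vals)  # settles between the two class means
```

On this sample the global mean sits far from the valley because the background dominates the histogram, while the iterative scheme converges near the midpoint between the class means — the same kind of behavioral difference that made Huang the best fit for this particular image.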

The suggestions are useful for my work, a big thank you to everyone! :+1: