This is one of my favourite questions. I like that you pose it as “how to decide”.
You’ll have about 7 mm × 3 pixels/mm = 21 pixels per object in the worst case. You could estimate straight away that with a pixel’s worth of uncertainty at each end of a line measurement, you get 21 ± 2 pixels, or roughly 10% measurement error.
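That back-of-envelope arithmetic can be sketched in a few lines (the feature size and resolution are the assumed numbers from above):

```python
# Worst-case pixel budget for the smallest feature.
feature_mm = 7         # smallest object dimension, mm (assumed from above)
resolution_px_per_mm = 3   # scanner resolution, pixels/mm

pixels = feature_mm * resolution_px_per_mm     # 21 pixels
uncertainty_px = 2                             # ~1 pixel of slop at each end
relative_error = uncertainty_px / pixels       # fraction of the measurement

print(f"{pixels} px, ±{uncertainty_px} px -> {relative_error:.1%} error")
```

Plug in your own scanner resolution and smallest feature to see whether the pixel budget is even worth worrying about.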
Better is to do it empirically: scan some objects whose real dimensions you can measure accurately (with a ruler, digital caliper, or micrometer). Take the measurements multiple times, ideally blinded, both in the image and in reality, and compare the results. Check the variation within and between modalities.
The next thing to consider is whether your measurement error is large or small relative to your effect size (the difference in the size of your feature among groups). If you have 1 mm of measurement error but the effect size is 5 mm and consistent, it won’t matter much. But if the measurement error approaches the effect size, you have noisy data and you won’t readily be able to detect the effect; and that is before you consider within-group variation.
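A quick simulation makes the point concrete. This is a toy sketch, not a power analysis: two groups whose true sizes differ by the effect size, with each measurement blurred by the measurement error, and the group separation expressed in units of that error (all numbers are assumptions for illustration).

```python
import random
import statistics

random.seed(0)  # deterministic toy example

def separation(effect_mm, noise_sd_mm, n=30):
    """Simulate two groups differing by effect_mm, each measurement
    perturbed by Gaussian error of sd noise_sd_mm; return the observed
    group-mean difference in units of the measurement error."""
    a = [random.gauss(20.0, noise_sd_mm) for _ in range(n)]
    b = [random.gauss(20.0 + effect_mm, noise_sd_mm) for _ in range(n)]
    return (statistics.mean(b) - statistics.mean(a)) / noise_sd_mm

sep_big = separation(5, 1)    # 5 mm effect, 1 mm error: easy to see
sep_small = separation(1, 1)  # 1 mm effect, 1 mm error: buried in noise

print(f"5 mm effect: {sep_big:.1f} noise-units apart")
print(f"1 mm effect: {sep_small:.1f} noise-units apart")
```

When the effect is several noise-units wide the groups separate cleanly; when effect and error are comparable, the observed difference is of the same order as the noise and you will need many more samples to detect it.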
If you aren’t able to measure your object of interest with a real ruler, then you have to calibrate the scanner against a calibration phantom of known dimensions and check your precision and accuracy against that.
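The calibration itself is just a scale factor. A minimal sketch, with a hypothetical phantom of certified length scanned alongside the specimens:

```python
# Hypothetical phantom calibration: a bar of certified length scanned in
# the same image gives the true mm-per-pixel scale.
phantom_true_mm = 50.0   # certified length of the phantom (assumed)
phantom_pixels = 152     # its measured length in the image, pixels (assumed)

mm_per_pixel = phantom_true_mm / phantom_pixels

# Any object measured in pixels can now be converted to mm:
object_pixels = 64
object_mm = object_pixels * mm_per_pixel

print(f"scale: {mm_per_pixel:.4f} mm/px, object: {object_mm:.2f} mm")
```

Measure the phantom repeatedly, too: the spread in `mm_per_pixel` across repeats is your precision, and the deviation of the back-converted phantom length from its certified value is your accuracy.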