Scaling of images

scaling of images 2.pdf (173.9 KB)

Hi everyone,

I have two magnetic resonance images of the same sample, but with different fields of view (scaling). I need to equalize the scaling of these two images. How can I implement this? Please find attached a PDF file that clarifies the problem.

Hello Physicist -

You can use Image > Scale... from the Fiji / ImageJ menu. If you simply need to rescale one (or both) of the images, this will do what you want.

(If you need to warp or align misaligned images, you will have a harder problem, and should look at the various image registration plugins that are available for ImageJ.)

Thanks, mm


Dear mountain_man,

Thank you for your answer!
I should have posted the real problem, rather than simplified. I posted simplified problem because I thought that it would be enough for me to derive the solution for real problem from the solution of simplified problem. Here is the real problem:

A magnetic resonance imaging (MRI) experiment was conducted. There are 3 different images of the same sample (the sample is the object that was scanned in the MR scanner). These 3 images are 3 orthogonal projections: plane (X-Y), plane (Z-X), plane (Z-Y). There was no slice selection during the MRI experiment, i.e. each projection represents the sum of all anatomical slices of the sample in a specific direction. That is, the (X-Y) projection is the sum of all anatomical slices along the Z axis: Z1, Z2, Z3, …, Zn. The (Z-X) projection is the sum of all anatomical slices along the Y axis: Y1, Y2, Y3, …, Yk. The (Z-Y) projection is the sum of all anatomical slices along the X axis: X1, X2, X3, …, Xm. Each projection (image) is a 512 x 512 matrix (pixels).

For each image, scaling was applied according to the known field of view (FOV) of that image. The first image (projection) has FOV = 20 mm x 30 mm. The second image (projection) has FOV = 50 mm x 40 mm. The third image (projection) has FOV = 43 mm x 36 mm. These FOV numbers are arbitrary, but they reflect the essence of the problem: each projection (image) has its own FOV, which can differ from the FOVs of the other two projections (images). An aspect-ratio correction was applied in order to show the real ratio between the sides of each image, so the displayed pixels are not square.
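To make the numbers explicit: each pixel size is simply FOV divided by the 512-pixel matrix. For example, in Python (the variable names are only for illustration):

```python
# Pixel size (mm/pixel) along each side = FOV (mm) / matrix size (pixels).
MATRIX = 512

fov_xy = (20.0, 30.0)   # X-Y projection FOV in mm (arbitrary example values)
fov_zx = (50.0, 40.0)   # Z-X projection FOV in mm
fov_zy = (43.0, 36.0)   # Z-Y projection FOV in mm

px_xy = tuple(side / MATRIX for side in fov_xy)   # (0.0390625, 0.05859375) mm/pixel
px_zx = tuple(side / MATRIX for side in fov_zx)   # (0.09765625, 0.078125) mm/pixel
px_zy = tuple(side / MATRIX for side in fov_zy)   # (0.083984375, 0.0703125) mm/pixel
```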

In the images, the dark blue area is the noise area (see the attached PDF file). All images in the PDF were drawn in PowerPoint, but they look like real MR images. In the noise area of real images, intensity values should be zero, but in practice they are not, so in each projection I will have to select an ROI covering the whole sample and cut it out (and there is another question of how to determine the edges of the sample in each image). The result will be an image of the sample in which the noise around the sample is zero. So in the attached PDF it is assumed that the intensities in the noise area have already been set to zero.

As one can see from the pictures, the sample is non-uniform (three different colors demonstrate this non-uniformity), but this probably doesn't matter.

I need to divide the intensity at each point of the sample in the left image (X-Y projection) by the corresponding length, i.e. I need to obtain a normalized left image. The length for each point should be calculated from the middle image (Z-X projection). Each point (pixel) in the left image is a parallelepiped defined by three sides: side1 = 0.0390625 mm (see the attached picture, pixel size under the left image), side2 = 0.05859375 mm, side3 = Length. Lengths are in millimeters. In the attached PDF, vertical lines are drawn in the left image which pass through the sample: X1, X2, …, Xm (these X values are in millimeters). On each line (X1, X2, …, Xm), in the interval within the sample, violet points are drawn. The number of violet points on each line is the number of pixels belonging to the sample on that specific X-line. Since there was no slice selection in the MRI experiment, I assume that for each violet point on a specific X-line (i.e. for each violet point in a specific column of the left image), the length is the same.

In the middle image (Z-X projection), horizontal lines are drawn which pass through the sample: X'1, X'2, …, X'm (these X' values are in millimeters). All points on each X'-line that belong to the sample should be counted to calculate the lengths L1, L2, …, Lm: Length = (number of pixels which lie on the specific X'-line and belong to the sample) multiplied by (pixel size along the Z axis in the Z-X projection).
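In code, the length calculation I have in mind looks something like this (a NumPy sketch; I assume here that the X'-lines run along the rows of the Z-X image and that the noise around the sample has already been set to zero):

```python
import numpy as np

def lengths_along_z(zx_img, pixel_size_z_mm):
    """Sample length along Z for each X'-line (here: each row) of the Z-X projection.

    Length = (number of sample pixels on the X'-line) * (pixel size along Z).
    Noise outside the sample is assumed to be zero already, so "belongs to the
    sample" simply means "intensity > 0".
    """
    sample_mask = zx_img > 0                      # pixels belonging to the sample
    pixels_per_line = sample_mask.sum(axis=1)     # count of sample pixels per X'-line
    return pixels_per_line * pixel_size_z_mm      # L1, L2, ..., Lm in millimetres
```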

It seems that the third image (Z-Y projection) is not necessary for the calculation.

The problem is that the FOVs of the first and second projections are different, and thus X1, X2, …, Xm are not equal to X'1, X'2, …, X'm. I need to match the first and second images (and maybe all three images, but at least two). The first variant is to extend the side along the X axis in the left image (X-Y projection) from 20 mm to 40 mm. The second variant is to compress the side along the X axis in the middle image (Z-X projection) from 40 mm to 20 mm. Which variant would you recommend? And how can I implement this extension (or compression)? Or maybe there is another way to achieve the goal?

Additional information:
I don’t need to reconstruct the object from the projections. The purpose is to obtain the normalized left image (the normalization factor is the length, which is different for each pixel in the left image). It is a preparation step for further quantitative analysis. (The length should be calculated from the middle image as shown in the attached PDF file.)

Regarding matching: I need to match the left and middle projections, since I need to know which length I should take for the violet points on, for example, the X20-line in the left image. If the scaling of the X-axis side were the same in the left and middle images (for example, horizontal side = 20 mm in the left image and vertical side = 20 mm in the middle image), then it would be possible to say that for the violet points on the X20-line in the left image, the length can be calculated from the number of pixels along the horizontal X'20-line in the middle image (in this case, X'20 would be equal to X20).
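In other words, what I need is a mapping from a column of the left image to the X'-line of the middle image at the same physical X position. Something like this sketch, which assumes the two FOVs start at the same X origin (an assumption I still need to verify):

```python
def matching_x_line(col_xy, fov_x_xy=20.0, fov_x_zx=40.0, matrix=512, offset_mm=0.0):
    """Row index of the X'-line in the Z-X projection that lies at the same
    physical X position (in mm) as column `col_xy` of the X-Y projection.

    `offset_mm` is the shift between the two FOV origins along X; it is assumed
    to be zero here (both FOVs starting at the same X position).
    """
    x_mm = (col_xy + 0.5) * fov_x_xy / matrix + offset_mm   # centre of the column in mm
    return int(x_mm / (fov_x_zx / matrix))                  # X'-line containing that position
```

With a mapping like this, the length for the violet points in a given column of the left image would simply be the L value of the matched X'-line.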

I would very much appreciate your help.
Presentation_Imagescforum_scaling_images.pdf (185.8 KB)

Hi, I have been following the discussion of this problem on IgorExchange. Scaling the image the way that Mountain Man suggests should get you to your solution. I think I understand what you are asking (maybe not!) but you don’t seem satisfied with the answer.
In answer to "which variant would you recommend": I think extension is better, but I don't think it matters.
How to do it? Use the scale command.
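(If you end up doing this step outside Igor or Fiji, an equivalent resampling in Python/SciPy would look roughly like the sketch below. It compresses the X-Y projection along X, which I assume runs along the columns, so that its X pixel size matches the Z-X projection's; the FOV values are the example ones from your post.)

```python
from scipy import ndimage

def match_x_pixel_size(xy_img, fov_x_xy=20.0, fov_x_zx=40.0, order=1):
    """Resample the X-Y projection along X (columns) so that its pixel size
    along X equals that of the Z-X projection.

    With the example FOVs the 512 columns become 256; intensities are linearly
    interpolated (order=1). Pad the result back to 512 x 512 with zeros if you
    need the original matrix size.
    """
    factor = fov_x_xy / fov_x_zx              # (20 mm / 512 px) / (40 mm / 512 px) = 0.5
    return ndimage.zoom(xy_img, (1.0, factor), order=order)
```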
Other ways to achieve the goal? Yes, you can leave the images as they are, calculate the positions of the lines you want to analyse in pixels, and work from those. In Igor there is a family of Interpolate functions that will do what (I think) you want to do. In FIJI you can specify a line and generate a line profile.
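For the interpolation route, a small Python sketch of what I mean: it samples the per-line length profile of the Z-X projection at the physical X positions of the X-Y projection's columns, so neither image has to be resampled. The axis and FOV assignments are assumptions taken from your description.

```python
import numpy as np

def lengths_at_xy_columns(lengths_zx, fov_x_xy=20.0, fov_x_zx=40.0, n_xy=512):
    """Interpolate the per-X'-line lengths of the Z-X projection (e.g. computed
    with the length formula above) at the X positions of the X-Y columns."""
    m = len(lengths_zx)
    x_zx = (np.arange(m) + 0.5) * fov_x_zx / m        # X' positions (mm) of the Z-X lines
    x_xy = (np.arange(n_xy) + 0.5) * fov_x_xy / n_xy  # X positions (mm) of the X-Y columns
    return np.interp(x_xy, x_zx, lengths_zx)          # one length per X-Y column
```

Dividing each column of the X-Y image by the corresponding value of this array (where it is non-zero) would then give the normalised left image.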


Dear quantixed,

Thank you!
I’d like to do the image processing in Igor. I posted here because I wanted to hear other opinions.
I tried to perform the scaling using ImageJ, but again the result is not the one I expected. ImageJ compressed the image, so the sample on the left now looks like the sample on the right (I mean the size of the yellow square), but the matrix (number of pixels) has changed. Maybe, after this compression, I have to extend this Picture1-1 to the original matrix size by adding zeros around the existing 264x264 matrix? Please find attached three screenshots from ImageJ demonstrating the issue.

There’s no data outside of the original image for ImageJ to use to make the compressed version the same size as Picture 2. Since that is the result you want, yes, you’ll need to fill it in with zeros or something. The converse is that if you expand Picture 2 so that the yellow square is the same size as in Picture 1, you’ll need to crop that image to get it to the same size.
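If it helps, here is a NumPy sketch of the zero-padding (and the converse crop), assuming you want the smaller image centred; in ImageJ itself, Image > Adjust > Canvas Size... should do the same padding.

```python
import numpy as np

def pad_to(img, target=512):
    """Embed a smaller image (e.g. the rescaled 264 x 264 matrix) in the centre
    of a zero-filled target x target matrix."""
    out = np.zeros((target, target), dtype=img.dtype)
    r0 = (target - img.shape[0]) // 2
    c0 = (target - img.shape[1]) // 2
    out[r0:r0 + img.shape[0], c0:c0 + img.shape[1]] = img
    return out

def crop_to(img, target=512):
    """Take the central target x target region of a larger image (the converse
    case, if Picture 2 is expanded instead)."""
    r0 = (img.shape[0] - target) // 2
    c0 = (img.shape[1] - target) // 2
    return img[r0:r0 + target, c0:c0 + target]
```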
