Dear mountain_man,

Thank you for your answer!

I should have posted the real problem rather than a simplified one. I posted the simplified problem because I thought it would be enough to derive the solution for the real problem from the solution of the simplified one. Here is the real problem:

A magnetic resonance imaging (MRI) experiment was conducted. There are 3 different images of the same sample (the sample is the object that was scanned in the MR scanner). These 3 images are 3 orthogonal projections: plane (X-Y), plane (Z-X), and plane (Z-Y). There was no slice selection during the MRI experiment, i.e. each projection represents the sum of all anatomical slices of the sample in a specific direction. That is, the (X-Y) projection is the sum of all anatomical slices along the Z axis: Z1, Z2, Z3, …, Zn. The (Z-X) projection is the sum of all anatomical slices along the Y axis: Y1, Y2, Y3, …, Yk. The (Z-Y) projection is the sum of all anatomical slices along the X axis: X1, X2, X3, …, Xm. Each projection (image) is a 512 x 512 matrix (pixels). For each image, scaling was applied according to the known field of view (FOV) of that image. The first image (projection) has FOV = 20 mm x 30 mm. The second image (projection) has FOV = 50 mm x 40 mm. The third image (projection) has FOV = 43 mm x 36 mm. These FOV numbers are arbitrary, but they reflect the essence of the problem: each projection (image) has its own FOV, which can differ from the FOVs of the two other projections (images). The aspect ratio was applied in order to show the real ratio between the sides of an image; thus the pixels are not square after the aspect ratio is applied.
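For reference, the per-axis pixel sizes follow directly from FOV / matrix size; a minimal sketch using the FOV values quoted above:

```python
# Pixel sizes (mm/pixel) for the 512 x 512 images, from the FOVs quoted above.
MATRIX = 512

fovs_mm = {
    "XY": (20.0, 30.0),  # first image (X-Y projection)
    "ZX": (50.0, 40.0),  # second image (Z-X projection)
    "ZY": (43.0, 36.0),  # third image (Z-Y projection)
}

# one (width, height) pixel-size pair per projection
pixel_sizes = {name: (w / MATRIX, h / MATRIX) for name, (w, h) in fovs_mm.items()}

for name, (px, py) in pixel_sizes.items():
    print(f"{name}: {px:.8f} mm x {py:.8f} mm")
```

For the left image this gives 20/512 = 0.0390625 mm and 30/512 = 0.05859375 mm, matching the two pixel sides quoted further below.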

In the images, the dark blue area is the noise area (see the attached pdf file). All images in the pdf file were drawn in PowerPoint, but they look like real MR images. In the noise area of the real images, the intensity values should be zero, but in fact they are not, so in each projection I will have to select an ROI covering the whole sample and cut it out (and there is another question of how to determine the edges of the sample in each image). The result will be an image of the sample in which the noise around the sample is zero. So in the attached pdf file, it is assumed that the intensities in the noise area have already been set to zero.
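Setting the noise to zero could be done with a simple intensity threshold, for example (a sketch, assuming the image is a NumPy array; the threshold value here is arbitrary, and on real data it would have to be estimated from the noise statistics):

```python
import numpy as np

def zero_noise(img: np.ndarray, threshold: float) -> np.ndarray:
    """Return a copy of the image with all pixels below the threshold set to zero."""
    out = img.astype(float).copy()
    out[out < threshold] = 0.0
    return out

# toy example: 4 x 4 "image" with weak noise around a bright 2 x 2 sample
img = np.array([[0.1, 0.2, 0.1, 0.1],
                [0.1, 5.0, 6.0, 0.2],
                [0.2, 7.0, 8.0, 0.1],
                [0.1, 0.1, 0.2, 0.1]])
clean = zero_noise(img, threshold=1.0)
```

The same thresholded mask (`clean > 0`) could also serve as a first estimate of the sample edges, though a pure threshold may need cleanup on noisy real images.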

As one can see from the pictures, the sample is non-uniform (the three different colors demonstrate this non-uniformity), but that probably doesn't matter.

I need to divide the intensity at each point of the sample in the left image (X-Y projection) by the corresponding length, i.e. I need to obtain a normalized left image. The length for each point should be calculated from the middle image (Z-X projection). Each point (pixel) in the left image is a parallelepiped defined by three sides: side1 = 0.0390625 mm (see the attached picture, pixel size under the left image), side2 = 0.05859375 mm, side3 = Length. Lengths are in millimeters. In the attached pdf file, in the left image, vertical lines are drawn that pass through the sample: X1, X2, …, Xm (these X values are in millimeters). On each line (X1, X2, …, Xm), in the interval within the sample, violet points are drawn. The number of violet points on each line is the number of pixels belonging to the sample on that specific X-line. Since there was no slice selection in the MRI experiment, I assume that the length is the same for every violet point on a given X-line (i.e. for every violet point in a given column of the left image).

In the middle image (Z-X projection), horizontal lines are drawn that pass through the sample: X'1, X'2, …, X'm (these X' values are in millimeters). All points on each X'-line that belong to the sample should be counted to calculate the lengths L1, L2, …, Lm: Length = (number of pixels that lie on the specific X'-line and belong to the sample) multiplied by (pixel size along the Z axis in the Z-X projection).
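Put together, the length calculation and the division might look like the following sketch. It assumes the two images are already matched along X, i.e. column i of the X-Y image corresponds to row i (the X'i-line) of the Z-X image, and that `zx_mask` is a binary mask of the middle image where nonzero marks the sample:

```python
import numpy as np

def normalize_left_image(xy: np.ndarray, zx_mask: np.ndarray,
                         pixel_size_z_mm: float) -> np.ndarray:
    """Divide each column of the X-Y image by the length L_i taken from the
    Z-X projection: L_i = (sample pixels on the X'i-line) * (Z pixel size)."""
    lengths_mm = zx_mask.sum(axis=1) * pixel_size_z_mm   # L1, L2, ..., Lm
    out = np.zeros_like(xy, dtype=float)
    nonzero = lengths_mm > 0        # avoid dividing by zero outside the sample
    out[:, nonzero] = xy[:, nonzero] / lengths_mm[nonzero]
    return out

# toy example: 2 x 3 image; 3 X'-lines with 2, 1 and 0 sample pixels
xy = np.full((2, 3), 8.0)
zx_mask = np.array([[1, 1], [1, 0], [0, 0]])
norm = normalize_left_image(xy, zx_mask, pixel_size_z_mm=2.0)
```

The helper name and the one-column-per-row correspondence are my assumptions; whether Z runs along the rows or the columns of the real Z-X image would need to be checked against the actual data orientation.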

It seems that the third image (Z-Y projection) is not needed for the calculation.

The problem is that the FOVs of the first and second projections are different, and thus X1, X2, …, Xm are not equal to X'1, X'2, …, X'm. I need to match the first and second images (and maybe all three images, but at least two of them). The first variant is to extend the side along the X axis in the left image (X-Y projection) from 20 mm to 40 mm. The second variant is to compress the side along the X axis in the middle image (Z-X projection) from 40 mm to 20 mm. Which variant would you recommend? And how can I implement this extension (or compression)? Or is there perhaps another way to achieve the goal?
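Either variant amounts to a 1-D resampling of one image axis onto a new grid. A minimal nearest-neighbour sketch in pure NumPy (for real data a higher-order interpolation such as `scipy.ndimage.zoom` would be smoother):

```python
import numpy as np

def resample_axis(img: np.ndarray, axis: int, new_size: int) -> np.ndarray:
    """Nearest-neighbour resampling of one image axis onto new_size samples,
    using a pixel-centre convention."""
    old_size = img.shape[axis]
    # nearest old sample for each new sample position
    idx = np.floor((np.arange(new_size) + 0.5) * old_size / new_size).astype(int)
    idx = np.clip(idx, 0, old_size - 1)
    return np.take(img, idx, axis=axis)

# stretching a 2 x 2 image to 2 x 4 along the column axis
img = np.array([[1.0, 2.0],
                [3.0, 4.0]])
stretched = resample_axis(img, axis=1, new_size=4)
```

One consideration: compressing the middle image discards resolution, while extending the left image only interpolates; and since only the per-column lengths matter, resampling the 1-D length profile L1…Lm instead of a full 2-D image might also suffice.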

Additional information:

I don't need to reconstruct the object from the projections. The purpose is to obtain the normalized left image (the normalization factor is the length, which is different for each pixel in the left image). It is a preparation step for further quantitative analysis. (The length should be calculated from the middle image, as shown in the attached pdf file.)

Regarding matching: I need to match the left and middle projections, since I need to know which length to take for the violet points on, for example, the X20-line in the left image. If the scaling of the X-axis side were the same in the left and middle images (for example, horizontal side = 20 mm in the left image and vertical side = 20 mm in the middle image), then one could say that for the violet points on the X20-line in the left image, the length can be calculated from the number of pixels along the horizontal X'20-line in the middle image (in this case, X'20 would be equal to X20).
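The matching can also be done without resampling any image, by converting pixel indices to physical millimetres. A sketch, under the assumptions (mine, not from the data) that both images start at X = 0 mm, that X runs along the columns of the left image and along the rows of the middle image, and that the middle image's X side is 40 mm:

```python
import numpy as np

N = 512
px_left_mm = 20.0 / N   # X pixel size in the left (X-Y) image
px_mid_mm = 40.0 / N    # X pixel size in the middle (Z-X) image (assumed 40 mm side)

cols = np.arange(N)
x_mm = (cols + 0.5) * px_left_mm                    # physical X of each left-image column
rows_mid = np.floor(x_mm / px_mid_mm).astype(int)   # nearest X'-line in the middle image
rows_mid = np.clip(rows_mid, 0, N - 1)
```

With this mapping, the length for the violet points in left-image column c is the length computed on middle-image row `rows_mid[c]`; because the left FOV (20 mm) is half the assumed middle FOV (40 mm) here, the 512 left columns map onto the first 256 middle rows.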

I would very much appreciate your help.

Presentation_Imagescforum_scaling_images.pdf (185.8 KB)