How should I calculate an effective scaling factor for resizing an image without losing much information?
Could you please be more specific?
What is an “effective scaling factor”?
What is meant by calculation of a scaling factor?
What kind of resizing is desired and for what purpose?
If you downscale an image, you will always lose information, provided that the original image was sampled exactly according to the sampling theorem.
If you enlarge an image, you don’t lose information if you create the additional samples by proper interpolation. The correct interpolation function is the 2D “sinc” kernel, but there are many approximations that require less computational effort.
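To illustrate the “sinc” claim, here is a minimal 1-D sketch (in Python, purely as an assumed illustration language) of Whittaker–Shannon interpolation: a band-limited signal is sampled above its Nyquist rate, and a value between the sample instants is reconstructed from the samples with a truncated sinc kernel. The signal, sampling rate, and record length are made-up for the demonstration; with a finite record the sum is only an approximation of the ideal infinite one, which is exactly why practical resamplers use cheaper approximate kernels.

```python
import math

def sinc(x):
    # normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

# Band-limited test signal: a 1 Hz sine, sampled at 8 Hz (well above Nyquist).
fs = 8.0
samples = [math.sin(2 * math.pi * n / fs) for n in range(64)]

def interpolate(t):
    # Whittaker-Shannon (sinc) reconstruction from the discrete samples.
    # Truncating the sum to the available 64 samples introduces a small error.
    return sum(s * sinc(t * fs - n) for n, s in enumerate(samples))

t = 4.03125  # a point between sample instants, near the middle of the record
error = abs(interpolate(t) - math.sin(2 * math.pi * t))
print(error)  # small truncation error, not exactly zero
```

Note that the reconstruction error here comes only from truncating the sinc sum, not from the sampling itself: with an infinite record the interpolated value would be exact.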
You may have a look at this excellent review paper:
Meijering E. (2002)
A chronology of interpolation: from ancient astronomy to modern signal and image processing.
Proceedings of the IEEE 90: 319-342. doi:10.1109/5.993400
Thanks a lot for your reply and link to the paper.
I have a 300 DPI image and I want to scale it down to a smaller resolution to make further image processing faster.
How can I choose an optimum scaling factor to scale down my image and also not lose too much information?
Currently, I am using bicubic interpolation to suppress aliasing effects.
“dpi” is a useless measure for image processing, and you should find out for yourself why this is so. What you need to know is how your original images were sampled.
As I’ve written, downscaling always results in information loss, if the original images were sampled exactly according to the sampling theorem. If however they were oversampled, then you may downscale them without loss, but only until the sampling limit is reached.
I don’t understand what you mean by “optimum scaling factor”.
I don’t think that it exists without a suitable criterion.
"[…] not lose too much information"
There is no way out: the information loss necessarily and always depends directly on the scale factor, and it may additionally depend on how you implement the down-scaling.
Let’s assume that your images were sampled exactly according to the sampling theorem. If you downscale them correctly, e.g. by a linear factor of two, then the number of pixels becomes a quarter of the original number and the information is also a quarter of the original information.
To downscale correctly, e.g. by a linear factor of two, implies lowpass-filtering of the original images with a filter-function that ideally shows a hard frequency limit at half of the bandwidth of the original images. Approximations of the ideal lowpass-filter will always introduce more or less artifacts (aliasing).
– Bicubic interpolation is to be regarded as an approximation of the ideal lowpass-filter! –
So please realize that there is no optimum factor for down-scaling, but there are more or less good ways to avoid additional loss of information due to filter approximations.
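The filter-then-decimate point above can be sketched in 1-D (again in Python, as an assumed illustration language, with a hand-rolled windowed-sinc FIR standing in for the ideal lowpass). A tone above the post-decimation Nyquist limit is decimated once naively and once after half-band lowpass filtering: without the filter the tone aliases into the output, with the filter it is almost entirely removed. The tap count and window choice are illustrative assumptions, not a recommendation.

```python
import math

def lowpass_halfband(x, taps=31):
    # Windowed-sinc FIR approximating an ideal lowpass with cutoff at
    # half the original Nyquist frequency (Hamming window).
    m = taps // 2
    h = []
    for k in range(-m, m + 1):
        ideal = 0.5 if k == 0 else math.sin(math.pi * k / 2) / (math.pi * k)
        window = 0.54 + 0.46 * math.cos(math.pi * k / m)
        h.append(ideal * window)
    # Direct convolution, zero-padded at the edges.
    y = []
    for i in range(len(x)):
        acc = 0.0
        for k in range(-m, m + 1):
            j = i + k
            if 0 <= j < len(x):
                acc += x[j] * h[k + m]
        y.append(acc)
    return y

fs = 8.0
# A 3 Hz tone: fine at fs = 8 Hz, but above the 2 Hz Nyquist limit that
# holds after decimation by two -- so it must be filtered out first.
x = [math.sin(2 * math.pi * 3.0 * n / fs) for n in range(256)]

naive = x[::2]                     # decimate without filtering: tone aliases
proper = lowpass_halfband(x)[::2]  # lowpass first, then decimate

# Compare signal energy away from the record edges.
print(sum(v * v for v in naive[40:88]))   # large: aliased tone survives
print(sum(v * v for v in proper[40:88]))  # near zero: tone suppressed
```

The residual energy in the filtered case reflects exactly the point made above: a 31-tap windowed sinc only approximates the hard frequency limit of the ideal lowpass, so a small amount of aliasing always leaks through.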