After HOG feature extraction, do I need to use StandardScaler before PCA?

Hello,
I am a beginner in the field of image processing and machine learning, and I hope I'm in the right forum here.
I'm training an SVM (support vector machine) on HOG features from images, which are reduced by PCA (principal component analysis). The HOG features are extracted as follows:

from skimage import feature

...
# extract one HOG descriptor per image; note the block_norm value is 'L2-Hys'
# (with a hyphen), since 'L2_Hys' is not accepted by skimage
feat = feature.hog(image, orientations=12, pixels_per_cell=(4, 4),
                   cells_per_block=(2, 2), block_norm='L2-Hys',
                   transform_sqrt=True)
...

The HOG feature vectors contain only values between 0 and 1, so they appear to be already normalized. I'm therefore not sure whether I should really apply the often-recommended StandardScaler before PCA.
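
This is easy to verify on an extracted vector (a quick check using feat from the snippet above):

print(feat.min(), feat.max())  # components stay within [0, 1] with 'L2-Hys' block normalization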

Thanks in advance

@codingRightNow

Perhaps @jni or @stefanv can assist you with this?

Hey @codingRightNow,

Yes, the output is normalized in this case. You can check why and how it is done in the source code.
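
Roughly, the 'L2-Hys' block normalization does something like this (a sketch of the idea, not the exact skimage code; the eps value and the 0.2 clipping threshold follow the usual Dalal-Triggs scheme):

import numpy as np

def l2_hys(block, eps=1e-5, clip=0.2):
    # L2-normalize the block, clip large components, then renormalize;
    # since the histogram values are non-negative, each component of the
    # result is bounded by the block's L2 norm, i.e. it ends up in [0, 1]
    block = block / np.sqrt(np.sum(block ** 2) + eps ** 2)
    block = np.minimum(block, clip)
    return block / np.sqrt(np.sum(block ** 2) + eps ** 2)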

I’m not sure exactly how StandardScaler in sklearn works, but it seems to do something beyond normalization… could you check their preprocessing page?
Please let me know if this helped you (or not).


Hey @alexdesiqueira, thank you for your answer and the link.
The StandardScaler transforms the data so that its distribution has a mean of 0 and a standard deviation of 1. I ran a few tests with different SVMs and different data sets and found that LinearSVC performs best when the StandardScaler is not used. The SVC with RBF kernel shows the opposite behavior: its accuracy is highest when the HOG features are scaled with the StandardScaler (the reason may be the one mentioned in your link). But I have not really understood why LinearSVC performs worse with the StandardScaler.