Polynomial Features and Polynomial Regression
Polynomial features are higher-degree terms of a model's original features, added to expand its feature space.
Let us understand this with a few examples.
Suppose we have a dataset with features x_1, x_2 and target variable y. A multivariable linear regression model for this data would be:

y = w_1 x_1 + w_2 x_2 + b
Polynomial features are higher-order terms of x_1 and x_2 that we can add to this model, e.g. x_1^2, x_1^3, or x_2^2.
Our new model would look like this:

y = w_1 x_1 + w_2 x_2 + w_3 x_1^2 + w_4 x_1^3 + w_5 x_2^2 + b
This is known as a polynomial regression model.
We can also combine features to create higher-order interaction terms such as x_1 x_2 or x_1 x_2^2, as the sketch below shows.
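As a minimal sketch (the input values are made up for illustration), scikit-learn's PolynomialFeatures transformer can generate all of these terms, including the interaction terms, automatically:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Two samples with features x_1 and x_2 (values made up for illustration)
X = np.array([[2.0, 3.0],
              [1.0, 4.0]])

# degree=2 generates x_1, x_2, x_1^2, x_1*x_2, x_2^2;
# include_bias=False omits the constant column of ones, since the
# intercept b is handled by the regression model itself
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)

print(poly.get_feature_names_out())  # ['x0' 'x1' 'x0^2' 'x0 x1' 'x1^2']
print(X_poly)
# [[ 2.  3.  4.  6.  9.]
#  [ 1.  4.  1.  4. 16.]]
```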
Important Note:
We treat a polynomial regression model just like a linear regression model, where each polynomial term is simply an additional feature. Although new polynomial terms are added as features, the model remains linear in its parameters w and b. Hence the output is still a linear combination of the features.
The training procedure is exactly the same as for linear regression: we can use gradient descent to find the optimal values of the weights and bias in our model.
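As an illustrative sketch (the synthetic data, learning rate, and iteration count here are assumptions, not prescriptions), the following NumPy snippet trains a polynomial regression model with batch gradient descent, treating x^2 as just another feature:

```python
import numpy as np

# Synthetic 1-D data from y = 2 + 3*x + 0.5*x^2 plus noise (made up for illustration)
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=100)
y = 2 + 3 * x + 0.5 * x**2 + rng.normal(0, 0.1, size=100)

# Build the polynomial feature matrix [x, x^2]; from the model's point of
# view, each column is just another feature
X = np.column_stack([x, x**2])

# Standardize the features so gradient descent converges smoothly
X = (X - X.mean(axis=0)) / X.std(axis=0)

w = np.zeros(X.shape[1])
b = 0.0
alpha = 0.1  # learning rate (an assumption for this example)

for _ in range(1000):
    y_hat = X @ w + b                        # prediction is linear in w and b
    error = y_hat - y
    w -= alpha * (X.T @ error) / len(y)      # gradient of the cost (half mean squared error)
    b -= alpha * error.mean()

print(f"weights: {w}, bias: {b:.3f}")
print(f"final MSE: {np.mean(error**2):.4f}")
```

Because the features were standardized, the learned weights are on the standardized scale rather than matching the generating coefficients directly; the key point is that the update rules are identical to those of plain linear regression.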