Support Vector Machines (SVMs) are supervised learning models that can be used for both classification and regression. They are among the most widely used supervised learning algorithms: they are effective in high-dimensional spaces and memory efficient as well. Consider a binary classification problem, where the task is to assign one of two labels to a given input. We plot each data item as a point in n-dimensional space as follows:
We can perform classification by finding a hyperplane that separates the two classes well. As you can see in the above image, we can draw many such hyperplanes. How do we find the best one? We find the optimal hyperplane by maximizing the margin.
We define the margin as twice the distance between the hyperplane and the sample points nearest to it. These points are known as support vectors, because they "hold up" the optimal hyperplane. In the figure above, the support vectors are shown filled. Consider the first hyperplane in figure-1, which touches the two red sample points. Although it classifies all the examples correctly, it passes very close to many sample points, so other red examples might well fall on the wrong side of it. This problem can be solved by choosing the hyperplane that is farthest away from the sample points, and it turns out that this kind of model generalizes very well. This optimal hyperplane is also known as the maximum margin separator.
We know that we want the hyperplane with maximum margin, and we have discussed why. Now, let us learn how to find this optimal hyperplane. Before that, please note that in the case of SVMs we represent the class labels with +1 and -1 instead of 0 and 1 (binary valued labels). We represent the optimal hyperplane and the negative and positive hyperplanes (the dashed lines) with the linear equations w^T x + b = 0, w^T x + b = -1 and w^T x + b = +1 respectively. The leftmost dashed line is the negative hyperplane. We denote the red points by x^- and the blue points by x^+. To derive an expression for the margin, let us subtract the equations of the negative and positive hyperplanes from each other.
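For a positive support vector x^+ (lying on the positive hyperplane) and a negative support vector x^- (lying on the negative hyperplane), the subtraction gives:

(w^T x^+ + b) - (w^T x^- + b) = (+1) - (-1)
w^T (x^+ - x^-) = 2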
Dividing by the length of the vector w to normalize this,
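w^T (x^+ - x^-) / ||w|| = 2 / ||w||

The left-hand side is the projection of x^+ - x^- onto the unit normal w/||w||, i.e. the distance between the two dashed hyperplanes.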
where 2/||w|| is the margin. The objective of the SVM now becomes maximizing this margin under the constraint that all samples are classified correctly.
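Written out, the correctness constraints are:

w^T x_i + b >= +1 for every sample x_i with label y_i = +1
w^T x_i + b <= -1 for every sample x_i with label y_i = -1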
This can also be written more compactly as
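y_i (w^T x_i + b) >= 1 for all i

(multiplying each constraint by its label y_i, which is either +1 or -1, collapses the two cases into one)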
In practice, it is easier to minimize the reciprocal term given below.
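minimize (1/2) ||w||^2
subject to y_i (w^T x_i + b) >= 1 for all i

Maximizing 2/||w|| and minimizing (1/2) ||w||^2 have the same solution; the squared form is used because it keeps the objective convex and differentiable.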
This is a quadratic programming problem with linear constraints.
In the case of inherently noisy data, we may not want a linear hyperplane in high-dimensional space. Rather, we'd like a decision surface in low-dimensional space that does not cleanly separate the classes, but reflects the reality of the noisy data. That is possible with the soft margin classifier, which allows examples to fall on the wrong side of the decision boundary, but assigns them a penalty proportional to the distance required to move them back to the correct side. In the soft margin classifier, we add slack variables to the linear constraints.
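With a slack variable ξ_i >= 0 for each sample (ξ_i measures how far that sample sits on the wrong side of its margin), the constraints become:

y_i (w^T x_i + b) >= 1 - ξ_i for all i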
Now, our objective to minimize is
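(1/2) ||w||^2 + C Σ_i ξ_i
subject to y_i (w^T x_i + b) >= 1 - ξ_i and ξ_i >= 0 for all i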
C is the regularization parameter. A small C allows the constraints to be easily ignored and results in a large margin, whereas a large C makes the constraints hard to ignore and results in a narrow margin. This is still a quadratic optimization problem with a unique minimum. Now let us implement an SVM classifier in Python using sklearn (note that sklearn's SVC uses the RBF kernel by default). We will use the iris dataset.
#import the dependencies
from sklearn.datasets import load_iris
from sklearn.svm import SVC

#load dataset
dataset = load_iris()
data = dataset.data
target = dataset.target
In machine learning, we almost always need to do some preprocessing to make our dataset suitable for the learning algorithm. I will introduce a few preprocessing techniques as we go through the various algorithms. Here, we will perform feature scaling, which SVMs need for good performance. Feature scaling standardizes the range of the features of the data.
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit_transform(data) #check out the preprocessing module of sklearn to learn more about preprocessing in ML
Output:
array([[ -9.00681170e-01,  1.03205722e+00, -1.34127240e+00, -1.31297673e+00],
       [ -1.14301691e+00, -1.24957601e-01, -1.34127240e+00, -1.31297673e+00],
       [ -1.38535265e+00,  3.37848329e-01, -1.39813811e+00, -1.31297673e+00],
       [ -1.50652052e+00,  1.06445364e-01, -1.28440670e+00, -1.31297673e+00],
       [ -1.02184904e+00,  1.26346019e+00, -1.34127240e+00, -1.31297673e+00],
       ...
       [  1.03800476e+00,  5.69251294e-01,  1.10395287e+00,  1.71090158e+00],
       [  1.03800476e+00, -1.24957601e-01,  8.19624347e-01,  1.44795564e+00],
       [  5.53333275e-01, -1.28197243e+00,  7.05892939e-01,  9.22063763e-01],
       [  7.95669016e-01, -1.24957601e-01,  8.19624347e-01,  1.05353673e+00],
       [  4.32165405e-01,  8.00654259e-01,  9.33355755e-01,  1.44795564e+00],
       [  6.86617933e-02, -1.24957601e-01,  7.62758643e-01,  7.90590793e-01]])
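Note that fit_transform returns the scaled array shown above; it does not modify data in place. To actually train on the scaled features you would assign the result back, e.g. data = sc.fit_transform(data). The code below keeps the original, unscaled features, which is why the support vectors printed later appear in the original units.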
#now let us divide data into training and testing set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data, target, random_state=42, test_size=0.3)

#train a model
model = SVC()
model.fit(X_train, y_train)
Output:
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
    decision_function_shape=None, degree=3, gamma='auto', kernel='rbf',
    max_iter=-1, probability=False, random_state=None, shrinking=True,
    tol=0.001, verbose=False)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, model.predict(X_test)) # Outputs: 1.0
model.support_vectors_.shape # Outputs: (39, 4)
model.support_vectors_
Output:
array([[ 5.1, 3.8, 1.9, 0.4],
       [ 4.8, 3.4, 1.9, 0.2],
       [ 5.5, 4.2, 1.4, 0.2],
       [ 4.5, 2.3, 1.3, 0.3],
       [ 5.8, 4. , 1.2, 0.2],
       [ 5.6, 3. , 4.5, 1.5],
       [ 5. , 2. , 3.5, 1. ],
       [ 5.4, 3. , 4.5, 1.5],
       [ 6.7, 3. , 5. , 1.7],
       [ 5.9, 3.2, 4.8, 1.8],
       [ 5.1, 2.5, 3. , 1.1],
       [ 6. , 2.7, 5.1, 1.6],
       [ 6.3, 2.5, 4.9, 1.5],
       [ 6.1, 2.9, 4.7, 1.4],
       [ 6.5, 2.8, 4.6, 1.5],
       [ 7. , 3.2, 4.7, 1.4],
       [ 6.1, 3. , 4.6, 1.4],
       [ 5.5, 2.6, 4.4, 1.2],
       [ 4.9, 2.4, 3.3, 1. ],
       [ 6.9, 3.1, 4.9, 1.5],
       [ 6.3, 2.3, 4.4, 1.3],
       [ 6.3, 2.8, 5.1, 1.5],
       [ 7.7, 2.8, 6.7, 2. ],
       [ 6.3, 2.7, 4.9, 1.8],
       [ 7.7, 3.8, 6.7, 2.2],
       [ 5.7, 2.5, 5. , 2. ],
       [ 6. , 3. , 4.8, 1.8],
       [ 5.8, 2.7, 5.1, 1.9],
       [ 6.2, 3.4, 5.4, 2.3],
       [ 6.1, 2.6, 5.6, 1.4],
       [ 6. , 2.2, 5. , 1.5],
       [ 6.3, 3.3, 6. , 2.5],
       [ 6.2, 2.8, 4.8, 1.8],
       [ 6.9, 3.1, 5.4, 2.1],
       [ 6.5, 3. , 5.2, 2. ],
       [ 7.2, 3. , 5.8, 1.6],
       [ 5.6, 2.8, 4.9, 2. ],
       [ 5.9, 3. , 5.1, 1.8],
       [ 4.9, 2.5, 4.5, 1.7]])
Till now, we have seen problems where the input data can be separated by a linear hyperplane. But what if the data points are not linearly separable, as shown below?
To solve this type of problem, where the data cannot be separated linearly, we add a new feature. For example, let us add the new feature z = x^2 + y^2. Now, if we plot the data points on the x and z axes we get:
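We can also verify this in code rather than by eye. Below is a minimal sketch that is not part of the original example: it uses sklearn's make_circles to generate concentric, non-linearly-separable points like the ones above, and the variable names are illustrative choices.

#minimal sketch: a manually added feature makes circular data linearly separable
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

#a linear SVM on the raw (x, y) coordinates cannot separate the two rings
linear_raw = SVC(kernel="linear").fit(X, y)
print(linear_raw.score(X, y))   #well below 1.0

#add the new feature z = x^2 + y^2 and train on (x, y, z)
z = (X ** 2).sum(axis=1)
X_new = np.column_stack([X, z])
linear_new = SVC(kernel="linear").fit(X_new, y)
print(linear_new.score(X_new, y))   #close to 1.0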
As you can see, we can now find a linear hyperplane that separates the data points very well. Do we need to add this additional feature manually? The answer is no. We use a technique called the kernel trick. A kernel is a function that computes inner products as if the low-dimensional input had been mapped into a high-dimensional space where the data points are linearly separable, without ever computing that mapping explicitly. Widely used kernels are the Radial Basis Function (RBF) kernel, the polynomial kernel, the sigmoid kernel, etc.
Let us implement this in sklearn.
#we have already imported the libs and dataset
model2 = SVC(kernel="rbf", gamma=0.2)
model2.fit(X_train, y_train)
model2.score(X_test, y_test) # Output: 1.0
We can get different decision boundaries for different kernels and gamma values. Here is a screenshot from the scikit-learn website.
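If you want to explore this yourself, here is a minimal sketch that reuses the X_train/X_test split from above and compares test accuracy for a few kernel, gamma and C settings; the particular values are arbitrary illustrative choices.

#minimal sketch: compare a few kernel / gamma / C combinations on the same split
#the values tried here are arbitrary; gamma has no effect on the linear kernel
for kernel in ["linear", "poly", "rbf"]:
    for gamma in [0.1, 1.0]:
        for C in [0.1, 1.0, 10.0]:
            clf = SVC(kernel=kernel, gamma=gamma, C=C)
            clf.fit(X_train, y_train)
            print(kernel, gamma, C, clf.score(X_test, y_test))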