Support Vector Machine


Types of machine learning

Supervised Learning

It means training the algorithm on labeled data, that is, on input data paired with its output. Through this process, the algorithm builds a relationship between the inputs and the outputs so that it can predict the outputs for new data.
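As a tiny, hypothetical sketch of this idea (the data and the threshold rule are made up purely for illustration, not part of any SVM method): a "model" learns a decision rule from labeled pairs and then applies it to unseen inputs.

```python
# Toy illustration of supervised learning: labeled (input, output) pairs.
# Hypothetical data, chosen only to make the idea concrete.
training_data = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]  # (input, label)

# "Training": place a threshold midway between the largest input labeled 0
# and the smallest input labeled 1.
zeros = [x for x, y in training_data if y == 0]
ones = [x for x, y in training_data if y == 1]
threshold = (max(zeros) + min(ones)) / 2

def predict(x):
    """Predict the output label for a new, unseen input."""
    return 1 if x >= threshold else 0

print(predict(3.0))  # a new input is classified using the learned rule
```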


Unsupervised Learning

It means training the algorithm on unlabeled data, that is, on the input data only. Through this process, the algorithm discovers relationships and structure within the inputs, such as groups of similar data points.


Reinforcement Learning

It means that the algorithm learns on its own and from its mistakes: it interacts with an environment and improves its behavior based on the rewards and penalties its actions receive.


Support Vector Machine Algorithm

It is a method that combines statistical learning theory and supervised learning, developed by the researcher Vapnik in 1998. The idea of the support vector machine algorithm is to search for the best way to divide the data into two groups by placing a hyperplane between them, regardless of whether or not the data is linearly separable, and this is where its strength lies.


Hyperplane

The hyperplane can be represented in two-dimensional space as a straight line, as shown in the figure, when the number of input features is 2; in three-dimensional space as a plane, as shown in the figure, when the number of input features is 3; and, in general, in an n-dimensional space when the number of input features is n.
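To make this concrete, a hyperplane can be written as the set of points x satisfying w · x + b = 0, where w and b are learned by the algorithm. A minimal sketch with made-up coefficients (not values from any trained model) shows how a point is classified by which side of the hyperplane it falls on:

```python
# A hyperplane is the set of points x satisfying w . x + b = 0.
# With 2 features it is a line; with n features it lives in n dimensions.
# Illustrative (hypothetical) coefficients:
w = [2.0, -1.0]   # normal vector to the hyperplane
b = -1.0          # bias/offset

def side(x):
    """Return which side of the hyperplane the point x falls on."""
    activation = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if activation >= 0 else -1

# The point (1, 1) gives 2*1 - 1*1 - 1 = 0: it lies on the hyperplane.
print(side([1.0, 1.0]))   # boundary case, classified as +1 here
print(side([0.0, 5.0]))   # 2*0 - 5 - 1 = -6 < 0 -> class -1
```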



Support Vectors

Support vectors are the data points closest to the hyperplane, and they are of great importance in the classification process because they determine the best hyperplane for separating the data. Removing one of them, or changing its location, would therefore require finding a new hyperplane.
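The idea "closest points to the hyperplane" can be sketched directly: given a hyperplane w · x + b = 0, the distance from a point p to it is |w · p + b| / ||w||, and the support vectors are the points at the minimum distance. The hyperplane and points below are hypothetical, for illustration only:

```python
import math

# Distance from a point p to the hyperplane w . x + b = 0 is
# |w . p + b| / ||w||. Hypothetical hyperplane and data:
w, b = [1.0, 1.0], -3.0
points = [[0.0, 1.0], [1.0, 1.5], [4.0, 4.0], [2.0, 2.0]]

norm_w = math.sqrt(sum(wi * wi for wi in w))

def distance(p):
    return abs(sum(wi * pi for wi, pi in zip(w, p)) + b) / norm_w

# The support vectors are the points at the minimum distance.
d_min = min(distance(p) for p in points)
support_vectors = [p for p in points if math.isclose(distance(p), d_min)]
print(support_vectors)
```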



Margin

It is the distance between the hyperplane and the nearest points of the data set, as shown in the figure. The greater this distance, the greater the probability of classifying new data correctly, as shown in the figure. We note that hyperplane B is better than hyperplane A, because the distance between it and the nearest support vectors is as large as possible, which allows the largest number of points to be classified correctly. The importance of the margin therefore lies in finding the best position for the hyperplane, and this is what we meant above by the phrase "the best hyperplane". Things should be clearer now!
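In the standard (canonical) formulation, where the support vectors satisfy |w · x + b| = 1, the margin width is 2 / ||w||, so maximizing the margin means minimizing ||w||. A small sketch with two hypothetical candidate weight vectors, echoing the comparison of hyperplanes A and B above:

```python
import math

# For a hyperplane in canonical form (support vectors satisfy
# |w . x + b| = 1), the margin width is 2 / ||w||: a smaller ||w||
# means a wider margin, which is what SVM training maximizes.
# Two hypothetical candidate hyperplanes:
w_A = [4.0, 3.0]   # ||w_A|| = 5  -> margin 2/5 = 0.4
w_B = [0.6, 0.8]   # ||w_B|| = 1  -> margin 2/1 = 2.0

def margin(w):
    return 2.0 / math.sqrt(sum(wi * wi for wi in w))

print(margin(w_A), margin(w_B))  # hyperplane B has the wider margin
```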


The difference between Soft Margin and Hard Margin

With a hard margin, the data set must be linearly separable, that is, the data points of the two classes do not overlap, as in the figure. With a soft margin, the data set contains a few points that prevent a clean linear separation. In this case, these points are allowed to be classified incorrectly, whether they fall inside the margin area or beyond it. To do this, a slack coefficient is added for each training point that expresses how much that point violates the margin, whether it is correctly or incorrectly classified.
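The slack coefficient for a point (x, y), with labels y in {-1, +1}, is commonly written as ξ = max(0, 1 - y(w · x + b)): it is 0 for points safely outside the margin, between 0 and 1 for correctly classified points inside the margin, and greater than 1 for misclassified points. A sketch using a hypothetical hyperplane:

```python
# Slack value xi = max(0, 1 - y * (w . x + b)) for each training point.
# Hypothetical hyperplane (a vertical line through the origin):
w, b = [1.0, 0.0], 0.0

def slack(x, y):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(0.0, 1.0 - y * score)

print(slack([3.0, 0.0], +1))   # 0.0: safely outside the margin
print(slack([0.5, 0.0], +1))   # 0.5: inside the margin, still correct
print(slack([-1.0, 0.0], +1))  # 2.0: on the wrong side (misclassified)
```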


Types of Support Vector Machines

Linear Support Vector Machine (Linear SVM)

It is a classifier used to separate data that is linearly separable, as in the figure, by a hyperplane, which here is a straight line whose task is to separate the data set into two groups.

This classifier is called the Linear SVM Classifier, i.e. the linear support vector machine classifier. Its working principle is to find the optimal coefficients of the straight-line equation that separates the data in the best possible way, as in the figure, that is, with the largest margin between any support vector and the hyperplane.
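A minimal sketch of how such coefficients might be found: full-batch subgradient descent on the regularized hinge loss. The toy data, learning rate, and epoch count are illustrative assumptions only; real SVM libraries use far more efficient solvers.

```python
# Toy linearly separable data, labels in {-1, +1}:
data = [([1.0, 2.0], -1), ([2.0, 1.0], -1),   # class -1 cluster
        ([6.0, 5.0], +1), ([5.0, 6.0], +1)]   # class +1 cluster

w, b = [0.0, 0.0], 0.0
lr, lam = 0.01, 0.01          # step size and regularization strength

for _ in range(500):
    gw = [lam * wi for wi in w]               # gradient of (lam/2)*||w||^2
    gb = 0.0
    for x, y in data:
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        if y * score < 1:                     # point violates the margin
            gw = [gwi - y * xi for gwi, xi in zip(gw, x)]
            gb -= y
    w = [wi - lr * gwi for wi, gwi in zip(w, gw)]
    b -= lr * gb

predictions = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
               for x, _ in data]
print(predictions)  # the learned line separates the two clusters
```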

Non-Linear Support Vector Machine (Non-Linear SVM)

It is a classifier used to separate data that cannot be separated linearly by a straight line, as in Figure (12). This classifier is called the Non-Linear SVM Classifier, i.e. the non-linear support vector machine classifier.

Its working principle is to transform the data set from its current space, for example from a two-dimensional space, to a higher-dimensional space in which the data can be separated and distinguished. The algorithm starts by adding a third dimension to the data, call it Z, as a first possibility, and then tests whether the data set represented in the three-dimensional space X, Y, Z has become separable or not. If it has, the algorithm searches for the hyperplane most suitable for the separation, perhaps a plane in three dimensions; if it has not, the algorithm keeps adding dimensions until it reaches a stage where the data becomes separable. This approach is called the Kernel Method.
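The feature-map idea can be sketched with a classic example (the ring data below is made up for illustration): two rings of points are not separable by a line in 2D, but adding a third coordinate z = x² + y², the squared distance from the origin, makes them separable by a horizontal plane in 3D.

```python
# Points on an inner and an outer ring: not linearly separable in 2D.
inner = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]   # class -1
outer = [(3.0, 0.0), (0.0, 3.0), (-3.0, 0.0), (0.0, -3.0)]   # class +1

def lift(p):
    """Explicitly map a 2D point to 3D with z = x^2 + y^2."""
    x, y = p
    return (x, y, x * x + y * y)

# In the lifted space, the horizontal plane z = 5 separates the rings:
# every inner point has z = 1 and every outer point has z = 9.
assert all(lift(p)[2] < 5 for p in inner)
assert all(lift(p)[2] > 5 for p in outer)
print("separable in 3D by the plane z = 5")
```

In practice, kernelized SVMs do not build this higher-dimensional representation explicitly; a kernel function computes the needed inner products in the lifted space directly, which is what makes the method efficient.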


Uses of the Support Vector Machine (SVM) Algorithm

This algorithm is used in image classification and segmentation, category assignment, as a classifier in NLP applications, in classifying email as spam or legitimate (ham), in sentiment analysis, and more. It is also used in biological applications and other sciences. One of its best-known applications is recognizing handwritten digits in postal automation services.


Conclusion

In this article, we learned about the support vector machine algorithm, one of the most important algorithms used in machine learning applications. We covered the concepts it relies on, such as the hyperplane, support vectors, and the two types of margin, soft and hard, in addition to how linear and non-linear data are separated in theory. We will not stop there: we will continue this series in the next article, where we will show how to separate linear and non-linear data in a practical application in the Python language.

  • Mohamed Ahmed El-Gharib
  • Mar, 28 2022
