
The kernel's gamma and degree are set by the kernel gamma and kernel degree parameters respectively.
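As a minimal sketch of where these two parameters enter, the standard radial and polynomial kernel formulas below show gamma and degree in place. The function names and default values here are illustrative, not part of this operator's implementation:

```python
import math

def rbf_kernel(x, y, gamma):
    """Radial kernel: K(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-gamma * sq_dist)

def polynomial_kernel(x, y, degree, gamma=1.0, coef0=0.0):
    """Polynomial kernel: K(x, y) = (gamma * <x, y> + coef0) ** degree."""
    dot = sum(xi * yi for xi, yi in zip(x, y))
    return (gamma * dot + coef0) ** degree

x, y = [1.0, 2.0], [2.0, 0.5]
print(rbf_kernel(x, y, gamma=0.5))        # similarity in (0, 1], shrinks with distance
print(polynomial_kernel(x, y, degree=2))  # (1*2 + 2*0.5)^2 = 9.0
```

A larger gamma makes the radial kernel more local (similarity decays faster with distance), while degree controls the flexibility of the polynomial decision boundary.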

The SVM model is delivered from this output port. This model can now be applied on unseen data sets. The ExampleSet that was given as input is passed through to the output without any changes.

Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are not linearly separable in that space. For this reason, it was proposed that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. The hyperplanes in the higher-dimensional space are defined as the set of points whose inner product with a vector in that space is constant. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure that dot products may be computed easily in terms of the variables in the original space, by defining them in terms of a kernel function K(x,y) selected to suit the problem. This operator cannot handle nominal attributes; it can only be applied on data sets with numerical attributes. Thus you may often have to use the Nominal to Numerical operator before applying this operator.
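The "kernel trick" described above can be demonstrated concretely. For the degree-2 kernel K(x, y) = (<x, y>)^2 in two dimensions, the explicit feature map phi below (a standard textbook construction, hypothetical here and not part of this operator) reaches the same value as the kernel computed entirely in the original space:

```python
import math

def phi(x):
    """Explicit map into 3-D feature space for the degree-2 kernel."""
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def kernel(x, y):
    """K(x, y) = (<x, y>)^2, computed entirely in the original 2-D space."""
    return (x[0] * y[0] + x[1] * y[1]) ** 2

x, y = (1.0, 3.0), (2.0, -1.0)
explicit = sum(a * b for a, b in zip(phi(x), phi(y)))  # dot product after mapping
implicit = kernel(x, y)                                # no mapping needed
print(explicit, implicit)  # both equal (1*2 + 3*(-1))^2 = 1.0
```

The implicit form never materializes the higher-dimensional vectors, which is what keeps the computational load reasonable even when the feature space is very large or infinite-dimensional.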

Synopsis
This operator is an SVM (Support Vector Machine) Learner. It is based on the internal Java implementation of mySVM by Stefan Rueping.

This learner uses the Java implementation of the support vector machine mySVM by Stefan Rueping. This learning method can be used for both regression and classification, and provides a fast algorithm and good results for many learning tasks. mySVM works with linear or quadratic and even asymmetric loss functions. This operator supports various kernel types including dot, radial, polynomial, neural, anova, epanechnikov, gaussian combination and multiquadric. Explanations of these kernel types are given in the parameters section.

The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes the input belongs to, making the SVM a non-probabilistic binary linear classifier. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on. More formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class (the so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier.
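The maximum-margin idea can be seen in its simplest possible setting. The toy sketch below (a hypothetical one-dimensional illustration, not the mySVM algorithm) finds the widest-gap separator for linearly separable 1-D data: a single threshold halfway between the closest examples of the two classes, which are exactly the support vectors:

```python
def max_margin_threshold(negatives, positives):
    """Return the separating threshold and its margin for separable 1-D data."""
    lo = max(negatives)   # support vector of the negative class
    hi = min(positives)   # support vector of the positive class
    assert lo < hi, "classes are not linearly separable"
    threshold = (lo + hi) / 2.0   # midpoint = widest-gap separator
    margin = (hi - lo) / 2.0      # distance to the nearest point of either class
    return threshold, margin

neg = [0.5, 1.0, 2.0]   # class -1
pos = [4.0, 5.0, 6.0]   # class +1
t, m = max_margin_threshold(neg, pos)
print(t, m)  # 3.0 1.0 (midpoint between 2.0 and 4.0)
```

Only the two boundary points determine the separator; moving any other example (without crossing the gap) changes nothing, which is the same property that gives support vector machines their name in higher dimensions.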
