# A WEIGHTED GENERALIZED LS-SVM

## Abstract

Neural networks play an important role in system
modelling, especially when model building is based mainly on
observed data. Among neural models, Support Vector Machine (SVM)
solutions are attracting increasing attention, mostly because they
automatically answer certain crucial questions involved in neural network
construction: they derive an "optimal" network structure and answer the
most important question concerning the "quality" of the resulting network.
The main drawback of the standard SVM is its high
computational complexity; therefore, a new technique, the
Least Squares SVM (LS-SVM), has recently been introduced. This is
algorithmically more efficient, because the solution is obtained by solving
a set of linear equations instead of a computation-intensive quadratic
programming problem. Although the gain in efficiency is significant, for
very large problems the computational burden of the LS-SVM is still too
high. Moreover, an attractive feature of the SVM, its sparseness, is lost.
This paper proposes a
new generalized formulation and solution technique for the standard
LS-SVM. By solving the modified LS-SVM equation set in a least squares (LS)
sense (LS²-SVM), a pruned solution is achieved, while the
computational burden is further reduced (generalized LS-SVM). Within this
generalized LS-SVM framework a further modification, weighting, is also
proposed to reduce the sensitivity of the network construction to outliers
while maintaining sparseness.
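As context for the approach summarized above, the following minimal sketch illustrates how a standard LS-SVM regressor is trained by solving one linear equation set rather than a quadratic program. The Gaussian RBF kernel, the hyper-parameter values, and all function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lssvm_train(X, y, gamma=1000.0, sigma=0.5):
    """Train an LS-SVM regressor by solving a linear system.

    The standard LS-SVM dual problem reduces to the bordered system
        [ 0        1^T            ] [ b     ]   [ 0 ]
        [ 1   Omega + I / gamma   ] [ alpha ] = [ y ]
    where Omega[i, j] = K(x_i, x_j).  Kernel choice and hyper-parameters
    here are assumptions for illustration only.
    """
    n = X.shape[0]
    # Gaussian RBF kernel matrix (an assumed kernel choice)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Omega = np.exp(-sq / (2.0 * sigma ** 2))
    # Assemble and solve the (n+1) x (n+1) linear equation set
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=0.5):
    """Evaluate the trained model: f(x) = sum_i alpha_i K(x, x_i) + b."""
    sq = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    return K @ alpha + b
```

Note that every training sample receives a nonzero coefficient `alpha_i`, which is exactly the loss of sparseness mentioned above; the LS²-SVM pruning proposed in the paper addresses this.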