On-line Learning in RKHS
- PI: Jose Principe
- Funding Source: NSF
On-line learning in RKHS provides a new class of nonlinear filters that are adapted one sample at a time and that approximate the required nonlinearity incrementally. When the kernel is Gaussian, they are growing radial basis function (RBF) networks whose weights are proportional to the error at each sample. Unlike neural networks, this class of nonlinear filters does not suffer from local minima. These filters have many potential applications in nonlinear signal processing, and the methods can also be applied to large machine learning problems.
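As a concrete illustration, a minimal KLMS-style sketch of such a growing RBF filter — the function names, stepsize `eta`, and kernel `width` below are illustrative choices, not the project's settings:

```python
import numpy as np

def gauss_kernel(x, c, width=0.5):
    """Gaussian kernel between input x and stored center c."""
    return np.exp(-((x - c) ** 2) / (2 * width ** 2))

def klms_train(xs, ds, eta=0.2, width=0.5):
    """One pass of kernel LMS: adapt one sample at a time.
    Returns the stored centers and weights (a growing RBF network)."""
    centers, weights = [], []
    for x, d in zip(xs, ds):
        # predict with the current RBF expansion
        y = sum(w * gauss_kernel(x, c, width) for w, c in zip(weights, centers))
        e = d - y                 # instantaneous error
        centers.append(x)         # a new center at the current sample
        weights.append(eta * e)   # weight proportional to the error
    return centers, weights

def klms_predict(x, centers, weights, width=0.5):
    return sum(w * gauss_kernel(x, c, width) for w, c in zip(weights, centers))

# learn a simple nonlinearity on-line (illustrative data)
rng = np.random.default_rng(0)
xs = rng.uniform(-3, 3, 400)
ds = np.sin(xs) + 0.05 * rng.normal(size=400)
centers, weights = klms_train(xs, ds)
err = np.mean([(np.sin(x) - klms_predict(x, centers, weights)) ** 2
               for x in np.linspace(-3, 3, 50)])
```

Each incoming sample allocates one new center, so the expansion grows linearly with the data.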
On-line KLMS is intrinsically regularized
- Ph.D. Student: Weifeng Liu, Puskal Pokharel
Regularization seems to always be required in kernel-space systems. However, we proved that KLMS is well posed in the sense of Hadamard. Because the KLMS algorithm always adapts the filter weights in the direction of the gradient, the solution never leaves the data manifold, so KLMS does not need explicit regularization. The stepsize controls how regularized the solution is.
Liu W., Pokharel P., Principe J., Kernel Least Mean Square Algorithm (submitted)
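The role of the stepsize as a regularizer can be illustrated numerically: a smaller stepsize yields a KLMS solution with a smaller RKHS norm. A sketch under illustrative assumptions (the data, stepsizes, and kernel width are not the project's settings):

```python
import numpy as np

def klms_norm(eta, xs, ds, width=0.5):
    """One KLMS pass; return the squared RKHS norm of the solution,
    ||f||^2 = sum_i sum_j a_i a_j k(x_i, x_j), with a_i = eta * e_i."""
    centers, a = [], []
    for x, d in zip(xs, ds):
        y = sum(w * np.exp(-((x - c) ** 2) / (2 * width ** 2))
                for w, c in zip(a, centers))
        a.append(eta * (d - y))   # coefficient = stepsize * error
        centers.append(x)
    C = np.array(centers)
    K = np.exp(-((C[:, None] - C[None, :]) ** 2) / (2 * width ** 2))
    coef = np.array(a)
    return coef @ K @ coef

rng = np.random.default_rng(1)
xs = rng.uniform(-3, 3, 200)
ds = np.sin(xs) + 0.1 * rng.normal(size=200)
norm_small = klms_norm(0.01, xs, ds)  # small stepsize: smoother solution
norm_large = klms_norm(0.9, xs, ds)   # large stepsize: rougher solution
```

The smaller stepsize keeps the coefficients, and hence the solution norm, small — the numerical analogue of a stronger regularizer.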
Nonlinear adaptive filters in RKHS
- Ph.D. student: Weifeng Liu
The beauty of implementing linear adaptive filters in an RKHS is that they are nonlinear in the input space. We propose the class of Kernel Affine Projection Algorithms (KAPA) as a general framework for on-line algorithms in RKHS. We have recently developed a kernelized version of the extended RLS algorithm (Ex-KRLS) for tracking, which uses a scalar state model in the RKHS.
Liu W., Principe J., Kernel Affine Projection Algorithms, European J. of Signal Processing, Special Issue on Machine Learning for Signal Processing, 2008
Liu W., Principe J., Extended Recursive Least Squares in RKHS, in Proc. First Workshop on Cognitive Signal Processing, Santorini, Greece, 2008
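The simplest member of the KAPA family — a gradient update that reuses a window of the most recent samples — might be sketched as follows; the window size `K`, stepsize, and kernel width here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def kernel(x, c, width=0.5):
    """Gaussian kernel."""
    return np.exp(-((x - c) ** 2) / (2 * width ** 2))

def kapa_train(xs, ds, eta=0.1, K=5, width=0.5):
    """Gradient-form kernel affine projection sketch: each step
    computes the errors over the last K samples jointly and updates
    their expansion coefficients together."""
    centers, a = [], []
    for n, (x, d) in enumerate(zip(xs, ds)):
        centers.append(x)          # allocate a center for the new sample
        a.append(0.0)
        window = range(max(0, n - K + 1), n + 1)
        errs = []
        for i in window:           # joint errors over the window
            y_i = sum(w * kernel(xs[i], c, width)
                      for w, c in zip(a, centers))
            errs.append(ds[i] - y_i)
        for e, i in zip(errs, window):
            a[i] += eta * e        # update all K coefficients together
    return centers, a

def predict(x, centers, a, width=0.5):
    return sum(w * kernel(x, c, width) for w, c in zip(a, centers))

# illustrative data
rng = np.random.default_rng(3)
xs = rng.uniform(-3, 3, 300)
ds = np.sin(xs) + 0.05 * rng.normal(size=300)
centers, a = kapa_train(xs, ds)
mse = np.mean([(np.sin(x) - predict(x, centers, a)) ** 2
               for x in np.linspace(-3, 3, 50)])
```

With K = 1 this reduces to KLMS, which is why KAPA serves as a unifying framework for the on-line family.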
Active Learning Strategies
- Ph.D. Student: Weifeng Liu
One of the issues with on-line learning filters is that the filter grows with each new sample. Intuitively, once sufficient samples have been used, creating a new center for every new sample is wasteful because of redundancy. Choosing which samples are relevant is a nontrivial matter. We are investigating a class of algorithms based on an instantaneous information cost that decides whether each new sample should be incorporated into the filter. The selection criterion can be estimated exactly using Gaussian process theory.
Liu W., Principe J., Active Online Gaussian Process Regression Based on Conditional Information, submitted to NIPS 2008.
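One way such an instantaneous information criterion could be computed is via the Gaussian process predictive distribution at each new sample: admit the sample only if its negative predictive log-likelihood (its "surprise") exceeds a threshold. The threshold, noise level, and kernel width below are illustrative assumptions:

```python
import numpy as np

def gauss_kernel(a, b, width=0.5):
    """Gaussian kernel (broadcasts over numpy arrays)."""
    return np.exp(-((a - b) ** 2) / (2 * width ** 2))

def conditional_information(x, d, centers, targets, width=0.5, noise=0.01):
    """Surprise of (x, d) given the stored dictionary under a GP model:
    -log p(d | x, dictionary), using the Gaussian predictive density."""
    if not centers:
        return np.inf                      # the first sample is always admitted
    C = np.array(centers)
    K = gauss_kernel(C[:, None], C[None, :], width) + noise * np.eye(len(C))
    k = gauss_kernel(C, x, width)
    alpha = np.linalg.solve(K, np.array(targets))
    mu = k @ alpha                         # GP predictive mean
    var = gauss_kernel(x, x, width) + noise - k @ np.linalg.solve(K, k)
    return 0.5 * np.log(2 * np.pi * var) + (d - mu) ** 2 / (2 * var)

def sparse_online_fit(xs, ds, threshold=-0.5, width=0.5, noise=0.01):
    """Keep only samples whose conditional information exceeds the threshold."""
    centers, targets = [], []
    for x, d in zip(xs, ds):
        if conditional_information(x, d, centers, targets, width, noise) > threshold:
            centers.append(x)
            targets.append(d)
    return centers, targets

# illustrative data: the dictionary stops growing once the input space is covered
rng = np.random.default_rng(2)
xs = rng.uniform(-3, 3, 200)
ds = np.sin(xs) + 0.05 * rng.normal(size=200)
centers, targets = sparse_online_fit(xs, ds)
```

Samples falling in well-covered regions have small predictive variance and small surprise, so they are discarded; novel or poorly predicted samples are kept.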