Software: LWPR
Locally Weighted Projection Regression
Locally Weighted Projection Regression (LWPR) is an algorithm that achieves nonlinear function approximation in high-dimensional spaces, even in the presence of redundant and irrelevant input dimensions. At its core, it uses locally linear models, spanned by a small number of univariate regressions along selected directions in input space. A locally weighted variant of Partial Least Squares (PLS) is employed for the dimensionality reduction. This nonparametric local learning system
 learns rapidly with second-order learning methods based on incremental training,
 uses statistically sound stochastic cross-validation to learn,
 adjusts its weighting kernels based on local information only,
 has a computational complexity that is linear in the number of inputs, and
 can deal with a large number of (possibly redundant) inputs,
as shown in evaluations with up to 50-dimensional data sets. To our knowledge, this is the first truly incremental spatially localized learning method to combine all these properties.
Sethu Vijayakumar, Aaron D'Souza and Stefan Schaal. Incremental Online Learning in High Dimensions. Neural Computation, vol. 17, no. 12, pp. 2602-2634 (2005). [pdf]
A good guide to practical usage:
Stefan Klanke, Sethu Vijayakumar and Stefan Schaal, A Library for Locally Weighted Projection Regression, Journal of Machine Learning Research (JMLR), vol. 9, pp. 623-626 (2008). [pdf]
Sethu Vijayakumar and Stefan Klanke, A Library for Locally Weighted Projection Regression - Supplementary Documentation.
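The core prediction scheme described above can be illustrated with a minimal, self-contained sketch. This is an illustration of the idea only, not the library's actual API or data structures: each receptive field pairs a Gaussian kernel (centre c, distance metric d) with a local linear model, and the global prediction is the weight-normalized blend of the local predictions.

```python
import math

# Toy receptive fields (illustration only, not the LWPR library's API):
# each holds a Gaussian kernel centred at c with distance metric d, plus a
# local linear model y = b0 + b * (x - c).
receptive_fields = [
    {"c": 0.0, "d": 4.0, "b0": 0.0, "b": 1.0},  # valid near x = 0
    {"c": 2.0, "d": 4.0, "b0": 4.0, "b": 4.0},  # valid near x = 2
]

def predict(x):
    """Weight-normalized blend of the local linear predictions."""
    num, den = 0.0, 0.0
    for rf in receptive_fields:
        w = math.exp(-0.5 * rf["d"] * (x - rf["c"]) ** 2)  # kernel activation
        y_local = rf["b0"] + rf["b"] * (x - rf["c"])       # local linear model
        num += w * y_local
        den += w
    return num / den

print(predict(0.0))  # dominated by the first local model
print(predict(2.0))  # dominated by the second local model
```

Near each centre, the corresponding local model dominates the blend; in between, the prediction interpolates smoothly. In the real algorithm, the centres, distance metrics, and regression directions are all learned incrementally from data.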
When to use LWPR (and when not)
LWPR is particularly suited for the following regression problems:
 The function to be learnt is nonlinear. Otherwise, having multiple local models is a waste of resources, and you should rather use ordinary linear regression, or (global) PLS in the case of high-dimensional or irrelevant inputs.
 There are large amounts of training data. If you need good generalization from relatively few samples (say, fewer than 2000), you are probably better off with Gaussian Process (GP) regression. LWPR needs that much data to properly detect the local dimensionality (the number of projection directions) and the scale on which the regression function is locally linear. If the training set is small, the samples need to be presented to LWPR multiple times in random order.
 Your application requires incremental, online training. If you can afford to collect the data beforehand, and the time required for batch learning is not critical, LWPR loses its edge against SVM regression, or (Sparse) GP regression. When compared to global function approximators like multilayer neural networks, LWPR has the tremendous advantage that its local models learn independently and without interference.
 The input space is high-dimensional, but the data lies on low-dimensional manifolds. LWPR places local models only where they are needed, and can detect the local dimensionality through PLS, yielding robust estimates of the regression coefficients. The latter feature sets LWPR apart from previous (but otherwise similar) algorithms such as Receptive Field Weighted Regression.
 The model may require adaptation, since the target mapping may change over time. This suits LWPR very well because a built-in forgetting factor can be tuned to match the expected time scale at which such changes occur. The adaptation then usually happens quite fast, since the overall placement of receptive fields, their size, and the local PLS directions of a well-trained model can often be kept, while the regression parameters get readjusted.
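The effect of a forgetting factor, as mentioned in the last point, can be sketched with a toy example. This is again an illustration of the principle only, not LWPR's actual update rule: the sufficient statistics of a single 1-D linear fit are decayed by a factor lambda_ before each incremental update, so old samples are exponentially down-weighted and the fit can track a drifting target mapping.

```python
# Toy illustration of a forgetting factor in incremental regression
# (not the LWPR library's API): sufficient statistics for a linear fit
# through the origin are decayed before absorbing each new sample.
lambda_ = 0.9          # forgetting factor (1.0 would mean never forget)
sxx, sxy = 0.0, 0.0    # exponentially decayed sufficient statistics

def update(x, y):
    """Decay the old statistics, absorb the new sample, return the slope."""
    global sxx, sxy
    sxx = lambda_ * sxx + x * x
    sxy = lambda_ * sxy + x * y
    return sxy / sxx

# The target mapping is y = 2x at first ...
for x in (1.0, 2.0, 3.0):
    slope_before = update(x, 2.0 * x)

# ... then changes to y = 5x; forgetting lets the estimate follow the drift.
for x in (1.0, 2.0, 3.0) * 5:
    slope_after = update(x, 5.0 * x)

print(slope_before, slope_after)  # the slope moves from 2.0 towards 5.0
```

With lambda_ = 1.0 the estimate would converge only slowly to the new mapping, because all old samples keep their full weight; smaller values of the forgetting factor trade stability for faster tracking, which is the tuning knob described above.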
Our Implementation
We have implemented the LWPR algorithm in plain ANSI C, with wrappers and bindings for C++, Matlab/Octave, and Python (using NumPy). LWPR models can be stored in platform-dependent binary files or optionally in platform-independent, human-readable XML files. The latter functionality relies on the XML parser Expat as its only dependency.
LWPR models are fully interchangeable between the different implementations, that is, you could train a regression model in Matlab, store it to a file, and load it from a Python script or C++ program to calculate predictions. Just as well, you could train a model from real robot data collected online in C/C++, and later inspect the LWPR model comfortably within Matlab.
Documentation
The Matlab and Python implementations contain documentation in their "native" format (accessible through the help command in either environment).
The C library and its C++ wrapper contain Doxygen-style documentation, which you can browse online here. A more in-depth description of all LWPR data fields and parameters, together with some hints for tuning the latter, can be found in this supplementary documentation.
In addition, this link leads you to a small tutorial.
Download and Installation
The library is freely available under the terms of the LGPL (with an exception that allows for static linking) and can be downloaded here. (version history)
 Version 1.2.4 - released 2012/02/02
   Download from sourceforge.net
 Version 1.2.3 - released 2009/11/12
   lwpr-1.2.3.tar.gz (708 kB)  lwpr-1.2.3.zip (850 kB)
 Version 1.2.2 - released 2009/06/29
   lwpr-1.2.2.tar.gz (707 kB)  lwpr-1.2.2.zip (849 kB)
 Version 1.2.1 - released 2008/11/04
   lwpr-1.2.1.tar.gz (680 kB)  lwpr-1.2.1.zip (808 kB)
 Version 1.2 - released 2008/06/23
   lwpr-1.2.tar.gz (680 kB)  lwpr-1.2.zip (807 kB)
 Version 1.1 - released 2008/04/02
   lwpr-1.1.tar.gz (678 kB)  lwpr-1.1.zip (802 kB)
If you unpack the archive, you'll get a new directory lwpr-x.y, which contains information about the library and installation instructions in the files README.TXT and INSTALL.TXT. Starting with version 1.1, the LWPR library can be built using the canonical configure/make/make install trio on UNIX-like systems.
Inside the toplevel directory, there are several subdirectories that contain sources, include files and documentation:
 matlab
 contains the Matlab functions (.m files). To use the LWPR library from Matlab, add this directory to your Matlab path and run "lwpr_buildmex" within Matlab to build the MEX wrappers. Recent versions of Octave (2.9.12 or later) are compatible with Matlab's MEX interface, so the build script we provide works in that environment as well.
 include
 C header files of the LWPR library and the C++ header file (lwpr.hh) for wrapping the C library as a C++ class.
 src
 C sources.
 mexsrc
 C sources of the Matlab MEX wrappers. Building these is handled by the script lwpr_buildmex, or alternatively by autotools on UNIX (see the options of the configure script).
 example_c
 contains a simple demo that shows how to use the library from a C program.
 example_cpp
 contains the same demo written in C++, using the objectoriented interface to the library.
 python
 contains a Python extension module for LWPR, written in C, and a Python script demonstrating its usage. If you have Python's distutils installed, you can build the extension using setup.py; otherwise, try the included Makefile on a Linux/Unix system. Please note again that the Python module requires NumPy.
 html
 contains documentation for the C and C++ modules as generated by Doxygen.
 VisualC
 Visual Studio "solutions" and project files. Only tested with Visual Studio Express 2008.
 MinGW
 contains a simple Makefile for building the C library and examples on Windows using the MinGW compiler. Probably also works with Cygwin.
 doc
 contains supplementary documentation and hints on how to tune the learning parameters.
 tests
 contains a simple test program (written in C) that checks some aspects of the library during a "make check" call on UNIX-like systems.