Lwpr Class Reference
#include <lwpr.h>
Detailed Description
This is an interface to LWPR, Locally Weighted Projection Regression: an incremental, online regression algorithm that approximates nonlinear functions with locally weighted linear models fitted in low-dimensional projections of the input space.
Definition at line 29 of file lwpr.h.
Public Member Functions
- Lwpr ()
  Constructor; also loads parameters from a parameter file or from the command line.
- void learn (const doubleA &X, const doubleA &Y)
  Updates the LWPR model given a new data point x -> y, or a batch of data (see the usage sketch after this list).
- void predict (const doubleA &X, doubleA &Y)
  Predicts the output Y for a given input X; also works for batch data.
- double confidence (const doubleA &x)
  Returns the confidence bound (standard deviation) at the given input point.
- void save (char *)
  Saves the LWPR model to the file given as parameter.
- void load (char *, bool alsoParameters=true)
  Loads the LWPR model from the file given as parameter.
- void report (std::ostream &os)
  Preliminary; outputs useful, readable state information.
- int get_rfs_no (int out_dim)
  Returns the number of receptive fields (rfs) for the given output dimension.
- double get_proj_average (int out_dim)
  Returns the average number of projection dimensions used in the local models.
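A minimal usage sketch based on the signatures above. Assumptions: doubleA is the array type that ships with lwpr.h (its construction API is not documented here), and the default constructor handles parameter files as described; treat this as an illustration, not verified code against the actual header.

    #include <iostream>
    #include <lwpr.h>

    int main() {
      // The default constructor also reads parameters from a parameter
      // file or the command line (see the constructor description above).
      Lwpr model;

      doubleA X, Y;
      // ... fill X and Y with training inputs and targets; the exact
      // construction API of doubleA is an assumption of this sketch ...

      model.learn(X, Y);          // update with a single data point or a batch

      doubleA Yhat;
      model.predict(X, Yhat);     // batch prediction for the inputs in X

      model.report(std::cout);    // readable state information
      std::cout << "receptive fields for output dimension 0: "
                << model.get_rfs_no(0) << "\n";

      model.save((char*)"lwpr.model");  // save() takes a non-const char*
      return 0;
    }

A later call to load() can restore the saved model; the alsoParameters flag presumably controls whether the stored parameters are restored as well.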
Public Attributes
- int verbosity
  Determines the verbosity during learning; set to 0 for silence.
- uint rfsno
  Number of receptive fields.
- doubleA norm
  The normalization by which each input column is divided (one entry for every input dimension); this tries to make all the inputs dimensionless. The norm field can be a single scalar (1D array with N=1), a vector that normalizes each input dimension individually (1D array), or a general matrix that transforms the input arbitrarily (2D array); see the configuration sketch after this list. (In general, it is dangerous to divide each dimension by its variance without considering the physical properties of the input values, since some input dimensions may actually vary very little relative to their possible range. Ideally, one should know the range of possible inputs in each dimension and normalize each input by that.)
- bool add_proj
  Allow (disallow) the addition of projection directions (beyond the initial number of projections specified by initR) if deemed necessary. This acts as a debugging variable.
- bool updateD
  Allow (disallow) the gradient descent update of the distance metric D.
- bool useNorm
  Allow (disallow) input normalization.
- double cut_off
  If none of the local models (receptive fields) elicits an activation greater than cut_off for a query point, that query is labeled as not well supported, and predictions for it should not be trusted (separate test statistics can be computed that include/exclude such queries). LWPR then returns zero; there is no unlimited extrapolation.
- double w_prune
  If a training data point elicits responses greater than w_prune from two local models, the one with the greater error is pruned.
- double w_gen
  A new local model is generated if no existing model elicits a response (activation) greater than w_gen for a training data point.
- double initD
  Initial value of the distance metric.
- double init_lambda
  Initial forgetting factor.
- double tau_lambda
  Annealing constant for the forgetting factor.
- double final_lambda
  Final forgetting factor.
- double add_threshold
  The mean squared error of the current regression dimension is compared against that of the previous one; a new regression direction is added only if the ratio nMSE_r/nMSE_{r-1} falls below add_threshold. This criterion is used in conjunction with some other checks to ensure that the decision is based on enough data support.
- bool meta
  Allow (disallow) second order learning.
- double meta_rate
  Learning rate used if second order learning is enabled.
- int initR
  Number of projections to start the regression with. (Usually one starts with 2: one for the regression and the other to check whether an additional dimension should be used.)
- double alpha
  Initialization of the learning rate for the gradient descent on the distance metric. Currently only equal rates along all dimensions are implemented, i.e. all learning rates are initialized with the same value. (If you see lots of large-update warnings or instability in convergence, the learning rate alpha is too large; if the mean squared error is decreasing but convergence is slow, you might try increasing it.)
- double gamma
  Multiplication factor for the regularization penalty term in the optimization functional.
- bool blend
  true: allow weighted mixing of results from overlapping receptive fields; false: winner-take-all prediction.
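To illustrate how these attributes interact, here is a hedged configuration sketch; every numeric value below is an illustrative guess in the range commonly used in the LWPR literature, not a default taken from this implementation.

    Lwpr model;

    model.verbosity     = 0;      // silent learning
    model.useNorm       = true;   // enable input normalization
    // model.norm       = ...;    // scalar, per-dimension vector, or full matrix (see norm above)

    model.w_gen         = 0.1;    // add a local model if no activation exceeds this
    model.w_prune       = 0.9;    // prune the worse of two models both activated above this
    model.cut_off       = 0.001;  // below this maximal activation a query is unsupported: zero is returned

    model.initR         = 2;      // start with two projections
    model.add_threshold = 0.5;    // add a projection only if the nMSE ratio falls below this

    model.updateD       = true;   // adapt the distance metric D by gradient descent
    model.alpha         = 250.0;  // distance metric learning rate; lower it on instability warnings
    model.meta          = false;  // no second order learning

    model.blend         = true;   // mix overlapping receptive fields instead of winner-take-all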
The documentation for this class was generated from the following file:
- lwpr.h