RETRO (vipl) functions


LIBRARY ROUTINE

llrftrain - Localized Receptive Field Classifier Trainer

LIBRARY CALL

int llrftrain (
	xvimage *image,
	xvimage *cc_img,
	xvimage *var_img,
	xvimage *cn_img,
	xvimage **wt_img,
	float converge,
	float meu,
	int border,
	int max_iter,
	int prt_mse,
	float delta_mse,
	kfile *printdev)

INPUT

image - input training image; must be of data storage type FLOAT
cc_img - "cluster center" image produced by vkmeans, with the desired class assignments from the supervised classification appended as its last band; FLOAT
var_img - variance image for the cluster centers; FLOAT
cn_img - "cluster number" image; INTEGER
converge - convergence criterion for the training phase
meu - learning rate used by the LMS weight update
border - width of the image border
max_iter - maximum number of training iterations
prt_mse - if non-zero, print the mean squared error as training proceeds
delta_mse - minimum change in mean squared error between iterations
printdev - kfile to which messages are printed

OUTPUT

wt_img - output "weight" image containing the trained weights; created with data storage type FLOAT

RETURN VALUE

TRUE (1) on success, FALSE (0) on failure

DESCRIPTION

llrftrain trains, on an input image, the weights used by the Localized Receptive Field classifier (see lrfclass). The Localized Receptive Field (LRF) is based on a single layer of self-organizing, "localized receptive field" units, followed by a single-layer perceptron. The layer of perceptron units uses the LMS (Adaline) learning rule to adjust the weights.

LRF network theory

The basic network model of the LRF consists of a two-layer topology. The first layer of "receptive field" nodes is trained using a clustering algorithm, such as K-means, or any other algorithm that can determine the receptive field centers. Each node in the first layer computes a receptive field response function, which should approach zero as the distance from the center of the receptive field increases. The second layer of the LRF model sums the weighted outputs of the first layer, which produces the output, or response, of the network. A supervised LMS rule is used to train the weights of the second-layer nodes.

The response function of the LRF network is formulated as follows:

f(x) = SUM_i ( Ti * Ri(x) )

where,

Ri(x) = Q( ||x - xi|| / Wi )

x - a real-valued vector in the input space
Ri - the ith receptive field response function
Q - a radially symmetric function with a single maximum at the origin, decreasing to zero at large radii
xi - the center of the ith receptive field
Wi - the width of the ith receptive field
Ti - the weight associated with each receptive field

The receptive field response functions ( Ri(x) ) should be formulated such that they decrease rapidly with increasing radius. This ensures that the response functions provide highly localized representations of the input space. The response function used here is modeled after the Gaussian, and uses the trace of the covariance matrix to set the widths of the receptive field centers.

The weights for the output layer are found using the LMS learning rule. The weights are adjusted at each iteration to minimize the total error, which is based on the difference between the network output and the desired result.

Prior to using the LRF algorithm, it is necessary to run "vkmeans" on the input training image to fix the cluster centers, followed by a supervised classification of the clustered image, which assigns a desired class to each cluster center. NOTE that the image resulting from the supervised classification MUST be appended to the "cluster center" image before running the LRF. This is necessary because it provides the desired class assignment for each cluster center during the training phase of the LRF.

The number of receptive field response nodes in the first layer of the LRF is determined by the number of cluster centers in the "cluster center" image. The number of output classes, and hence the number of output nodes in the second (i.e. last) layer, is determined by the number of desired classes that were specified in the supervised classification phase of the clustering. This information is contained in the last band of the cluster center image. The number of weights in the network is determined by the number of receptive field response nodes and the number of output nodes. That is,

#Wts = (#rf_response_nodes * #output_nodes) + #output_nodes

This routine was written with the help of and ideas from Dr. Don Hush, University of New Mexico, Dept. of EECE.

ADDITIONAL INFORMATION

none

EXAMPLES

none

SIDE EFFECTS

none

RESTRICTIONS

All input images except the "cluster number" image (cn_img) MUST be of data storage type FLOAT. The "cluster number" image (cn_img) MUST be of data storage type INTEGER. The output "weight" image (wt_img) is of data storage type FLOAT.

MODIFICATION

none

FILES

$RETRO/objects/library/vipl/src/llrftrain.c

SEE ALSO

vipl(3)

COPYRIGHT

Copyright (C) 1993 - 1997, Khoral Research, Inc. ("KRI") All rights reserved.