WorldCat Linked Data Explorer

http://worldcat.org/entity/work/id/25276218

Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering

Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization theory. The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are not only equivalent to generalized splines, but are also closely related to the classical Radial Basis Functions used for interpolation tasks and to several pattern recognition and neural network algorithms. In this note, we extend the theory by defining a general form of these networks with two sets of modifiable parameters in addition to the coefficients $c_\alpha$: moving centers and adjustable norm-weights. Moving the centers is equivalent to task-dependent clustering, and changing the norm-weights is equivalent to task-dependent dimensionality reduction. (KR)
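The "general form" the abstract refers to can be written, in the notation of the record's own description, as $f(x) = \sum_\alpha c_\alpha \, G(\|x - t_\alpha\|_W^2)$, where the centers $t_\alpha$ may move and the weighted norm $\|z\|_W^2 = z^\top W^\top W z$ has an adjustable matrix $W$ of norm-weights. The Python sketch below is a minimal illustration of that idea, not code from the report: the Gaussian choice of basis function $G$, the plain gradient-descent updates, the toy data, and every name in the code are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def hyperbf(X, c, T, W):
    """f(x) = sum_a c_a * exp(-||x - t_a||_W^2), with ||z||_W^2 = z^T W^T W z."""
    D = X[:, None, :] - T[None, :, :]      # (n, centers, dim) differences x - t_a
    r2 = np.sum((D @ W.T) ** 2, axis=-1)   # squared weighted distances
    return np.exp(-r2) @ c                 # Gaussian basis, linear output layer

# Toy regression data (illustrative only).
X = rng.uniform(-2.0, 2.0, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

n_centers, dim = 10, 2
c = np.zeros(n_centers)                       # coefficients c_alpha
T = rng.uniform(-2.0, 2.0, (n_centers, dim))  # movable centers t_alpha
W = np.eye(dim)                               # adjustable norm-weights

lr = 0.05
for step in range(3000):
    D = X[:, None, :] - T[None, :, :]
    r2 = np.sum((D @ W.T) ** 2, axis=-1)
    G = np.exp(-r2)
    err = G @ c - y                          # residuals on the training set
    weight = err[:, None] * c[None, :] * G   # factor shared by the T and W gradients
    # Coefficients: ordinary least-squares gradient.
    grad_c = G.T @ err / len(X)
    # Centers: moving them acts like task-dependent clustering.
    WtW = W.T @ W
    grad_T = 2.0 * np.einsum('na,nai->ai', weight, D @ WtW) / len(X)
    # Norm-weights: rescaling input directions acts like task-dependent
    # dimensionality reduction.
    grad_W = -2.0 * W @ np.einsum('na,nai,naj->ij', weight, D, D) / len(X)
    c -= lr * grad_c
    T -= lr * grad_T
    W -= lr * grad_W

print("final mse:", np.mean((hyperbf(X, c, T, W) - y) ** 2))

Driving a direction of $W$ toward zero removes that input dimension from every distance computation, which is the sense in which adjusting the norm-weights performs dimensionality reduction, just as updating the $t_\alpha$ amounts to a task-dependent clustering of the inputs.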


http://schema.org/description

  • "The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are also closely related to the classical Radial Basis Functions used for interpolation tasks and to several pattern recognition and neural network algorithms. In this note, we extend the theory by defining a general form of these networks with two sets of modifiable parameters in addition to the coefficients $c_\alpha$: {\it moving centers} and {\it adjustable norm-weight}."
  • "Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization theory. The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are not only equivalent to generalized splines, but are also closely related to the classical Radial Basis Functions used for interpolation tasks and to several pattern recognition and neural network algorithms. In this note, we extend the theory by defining a general form of these networks with two sets of modifiable parameters in addition to the coefficients C sub alpha: moving centers and adjustable norm-weights. Moving the centers is equivalent to task-dependent clustering and changing the norm weights is equivalent to task-dependent dimensionality reduction. (KR)."@en

http://schema.org/name

  • "Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering"@en
  • "Extensions of a Theory of Networks for Approximation and Learning : Dimensionality reduction and clustering"
  • "Extensions of a theory of networks for approximation and learning: dimensionality reduction and clustering"
  • "Extensions of a theory of networks for approximation and learning : dimensionality reduction and clustering"
  • "Extensions of a theory of networks for approximation and learning : dimensionality reduction and clustering"@en