| Name: | Description: | Size: | Format: |
|---|---|---|---|
| | | 851.04 KB | Adobe PDF |
Abstract(s)
When used for function approximation, neural networks belong to a class of models whose parameters can be separated into linear and nonlinear ones, according to their influence on the model output. In this work we extend this concept to the case where the training problem is formulated as the minimization of the integral of the squared error over the input domain. With this approach, gradient-based nonlinear optimization algorithms require the computation of two kinds of terms: terms that depend only on the model and the input domain, and terms that are the projection of the target function onto the basis functions and onto their derivatives with respect to the nonlinear parameters. These latter terms can be computed numerically from the data provided. This functional approach brings at least two advantages over the standard training formulation: firstly, computational complexity savings, since some terms are independent of the size of the data and matrix inverses or pseudo-inverses are avoided; secondly, since the performance surface obtained with this approach is closer to the one obtained with the true (typically unknown) function, gradient-based training algorithms have a better chance of finding models that fit the underlying function well.
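As an informal illustration of the linear/nonlinear separation described in the abstract, the sketch below (not taken from the paper; all function and variable names are assumptions) fits the linear weights of a one-dimensional Gaussian RBF model under the integral squared-error criterion. The Gram matrix of the basis functions depends only on the model and the input domain, while the projection of the target function onto the basis is estimated numerically from the data.

```python
import numpy as np

# Illustrative sketch of the separable, functional training idea for a 1-D
# Gaussian RBF model  y(x) = sum_i w_i * exp(-(x - c_i)^2 / (2*s^2)).
# The weights w are the linear parameters; the centres c (and width s) are
# the nonlinear ones. This is an assumed example, not the authors' code.

def basis(x, centres, s):
    """Gaussian basis functions at points x, shape (len(x), len(centres))."""
    return np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2.0 * s ** 2))

def gram_matrix(centres, s, a, b, n_grid=2001):
    """Model-only term: integrals of phi_i(x)*phi_j(x) over the domain [a, b].
    Depends only on the model and the input domain, not on the data."""
    x = np.linspace(a, b, n_grid)
    Phi = basis(x, centres, s)
    # Trapezoidal quadrature weights.
    q = np.full(n_grid, (b - a) / (n_grid - 1))
    q[0] *= 0.5
    q[-1] *= 0.5
    return Phi.T @ (q[:, None] * Phi)

def projections(x_data, y_data, centres, s, a, b):
    """Data-dependent term: projection of the (unknown) target onto the basis,
    int f(x) phi_i(x) dx, estimated numerically from the samples."""
    Phi = basis(x_data, centres, s)
    # Estimate assuming roughly uniform samples on [a, b].
    return (b - a) * (Phi * y_data[:, None]).mean(axis=0)

# Example: compute the linear weights for fixed nonlinear parameters.
rng = np.random.default_rng(0)
a, b = 0.0, 1.0
x_data = rng.uniform(a, b, 500)
y_data = np.sin(2 * np.pi * x_data)      # stand-in target function
centres = np.linspace(a, b, 7)           # fixed nonlinear parameters
s = 0.15

G = gram_matrix(centres, s, a, b)        # independent of the data size
p = projections(x_data, y_data, centres, s, a, b)
w = np.linalg.solve(G, p)                # linear weights for this criterion
```

The direct solve above is only for brevity; in the functional formulation described in the abstract, the same kinds of model-only and projection terms feed the gradient-based updates of the nonlinear parameters.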
Keywords
Neural networks training; Parameter separability; Functional back-propagation
Citation
Ruano, Antonio E.; Cabrita, Cristiano L.; Ferreira, Pedro M. Towards a more analytical training of neural networks and neuro-fuzzy systems. Paper presented at the 2011 IEEE 7th International Symposium on Intelligent Signal Processing (WISP 2011), in Proceedings of the 2011 IEEE 7th International Symposium on Intelligent Signal Processing, Floriana, Malta, 2011.
Publisher
IEEE
