Cabrita, Cristiano Lourenço

Search Results

Now showing 1 - 6 of 6
  • Exploiting the functional training approach in Radial Basis Function networks
    Publication . Cabrita, Cristiano Lourenço; Ruano, Antonio; Ferreira, P. M.
    This paper investigates the application of a novel approach for the parameter estimation of a Radial Basis Function (RBF) network model. The new concept (denoted as functional training) minimizes the integral of the analytical error between the process output and the model output [1]. In this paper, the analytical expressions needed to use this approach are introduced, both for the back-propagation and the Levenberg-Marquardt algorithms. The results show that the proposed methodology outperforms the standard methods in terms of function approximation, serving as an excellent tool for RBF networks training.
  • Extending the functional training approach for B-splines
    Publication . Cabrita, Cristiano Lourenço; Ruano, Antonio; Ferreira, P. M.; Kóczy, László T.
    When used for function approximation purposes, neural networks belong to a class of models whose parameters can be separated into linear and nonlinear, according to their influence in the model output. This concept of parameter separability can also be applied when the training problem is formulated as the minimization of the integral of the (functional) squared error, over the input domain. Using this approach, the computation of the gradient involves terms that are dependent only on the model and the input domain, and terms which are the projection of the target function on the basis functions and on their derivatives with respect to the nonlinear parameters, over the input domain. This paper extends the application of this formulation to B-splines, describing how the Levenberg-Marquardt method can be applied using this methodology. Simulation examples show that the use of the functional approach obtains important savings in computational complexity and a better approximation over the whole input domain.
  • Exploiting the functional training approach in B-Splines
    Publication . Ruano, Antonio; Cabrita, Cristiano Lourenço; Ferreira, P. M.; Kóczy, László T.
    When used for function approximation purposes, neural networks belong to a class of models whose parameters can be separated into linear and nonlinear, according to their influence in the model output. This concept of parameter separability can also be applied when the training problem is formulated as the minimization of the integral of the (functional) squared error, over the input domain. Using this approach, the computation of the gradient involves terms that are dependent only on the model and the input domain, and terms which are the projection of the target function on the basis functions and on their derivatives with respect to the nonlinear parameters, over the input domain. These latter terms can be numerically computed with the data. The use of the functional approach is introduced here for B-splines. An example shows that, besides great computational complexity savings, this approach obtains better results than the standard, discrete technique, as the performance surface employed is more similar to the one obtained with the function underlying the data. In some cases, as shown in the example, a complete analytical solution can be found.
  • Exploiting the functional training approach in Takagi-Sugeno Neuro-fuzzy Systems
    Publication . Cabrita, Cristiano Lourenço; Ruano, Antonio; Ferreira, P. M.; Kóczy, László T.
    When used for function approximation purposes, neural networks and neuro-fuzzy systems belong to a class of models whose parameters can be separated into linear and nonlinear, according to their influence in the model output. This concept of parameter separability can also be applied when the training problem is formulated as the minimization of the integral of the (functional) squared error, over the input domain. Using this approach, the computation of the derivatives involves terms that are dependent only on the model and the input domain, and terms which are the projection of the target function on the basis functions and on their derivatives with respect to the nonlinear parameters, over the input domain. These latter terms can be numerically computed with the data. The use of the functional approach is introduced here for Takagi-Sugeno models. An example shows that this approach obtains better results than the standard, discrete technique, as the performance surface employed is more similar to the one obtained with the function underlying the data. In some cases, as shown in the example, a complete analytical solution can be found.
  • Towards a more analytical training of neural networks and neuro-fuzzy systems
    Publication . Ruano, Antonio; Cabrita, Cristiano Lourenço; Ferreira, P. M.
    When used for function approximation purposes, neural networks belong to a class of models whose parameters can be separated into linear and nonlinear, according to their influence in the model output. In this work we extend this concept to the case where the training problem is formulated as the minimization of the integral of the squared error, along the input domain. With this approach, the gradient-based non-linear optimization algorithms require the computation of terms that are either dependent only on the model and the input domain, or are the projection of the target function on the basis functions and on their derivatives with respect to the nonlinear parameters. These latter terms can be numerically computed with the data provided. The use of this functional approach brings at least two advantages in comparison with the standard training formulation: firstly, computational complexity savings, as some terms are independent of the size of the data and matrix inverses or pseudo-inverses are avoided; secondly, as the performance surface using this approach is closer to the one obtained with the true (typically unknown) function, the use of gradient-based training algorithms has more chance to find models that produce a better fit to the underlying function.
  • Training neural networks and neuro-fuzzy systems: a unified view
    Publication . Ruano, Antonio; Ferreira, P. M.; Cabrita, Cristiano Lourenço; Matos, S.
    Neural and neuro-fuzzy models are powerful nonlinear modelling tools. Different structures, with different properties, are widely used to capture static or dynamical nonlinear mappings. Static (non-recurrent) models share a common structure: a nonlinear stage, followed by a linear mapping. In this paper, the separability of linear and nonlinear parameters is exploited for completely supervised training algorithms. Examples of this unified view are presented, involving multilayer perceptrons, radial basis functions, wavelet networks, B-splines, Mamdani and TSK fuzzy systems.
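The separability of linear and nonlinear parameters that runs through the abstracts above can be sketched briefly. The idea is that, with the nonlinear parameters (e.g. RBF centres and widths) held fixed, the model output is linear in the remaining weights, so those are recovered in one step by linear least squares. This is a minimal illustrative sketch with a Gaussian basis and invented toy data; the function names and parameter values are not taken from the papers.

```python
import numpy as np

def rbf_design_matrix(x, centers, widths):
    """Gaussian RBF basis: phi_j(x) = exp(-(x - c_j)^2 / (2 s_j^2))."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2
                  / (2.0 * widths[None, :] ** 2))

def fit_linear_params(x, y, centers, widths):
    """With the nonlinear parameters (centers, widths) fixed, the model
    output is linear in the weights, so they follow by least squares."""
    Phi = rbf_design_matrix(x, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# Toy target: y = sin(pi * x) sampled on [0, 1] (illustrative only)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(np.pi * x)

centers = np.array([0.2, 0.5, 0.8])   # nonlinear parameters, held fixed here
widths = np.array([0.2, 0.2, 0.2])

w = fit_linear_params(x, y, centers, widths)
y_hat = rbf_design_matrix(x, centers, widths) @ w
print("max abs residual:", np.max(np.abs(y - y_hat)))
```

In a full training loop, only the centres and widths would be updated by a gradient-based method (back-propagation or Levenberg-Marquardt), with the weights re-solved as above at each step.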
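The functional training criterion discussed in several of the abstracts replaces the usual sum of squared errors over the data with the integral of the squared error over the input domain. The papers derive this integral analytically; the sketch below only illustrates the distinction between the two criteria numerically, using a single Gaussian unit and a simple quadrature. All names and values here are hypothetical, chosen for the example.

```python
import numpy as np

def model(x, w, c, s):
    """One Gaussian unit: w * exp(-(x - c)^2 / (2 s^2))."""
    return w * np.exp(-(x - c) ** 2 / (2.0 * s ** 2))

def functional_error(w, c, s, target, a=0.0, b=1.0, n=2001):
    """Integral of the squared error over [a, b], approximated on a dense
    grid (the papers compute this integral analytically instead)."""
    x = np.linspace(a, b, n)
    e = target(x) - model(x, w, c, s)
    return np.sum(e ** 2) * (x[1] - x[0])

def discrete_error(w, c, s, x_data, y_data):
    """Standard training criterion: sum of squared errors on the data."""
    e = y_data - model(x_data, w, c, s)
    return np.sum(e ** 2)

# Target happens to lie in the model class, so the functional error
# vanishes at the true parameters (w=1, c=0.5, s=0.1).
target = lambda x: np.exp(-(x - 0.5) ** 2 / (2 * 0.1 ** 2))

# A sparse, noisy sample, as used by the discrete criterion.
rng = np.random.default_rng(0)
x_data = rng.uniform(0.0, 1.0, 10)
y_data = target(x_data) + 0.05 * rng.normal(size=10)

print("functional error at true params:",
      functional_error(1.0, 0.5, 0.1, target))
print("discrete error at true params:",
      discrete_error(1.0, 0.5, 0.1, x_data, y_data))
```

The functional criterion is zero at the true parameters regardless of the sample, while the discrete criterion is not, which hints at the papers' observation that the functional performance surface is closer to the one induced by the underlying function.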