
Cabrita, Cristiano Lourenço

Search Results

Now showing 1 - 8 of 8
  • Exploiting the functional training approach in Radial Basis Function networks
    Publication . Cabrita, Cristiano Lourenço; Ruano, Antonio; Ferreira, P. M.
    This paper investigates the application of a novel approach for the parameter estimation of a Radial Basis Function (RBF) network model. The new concept (denoted as functional training) minimizes the integral of the analytical error between the process output and the model output [1]. In this paper, the analytical expressions needed to use this approach are introduced, both for the back-propagation and the Levenberg-Marquardt algorithms. The results show that the proposed methodology outperforms the standard methods in terms of function approximation, serving as an excellent tool for RBF network training.
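    A rough sketch of the two criteria involved (notation assumed here, not taken from the paper), for a Gaussian RBF model with linear weights and nonlinear centre/spread parameters: standard training minimizes a sum of squared errors over the data, while functional training minimizes the integral of the squared error over the input domain.

    ```latex
    % Assumed notation (not from the paper): Gaussian RBF model with linear
    % weights u_i and nonlinear parameters (centres c_i, spreads sigma_i).
    \hat y(x; u, v) \;=\; \sum_{i=1}^{n} u_i \exp\!\Bigl(-\tfrac{\lVert x - c_i\rVert^2}{2\sigma_i^2}\Bigr)

    % Standard (discrete) criterion over the m data points:
    \Omega_d \;=\; \tfrac{1}{2}\sum_{k=1}^{m}\bigl(y(x_k)-\hat y(x_k;u,v)\bigr)^2

    % Functional criterion: integral of the squared error over the input
    % domain D, the quantity minimized by functional training:
    \Omega_f \;=\; \tfrac{1}{2}\int_{D}\bigl(y(x)-\hat y(x;u,v)\bigr)^2\,dx
    ```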
  • A new domain decomposition for B-spline Neural Networks
    Publication . Cabrita, Cristiano Lourenço; Ruano, Antonio; Kóczy, László T.
    B-spline Neural Networks (BSNNs) belong to the class of networks termed grid or lattice-based associative memory networks (AMN). The grid is a key feature, since it allows these networks to exhibit relevant properties which make them efficient in solving problems such as functional approximation, non-linear system identification, and on-line control. The main problem associated with BSNNs is that the model complexity grows exponentially with the number of input variables. To tackle this drawback, different authors developed heuristics for functional decomposition, such as the ASMOD algorithm or evolutionary approaches [2]. In this paper, we present a complementary approach, by allowing the properties of B-spline models to be achieved by non-full grids. This approach can be applied either to a single model or to an ASMOD decomposition. Simulation results show that comparable results, in terms of approximation, can be obtained with less complex models.
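    As a purely illustrative calculation (the per-axis figures below are invented), the exponential growth of a full lattice and the saving obtained by an additive, ASMOD-style decomposition can be seen by counting weights:

    ```python
    # Counts of B-spline basis functions (= weights); per-axis numbers are
    # illustrative only. Each axis contributes (interior knots + spline order)
    # univariate basis functions, and a full grid takes their product.
    from math import prod

    def grid_size(interior_knots, order):
        return prod(k + o for k, o in zip(interior_knots, order))

    # Full 6-input lattice, 5 interior knots and order-3 splines per axis
    print(grid_size([5] * 6, [3] * 6))                  # 8**6 = 262144 weights

    # ASMOD-style decomposition: six 1-D submodels plus one 2-D submodel
    submodels = [([5], [3])] * 6 + [([5, 5], [3, 3])]
    print(sum(grid_size(k, o) for k, o in submodels))   # 6*8 + 64 = 112 weights
    ```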
  • Extending the functional training approach for B-splines
    Publication . Cabrita, Cristiano Lourenço; Ruano, Antonio; Ferreira, P. M.; Kóczy, László T.
    When used for function approximation purposes, neural networks belong to a class of models whose parameters can be separated into linear and nonlinear, according to their influence on the model output. This concept of parameter separability can also be applied when the training problem is formulated as the minimization of the integral of the (functional) squared error over the input domain. Using this approach, the computation of the gradient involves terms that are dependent only on the model and the input domain, and terms which are the projection of the target function on the basis functions and on their derivatives with respect to the nonlinear parameters, over the input domain. This paper extends the application of this formulation to B-splines, describing how the Levenberg-Marquardt method can be applied using this methodology. Simulation examples show that the use of the functional approach obtains important savings in computational complexity and a better approximation over the whole input domain.
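    A sketch of the parameter separation this formulation relies on (notation assumed, not taken from the paper): with the output written as a linear combination of basis functions that depend only on the nonlinear parameters, the functional criterion splits into an integral that depends only on the model and the input domain and a projection of the target onto the basis functions:

    ```latex
    % Assumed notation: u = linear weights, v = nonlinear parameters,
    % Gamma(x, v) = row vector of basis functions, D = input domain.
    \hat y(x;u,v) \;=\; \boldsymbol{\Gamma}(x,v)\,\mathbf{u}

    \Omega_f(u,v) \;=\; \tfrac{1}{2}\int_D \bigl(y(x)-\boldsymbol{\Gamma}(x,v)\,\mathbf{u}\bigr)^2\,dx

    % Optimal linear weights for fixed v: the first integral depends only on
    % the model and on D; the second is the projection of the target on the
    % basis functions and can be estimated numerically from the data.
    \hat{\mathbf{u}}(v) \;=\;
      \Bigl(\int_D \boldsymbol{\Gamma}^{\top}\boldsymbol{\Gamma}\,dx\Bigr)^{-1}
      \int_D \boldsymbol{\Gamma}^{\top}\,y(x)\,dx
    ```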
  • Exploiting the functional training approach in B-Splines
    Publication . Ruano, Antonio; Cabrita, Cristiano Lourenço; Ferreira, P. M.; Kóczy, László T.
    When used for function approximation purposes, neural networks belong to a class of models whose parameters can be separated into linear and nonlinear, according to their influence on the model output. This concept of parameter separability can also be applied when the training problem is formulated as the minimization of the integral of the (functional) squared error over the input domain. Using this approach, the computation of the gradient involves terms that are dependent only on the model and the input domain, and terms which are the projection of the target function on the basis functions and on their derivatives with respect to the nonlinear parameters, over the input domain. These latter terms can be numerically computed with the data. The use of the functional approach is introduced here for B-splines. An example shows that, besides great computational complexity savings, this approach obtains better results than the standard, discrete technique, as the performance surface employed is more similar to the one obtained with the function underlying the data. In some cases, as shown in the example, a complete analytical solution can be found.
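    A minimal sketch, under assumed names and basis functions, of how those projection terms could be estimated numerically from sampled data in the one-dimensional case:

    ```python
    # Estimating the projection integrals int_D y(x) * phi_i(x) dx from samples
    # of the target, using a simple trapezoidal rule (illustrative 1-D case).
    import numpy as np

    def trapezoid(values, x):
        return float(np.sum(0.5 * (values[1:] + values[:-1]) * np.diff(x)))

    def projection_terms(x, y, basis):
        """x, y: sorted samples of the target; basis: callables phi_i(x)."""
        return [trapezoid(y * phi(x), x) for phi in basis]

    # Two order-2 ("hat") B-spline basis functions on [0, 1] and a sampled target
    x = np.linspace(0.0, 1.0, 201)
    y = np.sin(np.pi * x)
    basis = [lambda t: np.maximum(1 - 2 * np.abs(2 * t - 0.5), 0.0),
             lambda t: np.maximum(1 - 2 * np.abs(2 * t - 1.5), 0.0)]
    print(projection_terms(x, y, basis))
    ```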
  • Design of neuro-fuzzy models by evolutionary and gradient-based algorithms
    Publication . Cabrita, Cristiano Lourenço; Ruano, A. E.
    All systems found in nature exhibit, to different degrees, a nonlinear behavior. To emulate this behavior, classical systems identification techniques typically use linear models, for mathematical simplicity. Models inspired by biological principles (artificial neural networks) and linguistically motivated models (fuzzy systems), due to their universal approximation property, are becoming alternatives to classical mathematical models. In systems identification, the design of this type of model is an iterative process, requiring, among other steps, the identification of the model structure, as well as the estimation of the model parameters. This thesis addresses the applicability of gradient-based algorithms for the parameter estimation phase, and the use of evolutionary algorithms for model structure selection, for the design of neuro-fuzzy systems, i.e., models that offer the transparency property found in fuzzy systems but use, for their design, algorithms introduced in the context of neural networks. A new methodology, based on the minimization of the integral of the error and exploiting the parameter separability property typically found in neuro-fuzzy systems, is proposed for parameter estimation. A recent evolutionary technique (bacterial algorithms), based on the natural phenomenon of microbial evolution, is combined with genetic programming, and the resulting algorithm, bacterial programming, is advocated for structure determination. Different versions of this evolutionary technique are combined with gradient-based algorithms, solving problems found in fuzzy and neuro-fuzzy design, namely the incorporation of a priori knowledge, the initialization of gradient algorithms, and model complexity reduction.
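    A loose sketch of the bacterial mutation step at the core of the bacterial algorithms mentioned above; the real-valued encoding, segment handling and fitness below are placeholders, not those used in the thesis:

    ```python
    # Bacterial mutation (simplified): clone the individual, randomise one
    # segment in each clone, and keep whichever variant evaluates best.
    import random

    def bacterial_mutation(chrom, fitness, n_clones=4, seg_len=2):
        start = random.randrange(0, len(chrom) - seg_len + 1)
        best = list(chrom)
        for _ in range(n_clones):
            clone = list(best)
            for i in range(start, start + seg_len):
                clone[i] = random.uniform(-1.0, 1.0)   # placeholder gene values
            if fitness(clone) < fitness(best):         # minimisation
                best = clone
        return best

    # Toy usage: drive the genes of a random chromosome towards zero
    chrom = [random.uniform(-1.0, 1.0) for _ in range(8)]
    better = bacterial_mutation(chrom, lambda c: sum(g * g for g in c))
    ```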
  • Exploiting the functional training approach in Takagi-Sugeno Neuro-fuzzy Systems
    Publication . Cabrita, Cristiano Lourenço; Ruano, Antonio; Ferreira, P. M.; Kóczy, László T.
    When used for function approximation purposes, neural networks and neuro-fuzzy systems belong to a class of models whose parameters can be separated into linear and nonlinear, according to their influence on the model output. This concept of parameter separability can also be applied when the training problem is formulated as the minimization of the integral of the (functional) squared error over the input domain. Using this approach, the computation of the derivatives involves terms that are dependent only on the model and the input domain, and terms which are the projection of the target function on the basis functions and on their derivatives with respect to the nonlinear parameters, over the input domain. These latter terms can be numerically computed with the data. The use of the functional approach is introduced here for Takagi-Sugeno models. An example shows that this approach obtains better results than the standard, discrete technique, as the performance surface employed is more similar to the one obtained with the function underlying the data. In some cases, as shown in the example, a complete analytical solution can be found.
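    A minimal Takagi-Sugeno model sketch (single input, Gaussian membership functions; all names and values are assumptions) showing the split between nonlinear antecedent parameters and linear consequent parameters:

    ```python
    # First-order Takagi-Sugeno model: y(x) = sum_i w_i(x) * (a_i*x + b_i),
    # with normalised Gaussian firing strengths w_i(x). Centres/widths are the
    # nonlinear parameters; the consequent coefficients (a_i, b_i) are linear.
    import numpy as np

    def ts_output(x, centres, widths, a, b):
        w = np.exp(-0.5 * ((x[:, None] - centres) / widths) ** 2)
        w /= w.sum(axis=1, keepdims=True)            # normalised firing strengths
        return (w * (np.outer(x, a) + b)).sum(axis=1)

    x = np.linspace(-1.0, 1.0, 5)
    y = ts_output(x,
                  centres=np.array([-0.5, 0.5]), widths=np.array([0.4, 0.4]),
                  a=np.array([1.0, -1.0]), b=np.array([0.0, 0.5]))
    print(y)
    ```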
  • Improving energy efficiency in smart-houses by optimizing electrical loads management
    Publication . Cabrita, Cristiano Lourenço; Monteiro, Jânio; Cardoso, Pedro
    In this work, the Genetic Algorithm is explored for solving a predictive-based demand-side management problem (a combinatorial optimization problem), and the main measures for performance evaluation are assessed. In this context, we propose a smart energy scheduling approach for household appliances in real time, to achieve minimum consumption costs and a reduction in peak load. We consider a scenario of self-consumption where the surplus from local power generation can be sold to the grid, and the existence of appliances whose operation can be shifted from peak hours to off-peak hours. Results confirm the importance of the tuning procedure, and show that the structure of the genome and the algorithm's operators determine the performance of this type of meta-heuristic. This is even more decisive when there are several operational constraints on the system, such as short-term optimal scheduling decisions, time constraints, and power limitations. Details about the scheduling problem, comparison strategies, metrics, and results are provided.
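    A toy sketch of the kind of chromosome and fitness such a genetic algorithm could use (the tariff, appliance data and operators are invented for illustration and are not the paper's setup):

    ```python
    # Schedule shiftable appliances against a time-of-use tariff with a small GA.
    # Chromosome = start hour of each appliance; fitness = total energy cost.
    import random

    HOURS = 24
    tariff = [0.10 if h < 7 or h >= 22 else 0.22 for h in range(HOURS)]  # EUR/kWh
    appliances = [{"power": 2.0, "duration": 2},    # e.g. a washing machine (kW, h)
                  {"power": 1.5, "duration": 3}]    # e.g. a dishwasher

    def cost(chromosome):
        total = 0.0
        for start, app in zip(chromosome, appliances):
            for h in range(start, start + app["duration"]):
                total += app["power"] * tariff[h % HOURS]
        return total

    def evolve(pop_size=20, generations=50, p_mut=0.2):
        pop = [[random.randrange(HOURS) for _ in appliances] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            parents = pop[: pop_size // 2]           # elitist truncation selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(appliances) + 1)
                child = a[:cut] + b[cut:]            # one-point crossover
                if random.random() < p_mut:
                    child[random.randrange(len(child))] = random.randrange(HOURS)
                children.append(child)
            pop = parents + children
        return min(pop, key=cost)

    best = evolve()
    print(best, cost(best))
    ```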
  • Towards a more analytical training of neural networks and neuro-fuzzy systems
    Publication . Ruano, Antonio; Cabrita, Cristiano Lourenço; Ferreira, P. M.
    When used for function approximation purposes, neural networks belong to a class of models whose parameters can be separated into linear and nonlinear, according to their influence on the model output. In this work we extend this concept to the case where the training problem is formulated as the minimization of the integral of the squared error over the input domain. With this approach, the gradient-based non-linear optimization algorithms require the computation of terms that are dependent only on the model and the input domain, and terms which are the projection of the target function on the basis functions and on their derivatives with respect to the nonlinear parameters. These latter terms can be numerically computed with the data provided. The use of this functional approach brings at least two advantages in comparison with the standard training formulation: firstly, computational complexity savings, as some terms are independent of the size of the data and matrix inverses or pseudo-inverses are avoided; secondly, as the performance surface obtained with this approach is closer to the one obtained with the true (typically unknown) function, gradient-based training algorithms have a better chance of finding models that produce a better fit to the underlying function.
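    A small numerical illustration of the performance-surface argument (a constructed example, not taken from the paper): sweeping one nonlinear parameter and comparing the discrete criterion on a small noisy sample with the integral of the squared error against the underlying function, which is what the functional formulation targets:

    ```python
    # Sweep a Gaussian centre and compare (i) the discrete sum of squared errors
    # on a small noisy sample with (ii) the integral of the squared error against
    # the underlying function, approximated on a dense grid over [0, 1].
    import numpy as np

    rng = np.random.default_rng(0)
    true_f = lambda x: np.exp(-0.5 * ((x - 0.3) / 0.2) ** 2)   # underlying function
    x_data = rng.uniform(0.0, 1.0, 15)                         # small, noisy sample
    y_data = true_f(x_data) + rng.normal(0.0, 0.05, x_data.size)
    x_grid = np.linspace(0.0, 1.0, 1001)

    model = lambda x, c: np.exp(-0.5 * ((x - c) / 0.2) ** 2)   # one nonlinear parameter
    centres = np.linspace(0.0, 1.0, 101)
    sse = [np.sum((y_data - model(x_data, c)) ** 2) for c in centres]          # discrete
    ise = [np.mean((true_f(x_grid) - model(x_grid, c)) ** 2) for c in centres] # ~ integral

    # Both minima lie near the true centre (0.3), but the integral-based surface
    # does not depend on the particular sample drawn.
    print(centres[np.argmin(sse)], centres[np.argmin(ise)])
    ```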