
Multi-Objective Training of Neural Networks

Author(s): M. P. Cuéllar (Universidad de Granada, Spain), Miguel Delgado (Universidad de Granada, Spain) and M. C. Pegalajar (Universidad de Granada, Spain)
Copyright: 2009
Pages: 7
Source title: Encyclopedia of Artificial Intelligence
Source Author(s)/Editor(s): Juan Ramón Rabuñal Dopico (University of A Coruña, Spain), Julian Dorado (University of A Coruña, Spain) and Alejandro Pazos (University of A Coruña, Spain)
DOI: 10.4018/978-1-59904-849-9.ch168


Abstract

Traditionally, applying a neural network (Haykin, 1999) to solve a problem has required several steps before the desired network is obtained: data preprocessing, model selection, topology optimization and, finally, training. Each of these tasks usually demands a large amount of computational time and human interaction, particularly topology optimization and network training. There have been many proposals to reduce the effort these tasks require and to provide experts with a robust methodology. For example, Giles et al. (1995) provide a constructive method to iteratively optimize the topology of a recurrent network. Other methods attempt to reduce the complexity of the network structure by removing unnecessary nodes and connections, as in (Morse, 1994). In recent years, evolutionary algorithms have emerged as promising tools for this problem, and many competitive approaches exist in the literature. For example, Blanco et al. (2001) proposed a master-slave genetic algorithm in which the master algorithm trains the network and the slave algorithm optimizes its size. For a general view of the problem and of the use of evolutionary algorithms for neural network training and optimization, we refer the reader to (Yao, 1999). Although the literature on genetic algorithms and neural networks is very extensive, we would like to highlight the recent popularity of multi-objective optimization (Coello et al., 2002; Jin, 2006), especially for the problem of simultaneous training and topology optimization of neural networks. These methods have performed suitably for this task in previous works, although most of them are proposed for feedforward models. They attempt to optimize the structure of the network (number of connections, hidden units or layers) while training the network at the same time. Multi-objective algorithms may provide important advantages in the simultaneous training and optimization of neural networks: they may force the search to return a set of optimal networks instead of a single one; they can speed up the optimization process; they may be preferred to a weight-aggregation procedure for handling the regularization problem in neural networks; and they are more suitable when the designer wishes to combine different error measures for training. A recent review of these techniques may be found in (Jin, 2006).
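To make the idea of simultaneous training and topology optimization concrete, below is a minimal sketch, not the chapter's algorithm, of casting the problem as a two-objective search. The setup is assumed for illustration only: a toy sine-regression task, one-hidden-layer tanh networks, mutation-only evolution of the weights, the objective pair (training MSE, number of hidden units), and a plain non-dominated filter in place of a full multi-objective evolutionary algorithm such as NSGA-II.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: approximate y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X)

def make_net(hidden):
    """Random one-hidden-layer tanh network with `hidden` units (illustrative encoding)."""
    return {"W1": rng.normal(0.0, 1.0, (1, hidden)),
            "b1": rng.normal(0.0, 1.0, hidden),
            "W2": rng.normal(0.0, 1.0, (hidden, 1)),
            "b2": rng.normal(0.0, 1.0, 1)}

def mse(net):
    """Objective 1: training error (mean squared error)."""
    h = np.tanh(X @ net["W1"] + net["b1"])
    pred = h @ net["W2"] + net["b2"]
    return float(np.mean((pred - y) ** 2))

def size(net):
    """Objective 2: structural complexity, here simply the number of hidden units."""
    return net["W1"].shape[1]

def mutate(net, sigma=0.1):
    """Evolutionary 'training' step: Gaussian perturbation of all weights."""
    return {k: v + rng.normal(0.0, sigma, v.shape) for k, v in net.items()}

def dominates(a, b):
    """Pareto dominance: a is no worse than b in both objectives and better in at least one."""
    return all(x <= z for x, z in zip(a, b)) and any(x < z for x, z in zip(a, b))

# Initial population mixes several topologies so structure and weights evolve together.
population = [make_net(h) for h in (2, 4, 8, 16) for _ in range(5)]

for _ in range(200):  # crude (mu + lambda) loop with Pareto-based selection
    candidates = population + [mutate(n) for n in population]
    scores = [(mse(n), size(n)) for n in candidates]
    nondom = [i for i, s in enumerate(scores)
              if not any(dominates(scores[j], s) for j in range(len(scores)) if j != i)]
    rest = [i for i in range(len(candidates)) if i not in nondom]
    population = [candidates[i] for i in (nondom + rest)[:20]]

# The result is a set of error/complexity trade-offs, not a single network.
front = sorted({(round(mse(n), 4), size(n)) for n in population})
print("Approximate Pareto front (MSE, hidden units):", front)
```

The printed front illustrates the advantage stressed in the abstract: rather than fixing a single weight-aggregation (regularization) coefficient to trade accuracy against network size, the search returns a set of non-dominated networks and the designer selects among the trade-offs afterwards.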
