Learning in Feed-Forward Artificial Neural Networks I
Abstract
The view of artificial neural networks as adaptive systems has led to the development of ad hoc generic procedures known as learning rules. The first of these is the Perceptron Rule (Rosenblatt, 1962), useful for single-layer feed-forward networks and linearly separable problems. Its simplicity and elegance, together with the existence of a convergence theorem, made it a basic point of departure for neural learning algorithms. This algorithm is a particular case of the Widrow-Hoff or delta rule (Widrow & Hoff, 1960), applicable to continuous networks without hidden layers whose error function is quadratic in the parameters.
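The Perceptron Rule described above can be illustrated with a minimal sketch. On a mismatch between the target t and the thresholded output y, each weight is nudged by lr * (t - y) * x; the convergence theorem guarantees the loop terminates on linearly separable data such as logical AND. The function names, the learning rate, and the AND example are illustrative choices, not taken from the article.

```python
def step(z):
    """Threshold activation: 1 if the weighted sum is non-negative, else 0."""
    return 1 if z >= 0 else 0

def train_perceptron(samples, targets, lr=0.1, epochs=100):
    """Single-layer perceptron; each sample carries a leading bias input of 1."""
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        errors = 0
        for x, t in zip(samples, targets):
            y = step(sum(wi * xi for wi, xi in zip(w, x)))
            if y != t:
                errors += 1
                # Perceptron Rule: move the weights toward the target output.
                w = [wi + lr * (t - y) * xi for wi, xi in zip(w, x)]
        if errors == 0:
            break  # converged: every sample is classified correctly
    return w

# Logical AND, a linearly separable problem, with a bias input prepended.
X = [[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
T = [0, 0, 0, 1]
w = train_perceptron(X, T)
preds = [step(sum(wi * xi for wi, xi in zip(w, x))) for x in X]
```

After training, `preds` matches the targets `T`; on a non-separable problem such as XOR the loop would instead exhaust its epoch budget, which is why multilayer networks and the delta rule become necessary.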