
Neural Networks and Graph Transformations

Author(s): Ingrid Fischer (University of Konstanz, Germany)
Copyright: 2009
Pages: 6
Source title: Encyclopedia of Data Warehousing and Mining, Second Edition
Source Author(s)/Editor(s): John Wang (Montclair State University, USA)
DOI: 10.4018/978-1-60566-010-3.ch217


Abstract

The introduction of the artificial neuron by McCulloch and Pitts, who were inspired by the biological neuron, is considered the beginning of the field of artificial neural networks. Since then, many new networks and new learning algorithms for neural networks have been invented. As a result, most textbooks on (artificial) neural networks give no general definition of what a neural net is, but rather an example-based introduction leading from the biological model to some artificial successors. Perhaps the most promising approach to defining a neural network is to see it as a network of many simple processors (“units”), each possibly having a small amount of local memory. The units are connected by communication channels (“connections”) that usually carry numeric (as opposed to symbolic) data, called the weight of the connection. The units operate only on their local data and on the inputs they receive via the connections. Neural networks typically have great potential for parallelism, since the computations of the components are largely independent of each other. Typical application areas are:

• capturing associations or discovering regularities within a set of patterns;
• applications where the number of variables or the diversity of the data is very great;
• applications where the relationships between variables are only vaguely understood; or
• applications where the relationships are difficult to describe adequately with conventional approaches.

Neural networks are not programmed but trained, and this can happen in different ways. In supervised learning, examples are presented to an initialized net, and the net learns from the inputs and outputs of these examples. There are as many learning algorithms as there are types of neural nets. Learning, too, is motivated physiologically: when an example is presented to a neural network that the network cannot yet reproduce correctly, several different steps are possible. The neuron’s data is changed, the connection’s weight is changed, or new connections and/or neurons are inserted. Introductory books on neural networks include (Graupe, 2007; Colen, Kuehn & Sollich, 2005).

There are many advantages and limitations to neural network analysis, and to discuss this subject properly one must look at each individual type of network. Nevertheless, there is one specific limitation potential users should be aware of: neural networks are, more or less depending on the type, the ultimate “black boxes”. The final result of the learning process is a trained network that provides no equations or coefficients defining a relationship beyond its own internal mathematics.

Graphs are widely used concepts within computer science; in nearly every field, graphs serve as a tool for visualization, for summarizing dependencies, for explaining connections, and so on. Famous examples are the many kinds of nets and graphs, such as semantic nets, Petri nets, flow charts, interaction diagrams and neural networks, the focus of this chapter. Invented 35 years ago, graph transformations have been expanding constantly, and wherever graphs are used, graph transformations are also applied (Rozenberg, 1997; Ehrig, Engels, Kreowski & Rozenberg, 1999; Ehrig, Kreowski, Montanari & Rozenberg, 1999; Ehrig, Ehrig, Prange & Taentzer, 2006).
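To make the abstract’s working definition concrete, the following sketch treats a single unit as a simple processor that combines the numeric weights on its incoming connections with its inputs, and shows a supervised update on labeled examples. This is a minimal illustration, not code from the chapter; the class name, the threshold activation and the perceptron-style learning rule are assumptions chosen for brevity.

```python
# Minimal sketch (not from the chapter): a "unit" that operates only on the
# inputs arriving over weighted connections, plus a perceptron-style
# supervised update. All names and the learning rule are illustrative.

class Unit:
    def __init__(self, weights, bias=0.0):
        self.weights = list(weights)   # one numeric weight per incoming connection
        self.bias = bias               # small amount of local memory

    def fire(self, inputs):
        # The unit sees only its local data and the values on its connections.
        s = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1 if s > 0 else 0

def train_supervised(unit, examples, rate=0.1, epochs=20):
    # Present (input, desired output) examples to an initialized unit and
    # adjust the connection weights whenever the output is wrong.
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - unit.fire(inputs)
            if error:
                unit.weights = [w + rate * error * x
                                for w, x in zip(unit.weights, inputs)]
                unit.bias += rate * error
    return unit

# Example: learn logical AND from four labeled patterns.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
unit = train_supervised(Unit([0.0, 0.0]), examples)
print([unit.fire(x) for x, _ in examples])   # expected: [0, 0, 0, 1]
```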
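The abstract also notes that learning may change weights or insert new connections and neurons; such steps can be read as transformations of the network’s underlying graph. The sketch below, again only illustrative and not the chapter’s formalism, encodes a small net as a weighted directed graph and applies one hand-written rewrite rule that splices a new neuron into an existing connection; the adjacency-map representation and the rule’s interface are assumptions.

```python
# Minimal sketch (not from the chapter): a neural net as a weighted directed
# graph, and one hand-coded graph-transformation rule that inserts a new
# neuron on an existing connection.

def make_net():
    # adjacency map: edges[source][target] = connection weight
    return {"in1": {"out": 0.5}, "in2": {"out": -0.3}, "out": {}}

def insert_neuron(edges, src, dst, new, weight_in=1.0, weight_out=None):
    """Rewrite rule: replace the edge src -> dst by src -> new -> dst.

    Left-hand side: an edge src -> dst must be present (the match).
    Right-hand side: a fresh node 'new' spliced into that connection.
    """
    if dst not in edges.get(src, {}):
        raise ValueError(f"no connection {src} -> {dst} to rewrite")
    old_weight = edges[src].pop(dst)
    edges.setdefault(new, {})
    edges[src][new] = weight_in
    edges[new][dst] = old_weight if weight_out is None else weight_out
    return edges

net = make_net()
insert_neuron(net, "in1", "out", "hidden1")
print(net)
# {'in1': {'hidden1': 1.0}, 'in2': {'out': -0.3}, 'out': {}, 'hidden1': {'out': 0.5}}
```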
