
Encyclopedia of Artificial Intelligence

Author(s)/Editor(s): Juan Ramón Rabuñal Dopico (University of A Coruña, Spain), Julian Dorado (University of A Coruña, Spain), and Alejandro Pazos (University of A Coruña, Spain)
Copyright: ©2009
DOI: 10.4018/978-1-59904-849-9
ISBN13: 9781599048499
ISBN10: 1599048493
EISBN13: 9781599048505



Description

Numerous real-life problems that are so complex as to be virtually insoluble with traditional computer science techniques can now be approached effectively with artificial intelligence techniques. In recent years, artificial intelligence technologies have evolved at a rapid pace, making it a significant challenge to stay current.

The Encyclopedia of Artificial Intelligence is a comprehensive and in-depth reference to the most recent developments in the field, covering theoretical developments, techniques, technologies, and applications of systems with intelligent characteristics such as adaptability, automatic learning, classification, prediction, and even artistic creation, among others. Authored by over 400 of the world's leading experts from more than 40 countries, this commanding reference of 237 articles is essential to any research library's technology holdings.



Table of Contents


Preface

1. INTRODUCTION

Renaissance comedies included a fleeting part, called the "prologue", that vanished as soon as the action started. These preliminary remarks aim to play the part of that "prologue".

This encyclopaedia provides the latest, fundamental and properly validated knowledge that anyone, not necessarily a specialist in artificial intelligence (AI), needs in order to qualify as one. Although not all the international AI authorities have contributed, which would have been an impossible feat, all the contributors are authorities: each chapter is written by a leading world specialist in the subject matter. The encyclopaedia addresses a comprehensive and structured list of subjects with hardly any redundancy, which is no easy achievement in a work of this kind. However, one sign of a good book is that it inspires others to write more and better things, and I, for one, am convinced that this encyclopaedia will spawn an endless number of texts that will render it obsolete in no time. This will be its greatest glory.

An encyclopaedic volume is, by its very nature, confined to pondering the past and the present rather than looking into the future. However, and this is why I was asked to write this prologue, it should also give a glimpse of what is to come. I rise to this challenge by first underlining the importance of AI in section 2, and then dealing with AI's defects and failings in section 3. Finally, in section 4, I propose a theory that is evidently valid for AI. This theory is not, however, confined to this field: it is applicable to any information system, be it intelligent or otherwise, natural or artificial.

2. IMPORTANCE OF ARTIFICIAL INTELLIGENCE

The first point is that there remain only two truly profound problems for science to solve: 1) the origin of the universe, and its development and future; and 2) the nature of the three levels of consciousness (self-consciousness, intentionality and qualia), which is perhaps the most disconcerting property of matter (Atkins, 2003). In actual fact, the physicist and joint discoverer of DNA's double helix structure, Francis Harry Compton Crick (Crick, 1994), Nobel prize-winner for Medicine, and the physician Rita Levi Montalcini view research into the brain as being even more important than the investigation of the universe. Now, AI will necessarily play a role in solving both problems, at least according to leading, Nobel prize-winning scientists. Suffice it to say, with respect to the first, cosmological problem, that, according to Shklovsky, Kardashev (Kardashev, 1964) and Dyson (Dyson, 1979), if potential extraterrestrial civilizations are classified according to the level of development they have achieved, the Earth would be at level 0, whereas the most advanced, level IV, would use AI techniques. The astronomer Frank J. Tipler (Tipler, 1980) was making a similar point when he stated that it was the delay in computer, not rocket, technology that was preventing the human race from exploring the Galaxy. With regard to the second problem, note that someone as important as Rita Levi Montalcini (Levi, 2000), awarded the Nobel prize for Physiology, considered the three research lines into the brain and mind that have produced the most innovative results to be neuroscience, cognitive science and AI.

3. STATE OF THE ART

Now that AI's importance in solving the two real problems facing science is clear, it is worthwhile asking, "Is AI today in a position to take up this challenge successfully?" The answer is, unfortunately, no, and for many different reasons. The most important follow:

The sterile reductionism of assuming that intelligence is merely the ability to reason validly. To start with, whatever intelligence may be, it differs from instinct and from the stimulus-response association in at least the following behaviours: prediction; adaptation, or the right response to change; intentionality; and serendipity, or the ability to find by "chance" something better than what one was looking for; apart from reasoning, of course. And, more importantly, of all the possible "pure" inference modes successfully and routinely used by human beings (abduction, deduction, induction and retroduction), only one, deduction, has in practice been used up to now, and even then subject to constraints, basically in the form of modus ponens.

The ambiguity of the term artificial, which is used in two very different senses. As Sokolowski (Sokolowski, 1988) pointed out, the term artificial means one thing when it refers to grass and a completely different thing when applied, for instance, to light. In both cases, the adjective artificial qualifies something that is manufactured. In the case of grass, artificial means that the thing the adjective qualifies merely appears to be, that is, simulates, but is not really, what it seems: it masquerades as something else, and anyone who took artificial grass to be grass would be wrong. Artificial light, however, really is light: it illuminates, it is composed of photons, it travels at 300,000 km/s, etc. Evidently, it is generated as a substitute for natural light, but once it has been produced, it is what it appears to be.

Inductivism. Almost without exception, AI research is guilty of inductivism. It is thought that if one program solves one particular problem, another solves another, and so on, it will be possible to establish, by induction, a law that accounts for intelligence (Nilsson, 1974). This is a mistaken approach that leads to many dead ends. As is common knowledge today, all sciences, and especially the hard sciences, work in quite the opposite way: they aim to create theories from just a few principles. Relativity is founded on three premises, quantum mechanics on four, the Maxwell-Hertz theory on four, Darwin and Wallace's theory of evolution on two, etc. The best AI has managed to do, however, is to establish two approaches which, stretching it a bit, could be termed hypotheses: connectionism versus symbolism, which are, moreover, "contradictory".

Hardware versus Software. Looking at the evolution of microprocessors, which conforms reasonably well to Moore's law, we find that this technology really obeys a doubly exponential law. This means that, if this pace keeps up, by around 2020 computers will have the computing power of the human brain. Additionally, according to Seth Lloyd's calculations, based on earlier estimates by Bekenstein, Bremermann and Levitin (Lloyd, 2000), when technology breaks the quantum barrier of today's computers, tomorrow's quantum computers will, in theory, be limited, for every kilogram of matter, to a memory capacity of approximately 10^31 bits and a computational capacity of 10^50 operations per second. The corresponding limits of the brain are currently 10^20 bits and 10^16 operations per second, that is, yotta bits and peta operations per second. As yotta is now the highest prefix for naming orders of magnitude of the units of measurement, at least two new prefixes would be needed to designate that memory capacity and nine to designate that computational capacity, taking into account that, at such levels, the orders of magnitude go three by three.
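
As a rough, order-of-magnitude check of the 10^50 figure cited above, the short sketch below recomputes the operations-per-second limit for one kilogram of matter. It assumes the bound used in Lloyd's calculation, namely at most 2E/(πħ) elementary operations per second for a system of energy E = mc²; the 10^31-bit memory figure depends on an entropy bound and is not recomputed here.

```python
# Order-of-magnitude check of the ~10^50 operations-per-second limit cited from
# Lloyd (2000), assuming the bound of at most 2E / (pi * hbar) operations per
# second for a system whose energy is E = m * c^2 (one kilogram of matter).
import math

C = 2.998e8        # speed of light, m/s
HBAR = 1.055e-34   # reduced Planck constant, J*s
MASS = 1.0         # one kilogram of matter

energy = MASS * C ** 2                          # rest-mass energy, ~9.0e16 J
ops_per_second = 2 * energy / (math.pi * HBAR)  # Margolus-Levitin-type bound

print(f"rest-mass energy      ~ {energy:.2e} J")
print(f"operations per second ~ {ops_per_second:.1e}")  # ~5.4e50, i.e. the 10^50 quoted above
```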

Software development, on the other hand, progresses at most linearly. If parameters such as learning and the rate at which products remain in the market are taken into account, software development is even in decline. This is not to mention the so-called "kludge" programs, designed in the style of cartoonist Rube Goldberg, who drew complex cartoon devices to perform simple tasks. "Kludge" programs, designed and built without foresight or any underlying "theory", end up full of gratuitous and useless complexities, often to the point that not even their designers can understand them.

And although there is no single cause for this remarkable difference between the evolution of software and hardware, the non-existence of a theory underlying proper software and knowledge engineering is most likely at its root. Therefore, as it is better to offer a good example than good words, or, as the saying goes, "actions speak louder than words", the next section briefly outlines a theory that extends to all information systems, taking into account the preceding diagnosis.

4. PROPOSED THEORY

This theory is composed of just two constructs: holons and informons. Holons, which can be both part and whole (hence the name), and which are autonomous, recursive, open, self-organizing, cooperative and stable, are abstractions that can take the form of, for example, a production system or an agent or agents in AI, DNA, a codon/amino acid conversion table, a Turing machine, a Turing machine action table, etc. (Pazos, 2007). Informons, in turn, represent an abstraction of everything that is generically called information, which can range from a signal, through signs, data, news and their supports, to knowledge. Not only does information exist, it is also one of the basic building blocks of reality. The other two are matter and energy, which have been equivalent ever since Einstein published his famous equation, E = ±mc², in 1905, referred to as the "destiny of humankind".

These three components of the universe are related to each other as shown in Figure 1 (Stonier, 1990). The problem with information is that it is one of those notions and terms that everyone claims to understand; however, the closer or more carefully you look, the more elusive it becomes. It is, in actual fact, a bit like energy, whose precise definition always ends up slipping through your fingers. Now, informons can range from a simple data item or variable value, through a database or knowledge map(s), to a system of ontologies in AI and software engineering. On the other hand, the different bases (adenine, guanine, cytosine and thymine), which make up the codons, are DNA's informons, whereas the ones and blanks on an infinite tape play this role in a Turing machine.

Incidentally, since we have mentioned Einstein's formula: we have used the plus-minus sign here even though both Einstein and his followers omitted the minus sign. Taking into account that the formula actually stems from a square root, this omission is wrong, because Bhaskara proved back in the 12th century that any square root has two signs. Moreover, it was thanks to the use of the negative value that Dirac discovered antimatter.

The author of this prologue and other colleagues (Alonso, 2008) have suggested that holons and informons are the underlying elements of an information theory in any domain in which information is essential (Alonso, 2004), as is the case of the brain, DNA and, as mentioned, computer science. This section takes up this proposal and presents its application and particularisation to the last item on this list, that is, the development, by formulating postulates, of a theory for computer science based on holons and informons. These postulates or principles are as follows:


    P1. Complementariness. Any information system, be it biological or artificial, is composed of just two elements: informons and holons. Formally, IS={I, H}.

    P2. Computation. The computation "operation" ∘ can be established in any information system. This operation involves transforming input holons, or holons and informons, into output holons and informons; that is, holons (H) interrelate with holons or informons (I) to produce new holons (H′) and informons (I′). So, the possible results of holons operating with holons or informons are: a new holon, a new informon, a new holon and a new informon, or several new holons and/or informons (a minimal code sketch of this operation follows this list). Formally, this postulate is expressed in set theory as follows: (P(H) \ {∅}) ∘ ((P(H) \ {∅}) ∪ (P(I) \ {∅})) → P(H′ ∪ I′) \ {∅}, where P(S), the set of parts of S, is the set of all subsets of S, that is, P(S) = {X : X ⊆ S}.

    P3. Satisfiability. Any information system is governed by satisfiability rather than optimisation or maximisation criteria.
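
To make postulates P1 and P2 concrete, the following is a minimal sketch in Python, under the assumption that informons can be modelled as plain data values and holons as named transformations; the class names and the toy compiler behaviour are illustrative inventions, not terminology or code from the referenced works.

```python
# Hypothetical sketch of P1 and P2: an information system IS = {I, H}, in which
# holons operate on informons to yield new holons and/or new informons.
from dataclasses import dataclass
from typing import Callable, Tuple


@dataclass(frozen=True)
class Informon:
    """Abstraction of information: a signal, a datum, source code, a warning, ..."""
    content: str


@dataclass(frozen=True)
class Holon:
    """Part/whole abstraction that can operate on informons (and other holons)."""
    name: str
    transform: Callable[[Informon], Tuple["Holon", Informon]]

    def compute(self, informon: Informon) -> Tuple["Holon", Informon]:
        # P2: holon ∘ informon -> new holon and new informon.
        return self.transform(informon)


def _identity(i: Informon) -> Tuple[Holon, Informon]:
    return IDENTITY, i


IDENTITY = Holon(name="identity", transform=_identity)


def _compile(source: Informon) -> Tuple[Holon, Informon]:
    # Compiler example used later in the text: compiler (holon) ∘ source code
    # (informon) -> executable code (new holon) + warnings (new informon).
    executable = Holon(name=f"executable[{source.content}]", transform=_identity)
    warnings = Informon(content="warning: illustrative diagnostic only")
    return executable, warnings


compiler = Holon(name="compiler", transform=_compile)
exe, warn = compiler.compute(Informon("print('hello')"))
print(exe.name)      # a new holon: executable[print('hello')]
print(warn.content)  # a new informon: the warnings
```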

The definition of the above postulates accounts for the following two essential features:

Computation or, as shown in Table 1, the consideration of computer science at different levels of abstraction, which is essential for achieving a theory that covers this field in its entirety. Table 2 lists some examples that illustrate the range of cases that the second postulate covers. Its versatility can also be appreciated by looking at other examples, such as the warnings (informon) and the executable code (holon) that a compiler (holon) generates from source code (informon). This is an example of how a holon interrelates with an informon to produce a new holon and a new informon.

Holism or the consideration of holarchies and emergent properties. The paper referenced in Table 1 on integration of functions is especially illustrative on this point. This paper shows how a holon "operated" with another holon can produce a new holon (H ∘ H → H′) with properties that were not present, either implicitly or explicitly, in the original holons. In other words, a holarchy can originate emergent properties in the Aristotelian sense of holism: the whole is more than the sum of its parts.
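
As a small illustration of this holarchy idea, the hypothetical sketch below composes two holons (a toy parser and a toy code generator) into a new holon whose end-to-end capability, translating source text into instructions, is present in neither component on its own; the names and behaviours are invented for illustration and are not taken from the referenced paper.

```python
# Hypothetical sketch of a holarchy (H ∘ H -> H'): composing two holons yields a
# new holon with a capability that neither component has by itself.
from typing import Callable, List

Holon = Callable[[object], object]  # here, a holon is simply a transformation


def parser(source: str) -> List[str]:
    """Holon 1: turns source text (an informon) into a token list (an informon)."""
    return source.replace("(", " ( ").replace(")", " ) ").split()


def code_generator(tokens: List[str]) -> List[str]:
    """Holon 2: turns a token list into 'instructions' for a toy stack machine."""
    return [f"MARK {t}" if t in ("(", ")") else f"PUSH {t}" for t in tokens]


def compose(h1: Holon, h2: Holon) -> Holon:
    """The holarchy: a new holon H' that applies h1, then h2."""
    return lambda informon: h2(h1(informon))


# Neither the parser nor the code generator can translate source text into
# instructions on its own; the composed holon can, which is the (loose) sense
# of emergence used in the text: the whole is more than the sum of its parts.
toy_compiler = compose(parser, code_generator)
print(toy_compiler("print (x)"))  # ['PUSH print', 'MARK (', 'PUSH x', 'MARK )']
```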


Reviews and Testimonials

This publication has been indexed in the DBLP Computer Science Bibliography.


This encyclopedia provides the latest, fundamental and properly validated knowledge for anyone, not necessarily a specialist in artificial intelligence (AI), to qualify as such. One sign of a good book is that it inspires others to write more and better things, and I am convinced that this encyclopedia will spawn an endless number of texts that will render it obsolete in no time. This will be its greatest glory.

– Juan Ramón Rabuñal Dopico, University of A Coruña, Spain

This three-volume encyclopedia set is an in-depth reference to recent developments in the field of artificial intelligence, covering theory, techniques, technologies, and applications of systems using intelligent characteristics.

– Book News Inc. (December 2008)

Author's/Editor's Biography

Juan Ramón Rabuñal Dopico (Ed.)
Juan Ramón Rabuñal Dopico is an associate professor in the Department of Information and Communications Technologies at the University of A Coruña (Spain). He finished his studies in computer engineering in 1996 and received his PhD in computer science in 2002 with the thesis "Methodology for the Development of Knowledge Extraction Systems in ANNs". He has worked on several Spanish and European projects and has published many books and papers in international journals. He is currently working in the areas of evolutionary computation, artificial neural networks, and knowledge extraction systems.

Julian Dorado (Ed.)
Julian Dorado is an associate professor in the Faculty of Computer Science at the University of A Coruña. He completed his degree in computer science in 1994 and received his PhD, with a special mention as European Doctor, in 1999. In 2004, he completed a degree in biology. He has taught at the university for more than eight years and has published many books and papers in journals and at international conferences. He is currently working on bioinformatics, evolutionary computation, artificial neural networks, computer graphics, and data mining.

Alejandro Pazos (Ed.)
Alejandro Pazos is a professor in the Faculty of Computer Science at the University of A Coruña. He was born in Padrón in 1959. He received his MD from the Faculty of Medicine at the University of Santiago de Compostela in 1987, a master's degree in knowledge engineering in 1989 and a PhD in computer science in 1990 from the Polytechnic University of Madrid, and a PhD in medicine in 1996 from the Complutense University of Madrid. He has worked with research groups at the Georgia Institute of Technology, Harvard Medical School, Stanford University, the Polytechnic University of Madrid, and elsewhere. He founded and directs the Artificial Neural Networks and Adaptive Systems research laboratory in the Faculty of Computer Science and is also co-director of the Medical Informatics and Radiology Diagnostic Center at the University of A Coruña.

