
Neural Information Processing Systems

The Biology, Intelligence, and Technology of Self-Organizing Machines

Evolutionary robotics is a new technique for the automatic creation of autonomous robots. Inspired by the Darwinian principle of selective reproduction of the fittest, it views robots as autonomous artificial organisms that develop their own skills in close interaction with the environment and without human intervention. Drawing heavily on biology and ethology, it uses the tools of neural networks, genetic algorithms, dynamic systems, and biomorphic engineering. The resulting robots share with simple biological systems the characteristics of robustness, simplicity, small size, flexibility, and modularity.

In evolutionary robotics, an initial population of artificial chromosomes, each encoding the control system of a robot, is randomly created and put into the environment. Each robot is then free to act (move, look around, manipulate) according to its genetically specified controller while its performance on various tasks is automatically evaluated. The fittest robots then "reproduce" by swapping parts of their genetic material with small random mutations. The process is repeated until the "birth" of a robot that satisfies the performance criteria.
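The loop described above — random initial population, fitness evaluation, selective reproduction with crossover and small mutations, repeat until a satisfactory individual appears — can be sketched as a minimal genetic algorithm. The binary chromosomes, population size, rates, and "one-max" fitness task below are illustrative assumptions, not the setup used in the book.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def evolve(fitness, genome_len=16, pop_size=20, generations=50,
           mutation_rate=0.05, target=None):
    """Minimal genetic algorithm: binary chromosomes, truncation
    selection, one-point crossover, and bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate every individual; keep the fittest half as parents.
        pop.sort(key=fitness, reverse=True)
        if target is not None and fitness(pop[0]) >= target:
            break  # "birth" of an individual meeting the criterion
        parents = pop[:pop_size // 2]
        # "Reproduce" by swapping genetic material (one-point crossover)
        # with small random mutations.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < mutation_rate else g
                             for g in child])
        pop = parents + children
    return max(pop, key=fitness)

# Toy task: maximize the number of 1-bits in the chromosome.
best = evolve(fitness=sum, target=16)
```

In a real evolutionary-robotics experiment the fitness call would, of course, run the encoded controller on a robot and score its behavior rather than count bits.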

This book describes the basic concepts and methodologies of evolutionary robotics and the results achieved so far. An important feature is the clear presentation of a set of empirical experiments of increasing complexity. Software with a graphic interface, freely available on a Web page, will allow the reader to replicate and vary (in simulation and on real robots) most of the experiments.

As book review editor of the IEEE Transactions on Neural Networks, Mohamad Hassoun has had the opportunity to assess the multitude of books on artificial neural networks that have appeared in recent years. Now, in Fundamentals of Artificial Neural Networks, he provides the first systematic account of artificial neural network paradigms by identifying clearly the fundamental concepts and major methodologies underlying most of the current theory and practice employed by neural network researchers.

Such a systematic and unified treatment, although sadly lacking in most recent texts on neural networks, makes the subject more accessible to students and practitioners. Here, important results are integrated in order to more fully explain a wide range of existing empirical observations and commonly used heuristics. There are numerous illustrative examples, over 200 end-of-chapter analytical and computer-based problems that will aid in the development of neural network analysis and design skills, and a bibliography of nearly 700 references.

Proceeding in a clear and logical fashion, the first two chapters present the basic building blocks and concepts of artificial neural networks and analyze the computational capabilities of the basic network architectures involved. Supervised, reinforcement, and unsupervised learning rules in simple nets are brought together in a common framework in chapter three. The convergence and solution properties of these learning rules are then treated mathematically in chapter four, using the "average learning equation" analysis approach. This organization of material makes it natural to move on to learning in multilayer nets using backpropagation and its variants, described in chapter five. Chapter six covers most of the major neural network paradigms, while associative memories and energy-minimizing nets are given detailed coverage in the next chapter. The final chapter takes up Boltzmann machines and Boltzmann learning along with other global search/optimization algorithms such as stochastic gradient search, simulated annealing, and genetic algorithms.
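As one concrete instance of the supervised learning rules for simple nets that such a framework unifies, here is a sketch of the classic perceptron rule. The AND-gate training data, learning rate, and epoch count are illustrative choices, not an example taken from the book.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: w <- w + lr * (target - output) * x,
    with a threshold output unit and a trainable bias."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            # Threshold activation: fire iff the net input is positive.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y
            # Error-correction update (no change when the output is right).
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Linearly separable AND function: converges by the perceptron theorem.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Because AND is linearly separable, the rule is guaranteed to stop making weight changes after finitely many mistakes; XOR, by contrast, would cycle forever in a single-layer net, which motivates the multilayer methods of chapter five.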

It is now clear that the brain is unlikely to be understood without recourse to computational theories. The theme of An Introduction to Natural Computation is that ideas from diverse areas such as neuroscience, information theory, and optimization theory have recently been extended in ways that make them useful for describing the brain's programs. This book provides a comprehensive introduction to the computational material that forms the underpinnings of the currently evolving set of brain models. It stresses the broad spectrum of learning models—ranging from neural network learning through reinforcement learning to genetic learning—and situates the various models in their appropriate neural context.

To write about models of the brain before the brain is fully understood is a delicate matter. Very detailed models of the neural circuitry risk losing track of the task the brain is trying to solve. At the other extreme, models that represent cognitive constructs can be so abstract that they lose all relationship to neurobiology. An Introduction to Natural Computation takes the middle ground and stresses the computational task while staying near the neurobiology.

Methods, Models, and Conceptual Issues

An Invitation to Cognitive Science provides a point of entry into the vast realm of cognitive science by treating in depth examples of issues and theories from many subfields. The first three volumes of the series cover Language, Visual Cognition, and Thinking.

Volume 4, Methods, Models, and Conceptual Issues, expands the series in new directions. The chapters span many areas of cognitive science—including artificial intelligence, neural network models, animal cognition, signal detection theory, computational models, reaction-time methods, and cognitive neuroscience. The volume also offers introductions to several general methods and theoretical approaches for analyzing the mind, and shows how some of these approaches are applied in the development of quantitative models.

Rather than general and inevitably superficial surveys of areas, the contributors present "case studies"—detailed accounts of one or two achievements within an area. The goal is to tell a good story, challenging the reader to embark on an intellectual adventure.

Daniel N. Osherson, general editor

A Connectionist Perspective on Development


Rethinking Innateness asks the question, "What does it really mean to say that a behavior is innate?" The authors describe a new framework in which interactions, occurring at all levels, give rise to emergent forms and behaviors. These outcomes often may be highly constrained and universal, yet are not themselves directly contained in the genes in any domain-specific way.

One of the key contributions of Rethinking Innateness is a taxonomy of ways in which a behavior can be innate. These include constraints at the level of representation, architecture, and timing; typically, behaviors arise through the interaction of constraints at several of these levels. The ideas are explored through dynamic models inspired by a new kind of "developmental connectionism," a marriage of connectionist models and developmental neurobiology, forming a new theoretical framework for the study of behavioral development.


A Handbook for Connectionist Simulations

This book is the companion volume to Rethinking Innateness: A Connectionist Perspective on Development (The MIT Press, 1996), which proposed a new theoretical framework to answer the question "What does it mean to say that a behavior is innate?" The new work provides concrete illustrations—in the form of computer simulations—of properties of connectionist models that are particularly relevant to cognitive development. This enables the reader to pursue in depth some of the practical and empirical issues raised in the first book. The authors' larger goal is to demonstrate the usefulness of neural network modeling as a research methodology.

The book comes with a complete software package, including demonstration projects, for running neural network simulations on both Macintosh and Windows 95. It also contains a series of exercises in the use of the neural network simulator provided with the book. The software is also available to run on a variety of UNIX platforms.

 

Elements of Artificial Neural Networks provides a clearly organized general introduction, focusing on a broad range of algorithms, for students and others who want to use neural networks rather than simply study them.

The authors, who have been developing and team teaching the material in a one-semester course over the past six years, describe most of the basic neural network models (with several detailed solved examples) and discuss the rationale and advantages of the models, as well as their limitations. The approach is practical and open-minded and requires very little mathematical or technical background. Written from a computer science and statistics point of view, the text stresses links to contiguous fields and can easily serve as a first course for students in economics and management.

The opening chapter sets the stage, presenting the basic concepts in a clear and objective way and tackling important—yet rarely addressed—questions related to the use of neural networks in practical situations. Subsequent chapters on supervised learning (single layer and multilayer networks), unsupervised learning, and associative models are structured around classes of problems to which networks can be applied. Applications are discussed along with the algorithms. A separate chapter takes up optimization methods.

The most frequently used algorithms, such as backpropagation, are introduced early on, right after perceptrons, so that these can form the basis for initiating course projects. Algorithms published as late as 1995 are also included. All of the algorithms are presented using block-structured pseudo-code, and exercises are provided throughout. Software implementing many commonly used neural network algorithms is available at the book's website.
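To give a feel for what a block-structured presentation of backpropagation amounts to, here is a compact sketch of one stochastic-gradient step for a one-hidden-layer sigmoid network, trained on the canonical XOR task. The 2-3-1 architecture, learning rate, seed, and epoch count are all illustrative assumptions, not the book's own pseudo-code.

```python
import math
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(x, t, W1, b1, W2, b2, lr=0.5):
    """One stochastic-gradient step for a one-hidden-layer sigmoid net
    minimizing the squared error 0.5 * (y - t)^2."""
    # Forward pass: hidden activations, then the single output.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    # Backward pass: output delta, then hidden deltas via the chain rule.
    dy = (y - t) * y * (1.0 - y)
    dh = [dy * w * hi * (1.0 - hi) for w, hi in zip(W2, h)]
    # Gradient-descent updates for both layers.
    W2 = [w - lr * dy * hi for w, hi in zip(W2, h)]
    b2 -= lr * dy
    W1 = [[w - lr * d * xi for w, xi in zip(row, x)]
          for row, d in zip(W1, dh)]
    b1 = [b - lr * d for b, d in zip(b1, dh)]
    return W1, b1, W2, b2, 0.5 * (y - t) ** 2

# Train a 2-3-1 network on XOR, tracking total loss per epoch.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0] * 3
W2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 0.0
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
epoch_losses = []
for _ in range(3000):
    total = 0.0
    for x, t in data:
        W1, b1, W2, b2, loss = backprop_step(x, t, W1, b1, W2, b2)
        total += loss
    epoch_losses.append(total)
```

The per-epoch loss should fall as training proceeds; XOR is the standard demonstration that a hidden layer lets gradient descent solve a problem no single-layer perceptron can.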

Transparency masters, including abbreviated text and figures for the entire book, are available for instructors using the text.

Downloadable instructor resources available for this title: solution manual

Thinking

An Invitation to Cognitive Science provides a point of entry into the vast realm of cognitive science, offering selected examples of issues and theories from many of its subfields. All of the volumes in the second edition contain substantially revised as well as entirely new chapters.

Rather than surveying theories and data in the manner characteristic of many introductory textbooks in the field, An Invitation to Cognitive Science employs a unique case study approach, presenting a focused research topic in some depth and relying on suggested readings to convey the breadth of views and results. Each chapter tells a coherent scientific story, whether developing themes and ideas or describing a particular model and exploring its implications.

The volumes are self-contained and can be used individually in upper-level undergraduate and graduate courses ranging from introductory psychology, linguistics, cognitive science, and decision sciences to social psychology, philosophy of mind, rationality, language, and vision science.

Language

An Invitation to Cognitive Science provides a point of entry into the vast realm of cognitive science, offering selected examples of issues and theories from many of its subfields. All of the volumes in the second edition contain substantially revised as well as entirely new chapters.

Rather than surveying theories and data in the manner characteristic of many introductory textbooks in the field, An Invitation to Cognitive Science employs a unique case study approach, presenting a focused research topic in some depth and relying on suggested readings to convey the breadth of views and results. Each chapter tells a coherent scientific story, whether developing themes and ideas or describing a particular model and exploring its implications.

The volumes are self-contained and can be used individually in upper-level undergraduate and graduate courses ranging from introductory psychology, linguistics, cognitive science, and decision sciences to social psychology, philosophy of mind, rationality, language, and vision science.

Recent decades have produced a blossoming of research in artificial systems that exhibit important properties of mind. But what exactly is this dramatic new work and how does it change the way we think about the mind, or even about who or what has mind?

Stan Franklin is the perfect tour guide through the contemporary interdisciplinary matrix of artificial intelligence, cognitive science, cognitive neuroscience, artificial neural networks, artificial life, and robotics that is producing a new paradigm of mind. Leisurely and informal, but always informed, his tour touches on all of the major facets of mechanisms of mind.

Along the way, Franklin makes the case for a perspective that rejects a rigid distinction between mind and non-mind in favor of a continuum from less to more mind, and for the role of mind as a control structure with the essential task of choosing the next action. Selected stops include the best of the work in these different fields, with the key concepts and results explained in just enough detail to allow readers to decide for themselves why the work is significant.

Major attractions include animal minds, Allen Newell's SOAR, the three Artificial Intelligence debates, John Holland's genetic algorithms, Wilson's Animat, Brooks' subsumption architecture, Jackson's pandemonium theory, Ornstein's multimind, Marvin Minsky's society of mind, Pattie Maes's behavior networks, Gerald Edelman's neural Darwinism, Drescher's schema mechanisms, Pentti Kanerva's sparse distributed memory, Douglas Hofstadter and Melanie Mitchell's Copycat, and Agre and Chapman's deictic representations.

A Bradford Book
