
Neural Information Processing Systems

Artificial Neural Networks (ANNs) offer an efficient way to find optimal cleanup strategies for hazardous plumes contaminating groundwater: they let hydrologists rapidly search through millions of possible strategies for the least expensive and most effective combination of contaminant containment and aquifer restoration. ANNs also provide a faster way to develop systems that classify seismic events as earthquakes or underground explosions.

Farid Dowla and Leah Rogers have developed a number of ANN applications for researchers and students in hydrology and seismology. This book, complete with exercises and ANN algorithms, illustrates how ANNs can be used in solving problems in environmental engineering and the geosciences, and provides the necessary tools to get started using these elegant and efficient new techniques.

Following the development of four primary ANN algorithms (backpropagation, self-organizing networks, radial basis functions, and Hopfield networks) and a discussion of important issues in ANN formulation (generalization properties, computer generation of training sets, causes of slow training, feature extraction and preprocessing, and performance evaluation), readers are guided through a series of illustrative problems that range from the straightforward to the complex. These include groundwater remediation management, seismic discrimination between earthquakes and underground explosions, automated monitoring of acoustic and seismic sensor data, estimation of seismic sources, geospatial estimation, lithologic classification from geophysical logging, earthquake forecasting, and climate change. Each chapter contains detailed exercises, often drawn from field data, that use one or more of the four primary ANN algorithms presented.
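(To give a flavor of the first of those algorithms, the following is a minimal backpropagation sketch in NumPy. It is an illustrative toy example only, not code from the book; the network size, learning rate, and synthetic regression data are arbitrary choices.)

```python
import numpy as np

# Minimal two-layer network trained with backpropagation (illustrative only;
# network size, learning rate, and the synthetic data are arbitrary choices).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # toy inputs
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:]          # toy regression target

W1 = rng.normal(0, 0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

for epoch in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)                   # hidden activations
    y_hat = h @ W2 + b2                        # linear output
    err = y_hat - y                            # prediction error

    # backward pass: propagate the error through the network
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)           # tanh derivative
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)

    # gradient-descent weight update
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final mean squared error:", float((err ** 2).mean()))
```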

As book review editor of the IEEE Transactions on Neural Networks, Mohamad Hassoun has had the opportunity to assess the multitude of books on artificial neural networks that have appeared in recent years. Now, in Fundamentals of Artificial Neural Networks, he provides the first systematic account of artificial neural network paradigms by identifying clearly the fundamental concepts and major methodologies underlying most of the current theory and practice employed by neural network researchers.

Such a systematic and unified treatment, although sadly lacking in most recent texts on neural networks, makes the subject more accessible to students and practitioners. Here, important results are integrated in order to more fully explain a wide range of existing empirical observations and commonly used heuristics. There are numerous illustrative examples, over 200 end-of-chapter analytical and computer-based problems that will aid in the development of neural network analysis and design skills, and a bibliography of nearly 700 references.

Proceeding in a clear and logical fashion, the first two chapters present the basic building blocks and concepts of artificial neural networks and analyze the computational capabilities of the basic network architectures involved. Supervised, reinforcement, and unsupervised learning rules in simple nets are brought together in a common framework in chapter three. The convergence and solution properties of these learning rules are then treated mathematically in chapter four, using the "average learning equation" analysis approach. This organization of material makes it natural to move on to learning in multilayer nets using backpropagation and its variants, described in chapter five. Chapter six covers most of the major neural network paradigms, while associative memories and energy-minimizing nets receive detailed coverage in chapter seven. The final chapter takes up Boltzmann machines and Boltzmann learning, along with other global search and optimization algorithms such as stochastic gradient search, simulated annealing, and genetic algorithms.

Motivated by the remarkable fluidity of memory (the way in which items are pulled spontaneously and effortlessly from our memory by vague similarities to what is currently occupying our attention), Sparse Distributed Memory presents a mathematically elegant theory of human long-term memory.

The book, which is self-contained, begins with background material from mathematics, computers, and neurophysiology; this is followed by a step-by-step development of the memory model. The concluding chapter describes an autonomous system that builds from experience an internal model of the world and bases its operation on that internal model. Close attention is paid to the engineering of the memory, including comparisons to ordinary computer memories.
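(The write/read mechanism at the heart of the model can be sketched in a few lines. The following is a hedged toy illustration of a Kanerva-style sparse distributed memory, not code from the book; the word length, number of hard locations, and activation radius are small arbitrary values chosen so the example runs quickly.)

```python
import numpy as np

# Toy sparse distributed memory: random "hard locations", write by adding a
# bipolar copy of the data to every location near the address, read by
# summing and thresholding.  Sizes and radius are arbitrary illustrative values.
rng = np.random.default_rng(1)
n_bits, n_locations, radius = 256, 2000, 112

hard_addresses = rng.integers(0, 2, size=(n_locations, n_bits))
counters = np.zeros((n_locations, n_bits), dtype=int)

def activated(address):
    """Indices of hard locations within Hamming distance `radius` of address."""
    dist = (hard_addresses != address).sum(axis=1)
    return np.where(dist <= radius)[0]

def write(address, data):
    counters[activated(address)] += 2 * data - 1   # store data as +1 / -1

def read(address):
    total = counters[activated(address)].sum(axis=0)
    return (total > 0).astype(int)                  # majority vote per bit

pattern = rng.integers(0, 2, size=n_bits)
write(pattern, pattern)                             # autoassociative storage

noisy = pattern.copy()
flip = rng.choice(n_bits, size=20, replace=False)   # corrupt 20 bits
noisy[flip] ^= 1

recalled = read(noisy)
print("bits recovered:", int((recalled == pattern).sum()), "of", n_bits)
```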

Sparse Distributed Memory provides an overall perspective on neural systems. The model it describes can aid in understanding human memory and learning, and a system based on it sheds light on outstanding problems in philosophy and artificial intelligence. Applications of the memory are expected to be found in the creation of adaptive systems for signal processing, speech, vision, motor control, and (in general) robots. Perhaps the most exciting aspect of the memory, in its implications for research in neural networks, is that its realization with neuronlike components resembles the cortex of the cerebellum.

Pentti Kanerva is a scientist at the Research Institute for Advanced Computer Science at the NASA Ames Research Center and a visiting scholar at the Stanford Center for the Study of Language and Information. A Bradford Book.

The Adaptive Suspension Vehicle

What is 16 feet long and 10 feet high, weighs 6,000 pounds, has six legs, can sprint at 8 mph, and can step over a 4-foot wall? The Adaptive Suspension Vehicle (ASV) described in this book.

Machines That Walk provides the first in-depth treatment of the "statically stable walking machine" theory employed in the design of the ASV, the most sophisticated, self-contained, and practical walking machine under development today. Under construction at Ohio State University, the automatically terrain-adaptive ASV carries one human operator and a 500-pound payload, and is expected to offer better fuel economy and mobility in rough terrain than conventional wheeled and tracked vehicles.

The development of the ASV is a milestone in robotics research, and Machines That Walk provides a wealth of research results in mobility, gait, static stability, leg design, and vertical geometry design. The authors' treatment of statically stable gait theory and actuator coordination is by far the most complete available.
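(The static-stability criterion itself is easy to illustrate: a legged machine is statically stable when the vertical projection of its center of gravity falls inside the support polygon formed by the feet on the ground. The sketch below checks that condition for invented foot coordinates; it is a toy illustration, not ASV code or data.)

```python
import numpy as np

# Static stability check: the machine is statically stable when the vertical
# projection of its center of gravity lies inside the support polygon of the
# stance feet.  Foot and COG positions below are invented illustrative values.
def inside_convex_polygon(point, vertices):
    """True if `point` lies inside the convex polygon `vertices` (CCW order)."""
    p = np.asarray(point)
    v = np.asarray(vertices)
    edges = np.roll(v, -1, axis=0) - v            # edge vectors
    to_point = p - v                              # vertex-to-point vectors
    cross = edges[:, 0] * to_point[:, 1] - edges[:, 1] * to_point[:, 0]
    return bool(np.all(cross >= 0))

# Three stance feet of a hypothetical tripod support phase (metres), CCW order.
stance_feet = [(0.0, 0.0), (2.0, 0.2), (1.0, 1.5)]
cog_projection = (1.0, 0.6)

print("statically stable:", inside_convex_polygon(cog_projection, stance_feet))
```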

Proceedings of the 2001 Conference

The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. The conference is interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, vision, speech and signal processing, reinforcement learning and control, implementations, and diverse applications. Only about 30 percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. These proceedings contain all of the papers that were presented at the 2001 conference.

Foundations of Neural Computation

This book provides an overview of self-organizing map formation, including recent developments. Self-organizing maps form a branch of unsupervised learning, which is the study of what can be determined about the statistical properties of input data without explicit feedback from a teacher. The articles are drawn from the journal Neural Computation.

The book consists of five sections. The first looks at attempts to model the organization of cortical maps and at the theory and applications of the related artificial neural network algorithms. The second analyzes topographic maps and their formation via objective functions. The third examines cortical maps of stimulus features. The fourth turns to self-organizing maps for unsupervised data analysis. The fifth considers extensions of self-organizing maps, including two surprising applications of mapping algorithms to standard computer science problems: combinatorial optimization and sorting.
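(For readers new to the algorithmic side, the following is a minimal Kohonen-style self-organizing map training loop, included as a hedged toy illustration rather than code from any of the collected articles; the grid size, decay schedules, and data are arbitrary.)

```python
import numpy as np

# Minimal Kohonen-style self-organizing map: a 10x10 grid of units learns to
# cover 2-D input data.  Grid size, decay schedules, and data are illustrative.
rng = np.random.default_rng(2)
grid_h, grid_w, dim = 10, 10, 2
weights = rng.uniform(0, 1, size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)        # unit positions

data = rng.uniform(0, 1, size=(1000, dim))                    # toy inputs
n_steps = 5000

for t in range(n_steps):
    x = data[rng.integers(len(data))]
    # 1. find the best-matching unit (closest weight vector to the input)
    dist = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(dist.argmin(), dist.shape)
    # 2. decay the learning rate and neighborhood width over time
    lr = 0.5 * np.exp(-t / n_steps)
    sigma = 3.0 * np.exp(-t / n_steps)
    # 3. pull the BMU and its grid neighbors toward the input
    grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    neighborhood = np.exp(-grid_dist2 / (2 * sigma ** 2))
    weights += lr * neighborhood[..., None] * (x - weights)

print("weight range after training:", weights.min(), weights.max())
```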

Contributors:
J. J. Atick, H. G. Barrow, H. U. Bauer, C. M. Bishop, H. J. Bray, J. Bruske, J. M. L. Budd, M. Budinich, V. Cherkassky, J. Cowan, R. Durbin, E. Erwin, G. J. Goodhill, T. Graepel, D. Grier, S. Kaski, T. Kohonen, H. Lappalainen, Z. Li, J. Lin, R. Linsker, S. P. Luttrell, D. J. C. MacKay, K. D. Miller, G. Mitchison, F. Mulier, K. Obermayer, C. Piepenbrock, H. Ritter, K. Schulten, T. J. Sejnowski, S. Smirnakis, G. Sommer, M. Svensen, R. Szeliski, A. Utsugi, C. K. I. Williams, L. Wiskott, L. Xu, A. Yuille, J. Zhang.

Foundations of Neural Computation

Graphical models use graphs to represent and manipulate joint probability distributions. They have their roots in artificial intelligence, statistics, and neural networks. The clean mathematical formalism of the graphical models framework makes it possible to understand a wide variety of network-based approaches to computation, and in particular to understand many neural network algorithms and architectures as instances of a broader probabilistic methodology. It also makes it possible to identify novel features of neural network algorithms and architectures and to extend them to more general graphical models.
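(As a small illustration of how a graph encodes a factorization of a joint distribution, the snippet below hand-codes a three-variable Bayesian network and answers a query by summing over the factored joint. The variables and probability tables are invented for the example and are not drawn from the book.)

```python
from itertools import product

# A tiny Bayesian network over three binary variables:
#   Rain -> Sprinkler,  (Rain, Sprinkler) -> WetGrass
# The joint factorizes along the graph:
#   P(R, S, W) = P(R) * P(S | R) * P(W | R, S)
# All probability tables below are invented for illustration.
P_R = {0: 0.8, 1: 0.2}
P_S_given_R = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.99, 1: 0.01}}   # P(S | R)
P_W_given_RS = {(0, 0): 0.0, (0, 1): 0.9,
                (1, 0): 0.8, (1, 1): 0.99}                    # P(W=1 | R, S)

def joint(r, s, w):
    p_w = P_W_given_RS[(r, s)]
    return P_R[r] * P_S_given_R[r][s] * (p_w if w == 1 else 1 - p_w)

# Inference by brute-force enumeration: P(Rain = 1 | WetGrass = 1)
num = sum(joint(1, s, 1) for s in (0, 1))
den = sum(joint(r, s, 1) for r, s in product((0, 1), repeat=2))
print("P(Rain=1 | WetGrass=1) =", num / den)
```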

This book exemplifies the interplay between the general formal framework of graphical models and the exploration of new algorithms and architectures. The selections range from foundational papers of historical importance to results at the cutting edge of research.

Contributors:
H. Attias, C. M. Bishop, B. J. Frey, Z. Ghahramani, D. Heckerman, G. E. Hinton, R. Hofmann, R. A. Jacobs, Michael I. Jordan, H. J. Kappen, A. Krogh, R. Neal, S. K. Riis, F. B. RodrĂ­guez, L. K. Saul, Terrence J. Sejnowski, P. Smyth, M. E. Tipping, V. Tresp, Y. Weiss.

There is a sense among scientists that the time is finally ripe for the problem of consciousness to be solved once and for all. The development of new experimental and theoretical tools for probing the brain has produced an atmosphere of unparalleled optimism that the job can now be done properly: The race for consciousness is on!

In this book, John Taylor describes the complete scene of entries, riders, gamblers, and racecourses. He presents his own entry into the race, which he has been working on for the past twenty-five years—the relational theory of consciousness, according to which consciousness is created through the relations between brain states, especially those involving memories of personal experiences. Because it is an ongoing and adaptive process, consciousness emerges from past brain activity. It is this highly subtle and delicate process of emergence that leads to the complexity of consciousness. Taylor does not just present another theory of consciousness, but makes comprehensible the nuts-and-bolts methodology behind the myriad attempts to win the race.

Proceedings of the 2000 Conference

The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. The conference is interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, vision, speech and signal processing, reinforcement learning and control, implementations, and diverse applications. Only about 30 percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. These proceedings contain all of the papers that were presented at the 2000 conference.

In recent years, data from neurobiological experiments have made it increasingly clear that biological neural networks, which communicate through pulses, use the timing of the pulses to transmit information and perform computation. This realization has stimulated significant research on pulsed neural networks, including theoretical analyses and model development, neurobiological modeling, and hardware implementation.

This book presents the complete spectrum of current research in pulsed neural networks and includes the most important work from many of the key scientists in the field. The first half of the book consists of longer tutorial articles spanning neurobiology, theory, algorithms, and hardware. The second half contains a larger number of shorter research chapters that present more advanced concepts. The contributors use consistent notation and terminology throughout the book.
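(To make computing with pulse timing concrete, the following is a hedged toy simulation of a single leaky integrate-and-fire neuron, one of the simplest spiking models of the kind surveyed in the tutorial chapters. All constants are arbitrary illustrative values, not parameters from the book.)

```python
import numpy as np

# Toy leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
# integrates an input current, and emits a spike (a pulse) whenever it crosses
# threshold, after which it is reset.  All constants are illustrative.
dt, t_max = 0.1, 100.0            # ms
tau, v_rest, v_thresh, v_reset = 10.0, 0.0, 1.0, 0.0
R = 1.0                           # membrane resistance (arbitrary units)

times = np.arange(0.0, t_max, dt)
current = 1.2 + 0.3 * np.sin(2 * np.pi * times / 25.0)   # time-varying input

v = v_rest
spike_times = []
for t, i_in in zip(times, current):
    dv = (-(v - v_rest) + R * i_in) / tau                # leaky integration
    v += dt * dv
    if v >= v_thresh:                                    # threshold crossing
        spike_times.append(t)                            # the "pulse" is its time
        v = v_reset

print("number of spikes:", len(spike_times))
print("first few spike times (ms):", [round(t, 1) for t in spike_times[:5]])
```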
