Neural Information Processing Systems

The Design of Brain-Like Machines
Edited by Igor Aleksander

McClelland and Rumelhart's Parallel Distributed Processing was the first book to present a definitive account of the newly revived connectionist/neural net paradigm for artificial intelligence and cognitive science. While Neural Computing Architectures addresses the same issues, there is little overlap in the research it reports. These 18 contributions provide a timely and informative overview and synopsis of both pioneering and recent European connectionist research.

As book review editor of the IEEE Transactions on Neural Networks, Mohamad Hassoun has had the opportunity to assess the multitude of books on artificial neural networks that have appeared in recent years. Now, in Fundamentals of Artificial Neural Networks, he provides the first systematic account of artificial neural network paradigms by identifying clearly the fundamental concepts and major methodologies underlying most of the current theory and practice employed by neural network researchers.

Artificial neural networks (ANNs) offer an efficient method for finding optimal cleanup strategies for hazardous plumes contaminating groundwater: they allow hydrologists to search rapidly through millions of candidate strategies for the least expensive and most effective way to contain contaminants and restore an aquifer. ANNs also provide a faster method of developing systems that classify seismic events as earthquakes or underground explosions. Farid Dowla and Leah Rogers have developed a number of ANN applications for researchers and students in hydrology and seismology.

From Ions to Networks

Much research focuses on the question of how information is processed in nervous systems, from the level of individual ionic channels to large-scale neuronal networks, and from "simple" animals such as sea slugs and flies to cats and primates. New interdisciplinary approaches combine bottom-up experimental work with more top-down computational modeling.

Motivated by the remarkable fluidity of memory (the way in which items are pulled spontaneously and effortlessly from our memory by vague similarities to what is currently occupying our attention), Sparse Distributed Memory presents a mathematically elegant theory of human long-term memory.

The Adaptive Suspension Vehicle

What is 16 feet long, 10 feet high, weighs 6,000 pounds, has six legs, and can sprint at 8 mph and step over a 4-foot wall? The Adaptive Suspension Vehicle (ASV) described in this book.

Proceedings of the 2001 Conference

The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. The conference is interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, vision, speech and signal processing, reinforcement learning and control, implementations, and diverse applications. Only about 30 percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. These proceedings contain all of the papers that were presented at the 2001 conference.

Foundations of Neural Computation

This book provides an overview of self-organizing map formation, including recent developments. Self-organizing maps form a branch of unsupervised learning, which is the study of what can be determined about the statistical properties of input data without explicit feedback from a teacher. The articles are drawn from the journal Neural Computation.

The book consists of five sections. The first section looks at attempts to model the organization of cortical maps and at the theory and applications of the related artificial neural network algorithms.
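To illustrate the kind of algorithm the book surveys, here is a minimal sketch of a Kohonen-style self-organizing map update; the map size, learning schedule, and training data are invented for illustration and are not drawn from the book.

```python
import numpy as np

# Minimal sketch of a 1-D self-organizing map: ten units arranged on a
# line learn to cover a 2-D input space without any teacher signal.
rng = np.random.default_rng(0)
n_units, dim = 10, 2
weights = rng.random((n_units, dim))      # one weight vector per map unit
data = rng.random((500, dim))             # uniform 2-D training inputs

lr, sigma = 0.5, 2.0                      # learning rate, neighborhood width
for t, x in enumerate(data):
    # Best-matching unit: the unit whose weight vector is closest to the input.
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    # Gaussian neighborhood on the map lattice, centered on the BMU.
    lattice_dist = np.arange(n_units) - bmu
    h = np.exp(-lattice_dist**2 / (2 * sigma**2))
    decay = 1.0 - t / len(data)           # shrink updates over time
    weights += lr * decay * h[:, None] * (x - weights)
# After training, neighboring units tend to have neighboring weight
# vectors: the map's topology-preserving property.
```

Because each update moves every unit part of the way toward the current input, units close on the lattice receive similar updates, which is what produces the topographic ordering the book analyzes.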

Foundations of Neural Computation

Graphical models use graphs to represent and manipulate joint probability distributions. They have their roots in artificial intelligence, statistics, and neural networks. The clean mathematical formalism of the graphical models framework makes it possible to understand a wide variety of network-based approaches to computation, and in particular to understand many neural network algorithms and architectures as instances of a broader probabilistic methodology.
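As a sketch of the factorization idea, here is the standard rain/sprinkler/grass directed graphical model, with probability tables invented for illustration (not taken from the book): the joint distribution is read straight off the graph, and inference reduces to sums over that factorized joint.

```python
from itertools import product

# Directed graphical model: Rain -> Sprinkler, and both -> Grass wet.
# The joint distribution factorizes along the graph:
#   P(R, S, G) = P(R) * P(S | R) * P(G | R, S)
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # P(S | R=True)
               False: {True: 0.40, False: 0.60}}  # P(S | R=False)
P_grass = {(True, True): {True: 0.99, False: 0.01},
           (True, False): {True: 0.80, False: 0.20},
           (False, True): {True: 0.90, False: 0.10},
           (False, False): {True: 0.00, False: 1.00}}

def joint(r, s, g):
    """Joint probability, read off the graph's factorization."""
    return P_rain[r] * P_sprinkler[r][s] * P_grass[(r, s)][g]

# Inference by enumeration: P(Rain = True | Grass wet).
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
posterior = num / den   # about 0.36
```

The point of the formalism is that the factorization, not the full joint table, is the object of computation; message-passing algorithms exploit exactly this structure, and many neural network architectures can be read as instances of it.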

There is a sense among scientists that the time is finally ripe for the problem of consciousness to be solved once and for all. The development of new experimental and theoretical tools for probing the brain has produced an atmosphere of unparalleled optimism that the job can now be done properly: The race for consciousness is on!