Neural Information Processing Systems

From Systems to Brains

Signal processing and neural computation have separately and significantly influenced many disciplines, but the cross-fertilization of the two fields has begun only recently. Research now shows that each has much to teach the other, as we see highly sophisticated kinds of signal processing and elaborate hierarchical levels of neural computation performed side by side in the brain. In New Directions in Statistical Signal Processing, leading researchers from both signal processing and neural computation present new work that aims to promote interaction between the two disciplines.

The book's 14 chapters, almost evenly divided between signal processing and neural computation, begin with the brain and move on to communication, signal processing, and learning systems. They examine such topics as how computational models help us understand the brain's information processing, how an intelligent machine could solve the "cocktail party problem" with "active audition" in a noisy environment, graphical and network structure modeling approaches, uncertainty in network communications, the geometric approach to blind signal processing, game-theoretic learning algorithms, and observable operator models (OOMs) as an alternative to hidden Markov models (HMMs).

Proceedings of the 2005 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications. Only twenty-five percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains the papers presented at the December 2005 meeting, held in Vancouver.

Proceedings of the 2004 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications. Only twenty-five percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains the papers presented at the December 2004 conference, held in Vancouver.

Theory and Applications

The process of inductive inference—to infer general laws and principles from particular instances—is the basis of statistical modeling, pattern recognition, and machine learning. The Minimum Description Length (MDL) principle, a powerful method of inductive inference, holds that the best explanation, given a limited set of observed data, is the one that permits the greatest compression of the data—that the more we are able to compress the data, the more we learn about the regularities underlying the data. Advances in Minimum Description Length is a sourcebook that will introduce the scientific community to the foundations of MDL, recent theoretical advances, and practical applications.
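
Concretely, in its crude two-part form (a standard formulation, sketched here for orientation rather than quoted from the book), MDL selects the hypothesis H, from a class of candidate hypotheses, that minimizes the total description length of hypothesis plus data:

    \[
      H^{*} \;=\; \arg\min_{H \in \mathcal{H}} \bigl[\, L(H) + L(D \mid H) \,\bigr]
    \]

where L(H) is the number of bits needed to describe the hypothesis and L(D | H) the number of bits needed to describe the data D with the help of that hypothesis. A hypothesis that compresses the data well without itself being costly to describe is preferred.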

The book begins with an extensive tutorial on MDL, covering its theoretical underpinnings and practical implications, its various interpretations, and its underlying philosophy. The tutorial includes a brief history of MDL—from its roots in the notion of Kolmogorov complexity to the beginning of MDL proper. The book then presents recent theoretical advances, introducing modern MDL methods in a way that is accessible to readers from many different scientific fields. The book concludes with examples of how to apply MDL in research settings that range from bioinformatics and machine learning to psychology.

A Comparative Approach

The search for origins of communication in a wide variety of species including humans is rapidly becoming a thoroughly interdisciplinary enterprise. In this volume, scientists engaged in the fields of evolutionary biology, linguistics, animal behavior, developmental psychology, philosophy, the cognitive sciences, robotics, and neural network modeling come together to explore a comparative approach to the evolution of communication systems. The comparisons range from parrot talk to squid skin displays, from human language to Aibo the robot dog's language learning, and from monkey babbling to the newborn human infant cry. The authors explore the mysterious circumstances surrounding the emergence of human language, which they propose to be intricately connected with drastic changes in human lifestyle. While it is not yet clear what the physical environmental circumstances were that fostered social changes in the hominid line, the volume offers converging evidence and theory from several lines of research suggesting that language depended upon the restructuring of ancient human social groups.

The volume also offers new theoretical treatments of both primitive communication systems and human language, providing new perspectives on how to recognize both their similarities and their differences. Explorations of new technologies in robotics, neural network modeling and pattern recognition offer many opportunities to simulate and evaluate theoretical proposals.

The North American and European scientists who have contributed to this volume represent a vanguard of thinking about how humanity came to have the capacity for language and how nonhumans provide a background of remarkable capabilities that help clarify the foundations of speech.

Proceedings of the 2003 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications. Only thirty percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains all the papers presented at the 2003 conference.

The Biology, Intelligence, and Technology of Self-Organizing Machines

Evolutionary robotics is a new technique for the automatic creation of autonomous robots. Inspired by the Darwinian principle of selective reproduction of the fittest, it views robots as autonomous artificial organisms that develop their own skills in close interaction with the environment and without human intervention. Drawing heavily on biology and ethology, it uses the tools of neural networks, genetic algorithms, dynamic systems, and biomorphic engineering. The resulting robots share with simple biological systems the characteristics of robustness, simplicity, small size, flexibility, and modularity.

In evolutionary robotics, an initial population of artificial chromosomes, each encoding the control system of a robot, is randomly created and put into the environment. Each robot is then free to act (move, look around, manipulate) according to its genetically specified controller while its performance on various tasks is automatically evaluated. The fittest robots then "reproduce" by swapping parts of their genetic material with small random mutations. The process is repeated until the "birth" of a robot that satisfies the performance criteria.
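
The loop just described is, in essence, a genetic algorithm. The following Python sketch is a minimal illustration of that loop; the population size, genome length, mutation rate, and the placeholder fitness function are assumptions made for the example, not values or code from the book.

    import random

    POP_SIZE = 100        # number of artificial chromosomes (assumed value)
    GENOME_LEN = 64       # bits encoding one robot controller (assumed value)
    MUTATION_RATE = 0.01  # per-bit probability of a small random mutation

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_LEN)]

    def fitness(genome):
        # Placeholder: a real experiment would run the robot (in simulation
        # or on hardware) and automatically score its task performance.
        return sum(genome)

    def crossover(a, b):
        # "Reproduce" by swapping parts of the parents' genetic material.
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g
                for g in genome]

    population = [random_genome() for _ in range(POP_SIZE)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) >= GENOME_LEN:  # performance criterion met
            break                                 # the sought robot is "born"
        parents = population[:POP_SIZE // 2]      # the fittest reproduce
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POP_SIZE)]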

This book describes the basic concepts and methodologies of evolutionary robotics and the results achieved so far. An important feature is the clear presentation of a set of empirical experiments of increasing complexity. Software with a graphical interface, freely available on a Web page, will allow the reader to replicate and vary (in simulation and on real robots) most of the experiments.

Proceedings of the 2002 Conference

The annual Neural Information Processing Systems (NIPS) meeting is the flagship conference on neural computation. The conference draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists—and the presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, vision, speech and signal processing, reinforcement learning and control, implementations, and applications. Only about thirty percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains all the papers presented at the 2002 conference.

The Design of Brain-Like Machines
Edited by Igor Aleksander

McClelland and Rumelhart's Parallel Distributed Processing was the first book to present a definitive account of the newly revived connectionist/neural net paradigm for artificial intelligence and cognitive science. While Neural Computing Architectures addresses the same issues, there is little overlap in the research it reports. These 18 contributions provide a timely and informative overview and synopsis of both pioneering and recent European connectionist research. Several chapters focus on cognitive modeling; however, most of the work covered revolves around abstract neural network theory or engineering applications, bringing important complementary perspectives to currently published work in PDP.

The book's four parts take up, in turn: neural computing from the classical perspective, including both foundational and current work; the mathematical perspective (of logic, automata theory, and probability theory), presenting less well-known work in which the neuron is modeled as a logic truth function that can be implemented directly as a silicon read-only memory; new material, both in the form of analytical tools and models and as suggestions for implementation in optical form; and the PDP perspective, summarized in a single extended chapter covering PDP theory, application, and speculation in US research. Each part is introduced by the editor.
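
To make the truth-function view of the neuron concrete: an n-input neuron of this kind stores one output bit for each of its 2^n possible binary input patterns, so it is literally a lookup table that could be written into a read-only memory. The Python sketch below illustrates this general idea; the class name and methods are hypothetical, not taken from the book.

    class RAMNeuron:
        """A neuron modeled as a logic truth function: a table of 2**n
        stored bits, addressed by the neuron's n binary inputs."""

        def __init__(self, n_inputs):
            self.n_inputs = n_inputs
            self.table = [0] * (2 ** n_inputs)  # the "read-only memory"

        def _address(self, inputs):
            # Pack the binary inputs into a memory address.
            addr = 0
            for bit in inputs:
                addr = (addr << 1) | (1 if bit else 0)
            return addr

        def teach(self, inputs, output_bit):
            # Training writes the desired bit at the addressed location.
            self.table[self._address(inputs)] = output_bit

        def fire(self, inputs):
            return self.table[self._address(inputs)]

    # Example: teach a 2-input neuron the XOR truth function, which a
    # single threshold neuron cannot compute but a truth table can.
    neuron = RAMNeuron(2)
    for a in (0, 1):
        for b in (0, 1):
            neuron.teach((a, b), a ^ b)
    assert neuron.fire((0, 1)) == 1 and neuron.fire((1, 1)) == 0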

Visual Reconstruction presents a unified and highly original approach to the treatment of continuity in vision. It introduces, analyzes, and illustrates two new concepts. The first—the weak continuity constraint—is a concise, computational formalization of piecewise continuity. It is a mechanism for expressing the expectation that visual quantities such as intensity, surface color, and surface depth vary continuously almost everywhere, but with occasional abrupt changes. The second concept—the graduated nonconvexity algorithm—arises naturally from the first. It is an efficient, deterministic (nonrandom) algorithm for fitting piecewise continuous functions to visual data.
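
For a one-dimensional signal (the "weak string" treated in later chapters), the combination of the two ideas can be summarized, in a common rendering rather than the book's own notation, as minimizing an energy of the form

    \[
      E(u) \;=\; \sum_i \bigl(u_i - d_i\bigr)^2
      \;+\; \sum_i \min\!\bigl( \lambda^2 (u_{i+1} - u_i)^2,\; \alpha \bigr)
    \]

where d is the observed data, u the reconstruction, λ sets the scale of smoothing, and α is the fixed penalty paid for admitting a discontinuity. The min term makes the energy nonconvex; the GNC algorithm minimizes it by following a one-parameter family of cost functions that begins with a convex approximation and is gradually deformed into the true cost.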

The book first illustrates the breadth of application of reconstruction processes in vision with results that the authors' theory and program yield for a variety of problems. The mathematics of weak continuity and the graduated nonconvexity (GNC) algorithm are then developed carefully and progressively.

Contents: Modeling Piecewise Continuity. Applications of Piecewise Continuous Reconstruction. Introducing Weak Continuity Constraints. Properties of the Weak String and Membrane. Properties of the Weak Rod and Plate. The Discrete Problem. The Graduated Nonconvexity (GNC) Algorithm. Appendixes: Energy Calculations for the String and Membrane. Noise Performance of the Weak Elastic String. Energy Calculations for the Rod and Plate. Establishing Convexity. Analysis of the GNC Algorithm.

Visual Reconstruction is included in the Artificial Intelligence series, edited by Michael Brady and Patrick Winston.
