Neural Information Processing Systems

Intelligence Emerging: Adaptivity and Search in Evolving Neural Systems

Emergence—the formation of global patterns from solely local interactions—is a frequent and fascinating theme in the scientific literature, both popular and academic. In this book, Keith Downing undertakes a systematic investigation of the widespread (if often vague) claim that intelligence is an emergent phenomenon. Downing focuses on neural networks, both natural and artificial, and how their adaptability in three time frames—phylogenetic (evolutionary), ontogenetic (developmental), and epigenetic (lifetime learning)—underlies the emergence of cognition.

The vast differences between the brain’s neural circuitry and a computer’s silicon circuitry might suggest that they have nothing in common. In fact, as Dana Ballard argues in this book, computational tools are essential for understanding brain function. Ballard shows that the hierarchical organization of the brain has many parallels with the hierarchical organization of computing; as in silicon computing, the complexities of brain computation can be dramatically simplified when its computation is factored into different levels of abstraction.

The goal of structured prediction is to build machine learning models that predict relational information that itself has structure, such as being composed of multiple interrelated parts. These models, which reflect prior knowledge, task-specific relations, and constraints, are used in fields including computer vision, speech recognition, natural language processing, and computational biology. They can carry out such tasks as predicting a natural language sentence, or segmenting an image into meaningful components.
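
Purely as an illustration of joint prediction over interrelated parts (this example is not drawn from the volume), the sketch below decodes the best label sequence for a short sentence with the classic Viterbi recursion; the label set and scores are invented for the example.

```python
import numpy as np

def viterbi(emit, trans):
    """Best-scoring label sequence under unary (emit) and pairwise (trans) scores.

    emit:  (T, K) score of assigning label k at position t
    trans: (K, K) score of moving from label i to label j
    """
    T, K = emit.shape
    score = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    score[0] = emit[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + trans + emit[t][None, :]
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0)
    path = [int(score[-1].argmax())]        # trace the best path backwards
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy tagging problem: 3 positions, 2 labels (NOUN=0, VERB=1).
emit = np.array([[2.0, 0.5], [0.3, 1.8], [1.5, 0.4]])
trans = np.array([[0.2, 1.0],   # NOUN -> NOUN / VERB
                  [1.0, 0.2]])  # VERB -> NOUN / VERB
print(viterbi(emit, trans))     # [0, 1, 0]
```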

Sparse modeling is a rapidly developing area at the intersection of statistical learning and signal processing, motivated by the age-old statistical problem of selecting a small number of predictive variables in high-dimensional datasets. This collection describes key approaches in sparse modeling, focusing on its applications in fields including neuroscience, computational biology, and computer vision.
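
As a concrete, self-contained illustration of the variable-selection problem (not a method taken from the collection), the sketch below solves a small Lasso problem by iterative soft thresholding (ISTA); the synthetic data, regularization weight, and iteration count are assumptions of the example.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=1000):
    """Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by iterative soft thresholding."""
    L = np.linalg.norm(X, 2) ** 2             # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L          # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

# Synthetic high-dimensional data: only 3 of 50 variables are predictive.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
w_true = np.zeros(50)
w_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(100)

w_hat = ista_lasso(X, y, lam=5.0)
print(np.nonzero(np.abs(w_hat) > 1e-3)[0])     # recovered support, ideally [3, 17, 42]
```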

Evolutionary robotics (ER) aims to apply evolutionary computation techniques to the design of both real and simulated autonomous robots. The Horizons of Evolutionary Robotics offers an authoritative overview of this rapidly developing field, presenting state-of-the-art research by leading scholars. The result is a lively, expansive survey that will be of interest to computer scientists, robotics engineers, neuroscientists, and philosophers.
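
For readers new to the approach, here is a deliberately minimal sketch (not taken from the book) of the core ER loop: a population of controller parameter vectors is mutated and selected on a fitness score. The quadratic `fitness` stands in for what would really be a robot or simulator evaluation, and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(params):
    # Placeholder for evaluating a controller on a real or simulated robot.
    target = np.array([0.5, -1.0, 2.0])
    return -np.sum((params - target) ** 2)

# Simple (mu + lambda) evolution strategy over controller weights.
mu, lam, sigma = 5, 20, 0.3
pop = rng.standard_normal((mu, 3))
for gen in range(100):
    parents = pop[rng.integers(mu, size=lam)]
    children = parents + sigma * rng.standard_normal((lam, 3))
    everyone = np.vstack([pop, children])
    scores = np.array([fitness(p) for p in everyone])
    pop = everyone[np.argsort(scores)[-mu:]]   # keep the mu best
print(pop[-1])  # best controller found, close to [0.5, -1.0, 2.0]
```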

This volume demonstrates the power of the Markov random field (MRF) in vision, treating the MRF both as a tool for modeling image data and, utilizing recently developed algorithms, as a means of making inferences about images. These inferences concern underlying image and scene structure as well as solutions to such problems as image reconstruction, image segmentation, 3D vision, and object labeling. It offers key findings and state-of-the-art research on both algorithms and applications.
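
As a minimal illustration of MRF-based inference (a textbook baseline, not one of the volume's algorithms), the sketch below denoises a binary image with iterated conditional modes on an Ising-style energy; the coupling strengths and noise level are assumed values.

```python
import numpy as np

def icm_denoise(noisy, beta=1.5, eta=2.0, n_sweeps=5):
    """Iterated conditional modes on an Ising MRF with labels in {-1, +1}.

    Energy: -eta * sum_i x_i*y_i - beta * sum_{i~j} x_i*x_j, where the data
    term keeps x close to the observation y and the pairwise term smooths.
    """
    x = noisy.copy()
    H, W = x.shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                nb = sum(x[a, b]
                         for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= a < H and 0 <= b < W)
                # Pick the label with lower local energy.
                x[i, j] = 1 if eta * noisy[i, j] + beta * nb > 0 else -1
    return x

# Toy image: a bright square on a dark background with 10% of pixels flipped.
rng = np.random.default_rng(0)
clean = -np.ones((32, 32), dtype=int)
clean[8:24, 8:24] = 1
noisy = clean * np.where(rng.random(clean.shape) < 0.1, -1, 1)
print((icm_denoise(noisy) != clean).mean())  # fraction of pixels still wrong
```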

Computational systems biology aims to develop algorithms that uncover the structure and parameterization of the underlying mechanistic model—in other words, to answer specific questions about the underlying mechanisms of a biological system—in a process that can be thought of as learning or inference. This volume offers state-of-the-art perspectives from computational biology, statistics, modeling, and machine learning on new methodologies for learning and inference in biological networks.
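
To make the "learning or inference" framing concrete, here is a small illustrative sketch (not a method from the volume): recovering the parameters of a toy mechanistic model, a protein produced at rate k and degraded at rate d, from noisy time-course measurements. The model, true parameters, and noise level are all invented for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def simulate(t, k, d, x0=0.0):
    """Solve dx/dt = k - d*x and return the trajectory at times t."""
    sol = solve_ivp(lambda _, x: k - d * x, (t[0], t[-1]), [x0], t_eval=t)
    return sol.y[0]

# Synthetic "measurements" from true parameters k=2.0, d=0.5, plus noise.
t = np.linspace(0, 10, 40)
rng = np.random.default_rng(0)
data = simulate(t, 2.0, 0.5) + 0.1 * rng.standard_normal(t.size)

# Inference: recover (k, d) by least squares against the measurements.
(k_hat, d_hat), _ = curve_fit(simulate, t, data, p0=[1.0, 1.0])
print(k_hat, d_hat)  # close to 2.0 and 0.5
```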

Proceedings of the 2006 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation and machine learning. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists—interested in theoretical and applied aspects of modeling, simulating, and building neural-like or intelligent systems. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning, and applications.

Pervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data.
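
Stochastic gradient descent is the canonical example of such an algorithm: it performs a constant amount of work per example, so total cost grows linearly with the dataset. The sketch below (with invented data and learning rate) shows the pattern for linear regression.

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.01, n_epochs=5, seed=0):
    """One gradient step per example: O(n) work per epoch in the dataset size."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for i in rng.permutation(len(y)):
            err = X[i] @ w - y[i]
            w -= lr * err * X[i]       # gradient of 0.5 * (x_i.w - y_i)^2
    return w

# Synthetic data; doubling the row count doubles the running time, no more.
rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.standard_normal(len(X))
print(sgd_linear_regression(X, y))     # approaches w_true
```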

Machine learning develops intelligent computer systems that can generalize from previously seen examples. A newer domain, in which predictions must satisfy the additional constraints found in structured data, poses one of the field's greatest challenges: learning functional dependencies between arbitrary input and output domains. This volume presents and analyzes the state of the art in machine learning algorithms and theory in this emerging field.
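
A classic algorithm for learning such input-output dependencies is the structured perceptron, shown below in a deliberately tiny, brute-force form; the feature map, decoder, and toy task are all assumptions of the example, not material from the volume.

```python
import itertools
import numpy as np

def features(x, y):
    """Joint feature map for sequence labeling: emission sums plus transition counts."""
    K, D = 2, x.shape[1]
    f = np.zeros(K * D + K * K)
    for t, label in enumerate(y):
        f[label * D:(label + 1) * D] += x[t]
        if t > 0:
            f[K * D + y[t - 1] * K + label] += 1
    return f

def decode(w, x, K=2):
    """Brute-force argmax over all label sequences (fine for tiny lengths)."""
    return max(itertools.product(range(K), repeat=len(x)),
               key=lambda y: w @ features(x, y))

def structured_perceptron(data, n_epochs=10):
    w = np.zeros(features(*data[0]).shape)
    for _ in range(n_epochs):
        for x, y in data:
            y_hat = decode(w, x)
            if tuple(y) != y_hat:      # mistake-driven update toward the gold structure
                w += features(x, y) - features(x, y_hat)
    return w

# Toy task: label each position 1 exactly when its single feature is positive.
rng = np.random.default_rng(0)
data = [(x, tuple(int(v > 0) for v in x[:, 0]))
        for x in rng.standard_normal((50, 4, 1))]
w = structured_perceptron(data)
print(decode(w, data[0][0]) == data[0][1])  # True once training has converged
```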
