
Neural Information Processing Systems


The vast differences between the brain’s neural circuitry and a computer’s silicon circuitry might suggest that they have nothing in common. In fact, as Dana Ballard argues in this book, computational tools are essential for understanding brain function. Ballard shows that the hierarchical organization of the brain has many parallels with the hierarchical organization of computing; as in silicon computing, the complexities of brain computation can be dramatically simplified when its computation is factored into different levels of abstraction.

Drawing on several decades of progress in computational neuroscience, together with recent results in Bayesian and reinforcement learning methodologies, Ballard factors the brain’s principal computational issues in terms of their natural place in an overall hierarchy. Each of these factors leads to a fresh perspective. A neural level focuses on the basic forebrain functions and shows how processing demands dictate the extensive use of timing-based circuitry and an overall organization of tabular memories. An embodiment level works in reverse, making extensive use of multiplexing and on-demand processing to achieve fast parallel computation. An awareness level focuses on the brain’s representations of emotion, attention, and consciousness, showing that they can operate with great economy in the context of the neural and embodiment substrates.

The goal of structured prediction is to build machine learning models that predict relational information that itself has structure, such as being composed of multiple interrelated parts. These models, which reflect prior knowledge, task-specific relations, and constraints, are used in fields including computer vision, speech recognition, natural language processing, and computational biology. They can carry out such tasks as predicting a natural language sentence or segmenting an image into meaningful components.

These models are expressive and powerful, but exact computation is often intractable. A broad research effort in recent years has aimed at designing structured prediction models and approximate inference and learning procedures that are computationally efficient. This volume offers an overview of this recent research in order to make the work accessible to a broader research community. The chapters, by leading researchers in the field, cover a range of topics, including research trends, the linear programming relaxation approach, innovations in probabilistic modeling, recent theoretical progress, and resource-aware learning.
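The flavor of these models is easy to see in the simplest structured case, a chain, where exact maximum a posteriori (MAP) inference reduces to dynamic programming (the Viterbi algorithm). The sketch below is a minimal illustration with random stand-in scores rather than learned potentials; it is not drawn from the volume itself.

```python
# Minimal sketch: exact MAP inference in a chain-structured model via
# dynamic programming (Viterbi). All scores here are illustrative; in a
# real structured predictor they would come from learned potentials.
import numpy as np

def viterbi(unary, pairwise):
    """unary: (T, K) per-position label scores; pairwise: (K, K) transition scores.
    Returns the highest-scoring label sequence of length T."""
    T, K = unary.shape
    score = unary[0].copy()            # best score ending in each label at t=0
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # candidate[i, j] = best score through label i at t-1, then label j at t
        candidate = score[:, None] + pairwise + unary[t][None, :]
        backptr[t] = candidate.argmax(axis=0)
        score = candidate.max(axis=0)
    # trace the best path back from the final position
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
labels = viterbi(rng.normal(size=(6, 3)), rng.normal(size=(3, 3)))
print(labels)  # one label per position, e.g. [2, 0, 1, ...]
```

For chains this runs in time linear in the sequence length; on loopy graphs the same maximization is NP-hard in general, which is what motivates approximations such as the linear programming relaxation approach surveyed here.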

Sebastian Nowozin is a Researcher in the Machine Learning and Perception group (MLP) at Microsoft Research, Cambridge, England. Peter V. Gehler is a Senior Researcher in the Perceiving Systems group at the Max Planck Institute for Intelligent Systems, Tübingen, Germany. Jeremy Jancsary is a Senior Research Scientist at Nuance Communications, Vienna. Christoph H. Lampert is Assistant Professor at the Institute of Science and Technology Austria, where he heads a group for Computer Vision and Machine Learning.

Contributors
Jonas Behr, Yutian Chen, Fernando De La Torre, Justin Domke, Peter V. Gehler, Andrew E. Gelfand, Sébastien Giguère, Amir Globerson, Fred A. Hamprecht, Minh Hoai, Tommi Jaakkola, Jeremy Jancsary, Joseph Keshet, Marius Kloft, Vladimir Kolmogorov, Christoph H. Lampert, François Laviolette, Xinghua Lou, Mario Marchand, André F. T. Martins, Ofer Meshi, Sebastian Nowozin, George Papandreou, Daniel Průša, Gunnar Rätsch, Amélie Rolland, Bogdan Savchynskyy, Stefan Schmidt, Thomas Schoenemann, Gabriele Schweikert, Ben Taskar, Sinisa Todorovic, Max Welling, David Weiss, Tomáš Werner, Alan Yuille, Stanislav Živný

Sparse modeling is a rapidly developing area at the intersection of statistical learning and signal processing, motivated by the age-old statistical problem of selecting a small number of predictive variables in high-dimensional datasets. This collection describes key approaches in sparse modeling, focusing on its applications in fields including neuroscience, computational biology, and computer vision.

Sparse modeling methods can improve the interpretability of predictive models and aid efficient recovery of high-dimensional unobserved signals from a limited number of measurements. Yet despite significant advances in the field, a number of open issues remain when sparse modeling meets real-life applications. The book discusses a range of practical applications and state-of-the-art approaches for tackling the challenges presented by these applications. Topics considered include the choice of method in genomics applications; analysis of protein mass-spectrometry data; the stability of sparse models in brain imaging applications; sequential testing approaches; algorithmic aspects of sparse recovery; and learning sparse latent models.
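To see the core idea in miniature, the sketch below fits an ℓ1-penalized regression (the Lasso) to synthetic high-dimensional data; the penalty drives most coefficients exactly to zero, recovering the few truly predictive variables. The data and the choice of alpha are illustrative assumptions, not taken from the book.

```python
# Minimal sketch of sparse modeling with an L1 penalty (the Lasso):
# recover a handful of truly predictive variables from a high-dimensional
# design. Data are synthetic; alpha = 0.1 is an illustrative choice.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 1000                               # many more variables than samples
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[:5] = [3.0, -2.0, 1.5, 4.0, -1.0]    # only 5 variables matter
y = X @ true_coef + 0.1 * rng.normal(size=n)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(selected)                                # typically recovers the five true indices
```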

Contributors
A. Vania Apkarian, Marwan Baliki, Melissa K. Carroll, Guillermo A. Cecchi, Volkan Cevher, Xi Chen, Nathan W. Churchill, Rémi Emonet, Rahul Garg, Zoubin Ghahramani, Lars Kai Hansen, Matthias Hein, Katherine Heller, Sina Jafarpour, Seyoung Kim, Mladen Kolar, Anastasios Kyrillidis, Aurelie Lozano, Matthew L. Malloy, Pablo Meyer, Shakir Mohamed, Alexandru Niculescu-Mizil, Robert D. Nowak, Jean-Marc Odobez, Peter M. Rasmussen, Irina Rish, Saharon Rosset, Martin Slawski, Stephen C. Strother, Jagannadan Varadarajan, Eric P. Xing

Evolutionary robotics (ER) aims to apply evolutionary computation techniques to the design of both real and simulated autonomous robots. The Horizons of Evolutionary Robotics offers an authoritative overview of this rapidly developing field, presenting state-of-the-art research by leading scholars. The result is a lively, expansive survey that will be of interest to computer scientists, robotics engineers, neuroscientists, and philosophers.

The contributors discuss incorporating principles from neuroscience into ER; dynamical analysis of evolved agents; constructing appropriate evolutionary pathways; spatial cognition; the coevolution of robot brains and bodies; group behavior; the evolution of communication; translating evolved behavior into design principles; the development of an evolutionary robotics–based methodology for shedding light on neural processes; an incremental approach to complex tasks; and the notion of “mindless intelligence”—complex processes from immune systems to social networks—as a way forward for artificial intelligence.
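At the core of all these studies is a single evolutionary loop: mutate controller parameters, evaluate, select. The toy sketch below shows that loop in its simplest (1+λ) form; the fitness function is a hypothetical stand-in for evaluating a controller on a robot or in simulation.

```python
# Toy sketch of the evolutionary loop at the heart of ER: mutate a vector
# of controller parameters and keep the fittest. The fitness function is a
# stand-in; real ER evaluates controllers on a robot or in simulation.
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    # Hypothetical placeholder for "evaluate this controller on the task".
    return -np.sum((w - 0.5) ** 2)

w = rng.normal(size=10)                        # parent controller parameters
for generation in range(200):
    children = w + 0.1 * rng.normal(size=(20, 10))   # 20 mutated offspring
    scores = np.array([fitness(c) for c in children])
    if scores.max() > fitness(w):              # (1+λ) selection: keep the best
        w = children[scores.argmax()]
print(fitness(w))                              # approaches 0 as w converges to 0.5
```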

Contributors
Christos Ampatzis, Randall D. Beer, Josh Bongard, Joachim de Greeff, Ezequiel A. Di Paolo, Marco Dorigo, Dario Floreano, Inman Harvey, Sabine Hauert, Phil Husbands, Laurent Keller, Michail Maniadakis, Orazio Miglino, Sara Mitri, Renan Moioli, Stefano Nolfi, Michael O’Shea, Rainer W. Paine, Andy Philippides, Jordan B. Pollack, Michela Ponticorvo, Yoon-Sik Shim, Jun Tani, Vito Trianni, Elio Tuci, Patricia A. Vargas, Eric D. Vaughan

This volume demonstrates the power of the Markov random field (MRF) in vision, treating the MRF both as a tool for modeling image data and, utilizing recently developed algorithms, as a means of making inferences about images. These inferences concern underlying image and scene structure as well as solutions to such problems as image reconstruction, image segmentation, 3D vision, and object labeling. It offers key findings and state-of-the-art research on both algorithms and applications.

After an introduction to the fundamental concepts used in MRFs, the book reviews some of the main algorithms for performing inference with MRFs; presents successful applications of MRFs, including segmentation, super-resolution, and image restoration, along with a comparison of various optimization methods; discusses advanced algorithmic topics; addresses limitations of the strong locality assumptions in the MRFs discussed in earlier chapters; and showcases applications that use MRFs in more complex ways, as components in bigger systems or with multiterm energy functions. The book will be an essential guide to current research on these powerful mathematical tools.
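A minimal example makes the modeling-plus-inference pattern concrete: the sketch below defines an Ising-style pairwise MRF energy for binary image denoising and minimizes it with iterated conditional modes (ICM), one of the simplest inference schemes. The parameters are illustrative, and real applications would use the stronger optimizers (graph cuts, message passing) that the book compares.

```python
# Minimal sketch: a pairwise MRF for binary image denoising, with energy
#   E(x) = sum_i theta*(x_i != y_i) + sum_{ij} lam*(x_i != x_j)
# minimized by iterated conditional modes (ICM), i.e. greedy per-pixel updates.
import numpy as np

def icm_denoise(noisy, theta=1.0, lam=1.5, sweeps=5):
    x = noisy.copy()
    H, W = x.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                costs = []
                for label in (0, 1):
                    c = theta * (label != noisy[i, j])       # data term
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            c += lam * (label != x[ni, nj])  # smoothness term
                    costs.append(c)
                x[i, j] = int(np.argmin(costs))              # locally best label
    return x

rng = np.random.default_rng(0)
clean = np.zeros((32, 32), dtype=int)
clean[8:24, 8:24] = 1                                        # a white square
noisy = np.where(rng.random(clean.shape) < 0.1, 1 - clean, clean)
print((icm_denoise(noisy) != clean).mean())                  # well below the 10% noise rate
```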

Computational systems biology aims to develop algorithms that uncover the structure and parameterization of the underlying mechanistic model—in other words, to answer specific questions about the underlying mechanisms of a biological system—in a process that can be thought of as learning or inference. This volume offers state-of-the-art perspectives from computational biology, statistics, modeling, and machine learning on new methodologies for learning and inference in biological networks.

The chapters offer practical approaches to biological inference problems ranging from genome-wide inference of genetic regulation to pathway-specific studies. Both deterministic models (based on ordinary differential equations) and stochastic models (which anticipate the increasing availability of data from small populations of cells) are considered. Several chapters emphasize Bayesian inference, so the editors have included an introduction to the philosophy of the Bayesian approach and an overview of current work on Bayesian inference. Taken together, the methods discussed by the experts in Learning and Inference in Computational Systems Biology provide a foundation upon which the next decade of research in systems biology can be built.
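As a minimal illustration of the Bayesian approach in this setting, the sketch below infers the decay rate k of the one-parameter mechanistic model dx/dt = -kx from noisy observations, using a random-walk Metropolis sampler. The model, prior, and sampler settings are illustrative assumptions, far simpler than the pathway models treated in the book.

```python
# Minimal sketch of Bayesian parameter inference for a mechanistic model:
# infer the decay rate k of  dx/dt = -k x  from noisy observations of its
# solution x(t) = exp(-k t), via random-walk Metropolis sampling.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 20)
k_true, sigma = 0.8, 0.05
data = np.exp(-k_true * t) + sigma * rng.normal(size=t.size)

def log_posterior(k):
    if k <= 0:
        return -np.inf                     # prior: k > 0, flat otherwise
    resid = data - np.exp(-k * t)          # residuals against the ODE solution
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

k, samples = 1.0, []
for _ in range(5000):
    proposal = k + 0.1 * rng.normal()      # random-walk proposal
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(k):
        k = proposal                       # Metropolis accept step
    samples.append(k)
print(np.mean(samples[1000:]))             # posterior mean, close to 0.8
```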

Contributors
Florence d'Alché-Buc, John Angus, Matthew J. Beal, Nicholas Brunel, Ben Calderhead, Pei Gao, Mark Girolami, Andrew Golightly, Dirk Husmeier, Johannes Jaeger, Neil D. Lawrence, Juan Li, Kuang Lin, Pedro Mendes, Nicholas A. M. Monk, Eric Mjolsness, Manfred Opper, Claudia Rangel, Magnus Rattray, Andreas Ruttor, Guido Sanguinetti, Michalis Titsias, Vladislav Vyshemirsky, David L. Wild, Darren Wilkinson, Guy Yosiphon

Computational Molecular Biology series

Proceedings of the 2006 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation and machine learning. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists—interested in theoretical and applied aspects of modeling, simulating, and building neural-like or intelligent systems. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning, and applications. Only twenty-five percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains the papers presented at the December 2006 meeting, held in Vancouver.

Pervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data.

This volume offers researchers and engineers practical solutions for learning from large-scale datasets, with detailed descriptions of algorithms and experiments carried out on realistically large datasets. At the same time, it offers researchers information that can address the relative lack of theoretical grounding for many useful algorithms. After a detailed description of state-of-the-art support vector machine technology, an introduction to the essential concepts discussed in the volume, and a comparison of primal and dual optimization techniques, the book progresses from well-understood techniques to more novel and controversial approaches. Many contributors have made their code and data available online for further experimentation. Topics covered include fast implementations of known algorithms, approximations that are amenable to theoretical guarantees, and algorithms that perform well in practice but are difficult to analyze theoretically.

Contributors
Léon Bottou, Yoshua Bengio, Stéphane Canu, Eric Cosatto, Olivier Chapelle, Ronan Collobert, Dennis DeCoste, Ramani Duraiswami, Igor Durdanovic, Hans-Peter Graf, Arthur Gretton, Patrick Haffner, Stefanie Jegelka, Stephan Kanthak, S. Sathiya Keerthi, Yann LeCun, Chih-Jen Lin, Gaëlle Loosli, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, Gunnar Rätsch, Vikas Chandrakant Raykar, Konrad Rieck, Vikas Sindhwani, Fabian Sinz, Sören Sonnenburg, Jason Weston, Christopher K. I. Williams, Elad Yom-Tov

Léon Bottou is a Research Scientist at NEC Labs America. Olivier Chapelle is with Yahoo! Research. He is editor of Semi-Supervised Learning (MIT Press, 2006). Dennis DeCoste is with Microsoft Research. Jason Weston is a Research Scientist at NEC Labs America.
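As a concrete instance of the scaling argument above, the following sketch runs primal stochastic gradient descent on the hinge loss (a Pegasos-style linear SVM), where each update touches a single example, so the cost of a pass grows linearly with the dataset. The data and step-size schedule are illustrative, not taken from the volume.

```python
# Minimal sketch of primal stochastic gradient descent on the hinge loss
# (a Pegasos-style linear SVM): one cheap update per example, so total cost
# grows linearly with the number of examples seen. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 10000, 20, 1e-4
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))        # linearly separable labels

w = np.zeros(d)
for t, i in enumerate(rng.integers(0, n, size=5 * n), start=1):
    eta = 1.0 / (lam * t)                  # Pegasos step-size schedule
    margin = y[i] * (w @ X[i])
    grad = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
    w -= eta * grad                        # single-example update
print((np.sign(X @ w) == y).mean())        # training accuracy, near 1.0
```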

Machine learning develops intelligent computer systems that are able to generalize from previously seen examples. A new domain of machine learning, in which the prediction must satisfy the additional constraints found in structured data, poses one of machine learning’s greatest challenges: learning functional dependencies between arbitrary input and output domains. This volume presents and analyzes the state of the art in machine learning algorithms and theory in this novel field. The contributors discuss applications as diverse as machine translation, document markup, computational biology, and information extraction, among others, providing a timely overview of an exciting field.

Contributors
Yasemin Altun, Gökhan Bakır, Olivier Bousquet, Sumit Chopra, Corinna Cortes, Hal Daumé III, Ofer Dekel, Zoubin Ghahramani, Raia Hadsell, Thomas Hofmann, Fu Jie Huang, Yann LeCun, Tobias Mann, Daniel Marcu, David McAllester, Mehryar Mohri, William Stafford Noble, Fernando Pérez-Cruz, Massimiliano Pontil, Marc’Aurelio Ranzato, Juho Rousu, Craig Saunders, Bernhard Schölkopf, Matthias W. Seeger, Shai Shalev-Shwartz, John Shawe-Taylor, Yoram Singer, Alexander J. Smola, Sandor Szedmak, Ben Taskar, Ioannis Tsochantaridis, S. V. N. Vishwanathan, Jason Weston

Gökhan Bakır is Research Scientist at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. Thomas Hofmann is a Director of Engineering at Google’s Engineering Center in Zurich and Adjunct Associate Professor of Computer Science at Brown University. Bernhard Schölkopf is Director of the Max Planck Institute for Biological Cybernetics and Professor at the Technical University Berlin. Alexander J. Smola is Senior Principal Researcher and Machine Learning Program Leader at National ICT Australia/Australian National University, Canberra. Ben Taskar is Assistant Professor in the Computer and Information Science Department at the University of Pennsylvania. S. V. N. Vishwanathan is Senior Researcher in the Statistical Machine Learning Program, National ICT Australia, with an adjunct appointment at the Research School for Information Sciences and Engineering, Australian National University.
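The structured perceptron is perhaps the simplest algorithm that learns such input-output dependencies: decode with the current weights (using the same Viterbi recursion as in the chain example above), then update toward the gold output. The toy task and features below are illustrative assumptions, not drawn from the volume.

```python
# Minimal sketch of the structured perceptron for sequence labeling:
# decode with current weights, then add gold features and subtract
# predicted features. Task and feature design are purely illustrative.
import numpy as np

K = 3                                      # number of labels (and observations)
emit = np.zeros((K, K))                    # emit[obs, label] weights
trans = np.zeros((K, K))                   # trans[prev_label, cur_label] weights

def decode(obs):
    T = len(obs)
    score = emit[obs[0]].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + trans + emit[obs[t]][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy task: the correct label sequence simply copies the observations.
data = [([0, 1, 2, 1, 0], [0, 1, 2, 1, 0]), ([2, 2, 0, 1, 1], [2, 2, 0, 1, 1])]
for epoch in range(10):
    for obs, gold in data:
        pred = decode(obs)
        for t, (g, p) in enumerate(zip(gold, pred)):
            emit[obs[t], g] += 1; emit[obs[t], p] -= 1       # perceptron update
            if t > 0:
                trans[gold[t-1], g] += 1; trans[pred[t-1], p] -= 1
print(decode([1, 0, 2]))                   # expected: [1, 0, 2] after training
```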

Interest in developing an effective communication interface connecting the human brain and a computer has grown rapidly over the past decade. The brain-computer interface (BCI) would allow humans to operate computers, wheelchairs, prostheses, and other devices, using brain signals only. BCI research may someday provide a communication channel for patients with severe physical disabilities but intact cognitive functions, a working tool in computational neuroscience that contributes to a better understanding of the brain, and a novel independent interface for human-machine communication that offers new options for monitoring and control. This volume presents a timely overview of the latest BCI research, with contributions from many of the important research groups in the field.

The book covers a broad range of topics, describing work on both noninvasive (that is, without the implantation of electrodes) and invasive approaches. Other chapters discuss relevant techniques from machine learning and signal processing, existing software for BCI, and possible applications of BCI research in the real world.
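A typical noninvasive pipeline of the kind these chapters describe can be sketched in a few lines: band-pass filter EEG trials into the mu band, take log band-power per channel, and classify with a linear discriminant. Everything below, signals included, is synthetic and illustrative; a real BCI adds artifact handling and spatial filtering (e.g., common spatial patterns).

```python
# Minimal sketch of a noninvasive BCI pipeline: band-pass filter EEG
# trials into the mu band (8-12 Hz), compute log band-power per channel,
# and classify with linear discriminant analysis. Signals are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fs, n_trials, n_channels, n_samples = 250, 100, 8, 500
t = np.arange(n_samples) / fs
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)
# Class-1 trials carry extra 10 Hz power on the first four channels.
X[y == 1, :4] += 0.8 * np.sin(2 * np.pi * 10 * t)

b, a = butter(4, [8, 12], btype="band", fs=fs)
filtered = filtfilt(b, a, X, axis=-1)              # zero-phase band-pass
features = np.log(np.var(filtered, axis=-1))       # log band-power per channel

clf = LinearDiscriminantAnalysis().fit(features[:80], y[:80])
print(clf.score(features[80:], y[80:]))            # held-out accuracy
```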
