
Computational Neuroscience


Over the last decade, the study of complex networks has expanded across diverse scientific fields. Increasingly, science is concerned with the structure, behavior, and evolution of complex systems ranging from cells to ecosystems. Modern network approaches are beginning to reveal fundamental principles of brain architecture and function, and in Networks of the Brain, Olaf Sporns describes how the integrative nature of brain function can be illuminated from a complex network perspective. Highlighting the many emerging points of contact between neuroscience and network science, the book serves to introduce network theory to neuroscientists and neuroscience to those working on theoretical network models.

Brain networks span the microscale of individual cells and synapses and the macroscale of cognitive systems and embodied cognition. Sporns emphasizes how networks connect levels of organization in the brain and how they link structure to function. In order to keep the book accessible and focused on the relevance to neuroscience of network approaches, he offers an informal and nonmathematical treatment of the subject. After describing the basic concepts of network theory and the fundamentals of brain connectivity, Sporns discusses how network approaches can reveal principles of brain architecture. He describes new links between network anatomy and function and investigates how networks shape complex brain dynamics and enable adaptive neural computation. The book documents the rapid pace of discovery and innovation while tracing the historical roots of the field.

The study of brain connectivity has already opened new avenues of study in neuroscience. Networks of the Brain offers a synthesis of the sciences of complex networks and the brain that will be an essential foundation for future research.
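To give a concrete flavor of the graph measures network neuroscience relies on, here is a minimal sketch, not taken from the book, that computes two standard quantities, average clustering and characteristic path length, on a synthetic small-world graph standing in for an empirical connectome; the networkx library and all parameter values are illustrative assumptions.

```python
# Illustrative only: two standard graph measures on a toy network.
# The graph is a synthetic Watts-Strogatz small-world graph standing in
# for an empirical structural connectivity matrix; it is not data from the book.
import networkx as nx

# 90 nodes, each wired to 4 neighbors, with 10% of edges randomly rewired.
g = nx.connected_watts_strogatz_graph(n=90, k=4, p=0.1, seed=0)

clustering = nx.average_clustering(g)             # local "cliquishness"
path_length = nx.average_shortest_path_length(g)  # global integration

print(f"average clustering coefficient: {clustering:.3f}")
print(f"characteristic path length:     {path_length:.3f}")
```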

Most neurons in the brain are covered by dendritic spines, small protrusions that arise from dendrites, covering them like leaves on a tree. But a hundred and twenty years after spines were first described by Ramón y Cajal, their function is still unclear. Dozens of different functions have been proposed, from Cajal’s idea that they enhance neuronal interconnectivity to hypotheses that spines serve as plasticity machines, neuroprotective devices, or even digital logic elements.

In Dendritic Spines, leading neurobiologist Rafael Yuste attempts to solve the “spine problem,” searching for the fundamental function of spines. He does this by examining many aspects of spine biology that have fascinated him over the years, including their structure, development, motility, plasticity, biophysical properties, and calcium compartmentalization. Yuste argues that we may never understand how the brain works without understanding the specific function of spines. In this book, he offers a synthesis of the information that has been gathered on spines (much of which comes from his own studies of the mammalian cortex), linking their function with the computational logic of the neuronal circuits that use them. He argues that, once viewed from the circuit perspective, all the pieces of the spine puzzle fit together into a single, overarching function. Integrating current knowledge of spines with key features of the circuits in which they operate, Yuste concludes with a speculative chapter on the computational function of spines, searching for the ultimate logic of their existence in the brain and offering a proposal that is sure to stimulate discussion and drive future research.

The field of neuroimaging has reached a watershed. Brain imaging research has been the source of many advances in cognitive neuroscience and cognitive science over the last decade, but recent critiques and emerging trends are raising foundational issues of methodology, measurement, and theory. Indeed, concerns over interpretation of brain maps have created serious controversies in social neuroscience, and, more important, point to a larger set of issues that lie at the heart of the entire brain mapping enterprise. In this volume, leading scholars—neuroimagers and philosophers of mind—reexamine these central issues and explore current controversies that have arisen in cognitive science, cognitive neuroscience, computer science, and signal processing. The contributors address both statistical and dynamical analysis and modeling of neuroimaging data and interpretation, discussing localization, modularity, and neuroimagers' tacit assumptions about how these two phenomena are related; controversies over correlation of fMRI data and social attributions (recently characterized for good or ill as "voodoo correlations"); and the standard inferential design approach in neuroimaging. Finally, the contributors take a more philosophical perspective, considering the nature of measurement in brain imaging, and offer a framework for novel neuroimaging data structures (effective and functional connectivity—"graphs").

Contributors: William Bechtel, Bharat Biswal, Matthew Brett, Martin Bunzl, Max Coltheart, Karl J. Friston, Joy J. Geng, Clark Glymour, Kalanit Grill-Spector, Stephen José Hanson, Trevor Harley, Gilbert Harman, James V. Haxby, Rik N. Henson, Nancy Kanwisher, Colin Klein, Richard Loosemore, Sébastien Meriaux, Chris Mole, Jeanette A. Mumford, Russell A. Poldrack, Jean-Baptiste Poline, Richard C. Richardson, Alexis Roche, Adina L. Roskies, Pia Rotshtein, Rebecca Saxe, Philipp Sterzer, Bertrand Thirion, Edward Vul
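As a rough illustration of the "functional connectivity" graphs mentioned above, the following sketch, not drawn from the volume, thresholds pairwise correlations between synthetic region time series to define edges; the data, threshold, and dimensions are arbitrary assumptions.

```python
# Illustrative sketch (not from the book): build a simple "functional
# connectivity" graph by thresholding pairwise correlations between
# region time series. The data here are random and purely synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 10, 200
ts = rng.standard_normal((n_regions, n_timepoints))  # stand-in for fMRI signals

corr = np.corrcoef(ts)                 # region-by-region correlation matrix
threshold = 0.3                        # arbitrary cutoff for declaring an "edge"
adjacency = (np.abs(corr) > threshold) & ~np.eye(n_regions, dtype=bool)

print("edges in the thresholded functional graph:", int(adjacency.sum() // 2))
```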

The Geometry of Excitability and Bursting

In order to model neuronal behavior or to interpret the results of modeling studies, neuroscientists must call upon methods of nonlinear dynamics. This book offers an introduction to nonlinear dynamical systems theory for researchers and graduate students in neuroscience. It also provides an overview of neuroscience for mathematicians who want to learn the basic facts of electrophysiology.

Dynamical Systems in Neuroscience presents a systematic study of the relationship of electrophysiology, nonlinear dynamics, and computational properties of neurons. It emphasizes that information processing in the brain depends not only on the electrophysiological properties of neurons but also on their dynamical properties. The book introduces dynamical systems, starting with one- and two-dimensional Hodgkin-Huxley-type models and continuing to a description of bursting systems. Each chapter proceeds from the simple to the complex, and provides sample problems at the end. The book explains all necessary mathematical concepts using geometrical intuition; it includes many figures and few equations, making it especially suitable for non-mathematicians. Each concept is presented in terms of both neuroscience and mathematics, providing a link between the two disciplines.
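For readers who want a concrete taste of the kind of low-dimensional spiking model the book treats geometrically, here is a rough Euler-integration sketch of Izhikevich's two-variable "simple model"; the regular-spiking parameter values and the plain-Python implementation are illustrative, not code from the book.

```python
# Rough sketch, not code from the book: Euler integration of Izhikevich's
# two-dimensional "simple model" of a spiking neuron. The parameters below
# are the commonly quoted regular-spiking values; treat them as illustrative.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # recovery and reset parameters
dt, T, I = 0.25, 1000.0, 10.0        # time step (ms), duration (ms), input current

v, u = -65.0, b * -65.0              # membrane potential and recovery variable
spike_times = []
for step in range(int(T / dt)):
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike peak reached: record and reset
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in {T:.0f} ms of simulated time")
```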

Nonlinear dynamical systems theory is at the core of computational neuroscience research, but it is not a standard part of the graduate neuroscience curriculum, nor is it taught by math or physics departments in a way that is suitable for students of biology. This book offers neuroscience students and researchers a comprehensive account of concepts and methods increasingly used in computational neuroscience.

An additional chapter on synchronization, with more advanced material, can be found at the author's website, www.izhikevich.com.

This book offers an introduction to current methods in computational modeling in neuroscience. The book describes realistic modeling methods at levels of complexity ranging from molecular interactions to large neural networks. A “how to” book rather than an analytical account, it focuses on the presentation of methodological approaches, including the selection of the appropriate method and its potential pitfalls. It is intended for experimental neuroscientists and graduate students who have little formal training in mathematical methods, but it will also be useful for scientists with theoretical backgrounds who want to start using data-driven modeling methods. The mathematics needed are kept to an introductory level; the first chapter explains the mathematical methods the reader needs to master to understand the rest of the book. The chapters are written by scientists who have successfully integrated data-driven modeling with experimental work, so all of the material is accessible to experimentalists. The chapters offer comprehensive coverage with little overlap and extensive cross-references, moving from basic building blocks to more complex applications.

Contributors: Pablo Achard, Haroon Anwar, Upinder S. Bhalla, Michiel Berends, Nicolas Brunel, Ronald L. Calabrese, Brenda Claiborne, Hugo Cornelis, Erik De Schutter, Alain Destexhe, Bard Ermentrout, Kristen Harris, Sean Hill, John R. Huguenard, William R. Holmes, Gwen Jacobs, Gwendal LeMasson, Henry Markram, Reinoud Maex, Astrid A. Prinz, Imad Riachi, John Rinzel, Arnd Roth, Felix Schürmann, Werner Van Geit, Mark C. W. van Rossum, Stefan Wils

Advances in Neuroelectric and Neuromagnetic Methods
Edited by Todd C. Handy

Cognitive electrophysiology concerns the study of the brain’s electrical and magnetic responses to both external and internal events. These can be measured using electroencephalograms (EEGs) or magnetoencephalograms (MEGs). With the advent of functional magnetic resonance imaging (fMRI), another method of tracking brain signals, the tools and techniques of event-related potential (ERP), EEG, and MEG data acquisition and analysis have been developing at a similarly rapid pace, and this book offers an overview of key recent advances in cognitive electrophysiology. The chapters highlight the increasing overlap in EEG and MEG analytic techniques, describing several methods applicable to both; they discuss recent developments, including reverse correlation methods in visual evoked potentials and a new approach to topographic mapping in high-density electrode montages; and they relate the latest thinking on design aspects of EEG/MEG studies, discussing how to optimize the signal-to-noise ratio as well as statistical developments for maximizing power and accuracy in data analysis using repeated-measures ANOVAs.

Contributors: Denis Brunet, Douglas Cheyne, Marzia De Lucia, Sam M. Doesburg, John J. Foxe, Karl J. Friston, Marta I. Garrido, Sara L. Gonzalez Andino, Rolando Grave de Peralta Menendez, Jessica J. Green, Todd C. Handy, Anthony T. Herdman, Stefan J. Kiebel, Edmund C. Lalor, Theodor Landis, Teresa Y. L. Liu-Ambrose, John J. McDonald, Christoph M. Michel, Marla J. S. Mickleborough, Micah M. Murray, Lindsay S. Nagamatsu, Barak A. Pearlmutter, Durk Talsma, Gregor Thut, Anne-Laura van Harmelen, Lawrence M. Ward
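As a toy illustration of why signal-to-noise ratio matters in ERP work, the following sketch, entirely synthetic and not from the book, averages noisy single trials to recover an event-related component, showing the roughly square-root-of-N gain that trial averaging provides.

```python
# Illustrative sketch with synthetic data (not from the book): averaging over
# trials to recover an event-related potential, demonstrating the roughly
# sqrt(N) improvement in signal-to-noise ratio that trial averaging provides.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 0.6, 0.002)                           # 600 ms at 500 Hz
erp = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05**2))   # toy 5-microvolt component

n_trials = 100
trials = erp + 20e-6 * rng.standard_normal((n_trials, t.size))  # noisy single trials
average = trials.mean(axis=0)

snr_single = erp.max() / trials[0].std()        # signal peak over single-trial noise
snr_avg = erp.max() / (average - erp).std()     # signal peak over residual noise
print(f"single-trial SNR ~ {snr_single:.2f}, {n_trials}-trial average SNR ~ {snr_avg:.2f}")
```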

Interest in developing an effective communication interface connecting the human brain and a computer has grown rapidly over the past decade. The brain-computer interface (BCI) would allow humans to operate computers, wheelchairs, prostheses, and other devices, using brain signals only. BCI research may someday provide a communication channel for patients with severe physical disabilities but intact cognitive functions, a working tool in computational neuroscience that contributes to a better understanding of the brain, and a novel independent interface for human-machine communication that offers new options for monitoring and control. This volume presents a timely overview of the latest BCI research, with contributions from many of the important research groups in the field.

The book covers a broad range of topics, describing work on both noninvasive (that is, without the implantation of electrodes) and invasive approaches. Other chapters discuss relevant techniques from machine learning and signal processing, existing software for BCI, and possible applications of BCI research in the real world.

Probabilistic Approaches to Neural Coding

A Bayesian approach can contribute to an understanding of the brain on multiple levels: by giving normative predictions about how an ideal sensory system should combine prior knowledge and observation, by providing mechanistic interpretations of the dynamic functioning of brain circuits, and by suggesting optimal ways of deciphering experimental data. Bayesian Brain brings together contributions from both experimental and theoretical neuroscientists that examine the brain mechanisms of perception, decision making, and motor control according to the concepts of Bayesian estimation.

After an overview of the mathematical concepts, including Bayes' theorem, that are basic to understanding the approaches discussed, contributors discuss how Bayesian concepts can be used to interpret neurobiological data such as neural spike trains and functional brain imaging. Next, contributors examine the modeling of sensory processing, including the neural coding of information about the outside world. Finally, contributors explore the dynamic processes that underlie appropriate behavior, including the mathematics of the speed and accuracy of perceptual decisions and neural models of belief propagation.
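As a minimal worked example of the Bayes' theorem machinery underlying these chapters, assuming a Gaussian prior and likelihood with arbitrary numbers not taken from the book, the posterior over a stimulus value is a precision-weighted compromise between prior belief and a noisy observation.

```python
# Minimal sketch, not from the book: combining a Gaussian prior with a
# Gaussian likelihood via Bayes' theorem. For Gaussians the posterior mean
# is a precision-weighted average of the prior mean and the observation.
prior_mean, prior_var = 0.0, 4.0       # prior belief about a stimulus value
obs, obs_var = 2.0, 1.0                # noisy sensory measurement

prior_prec, obs_prec = 1.0 / prior_var, 1.0 / obs_var
post_prec = prior_prec + obs_prec
post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec

print(f"posterior mean = {post_mean:.2f}, posterior variance = {1.0 / post_prec:.2f}")
# Prints: posterior mean = 1.60, posterior variance = 0.80
```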

From Systems to Brains

Signal processing and neural computation have separately and significantly influenced many disciplines, but the cross-fertilization of the two fields has begun only recently. Research now shows that each has much to teach the other, as we see highly sophisticated kinds of signal processing and elaborate hierarchical levels of neural computation performed side by side in the brain. In New Directions in Statistical Signal Processing, leading researchers from both signal processing and neural computation present new work that aims to promote interaction between the two disciplines.

The book's 14 chapters, almost evenly divided between signal processing and neural computation, begin with the brain and move on to communication, signal processing, and learning systems. They examine such topics as how computational models help us understand the brain's information processing, how an intelligent machine could solve the "cocktail party problem" with "active audition" in a noisy environment, graphical and network structure modeling approaches, uncertainty in network communications, the geometric approach to blind signal processing, game-theoretic learning algorithms, and observable operator models (OOMs) as an alternative to hidden Markov models (HMMs).
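To make the "cocktail party problem" concrete, here is a toy blind source separation sketch using FastICA from scikit-learn; the synthetic sources, the mixing matrix, and the choice of library are illustrative assumptions, not material from the book.

```python
# Illustrative sketch (not from the book): a toy "cocktail party" separation
# using FastICA from scikit-learn. Two synthetic sources are linearly mixed
# and then recovered blindly, i.e. without knowledge of the mixing matrix.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 2000)
s1 = np.sin(2.0 * t)                       # source 1: sinusoid
s2 = np.sign(np.sin(3.0 * t))              # source 2: square wave
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((t.size, 2))

A = np.array([[1.0, 0.5], [0.5, 1.0]])     # mixing matrix, unknown to the algorithm
X = S @ A.T                                # observed mixtures ("microphones")

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)               # recovered sources, up to scale and order
print("recovered source matrix shape:", S_est.shape)
```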

Proceedings of the 2005 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications. Only twenty-five percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains the papers presented at the December 2005 meeting, held in Vancouver.
