
Computational Neuroscience


Head direction cells—neurons that fire only when an animal orients its head in a certain direction—are found in several different brain areas, with different neurons selective for different head orientations; they are influenced by landmarks as well as motor and vestibular information concerning how the head moves through space. These properties suggest that head direction cells play an important role in determining orientation in space and in navigation. Moreover, the prominence, strength, and clarity of head direction signals indicate their importance over the course of evolution and suggest that they can serve as a vital key for understanding brain function. This book presents the latest findings on head direction cells in a comprehensive treatment that will be a valuable reference for students and researchers in the cognitive sciences, neuroscience, computational science, and robotics.

The book begins by presenting head direction cell properties and an anatomical framework of the head direction system. It then looks at the types of sensory and motor information that control head direction cell firing, covering topics including the integration of diverse signals; the relationship between head direction cell activity and an animal's spatial behavior; and spatial and directional orientation in nonhuman primates and humans. The book concludes with a tutorial demonstrating the implementation of the continuous attractor network, a computational model of head direction cells, and an application of this approach to a navigational system for mobile robots.
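For readers who want a feel for that class of model before reaching the tutorial, a minimal ring attractor sketch is given below. It is written in Python with NumPy, all parameter values are hypothetical, and it is a generic illustration of a continuous attractor rather than code from the book.

    # Minimal ring-attractor sketch of a head direction network (illustrative only).
    # N rate neurons with preferred directions on a ring; local excitation plus
    # global inhibition lets a single activity bump form and persist.
    import numpy as np

    N = 100                                                 # number of head direction cells
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)    # preferred directions

    # Recurrent weights: cosine-tuned excitation minus uniform inhibition
    # (coupling strengths are hypothetical values chosen for illustration).
    J_E, J_I = 1.5, 1.0
    W = (J_E * np.cos(theta[:, None] - theta[None, :]) - J_I) / N

    r = np.random.rand(N) * 0.1                             # initial firing rates
    tau, dt = 0.02, 0.001                                   # time constant and step (s)

    # A weak external cue at 90 degrees (e.g., a visual landmark signal).
    cue = 0.2 * np.exp(np.cos(theta - np.pi / 2) - 1)

    for _ in range(2000):                                   # 2 s of simulated dynamics
        drive = W @ r + cue
        r += dt / tau * (-r + np.maximum(drive, 0))         # rectified-linear rate dynamics

    print("bump centred near (deg):", np.degrees(theta[np.argmax(r)]))

Local cosine-shaped excitation plus uniform inhibition allows a single bump of activity to form and track the cued direction, which is the basic mechanism that continuous attractor models of head direction cells rely on.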

Proceedings of the 2004 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications. Only twenty-five percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains the papers presented at the December 2004 conference, held in Vancouver.

A Foundation for Motor Learning

Neuroscience involves the study of the nervous system, and its topics range from genetics to inferential reasoning. At its heart, however, lies a search for understanding how the environment affects the nervous system and how the nervous system, in turn, empowers us to interact with and alter our environment. This empowerment requires motor learning. The Computational Neurobiology of Reaching and Pointing addresses the neural mechanisms of one important form of motor learning. The authors integrate material from the computational, behavioral, and neural sciences of motor control that is not available in any other single source. The result is a unified, comprehensive model of reaching and pointing. The book is intended to be used as a text by graduate students in both neuroscience and bioengineering and as a reference source by experts in neuroscience, robotics, and other disciplines.

The book begins with an overview of the evolution, anatomy, and physiology of the motor system, including the mechanisms for generating force and maintaining limb stability. The sections that follow, "Computing Locations and Displacements," "Skills, Adaptations, and Trajectories," and "Predictions, Decisions, and Flexibility," present a theory of sensorially guided reaching and pointing that evolves organically based on computational principles rather than a traditional structure-by-structure approach. The book also includes five appendixes that provide brief refreshers on fundamentals of biology, mathematics, physics, and neurophysiology, as well as a glossary of relevant terms. The authors have also made supplemental materials available on the Internet. These web documents provide source code for simulations, step-by-step derivations of certain mathematical formulations, and expanded explanations of some concepts.

Computation, Representation, and Dynamics in Neurobiological Systems

For years, researchers have used the theoretical tools of engineering to understand neural systems, but much of this work has been conducted in relative isolation. In Neural Engineering, Chris Eliasmith and Charles Anderson provide a synthesis of the disparate approaches current in computational neuroscience, incorporating ideas from neural coding, neural computation, physiology, communications theory, control theory, dynamics, and probability theory. This synthesis, they argue, enables novel theoretical and practical insights into the functioning of neural systems. Such insights are pertinent to experimental and computational neuroscientists and to engineers, physicists, and computer scientists interested in how their quantitative tools relate to the brain.

The authors present three principles of neural engineering based on the representation of signals by neural ensembles, transformations of these representations through neuronal coupling weights, and the integration of control theory and neural dynamics. Through detailed examples and in-depth discussion, they make the case that these guiding principles constitute a useful theory for generating large-scale models of neurobiological function. A software package written in MATLAB for use with their methodology, as well as examples, course notes, exercises, documentation, and other material, is available on the Web.
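As a rough illustration of the first principle (representation), the sketch below encodes a scalar into the firing rates of an ensemble and recovers it with least-squares decoders. It uses Python with NumPy and simplified rectified-linear rate neurons, so the neuron model, parameter ranges, and names are illustrative assumptions rather than the authors' MATLAB implementation.

    # Sketch of ensemble representation: encode a scalar x into the rates of a
    # population of rate neurons, then recover an estimate with linear decoders.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                        # neurons in the ensemble

    encoders = rng.choice([-1.0, 1.0], size=n)    # preferred directions (+1 or -1)
    gains = rng.uniform(0.5, 2.0, size=n)         # hypothetical gain values
    biases = rng.uniform(-1.0, 1.0, size=n)       # hypothetical bias currents

    def rates(x):
        """Rectified-linear tuning curves a_i(x) = max(0, gain_i * e_i * x + bias_i)."""
        return np.maximum(0.0, gains * encoders * x + biases)

    # Solve for linear decoders d over sample points spanning the represented range.
    xs = np.linspace(-1, 1, 200)
    A = np.array([rates(x) for x in xs])          # activity matrix (samples x neurons)
    d, *_ = np.linalg.lstsq(A, xs, rcond=None)    # least-squares decoders

    x = 0.3
    print("true x:", x, " decoded estimate:", rates(x) @ d)

In the same spirit, a transformation between ensembles can be sketched by decoding a function of x rather than x itself and feeding the result through connection weights, which is the second principle the authors develop in full.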

Proceedings of the 2003 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications. Only thirty percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains all the papers presented at the 2003 conference.

The Design of Brain-Like Machines
Edited by Igor Aleksander

McClelland and Rumelhart's Parallel Distributed Processing was the first book to present a definitive account of the newly revived connectionist/neural net paradigm for artificial intelligence and cognitive science. While Neural Computing Architectures addresses the same issues, there is little overlap in the research it reports. These 18 contributions provide a timely and informative overview and synopsis of both pioneering and recent European connectionist research. Several chapters focus on cognitive modeling; however, most of the work covered revolves around abstract neural network theory or engineering applications, bringing important complementary perspectives to currently published work in PDP.

The book's four parts take up neural computing from the classical perspective, including both foundational and current work; from the mathematical perspective (of logic, automata theory, and probability theory), presenting less well-known work in which the neuron is modeled as a logic truth function that can be implemented directly as a silicon read-only memory; through new material, both in the form of analytical tools and models and as suggestions for implementation in optical form; and from the PDP perspective, in a single extended chapter covering PDP theory, application, and speculation in US research. Each part is introduced by the editor.
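As a schematic illustration of the logic-neuron idea mentioned above (a neuron treated as a Boolean truth function stored in a read-only memory), the following Python sketch looks a binary input pattern up in a stored truth table. The sizes and the random table contents are hypothetical, and it is not code from the book.

    # Sketch of a "truth-function" neuron: the neuron's response to every possible
    # binary input pattern is stored in a lookup table, as it would be in a ROM.
    import numpy as np

    n_inputs = 4
    rng = np.random.default_rng(1)

    # One stored output bit per possible input pattern (2**n_inputs entries).
    # Filled at random here; training would instead write the desired outputs.
    truth_table = rng.integers(0, 2, size=2 ** n_inputs)

    def ram_neuron(bits):
        """Interpret the binary input as an address and return the stored output bit."""
        address = int("".join(str(b) for b in bits), 2)
        return truth_table[address]

    print(ram_neuron([1, 0, 1, 1]))   # output bit stored at address 1011 (= 11)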

The Collected Papers of Wilfrid Rall with Commentaries

Wilfrid Rall was a pioneer in establishing the integrative functions of neuronal dendrites that have provided a foundation for neurobiology in general and computational neuroscience in particular. This collection of fifteen previously published papers, some of them not widely available, has been carefully chosen and annotated by Rall's colleagues and other leading neuroscientists. It brings together Rall's work over more than forty years, including his first papers extending cable theory to complex dendritic trees, his ground-breaking paper introducing compartmental analysis to computational neuroscience, and his studies of synaptic integration in motoneurons, dendrodendritic interactions, plasticity of dendritic spines, and active dendritic properties.

Today it is well known that the brain's synaptic information is processed mostly in the dendrites where many of the plastic changes underlying learning and memory take place. It is particularly timely to look again at the work of a major creator of the field, to appreciate where things started and where they have led, and to correct any misinterpretations of Rall's work. The editors' introduction highlights the major insights that were gained from Rall's studies as well as from those of his collaborators and followers. It asks the questions that Rall proposed during his scientific career and briefly summarizes the answers.

The papers include commentaries by Milton Brightman, Robert E. Burke, William R. Holmes, Donald R. Humphrey, Julian J. B. Jack, John Miller, Stephen Redman, John Rinzel, Idan Segev, Gordon M. Shepherd, and Charles Wilson.

Motivated by the remarkable fluidity of memory (the way in which items are pulled spontaneously and effortlessly from our memory by vague similarities to what is currently occupying our attention), Sparse Distributed Memory presents a mathematically elegant theory of human long-term memory.

The book, which is self-contained, begins with background material from mathematics, computers, and neurophysiology; this is followed by a step-by-step development of the memory model. The concluding chapter describes an autonomous system that builds from experience an internal model of the world and bases its operation on that internal model. Close attention is paid to the engineering of the memory, including comparisons to ordinary computer memories.

Sparse Distributed Memory provides an overall perspective on neural systems. The model it describes can aid in understanding human memory and learning, and a system based on it sheds light on outstanding problems in philosophy and artificial intelligence. Applications of the memory are expected to be found in the creation of adaptive systems for signal processing, speech, vision, motor control, and (in general) robots. Perhaps the most exciting aspect of the memory, in its implications for research in neural networks, is that its realization with neuronlike components resembles the cortex of the cerebellum.
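The following Python sketch gives a toy version of such a memory in the spirit of the book's model: random hard addresses, a Hamming-distance activation radius, counters that accumulate written words, and a thresholded read. All sizes and the radius are hypothetical values chosen only to make the example run, not parameters taken from the book.

    # Toy sparse distributed memory: data are written to, and read from, all hard
    # locations whose addresses lie within a Hamming radius of the cue address.
    import numpy as np

    rng = np.random.default_rng(0)
    n_bits = 256          # address/word length
    n_locations = 1000    # number of hard (physical) storage locations
    radius = 110          # Hamming-distance activation radius (illustrative)

    hard_addresses = rng.integers(0, 2, size=(n_locations, n_bits))
    counters = np.zeros((n_locations, n_bits), dtype=int)

    def active(address):
        """Boolean mask of hard locations within the activation radius of an address."""
        distances = np.sum(hard_addresses != address, axis=1)
        return distances <= radius

    def write(address, word):
        """Add the word, in bipolar (+1/-1) form, to the counters of active locations."""
        counters[active(address)] += 2 * word - 1

    def read(address):
        """Sum counters over active locations and threshold back to bits."""
        sums = counters[active(address)].sum(axis=0)
        return (sums > 0).astype(int)

    pattern = rng.integers(0, 2, size=n_bits)
    write(pattern, pattern)                      # autoassociative storage
    noisy = pattern.copy(); noisy[:20] ^= 1      # flip 20 bits of the cue
    print("bits recovered:", np.sum(read(noisy) == pattern), "of", n_bits)

Because the write is spread over many locations and the read averages over them, a cue that only roughly resembles a stored item can still pull the full item back out, which is the fluidity the theory is meant to capture.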

Pentti Kanerva is a scientist at the Research Institute for Advanced Computer Science at the NASA Ames Research Center and a visiting scholar at the Stanford Center for the Study of Language and Information. A Bradford Book.

Proceedings of the 2001 Conference

The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. The conference is interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, vision, speech and signal processing, reinforcement learning and control, implementations, and diverse applications. Only about 30 percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. These proceedings contain all of the papers that were presented at the 2001 conference.

Circuits and Principles

Neuromorphic engineers work to improve the performance of artificial systems through the development of chips and systems that process information collectively using primarily analog circuits. This book presents the central concepts required for the creative and successful design of analog VLSI circuits. The discussion is weighted toward novel circuits that emulate natural signal processing. Unlike most circuits in commercial or industrial applications, these circuits operate mainly in the subthreshold or weak inversion region. Moreover, their functionality is not limited to linear operations, but also encompasses many interesting nonlinear operations similar to those occurring in natural systems. Topics include device physics, linear and nonlinear circuit forms, translinear circuits, photodetectors, floating-gate devices, noise analysis, and process technology.
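For orientation, the weak-inversion (subthreshold) regime referred to above is the one in which an MOS transistor's drain current depends exponentially on its gate voltage. In saturation this is commonly written, using the thermal voltage U_T = kT/q, a gate coupling coefficient kappa, and a pre-exponential current I_0 (standard textbook notation, not necessarily the book's), as

    I_{DS} \approx I_0 \, e^{\kappa V_{GS} / U_T}

It is this exponential characteristic that makes nonlinear, neuron-like operations natural to implement at very small currents.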
