Computational Neuroscience

Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems

Theoretical neuroscience provides a quantitative basis for describing what nervous systems do, determining how they function, and uncovering the general principles by which they operate. This text introduces the basic mathematical and computational methods of theoretical neuroscience and presents applications in a variety of areas including vision, sensory-motor integration, development, learning, and memory.

Head Direction Cells and the Neural Mechanisms of Spatial Orientation

Head direction cells—neurons that fire only when an animal orients its head in a certain direction—are found in several different brain areas, with different neurons selective for different head orientations; they are influenced by landmarks as well as motor and vestibular information concerning how the head moves through space. These properties suggest that head direction cells play an important role in determining orientation in space and in navigation.
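
A standard way to make this tuning concrete is a von Mises (circular Gaussian) firing-rate model, in which each cell's rate peaks at its preferred direction and falls off smoothly around the circle. The minimal sketch below is a generic textbook model, not one taken from this volume; the function name, peak rate, and concentration parameter are illustrative assumptions.

    import numpy as np

    # Minimal sketch: von Mises tuning curve for a head direction cell.
    # r_max (peak rate) and kappa (tuning width) are illustrative values.
    def hd_rate(theta, preferred, r_max=40.0, kappa=4.0):
        # Mean firing rate (Hz) at heading `theta` (radians); peaks at `preferred`.
        return r_max * np.exp(kappa * (np.cos(theta - preferred) - 1.0))

    # A small population whose preferred directions tile the circle.
    preferred_dirs = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
    heading = np.deg2rad(90)
    for pd, r in zip(np.rad2deg(preferred_dirs), hd_rate(heading, preferred_dirs)):
        print(f"preferred {pd:5.1f} deg -> {r:5.2f} Hz")

At any moment the population forms a hill of activity whose peak tracks the current head direction; landmark, motor, and vestibular signals are thought to update that hill as the head turns.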

Proceedings of the 2004 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications.

The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning

Neuroscience involves the study of the nervous system, and its topics range from genetics to inferential reasoning. At its heart, however, lies a search for understanding how the environment affects the nervous system and how the nervous system, in turn, empowers us to interact with and alter our environment. This empowerment requires motor learning. The Computational Neurobiology of Reaching and Pointing addresses the neural mechanisms of one important form of motor learning.

Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems

For years, researchers have used the theoretical tools of engineering to understand neural systems, but much of this work has been conducted in relative isolation. In Neural Engineering, Chris Eliasmith and Charles Anderson provide a synthesis of the disparate approaches current in computational neuroscience, incorporating ideas from neural coding, neural computation, physiology, communications theory, control theory, dynamics, and probability theory. This synthesis, they argue, enables novel theoretical and practical insights into the functioning of neural systems.

Proceedings of the 2003 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications.

Neural Computing Architectures: The Design of Brain-Like Machines
Edited by Igor Aleksander

McClelland and Rumelhart's Parallel Distributed Processing was the first book to present a definitive account of the newly revived connectionist/neural net paradigm for artificial intelligence and cognitive science. While Neural Computing Architectures addresses the same issues, there is little overlap in the research it reports. These 18 contributions provide a timely and informative overview and synopsis of both pioneering and recent European connectionist research.

The Theoretical Foundation of Dendritic Function: The Collected Papers of Wilfrid Rall with Commentaries

Wilfrid Rall was a pioneer in establishing the integrative functions of neuronal dendrites that have provided a foundation for neurobiology in general and computational neuroscience in particular. The fifteen previously published papers collected here, some of them not widely available, have been carefully chosen and annotated by Rall's colleagues and other leading neuroscientists.
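
Rall's signature tool was passive cable theory for dendrites, and a single formula conveys the flavor of those "integrative functions": in a finite cable with a sealed far end, a steady voltage applied at one end attenuates as V(x) = V0 cosh((L - x)/lambda) / cosh(L/lambda). The sketch below simply evaluates that classic result; the length constant and cable length are illustrative values, not taken from the papers.

    import numpy as np

    # Minimal sketch: steady-state voltage along a finite passive cable
    # with a sealed (zero axial current) far end, per standard cable theory.
    lam = 1.0   # length constant lambda, mm (illustrative)
    L = 2.0     # cable length, mm (illustrative)
    V0 = 10.0   # voltage clamped at x = 0, mV

    x = np.linspace(0.0, L, 5)
    V = V0 * np.cosh((L - x) / lam) / np.cosh(L / lam)

    for xi, vi in zip(x, V):
        print(f"x = {xi:.2f} mm -> V = {vi:5.2f} mV")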

Sparse Distributed Memory

Motivated by the remarkable fluidity of memory (the way in which items are pulled spontaneously and effortlessly from our memory by vague similarities to what is currently occupying our attention), Sparse Distributed Memory presents a mathematically elegant theory of human long-term memory.
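
Kanerva's model is concrete enough to sketch in a few lines: addresses and data are long binary vectors; a fixed set of random "hard locations" each keep one counter per bit; a write updates every location within a Hamming radius of the write address; a read sums and thresholds the counters of the locations near the cue. The sketch below is a minimal autoassociative version under those assumptions; the dimensions, radius, and function names are illustrative, not from the book.

    import numpy as np

    # Minimal sketch of a Kanerva-style sparse distributed memory.
    rng = np.random.default_rng(0)
    n = 256       # bits per address/word (illustrative)
    m = 1000      # number of hard locations (illustrative)
    radius = 112  # Hamming activation radius (illustrative)

    hard_addresses = rng.integers(0, 2, size=(m, n))  # fixed random locations
    counters = np.zeros((m, n), dtype=int)            # one counter per bit per location

    def active(address):
        # Boolean mask of hard locations within `radius` of `address`.
        return np.sum(hard_addresses != address, axis=1) <= radius

    def write(address, word):
        # Increment counters for 1-bits, decrement for 0-bits, at active locations.
        counters[active(address)] += np.where(word == 1, 1, -1)

    def read(address):
        # Sum counters over active locations; threshold at zero.
        return (counters[active(address)].sum(axis=0) > 0).astype(int)

    # Store a pattern autoassociatively, then recall it from a corrupted cue.
    pattern = rng.integers(0, 2, size=n)
    write(pattern, pattern)
    cue = pattern.copy()
    cue[rng.choice(n, size=20, replace=False)] ^= 1   # flip 20 of 256 bits
    print(np.array_equal(read(cue), pattern))         # usually True

Because a noisy cue activates nearly the same set of hard locations as the original address, the thresholded sums reproduce the stored pattern: retrieval by vague similarity, which is what the theory formalizes.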

Proceedings of the 2001 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. The conference is interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, vision, speech and signal processing, reinforcement learning and control, implementations, and diverse applications. Only about 30 percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. These proceedings contain all of the papers that were presented at the 2001 conference.
