Neurocomputing methods are loosely based on a model of the brain as a network of simple interconnected processing elements corresponding to neurons. These methods derive their power from the collective processing of artificial neurons, the chief advantage being that such systems can learn and adapt to a changing environment. In knowledge-based neurocomputing, the emphasis is on the use and representation of knowledge about an application. Explicit modeling of the knowledge represented by such a system remains a major research topic.
What does it mean to say that a certain set of spikes is the right answer to a computational problem? In what sense does a spike train convey information about the sensory world? Spikes begins by providing precise formulations of these and related questions about the representation of sensory signals in neural spike trains. The answers to these questions are then pursued in experiments on sensory neurons.
Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years.
Motion perception is fundamental to survival. Until recently, research on motion perception emphasized such basic aspects of motion as sampling and filtering. In the past decade, however, the emphasis has gradually shifted to higher-level motion processing—i.e., processing that takes place not only in the primary visual cortex but also in the "higher" or more complicated parts of the brain.
Athletes and musicians demonstrate the levels to which humans can ascend in the timing of behavior. But even common actions, such as opening a door or bringing a cup to one's lips, reveal how we organize our behavior temporally. When there is damage to the nervous system and the ability to time behavior breaks down, we become aware of how many things must go right for timing not to go terribly wrong.
The past decade has seen greatly increased interaction between theoretical work in neuroscience, cognitive science and information processing, and experimental work requiring sophisticated computational modeling. The 152 contributions in NIPS 8 focus on a wide variety of algorithms and architectures for both supervised and unsupervised learning. They are divided into nine parts: Cognitive Science, Neuroscience, Theory, Algorithms and Architectures, Implementations, Speech and Signal Processing, Vision, Applications, and Control.