Terrence J. Sejnowski

Terrence J. Sejnowski is the Francis Crick Professor and Director of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies, a Howard Hughes Medical Institute Investigator, and Professor of Biology at the University of California, San Diego.

Titles by This Author

How do groups of neurons interact to enable the organism to see, decide, and move appropriately? What are the principles whereby networks of neurons represent and compute? These are the central questions probed by The Computational Brain. Churchland and Sejnowski address the foundational ideas of the emerging field of computational neuroscience, examine a diverse range of neural network models, and consider future directions of the field. The Computational Brain is the first unified and broadly accessible book to bring together computational concepts and behavioral data within a neurobiological framework.

Computer models constrained by neurobiological data can help reveal how networks of neurons subserve perception and behavior: how their physical interactions can yield global results, and how their physical properties are used to code information and compute solutions. The Computational Brain focuses mainly on three domains: visual perception, learning and memory, and sensorimotor integration. Examples of recent computer models in these domains are discussed in detail, highlighting strengths and weaknesses and extracting principles applicable to other domains. Churchland and Sejnowski show how both abstract models and neurobiologically realistic models can play useful roles in computational neuroscience, and they predict the coevolution of models and experiments at many levels of organization, from the neuron to the system.

The Computational Brain addresses a broad audience: neuroscientists, computer scientists, cognitive scientists, and philosophers. It is written for both the expert and novice. A basic overview of neuroscience and computational theory is provided, followed by a study of some of the most recent and sophisticated modeling work in the context of relevant neurobiological research. Technical terms are clearly explained in the text, and definitions are provided in an extensive glossary. The appendix contains a précis of neurobiological techniques.

Patricia S. Churchland is Professor of Philosophy at the University of California, San Diego, Adjunct Professor at the Salk Institute, and a MacArthur Fellow. Terrence J. Sejnowski is Professor of Biology at the University of California, San Diego, Professor at the Salk Institute, where he is Director of the Computational Neurobiology Laboratory, and an Investigator of the Howard Hughes Medical Institute.

Titles by This Editor

From Systems to Brains

Signal processing and neural computation have separately and significantly influenced many disciplines, but the cross-fertilization of the two fields has begun only recently. Research now shows that each has much to teach the other, as we see highly sophisticated kinds of signal processing and elaborate hierarchical levels of neural computation performed side by side in the brain. In New Directions in Statistical Signal Processing, leading researchers from both signal processing and neural computation present new work that aims to promote interaction between the two disciplines. The book's 14 chapters, almost evenly divided between signal processing and neural computation, begin with the brain and move on to communication, signal processing, and learning systems. They examine such topics as how computational models help us understand the brain's information processing, how an intelligent machine could solve the "cocktail party problem" with "active audition" in a noisy environment, graphical and network structure modeling approaches, uncertainty in network communications, the geometric approach to blind signal processing, game-theoretic learning algorithms, and observable operator models (OOMs) as an alternative to hidden Markov models (HMMs).

Foundations of Neural Computation

Graphical models use graphs to represent and manipulate joint probability distributions. They have their roots in artificial intelligence, statistics, and neural networks. The clean mathematical formalism of the graphical models framework makes it possible to understand a wide variety of network-based approaches to computation, and in particular to understand many neural network algorithms and architectures as instances of a broader probabilistic methodology. It also makes it possible to identify novel features of neural network algorithms and architectures and to extend them to more general graphical models.
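To make the factorization idea concrete, here is a minimal sketch (not drawn from the book) of a three-node, chain-structured graphical model in Python; the variable names and probability tables are illustrative assumptions.

```python
# Minimal sketch of a chain-structured graphical model A -> B -> C.
# The joint factorizes according to the graph: P(a, b, c) = P(a) P(b|a) P(c|b).
# All tables below are made-up numbers for illustration.

import itertools

p_a = {0: 0.6, 1: 0.4}                       # P(A)
p_b_given_a = {0: {0: 0.7, 1: 0.3},          # P(B | A=a)
               1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1},          # P(C | B=b)
               1: {0: 0.4, 1: 0.6}}

def joint(a, b, c):
    """Joint probability via the graph's factorization."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Marginalize by brute-force enumeration, e.g. P(C=1):
p_c1 = sum(joint(a, b, 1) for a, b in itertools.product((0, 1), repeat=2))
print(f"P(C=1) = {p_c1:.3f}")
```

The point of the formalism is that the graph licenses this factorization; inference algorithms on larger graphs exploit the same structure rather than enumerating all configurations as done here.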

This book exemplifies the interplay between the general formal framework of graphical models and the exploration of new algorithms and architectures. The selections range from foundational papers of historical importance to results at the cutting edge of research.

Contributors:
H. Attias, C. M. Bishop, B. J. Frey, Z. Ghahramani, D. Heckerman, G. E. Hinton, R. Hofmann, R. A. Jacobs, Michael I. Jordan, H. J. Kappen, A. Krogh, R. Neal, S. K. Riis, F. B. Rodríguez, L. K. Saul, Terrence J. Sejnowski, P. Smyth, M. E. Tipping, V. Tresp, Y. Weiss.

Foundations of Neural Computation

This book provides an overview of self-organizing map formation, including recent developments. Self-organizing maps form a branch of unsupervised learning, which is the study of what can be determined about the statistical properties of input data without explicit feedback from a teacher. The articles are drawn from the journal Neural Computation.
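For a concrete flavor of the class of algorithms surveyed, the following is a minimal sketch of the Kohonen self-organizing map update in Python; the map size, data distribution, and learning-rate schedule are illustrative assumptions, not taken from the book.

```python
# Minimal Kohonen self-organizing map: a 1-D chain of units learns a
# topographic map of 2-D inputs. All dimensions and schedules are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_units = 20
weights = rng.random((n_units, 2))          # one 2-D weight vector per unit

for t in range(2000):
    x = rng.random(2)                       # input drawn uniformly from the unit square
    winner = np.argmin(np.sum((weights - x) ** 2, axis=1))   # best-matching unit

    # Learning rate and neighborhood width both shrink over time.
    lr = 0.5 * (1.0 - t / 2000)
    sigma = 1.0 + 4.0 * (1.0 - t / 2000)

    # Units near the winner on the chain move toward the input;
    # distant units barely move, which is what produces topographic order.
    dist = np.abs(np.arange(n_units) - winner)
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)

# After training, neighboring units should have neighboring weight vectors.
print(weights[:5])
```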

The book consists of five sections. The first looks at attempts to model the organization of cortical maps and at the theory and applications of the related artificial neural network algorithms. The second analyzes topographic maps and their formation via objective functions. The third covers cortical maps of stimulus features. The fourth takes up self-organizing maps for unsupervised data analysis, and the fifth presents extensions of self-organizing maps, including two surprising applications of mapping algorithms to standard computer science problems: combinatorial optimization and sorting.

Contributors:
J. J. Atick, H. G. Barrow, H. U. Bauer, C. M. Bishop, H. J. Bray, J. Bruske, J. M. L. Budd, M. Budinich, V. Cherkassky, J. Cowan, R. Durbin, E. Erwin, G. J. Goodhill, T. Graepel, D. Grier, S. Kaski, T. Kohonen, H. Lappalainen, Z. Li, J. Lin, R. Linsker, S. P. Luttrell, D. J. C. MacKay, K. D. Miller, G. Mitchison, F. Mulier, K. Obermayer, C. Piepenbrock, H. Ritter, K. Schulten, T. J. Sejnowski, S. Smirnakis, G. Sommer, M. Svensen, R. Szeliski, A. Utsugi, C. K. I. Williams, L. Wiskott, L. Xu, A. Yuille, J. Zhang.

Foundations of Neural Computation


Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years.

The present volume focuses on neural codes and representations, topics of broad interest to neuroscientists and modelers. The topics addressed are: how neurons encode information through action potential firing patterns, how populations of neurons represent information, and how individual neurons use dendritic processing and biophysical properties of synapses to decode spike trains. The papers encompass a wide range of levels of investigation, from dendrites and neurons to networks and systems.
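As one small illustration of population coding (not an example from the volume), the sketch below implements population-vector decoding with cosine-tuned units in Python; the number of neurons, tuning parameters, and noise model are all illustrative assumptions.

```python
# Minimal sketch of population-vector decoding: cosine-tuned neurons fire as a
# function of a stimulus direction, and the direction is recovered as a
# firing-rate-weighted sum of preferred directions. Parameters are illustrative.

import numpy as np

n = 32
preferred = np.linspace(0, 2 * np.pi, n, endpoint=False)    # preferred directions

def rates(theta, baseline=20.0, gain=15.0):
    """Cosine tuning: each neuron's mean firing rate for stimulus angle theta."""
    return baseline + gain * np.cos(theta - preferred)

def decode(r):
    """Population vector: rate-weighted vector sum of preferred directions."""
    x = np.sum(r * np.cos(preferred))
    y = np.sum(r * np.sin(preferred))
    return np.arctan2(y, x)

theta_true = 1.2
r = np.random.default_rng(2).poisson(rates(theta_true))     # noisy spike counts
print(f"true {theta_true:.2f} rad, decoded {decode(r):.2f} rad")
```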


Foundations of Neural Computation

Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years.

This volume of Foundations of Neural Computation, on unsupervised learning algorithms, focuses on neural network learning algorithms that do not require an explicit teacher. The goal of unsupervised learning is to extract an efficient internal representation of the statistical structure implicit in the inputs. These algorithms provide insights into the development of the cerebral cortex and implicit learning in humans. They are also of interest to engineers working in areas such as computer vision and speech recognition who seek efficient representations of raw input data.
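For a flavor of this class of algorithms, here is a minimal sketch of one classic unsupervised rule, Oja's Hebbian learning rule, which extracts the first principal component of its input stream without a teacher; the data distribution and step size are illustrative assumptions, and the volume itself covers a much wider range of algorithms.

```python
# Minimal sketch of Oja's rule: a single linear neuron y = w . x whose
# Hebbian weight update converges to the leading principal component
# of the input distribution. Data and step size are illustrative.

import numpy as np

rng = np.random.default_rng(1)

# Correlated 2-D inputs: most variance lies along the (1, 1) direction.
cov = np.array([[3.0, 2.0], [2.0, 3.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=5000)

w = rng.normal(size=2)
eta = 0.01                                  # learning rate

for x in X:
    y = w @ x                               # neuron's output
    w += eta * y * (x - y * w)              # Oja's rule: Hebbian term with decay

print("learned direction:", w / np.linalg.norm(w))   # ~ (0.707, 0.707) up to sign
```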