Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years.
This volume of Foundations of Neural Computation, on unsupervised learning algorithms, focuses on neural network learning algorithms that do not require an explicit teacher. The goal of unsupervised learning is to extract an efficient internal representation of the statistical structure implicit in the inputs. These algorithms provide insights into the development of the cerebral cortex and implicit learning in humans. They are also of interest to engineers working in areas such as computer vision and speech recognition who seek efficient representations of raw input data.
The six contributions in Connectionist Symbol Processing address the current tension within the artificial intelligence community between advocates of powerful symbolic representations that lack efficient learning procedures and advocates of relatively simple learning procedures that lack the ability to represent complex structures effectively. The authors seek to extend the representational power of connectionist networks without abandoning the automatic learning that makes these networks interesting.
Aware of the huge gap that needs to be bridged, the authors intend their contributions to be viewed as exploratory steps in the direction of greater representational power for neural networks. If successful, this research could make it possible to combine robust general-purpose learning procedures with the rich representations of artificial intelligence, a synthesis that could lead to new insights into both representation and learning.
Contents: Preface, G. E. Hinton. BoltzCONS: Dynamic Symbol Structures in a Connectionist Network, D. S. Touretzky. Mapping Part-Whole Hierarchies into Connectionist Networks, G. E. Hinton. Recursive Distributed Representations, J. B. Pollack. Mundane Reasoning by Settling on a Plausible Model, M. Derthick. Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems, P. Smolensky. Learning and Applying Contextual Constraints in Sentence Comprehension, M. F. St. John and J. L. McClelland.