The six contributions in Connectionist Symbol Processing address the current tension within the artificial intelligence community between advocates of powerful symbolic representations that lack efficient learning procedures and advocates of relatively simple learning procedures that lack the ability to represent complex structures effectively. The authors seek to extend the representational power of connectionist networks without abandoning the automatic learning that makes these networks interesting.
Aware of the huge gap that needs to be bridged, the authors intend their contributions to be viewed as exploratory steps toward greater representational power for neural networks. If successful, this research could make it possible to combine robust general-purpose learning procedures with the rich representations of symbolic artificial intelligence—a synthesis that could lead to new insights into both representation and learning.
Contents: Preface, G. E. Hinton. BoltzCONS: Dynamic Symbol Structures in a Connectionist Network, D. S. Touretzky. Mapping Part-Whole Hierarchies into Connectionist Networks, G. E. Hinton. Recursive Distributed Representations, J. B. Pollack. Mundane Reasoning by Settling on a Plausible Model, M. Derthick. Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems, P. Smolensky. Learning and Applying Contextual Constraints in Sentence Comprehension, M. F. St. John and J. L. McClelland.
About the Editor
Geoffrey Hinton is Professor of Computer Science at the University of Toronto.