Jeffrey Elman

  • Exercises in Rethinking Innateness

    A Handbook for Connectionist Simulations

    Kim Plunkett and Jeffrey Elman

    This book is the companion volume to Rethinking Innateness: A Connectionist Perspective on Development (The MIT Press, 1996), which proposed a new theoretical framework to answer the question "What does it mean to say that a behavior is innate?" The new work provides concrete illustrations—in the form of computer simulations—of properties of connectionist models that are particularly relevant to cognitive development. This enables the reader to pursue in depth some of the practical and empirical issues raised in the first book. The authors' larger goal is to demonstrate the usefulness of neural network modeling as a research methodology.

    The book comes with a complete software package, including demonstration projects, for running neural network simulations on both Macintosh and Windows 95. It also contains a series of exercises in the use of the neural network simulator provided with the book. The software is also available to run on a variety of UNIX platforms.

    • Paperback $11.75 £9.99
  • Rethinking Innateness

    A Connectionist Perspective on Development

    Elizabeth Bates, Jeffrey Elman, Mark H. Johnson, Annette Karmiloff-Smith, Domenico Parisi, and Kim Plunkett

    Rethinking Innateness asks the question, "What does it really mean to say that a behavior is innate?" The authors describe a new framework in which interactions, occurring at all levels, give rise to emergent forms and behaviors. These outcomes may often be highly constrained and universal, yet they are not themselves directly contained in the genes in any domain-specific way.

    One of the key contributions of Rethinking Innateness is a taxonomy of ways in which a behavior can be innate. These include constraints at the level of representation, architecture, and timing; typically, behaviors arise through the interaction of constraints at several of these levels.

    The ideas are explored through dynamic models inspired by a new kind of "developmental connectionism," a marriage of connectionist models and developmental neurobiology, forming a new theoretical framework for the study of behavioral development. While relying heavily on the conceptual and computational tools provided by connectionism, Rethinking Innateness also identifies ways in which these tools need to be enriched by closer attention to biology.

    • Hardcover $80.00 £59.95
    • Paperback $40.00 £30.00

Contributor

  • The Architecture of Cognition

    Rethinking Fodor and Pylyshyn's Systematicity Challenge

    Paco Calvo and John Symons

    Philosophers and cognitive scientists reassess systematicity in the post-connectionist era, offering perspectives from ecological psychology, embodied and distributed cognition, enactivism, and other methodologies.

    In 1988, Jerry Fodor and Zenon Pylyshyn challenged connectionist theorists to explain the systematicity of cognition. In a highly influential critical analysis of connectionism, they argued that connectionist explanations, at best, can only inform us about details of the neural substrate; explanations at the cognitive level must be classical insofar as adult human cognition is essentially systematic. More than twenty-five years later, however, conflicting explanations of cognition do not divide along classicist-connectionist lines, but oppose cognitivism (both classicist and connectionist) with a range of other methodologies, including distributed and embodied cognition, ecological psychology, enactivism, adaptive behavior, and biologically based neural network theory. This volume reassesses Fodor and Pylyshyn's “systematicity challenge” for a post-connectionist era.

    The contributors consider such questions as how post-connectionist approaches meet Fodor and Pylyshyn's conceptual challenges; whether there is empirical evidence for or against the systematicity of thought; and how the systematicity of human thought relates to behavior. The chapters offer a representative sample and an overview of the most important recent developments in the systematicity debate.

    Contributors Ken Aizawa, William Bechtel, Gideon Borensztajn, Paco Calvo, Anthony Chemero, Jonathan D. Cohen, Alicia Coram, Jeffrey L. Elman, Stefan L. Frank, Antoni Gomila, Seth A. Herd, Trent Kriete, Christian J. Lebiere, Lorena Lobo, Edouard Machery, Gary Marcus, Emma Martín, Fernando Martínez-Manrique, Brian P. McLaughlin, Randall C. O'Reilly, Alex A. Petrov, Steven Phillips, William Ramsey, Michael Silberstein, John Symons, David Travieso, William H. Wilson, Willem Zuidema

    • Hardcover $53.00 £41.00
  • The Human Semantic Potential

    Spatial Language and Constrained Connectionism

    Terry Regier

    Drawing on ideas from cognitive linguistics, connectionism, and perception, The Human Semantic Potential describes a connectionist model that learns perceptually grounded semantics for natural language in spatial terms. Languages differ in the ways in which they structure space, and Regier's aim is to have the model perform its learning task for terms from any natural language. The system has so far succeeded in learning spatial terms from English, German, Russian, Japanese, and Mixtec.

    The model views simple movies of two-dimensional objects moving relative to one another and learns to classify them linguistically in accordance with the spatial system of some natural language. The overall goal is to determine which sorts of spatial configurations and events are learnable as the semantics for spatial terms and which are not. Ultimately, the model and its theoretical underpinnings are a step in the direction of articulating biologically based constraints on the nature of human semantic systems.

    Along the way Regier takes up such substantial issues as the attraction and the liabilities of PDP and structured connectionist modeling, the problem of learning without direct negative evidence, and the area of linguistic universals, which is addressed in the model itself. Trained on spatial terms from different languages, the model permits observations about the possible bases of linguistic universals and interlanguage variation.
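
    As a toy illustration of perceptually grounded word learning in general, and not of Regier's constrained connectionist architecture, the sketch below trains a single logistic unit to judge whether the English term "above" applies to a scene, given only the (dx, dy) offset of a trajector relative to a landmark. The scene encoding, labels, and every parameter are invented for this sketch.

      import numpy as np

      # Toy sketch: one logistic unit learning whether "above" applies, from
      # the (dx, dy) offset of a trajector relative to a landmark.  Not
      # Regier's model; the data and all parameters here are invented.
      rng = np.random.default_rng(0)

      # Synthetic labeled "scenes": offsets, label 1 when the trajector is above.
      X = rng.uniform(-1.0, 1.0, size=(500, 2))      # columns: dx, dy
      y = (X[:, 1] > 0).astype(float)

      w, b, lr = np.zeros(2), 0.0, 0.5
      for _ in range(200):                           # plain batch gradient descent
          p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # P("above" applies)
          w -= lr * X.T @ (p - y) / len(y)
          b -= lr * np.mean(p - y)

      # High probability for a scene with positive dy, low for negative dy.
      test = np.array([[0.1, 0.8], [0.3, -0.7]])
      print(1.0 / (1.0 + np.exp(-(test @ w + b))))

    Regier's actual system is far richer, building structured perceptual machinery into the network itself; the sketch shows only the bare idea of learning a graded spatial term from labeled perceptual input.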

    • Hardcover $11.75 £8.95
    • Paperback $19.50 £14.99
  • Subsymbolic Natural Language Processing

    An Integrated Model of Scripts, Lexicon, and Memory

    Risto Miikkulainen

    Risto Miikkulainen draws on recent connectionist work in language comprehension to create a model that can understand natural language. Using the DISCERN system as an example, he describes a general approach to building high-level cognitive models from distributed neural networks and shows how the special properties of such networks are useful in modeling human performance. In this approach, connectionist networks are not only plausible models of isolated cognitive phenomena, but also sufficient constituents for complete artificial intelligence systems.

    Distributed neural networks have been very successful in modeling isolated cognitive phenomena, but complex high-level behavior has been tractable only with symbolic artificial intelligence techniques. Aiming to bridge this gap, Miikkulainen describes DISCERN, a complete natural language processing system implemented entirely at the subsymbolic level. In DISCERN, distributed neural network models of parsing, generating, reasoning, lexical processing, and episodic memory are integrated into a single system that learns to read, paraphrase, and answer questions about stereotypical narratives.

    Miikkulainen's work, which includes a comprehensive survey of the connectionist literature related to natural language processing, will prove especially valuable to researchers interested in practical techniques for high-level representation, inferencing, memory modeling, and modular connectionist architectures.

    • Hardcover $68.00 £53.00
  • Analogy-Making as Perception

    A Computer Model

    Melanie Mitchell

    Analogy-Making as Perception is based on the premise that analogy-making is fundamentally a high-level perceptual process in which the interaction of perception and concepts gives rise to "conceptual slippages" which allow analogies to be made.

    The psychologist William James observed that "a native talent for perceiving analogies is... the leading fact in genius of every order." The centrality and the ubiquity of analogy in creative thought have been noted again and again by scientists, artists, and writers, and understanding and modeling analogical thought have emerged as two of the most important challenges for cognitive science.

    The book describes Copycat, a computer model of analogy-making developed by the author with Douglas Hofstadter, that models the complex, subconscious interaction between perception and concepts that underlies the creation of analogies. In Copycat, both concepts and high-level perception are emergent phenomena, arising from large numbers of low-level, parallel, non-deterministic activities. In the spectrum of cognitive modeling approaches, Copycat occupies a unique intermediate position between symbolic systems and connectionist systems, a position that is at present the most useful one for understanding the fluidity of concepts and high-level perception.

    On one level the work described here is about analogy-making, but on another level it is about cognition in general. It explores such issues as the nature of concepts and perception and the emergence of highly flexible concepts from a lower-level "subcognitive" substrate.

    • Hardcover $12.75 £9.50
    • Paperback $28.00 £22.00
  • Mechanisms of Implicit Learning

    Connectionist Models of Sequence Processing

    Axel Cleeremans

    This book explores unintentional learning from an information-processing perspective.

    What do people learn when they do not know that they are learning? Until recently all of the work in the area of implicit learning focused on empirical questions and methods. In this book, Axel Cleeremans explores unintentional learning from an information-processing perspective. He introduces a theoretical framework that unifies existing data and models on implicit learning, along with a detailed computational model of human performance in sequence-learning situations.

    The model, based on a simple recurrent network (SRN), is able to predict perfectly the successive elements of sequences generated from finite-state grammars. Human subjects are shown to exhibit a similar sensitivity to the temporal structure in a series of choice reaction time experiments of increasing complexity; yet their explicit knowledge of the sequence remains limited. Simulation experiments indicate that the SRN model is able to account for these data in great detail.

    Cleeremans' model is also useful in understanding the effects of a wide range of variables, such as attention, the availability of explicit information, or the complexity of the material, on sequence-learning performance. Other architectures that process sequential material are considered. These are contrasted with the SRN model, which they sometimes outperform. Considered together, the models show how complex knowledge may emerge through the operation of elementary mechanisms, a key aspect of implicit learning performance.
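
    For readers who want a concrete picture of the architecture, the sketch below is a minimal Elman-style SRN trained to predict the next symbol of strings drawn from a small finite-state grammar. The grammar, network size, and training settings are invented for illustration; this is not Cleeremans' simulation.

      import numpy as np

      # Minimal simple recurrent network (SRN): one-hot input -> tanh hidden
      # layer with copy-back context -> softmax prediction of the next symbol.
      # Trained on strings from a made-up finite-state grammar.
      rng = np.random.default_rng(1)

      symbols = ['B', 'T', 'P', 'S', 'V', 'X', 'E']
      idx = {s: i for i, s in enumerate(symbols)}
      n = len(symbols)

      # From each grammar state, the legal (symbol, next state) moves.
      grammar = {
          0: [('T', 1), ('P', 2)],
          1: [('S', 1), ('X', 3)],
          2: [('V', 2), ('X', 3)],
          3: [('E', None)],
      }

      def generate():
          """Return one legal symbol sequence, e.g. ['B', 'T', 'S', 'X', 'E']."""
          out, state = ['B'], 0
          while state is not None:
              sym, state = grammar[state][rng.integers(len(grammar[state]))]
              out.append(sym)
          return out

      def one_hot(i):
          v = np.zeros(n)
          v[i] = 1.0
          return v

      H, lr = 16, 0.1
      W_xh = rng.normal(0, 0.1, (H, n))
      W_hh = rng.normal(0, 0.1, (H, H))
      W_hy = rng.normal(0, 0.1, (n, H))
      b_h, b_y = np.zeros(H), np.zeros(n)

      for _ in range(3000):                      # training strings
          seq = [idx[s] for s in generate()]
          h_prev = np.zeros(H)
          for t in range(len(seq) - 1):
              x, target = one_hot(seq[t]), one_hot(seq[t + 1])
              h = np.tanh(W_xh @ x + W_hh @ h_prev + b_h)
              z = W_hy @ h + b_y
              p = np.exp(z - z.max())
              p /= p.sum()
              # Backpropagate through the current step only, treating the
              # copied-back context as a fixed input (the classic SRN scheme).
              dz = p - target
              da = (W_hy.T @ dz) * (1 - h ** 2)
              W_hy -= lr * np.outer(dz, h)
              b_y -= lr * dz
              W_xh -= lr * np.outer(da, x)
              W_hh -= lr * np.outer(da, h_prev)
              b_h -= lr * da
              h_prev = h

      # After training, the prediction after an initial 'B' should concentrate
      # on the grammatical continuations 'T' and 'P'.
      h = np.tanh(W_xh @ one_hot(idx['B']) + W_hh @ np.zeros(H) + b_h)
      z = W_hy @ h + b_y
      p = np.exp(z - z.max())
      p /= p.sum()
      print({s: round(float(p[idx[s]]), 2) for s in symbols})

    Even this stripped-down version illustrates the property such models build on: with nothing but prediction error as feedback, the hidden layer gradually becomes sensitive to the temporal structure of the grammar.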

    • Hardcover $10.75 £8.99
  • Neural Computation of Pattern Motion

    Modeling Stages of Motion Analysis in the Primate Visual Cortex

    Margaret Euphrasia Sereno

    This book describes a neurally based model, implemented as a connectionist network, of how the aperture problem is solved.

    How does the visual system compute the global motion of an object from local views of its contours? Although this important problem in computational vision (also called the aperture problem) is key to understanding how biological systems work, there has been surprisingly little neurobiologically plausible work done on it. This book describes a neurally based model, implemented as a connectionist network, of how the aperture problem is solved. It provides a structural account of the model's performance on a number of tasks and demonstrates that the details of implementation influence the nature of the computation as well as predict perceptual effects that are unique to the model. The basic approach described can be extended to a number of different sensory computations.

    Sereno first reviews current research and theories about motion detection. She then considers the formal aspects of the aperture problem and describes a model of pattern motion perception that stands out in several respects. The model takes into account the structure of the visual system and attempts to build on known neurophysiological structures that might be available for solving the aperture problem, comparing performance in tasks involving direction and speed acuity, transparency, and motion coherency to human performance. The model's emphasis on the details of implementation rather than on the goals of computation shows that the details of data representation change the nature of the computation, producing predictions (including several illusions) that are unique and that can be confirmed through psychophysical experiments.
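
    The aperture problem itself can be stated in a few lines of arithmetic: each contour seen through a small aperture reveals only the component of velocity normal to that contour, and combining two such measurements recovers the full pattern velocity (the classic intersection-of-constraints computation). The sketch below works one example with made-up numbers; it illustrates the problem the book addresses, not Sereno's network model.

      import numpy as np

      # Each local measurement gives one linear constraint v . n_i = s_i,
      # where n_i is the unit normal of contour i and s_i its normal speed.
      true_v = np.array([3.0, 1.0])              # actual pattern velocity

      n1 = np.array([1.0, 0.0])                  # vertical contour, normal points right
      n2 = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])   # oblique contour

      # A detector looking through a small aperture sees only the normal speed.
      s1 = n1 @ true_v                           # 3.0
      s2 = n2 @ true_v                           # about 2.83

      # One constraint alone leaves a whole line of candidate velocities;
      # solving both together pins down the pattern motion.
      N = np.vstack([n1, n2])
      recovered_v = np.linalg.solve(N, np.array([s1, s2]))
      print(recovered_v)                         # -> [3. 1.]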

    • Hardcover $38.00 £26.95
    • Paperback $20.00 £14.99
  • Neural Networks for Control

    W. Thomas Miller, III, Richard S. Sutton, and Paul J. Werbos

    Neural Networks for Control brings together examples of all the most important paradigms for the application of neural networks to robotics and control. Primarily concerned with engineering problems and approaches to their solution through neurocomputing systems, the book is divided into three sections: general principles, motion control, and applications domains (with evaluations of the possible applications by experts in the applications areas). Special emphasis is placed on designs based on optimization or reinforcement, which will become increasingly important as researchers address more complex engineering challenges or real biological control problems. A Bradford Book. Neural Network Modeling and Connectionism series.

    • Hardcover $95.00
    • Paperback $11.75 £9.95
  • Connectionist Modeling and Brain Function

    A Developing Interface

    Stephen José Hanson and Carl R. Olson

    Bringing together contributions in biology, neuroscience, computer science, physics, and psychology, this book offers a solid tutorial on current research activity in connectionist-inspired, biology-based modeling. It describes specific experimental approaches and also confronts general issues related to learning, associative memory, and sensorimotor development.

    Introductory chapters by editors Hanson and Olson, along with Terrence Sejnowski, Christof Koch, and Patricia S. Churchland, provide an overview of computational neuroscience, establish the distinction between "realistic" brain models and "simplified" brain models, provide specific examples of each, and explain why each approach might be appropriate in a given context.

    The remaining chapters are organized so that material on the anatomy and physiology of a specific part of the brain precedes the presentation of modeling studies. The modeling itself ranges from simplified models to more realistic models and provides examples of constraints arising from known brain detail as well as choices modelers face when including or excluding such constraints. There are three sections, each focused on a key area where biology and models have converged.

    Connectionist Modeling and Brain Function is included in the Neural Network Modeling and Connectionism series, edited by Jeffrey Elman.

    • Hardcover $57.00
  • Neural Network Design and the Complexity of Learning

    J. Stephen Judd

    Using the tools of complexity theory, Stephen Judd develops a formal description of associative learning in connectionist networks. He rigorously exposes the computational difficulties in training neural networks and explores how certain design principles will or will not make the problems easier.

    Judd looks beyond the scope of any one particular learning rule, at a level above the details of neurons. There he finds new issues that arise when great numbers of neurons are employed and he offers fresh insights into design principles that could guide the construction of artificial and biological neural networks.

    The first part of the book describes the motivations and goals of the study and relates them to current scientific theory. It provides an overview of the major ideas, formulates the general learning problem with an eye to the computational complexity of the task, reviews current theory on learning, relates the book's model of learning to other models outside the connectionist paradigm, and sets out to examine scale-up issues in connectionist learning.

    Later chapters prove the intractability of the general case of memorizing in networks, elaborate on implications of this intractability and point out several corollaries applying to various special subcases. Judd refines the distinctive characteristics of the difficulties with families of shallow networks, addresses concerns about the ability of neural networks to generalize, and summarizes the results, implications, and possible extensions of the work.

    Neural Network Design and the Complexity of Learning is included in the Neural Network Modeling and Connectionism series, edited by Jeffrey Elman.

    • Hardcover $40.00 £30.00
    • Paperback $35.00 £27.00
  • Semantic Information Processing

    Marvin Minsky

    This book collects a group of experiments directed toward making intelligent machines. Each of the programs described here demonstrates some aspect of behavior that anyone would agree requires some intelligence, and each program solves its own kinds of problems. These include resolving ambiguities in word meanings, finding analogies between things, making logical and nonlogical inferences, resolving inconsistencies in information, engaging in coherent discourse with a person, and building internal models for representation of newly acquired information. Each of the programs has serious limitations, but the chapter authors provide clear perspectives for viewing both the achievements and limitations of their programs. More important than what these particular programs achieve, however, are the methods they use to achieve it.

    • Hardcover $59.95
    • Paperback $30.00 £24.00