The psychologist William James observed that "a native talent for perceiving analogies is ... the leading fact in genius of every order." The centrality and the ubiquity of analogy in creative thought have been noted again and again by scientists, artists, and writers, and understanding and modeling analogical thought have emerged as two of the most important challenges for cognitive science.
Analogy-Making as Perception is based on the premise that analogy-making is fundamentally a high-level perceptual process in which the interaction of perception and concepts gives rise to "conceptual slippages" which allow analogies to be made. It describes Copycat, a computer model of analogy-making developed by the author with Douglas Hofstadter, that models the complex, subconscious interaction between perception and concepts that underlies the creation of analogies.
In Copycat, both concepts and high-level perception are emergent phenomena, arising from large numbers of low-level, parallel, non-deterministic activities. In the spectrum of cognitive modeling approaches, Copycat occupies a unique intermediate position between symbolic systems and connectionist systems, a position that is at present the most useful one for understanding the fluidity of concepts and high-level perception.
On one level the work described here is about analogy-making, but on another level it is about cognition in general. It explores such issues as the nature of concepts and perception and the emergence of highly flexible concepts from a lower-level "subcognitive" substrate.
Melanie Mitchell, Assistant Professor in the Department of Electrical Engineering and Computer Science at the University of Michigan, is a Fellow of the Michigan Society of Fellows. She is also Director of the Adaptive Computation Program at the Santa Fe Institute.
Intentions in Communication brings together major theorists from artificial intelligence and computer science, linguistics, philosophy, and psychology whose work develops the foundations for an account of the role of intentions in a comprehensive theory of communication. It demonstrates, for the first time, the emerging cooperation among disciplines concerned with the fundamental role of intention in communication.
The fourteen contributions in this book address central questions about the nature of intention as it is understood in theories of communication, the crucial role of intention recognition in understanding utterances, the use of principles of rational interaction in interpreting speech acts, the contribution of intonation contours to intention recognition, and the need for more general models of intention that support a view of dialogue as a collaborative activity.
Contributors: Michael E. Bratman, Philip R. Cohen, Hector J. Levesque, Martha E. Pollack, Henry Kautz, Andrew J. I. Jones, C. Raymond Perrault, Daniel Vanderveken, Janet Pierrehumbert, Julia Hirschberg, Richmond H. Thomason, Diane J. Litman, James F. Allen, John R. Searle, Barbara J. Grosz, Candace L. Sidner, Herbert H. Clark, and Deanna Wilkes-Gibbs. The book also includes commentaries by James F. Allen, W. A. Woods, Jerry Morgan, Jerrold M. Sadock, Jerry R. Hobbs, and Kent Bach.
Intentions in Communication is included in the System Development Foundation Benchmark Series.
Classical computationalism—the view that mental states are computational states—has come under attack in recent years. Critics claim that in defining computation solely in abstract, syntactic terms, computationalism neglects the real-time, embodied, real-world constraints with which cognitive systems must cope. Instead of abandoning computationalism altogether, however, some researchers are reconsidering it, recognizing that real-world computers, like minds, must deal with issues of embodiment, interaction, physical implementation, and semantics.
This book lays the foundation for a successor notion of computationalism. It covers a broad intellectual range, discussing historic developments of the notions of computation and mechanism in the computationalist model, the role of Turing machines and computational practice in artificial intelligence research, different views of computation and their role in the computational theory of mind, the nature of intentionality, and the origin of language.
Intelligence takes many forms. This exciting study explores the novel insight, based on well-established ethological principles, that animals, humans, and autonomous robots can all be analyzed as multi-task autonomous control systems. Biological adaptive systems, the authors argue, can in fact provide a better understanding of intelligence and rationality than that provided by traditional AI.
In this technically sophisticated, clearly written investigation of robot-animal analogies, McFarland and Bösser show that a bee's accuracy in navigating on a cloudy day and a moth's simple but effective hearing mechanisms have as much to teach us about intelligent behavior as human models. In defining intelligent behavior, what matters is the behavioral outcome, not the nature of the mechanism by which the outcome is achieved. Similarly, in designing robots capable of intelligent behavior, what matters is the behavioral outcome.
McFarland and Bösser address the problem of how to assess the consequences of robot behavior in a way that is meaningful in terms of the robot's intended role, comparing animal and robot in relation to rational behavior, goal seeking, task accomplishment, learning, and other important theoretical issues.
Computational modeling plays a central role in cognitive science. This book provides a comprehensive introduction to computational models of human cognition. It covers major approaches and architectures, both neural network and symbolic; major theoretical issues; and specific computational models of a variety of cognitive processes, ranging from low-level (e.g., attention and memory) to higher-level (e.g., language and reasoning). The articles included in the book provide original descriptions of developments in the field. The emphasis is on implemented computational models rather than on mathematical or nonformal approaches, and on modeling empirical data from human subjects.
This wide-ranging collection of essays is inspired by the memory of the cognitive psychologist John Macnamara, whose influential contributions to language and concept acquisition have provided the basis for numerous research programs. The areas covered by the essays include the foundations of language and thought, cognitive and linguistic development, and mathematical approaches to cognition.
Einstein said that "the whole of science is nothing more than a refinement of everyday thinking." David Klahr suggests that we now know enough about cognition—and hence about everyday thinking—to advance our understanding of scientific thinking. In this book he sets out to describe the cognitive and developmental processes that have enabled scientists to make the discoveries that comprise the body of information we call "scientific knowledge."
Over the past decade Klahr and his colleagues have conducted extensive laboratory experiments in which they create discovery contexts, computer-based environments, to evoke the kind of thinking characteristic of scientific discovery in the "real world." In attempting to solve the problems posed by the discovery tasks, experiment participants (from preschoolers through university students, as well as laypersons) use many of the same higher-order cognitive processes used by practicing scientists. Through this work Klahr integrates two disparate approaches—the content-based approach and the process-based approach—to present a comprehensive model of the psychology of scientific discovery.
In Mind and Mechanism, Drew McDermott takes a computational approach to the mind-body problem (how it is that a purely physical entity, the brain, can have experiences). He begins by demonstrating the falseness of dualist approaches, which separate the physical and mental realms. He then surveys what has been accomplished in artificial intelligence, clearly differentiating what we know how to build from what we can imagine building. McDermott then details a computational theory of consciousness—claiming that the mind can be modeled entirely in terms of computation—and deals with various possible objections. He also discusses cultural consequences of the theory, including its impact on religion and ethics.
Since the 1970s the cognitive sciences have offered multidisciplinary ways of understanding the mind and cognition. The MIT Encyclopedia of the Cognitive Sciences (MITECS) is a landmark, comprehensive reference work that represents the methodological and theoretical diversity of this changing field.
At the core of the encyclopedia are 471 concise entries, from Acquisition and Adaptationism to Wundt and X-bar Theory. Each article, written by a leading researcher in the field, provides an accessible introduction to an important concept in the cognitive sciences, as well as references or further readings. Six extended essays, which collectively serve as a roadmap to the articles, provide overviews of each of six major areas of cognitive science: Philosophy; Psychology; Neurosciences; Computational Intelligence; Linguistics and Language; and Culture, Cognition, and Evolution. For both students and researchers, MITECS will be an indispensable guide to the current state of the cognitive sciences.
By the mid-1980s researchers from artificial intelligence, computer science, brain and cognitive science, and psychology realized that the idea of computers as intelligent machines was inappropriate. The brain does not run "programs"; it does something entirely different. But what? Evolutionary theory says that the brain has evolved not to do mathematical proofs but to control our behavior, to ensure our survival. Researchers now agree that intelligence always manifests itself in behavior—thus it is behavior that we must understand. An exciting new field has grown around the study of behavior-based intelligence, also known as embodied cognitive science, "new AI," and "behavior-based AI."
This book provides a systematic introduction to this new way of thinking. After discussing concepts and approaches such as subsumption architecture, Braitenberg vehicles, evolutionary robotics, artificial life, self-organization, and learning, the authors derive a set of principles and a coherent framework for the study of naturally and artificially intelligent systems, or autonomous agents. This framework is based on a synthetic methodology whose goal is understanding by designing and building.
The book includes all the background material required to understand the principles underlying intelligence, as well as enough detailed information on intelligent robotics and simulated agents so readers can begin experiments and projects on their own. The reader is guided through a series of case studies that illustrate the design principles of embodied cognitive science.