In Human Reasoning and Cognitive Science, Keith Stenning and Michiel van Lambalgen—a cognitive scientist and a logician—argue for the indispensability of modern mathematical logic to the study of human reasoning. Logic and cognition were once closely connected, they write, but were “divorced” in the past century; the psychology of deduction went from being central to the cognitive revolution to being the subject of widespread skepticism about whether human reasoning really happens outside the academy. The authors contend that this separation rests on a series of unwarranted assumptions about logic.
Stenning and van Lambalgen contend that psychology cannot ignore processes of interpretation in which people, wittingly or unwittingly, frame problems for subsequent reasoning. The authors employ a neurally implementable defeasible logic for modeling part of this framing process, and show how it can be used to guide the design of experiments and interpret results.
How do novel scientific concepts arise? In Creating Scientific Concepts, Nancy Nersessian seeks to answer this central but virtually unasked question in the problem of conceptual change. She argues that the popular image of novel concepts and profound insight bursting forth in a blinding flash of inspiration is mistaken. Instead, novel concepts are shown to arise out of the interplay of three factors: an attempt to solve specific problems; the use of conceptual, analytical, and material resources provided by the cognitive-social-cultural context of the problem; and dynamic processes of reasoning that extend ordinary cognition.
Focusing on the third factor, Nersessian draws on cognitive science research and historical accounts of scientific practices to show how scientific and ordinary cognition lie on a continuum, and how problem-solving practices in one illuminate practices in the other. Her investigations of scientific practices show conceptual change as deriving from the use of analogies, imagistic representations, and thought experiments, integrated with experimental investigations and mathematical analyses. She presents a view of constructed models as hybrid objects, serving as intermediaries between targets and analogical sources in bootstrapping processes. Extending these results, she argues that these complex cognitive operations and structures are not mere aids to discovery, but that together they constitute a powerful form of reasoning—model-based reasoning—that generates novelty. This new approach to mental modeling and analogy, together with Nersessian's cognitive-historical approach, makes Creating Scientific Concepts equally valuable to cognitive science and philosophy of science.
Evolutionary psychology occupies an important place in the drive to understand and explain human behavior. Darwinian ideas provide powerful tools to illuminate how fundamental aspects of the way humans think, feel, and interact derive from reproductive interests and an ultimate need for survival. In this updated and expanded edition of Evolution and Human Behavior, John Cartwright considers the emergence of Homo sapiens as a species and looks at contemporary issues, such as familial relationships and conflict and cooperation, in light of key theoretical principles. The book covers basic concepts including natural and sexual selection, life history theory, and the fundamentals of genetics. New material will be found in chapters on emotion, culture, incest avoidance, ethics, and cognition and reasoning. Two new chapters are devoted to the evolutionary analysis of mental disorders. Students of psychology, human biology, and physical and cultural anthropology will find Evolution and Human Behavior a comprehensive textbook of great value.
In The Allure of Machinic Life, John Johnston examines new forms of nascent life that emerge through technical interactions within human-constructed environments—"machinic life"—in the sciences of cybernetics, artificial life, and artificial intelligence. With the development of such research initiatives as the evolution of digital organisms, computer immune systems, artificial protocells, evolutionary robotics, and swarm systems, Johnston argues, machinic life has achieved a complexity and autonomy worthy of study in its own right.
Drawing on the publications of scientists as well as a range of work in contemporary philosophy and cultural theory, but always with the primary focus on the "objects at hand"—the machines, programs, and processes that constitute machinic life—Johnston shows how they come about, how they operate, and how they are already changing. This understanding is a necessary first step, he further argues, that must precede speculation about the meaning and cultural implications of these new forms of life.
Developing the concept of the "computational assemblage" (a machine and its associated discourse) as a framework to identify both resemblances and differences in form and function, Johnston offers a conceptual history of each of the three sciences. He considers the new theory of machines proposed by cybernetics from several perspectives, including Lacanian psychoanalysis and "machinic philosophy." He examines the history of the new science of artificial life and its relation to theories of evolution, emergence, and complex adaptive systems (as illustrated by a series of experiments carried out on various software platforms). He describes the history of artificial intelligence as a series of unfolding conceptual conflicts—decodings and recodings—leading to a "new AI" that is strongly influenced by artificial life. Finally, in examining the role played by neuroscience in several contemporary research initiatives, he shows how further success in the building of intelligent machines will most likely result from progress in our understanding of how the human brain actually works.
This critical history of research on acquired language deficits (aphasias) demonstrates the usefulness of linguistic analysis of aphasic syndromes for neuropsychology, linguistics, and psycholinguistics. Drawing on new empirical studies, Grodzinsky concludes that the use of grammatical tools for the description of the aphasias is critical. The selective nature of these deficits offers a novel view into the inner workings of our language faculty and the mechanisms that support it.
In contrast to other proposals that the left anterior cerebral cortex is crucial for all syntactic capacity, Grodzinsky's discoveries support his theory that this region is necessary for only a small component of the human language faculty. On this basis he provides a detailed explanation for many aphasic phenomena—including a number of puzzling cross-linguistic aphasia differences—and uses aphasic data to evaluate competing linguistic theories.
Theoretical Perspectives on Language Deficits is included in the series Biology of Language and Cognition, edited by John C. Marshall. A Bradford Book.
How does the visual system compute the global motion of an object from local views of its contours? Although this important problem in computational vision (also called the aperture problem) is key to understanding how biological systems work, there has been surprisingly little neurobiologically plausible work done on it. This book describes a neurally based model, implemented as a connectionist network, of how the aperture problem is solved. It provides a structural account of the model's performance on a number of tasks and demonstrates that the details of implementation influence the nature of the computation as well as predict perceptual effects that are unique to the model. The basic approach described can be extended to a number of different sensory computations.
Sereno first reviews current research and theories about motion detection. She then considers the formal aspects of the aperture problem and describes a model of pattern motion perception that stands out in several respects. The model takes into account the structure of the visual system and attempts to build on known neurophysiological structures that might be available for solving the aperture problem, comparing performances in tasks involving direction and speed acuity, transparency, and motion coherency to human performance. The model's emphasis on the details of implementation rather than on the goals of computation shows that the details of data representation change the nature of the computation, producing predictions (including several illusions) that are unique and that can be confirmed through psychophysical experiments.
Computational neuroscientists have recently turned to modeling olfactory structures because these are likely to have the same functional properties as currently popular network designs for perception and memory. This book provides a useful survey of current work on olfactory system circuitry, including connections of this system to brain structures involved in cognition and memory, and describes the computational models of olfactory processing that have been developed to date.
Contributions cover empirical investigations of the neurobiology of the olfactory system (anatomy, physiology, synaptic plasticity, behavioral physiology) as well as the application of computer models to understanding these systems. Fundamental issues in olfactory processing by the nervous system, such as experimental strategies in the study of olfaction, stages of odor processing, and critical questions in sensory coding, are considered across empirical/applied boundaries and throughout the contributions.
Contributors: I. Fundamental Anatomy, Physiology, and Plasticity of the Olfactory System. Gordon M. Shepherd. John S. Kauer, S. R. Neff, Kathryn A. Hamilton, and Angel R. Cinelli. Kevin L. Ketchum, Lewis B. Haberly. Joseph L. Price, S. Thomas Carmichael, Ken M. Carnes, Marie-Christine Clugnet, Masaru Kuroda, and James P. Ray. Michael Leon, Donald A. Wilson, and Kathleen M. Guthrie. Gary Lynch and Richard Granger. Howard Eichenbaum, Tim Otto, Cynthia Wible, and Jean Piper. II. Developments in Computational Models of the Olfactory System. DeLiang Wang, Joachim Buhmann, and Christoph von der Malsburg. Walter Freeman. Richard Granger, Ursula Staubli, José Ambrose-Ingersoll, and Gary Lynch. James M. Bower. Dan Hammerstrom and Eric Means.
Recent research on the syntax of signed languages has revealed that, apart from some modality-specific differences, signed languages are organized according to the same underlying principles as spoken languages. This book addresses the organization and distribution of functional categories in American Sign Language (ASL), focusing on tense, agreement, and wh-constructions.
Signed languages provide illuminating evidence about functional projections of a kind unavailable in the study of spoken languages. Along with manual signing, crucial information is expressed by specific movements of the face and upper body. The authors argue that such nonmanual markings are often direct expressions of abstract syntactic features. The distribution and intensity of these markings provide information about the location of functional heads and the boundaries of functional projections. The authors show how evidence from ASL is useful for evaluating a number of recent theoretical proposals on, among other things, the status of syntactic agreement projections and constraints on phrase structure and the directionality of movement.
Building a person has been an elusive goal in artificial intelligence. This failure, John Pollock argues, is because the problems involved are essentially philosophical; what is needed for the construction of a person is a physical system that mimics human rationality. Pollock describes an exciting theory of rationality and its partial implementation in OSCAR, a computer system whose descendants will literally be persons.
In developing the philosophical superstructure for this bold undertaking, Pollock defends the conception of man as an intelligent machine and argues that mental states are physical states and that persons are physical objects, as described in the fable of Oscar, the self-conscious machine.
Pollock brings a unique blend of philosophy and artificial intelligence to bear on the vexing problem of how to construct a physical system that thinks, is self-conscious, and has desires, fears, intentions, and a full range of mental states. He brings together an impressive array of technical work in philosophy to drive theory construction in AI. The result is described in his final chapter on "cognitive carpentry."
The Core Language Engine presents the theoretical and engineering advances embodied in one of the most comprehensive natural language processing systems designed to date. Recent research results from different areas of computational linguistics are integrated into a single elegant design with potential for application to tasks ranging from machine translation to information system interfaces.
Bridging the gap between theoretical and implementation-oriented literature, The Core Language Engine describes novel analyses and techniques developed by the contributors at SRI International's Cambridge Computer Science Research Centre. It spans topics that include a wide-coverage unification grammar for English syntax and semantics, context-dependent and contextually disambiguated logical form representations, interactive translation, efficient algorithms for parsing and generation, and mechanisms for quantifier scoping, reference resolution, and lexical acquisition.
Contents: Introduction to the CLE. Logical Forms. Categories and Rules. Unification Based Syntactic Analysis. Semantic Rules for English. Lexical Analysis. Syntactic and Semantic Processing. Quantifier Scoping. Sortal Restrictions. Resolving Quasi Logical Forms. Lexical Acquisition. The CLE in Application Development. Ellipsis, Comparatives, and Generation. Swedish-English QLF Translation.
Many philosophers and cognitive scientists dismiss the notion of qualia, sensory experiences that are internal to the brain. Leading opponents of qualia (and of Indirect Realism, the philosophical position that has qualia as a central tenet) include Michael Tye, Daniel Dennett, Paul and Patricia Churchland, and even Frank Jackson, a former supporter. Qualiaphiles apparently face the difficulty of establishing philosophical contact with the real when their access to it is seen by qualiaphobes to be second-hand and, worse, hidden behind a "veil of sensation"—a position that would slide easily into relativism and solipsism, presenting an ethical dilemma. In The Case for Qualia, proponents of qualia defend the Indirect Realist position and mount detailed counterarguments against opposing views.
The book first presents philosophical defenses, with arguments propounding, variously, a new argument from illusion, a sense-datum theory, dualism, "qualia realism," qualia as the "cement" of the experiential world, and "subjective physicalism." Three scientific defenses follow, discussing color, heat, and the link between the external object and the internal representation. Finally, specific criticisms of opposing views include discussions of the Churchlands' "neurophilosophy," answers to Frank Jackson's abandonment of qualia (one of which is titled, in a reference to Jackson’s famous thought experiment, "Why Frank Should Not Have Jilted Mary"), and refutations of Transparency Theory.
Contributors: Torin Alter, Michel Bitbol, Harold I. Brown, Mark Crooks, George Graham, C. L. Hardin, Terence E. Horgan, Robert J. Howell, Amy Kind, E. J. Lowe, Riccardo Manzotti, Barry Maund, Martine Nida-Rümelin, John O'Dea, Isabelle Peschard, Matjaž Potrč, Diana Raffman, Howard Robinson, William S. Robinson, John R. Smythies, and Edmond Wright.

Edmond Wright is the editor of New Representationalisms: Essays in the Philosophy of Perception and the author of Narrative, Perception, Language, and Faith.
John Staddon has devoted his long and distinguished career to the study of the adaptive function and mechanisms of learning. He did his graduate work at the famous Skinner Lab at Harvard in the early 1960s (supervised by Richard Herrnstein, who did his doctoral work with B. F. Skinner), but his work can be characterized as theoretical behaviorism. Staddon, now at Duke University, believes that experimental analysis is never enough to make sense of behavior and that “theoretical imagination” is also required. Staddon’s theoretical imagination has distinguished his work over the years and has influenced the field. Staddon is not afraid to deviate from the norm: when psychologists were maintaining their distance from behavioral psychology, Staddon was promoting optimality theories. Optimality theories in psychology are now commonplace. In this volume, Staddon’s colleagues and former students discuss topics that have been important in his work: behavioral ability and choice, memory, time and models (the subject of his work at Harvard), and behaviorism. They also reflect on Staddon’s influence on their own work and the evolution of their thinking on these topics.

Contributors: Giulio Bolacchi, Daniel T. Cerutti, Mircea Ioan Chelaru, J. Mark Cleaveland, Robert H. I. Dale, Rebecca A. Dixon, Valentin Dragoi, Stephen Gray, Jennifer J. Higa, John M. Horner, Nancy K. Innis, Mandar S. Jog, Richard Keen, John E. Kello, Eric Macaux, Armando Machado, John C. Malone, Jr., Kazuchika Manabe, Susan R. Perry, and Alliston K. Reid.

Nancy K. Innis was Professor of Psychology at the University of Western Ontario. J. E. R. Staddon supervised her Ph.D. work at Duke University.
In order to solve problems, humans are able to synthesize apparently unrelated concepts, take advantage of serendipitous opportunities, hypothesize, invent, and engage in other similarly abstract and creative activities, primarily through the use of their visual systems. In Scenario Visualization, Robert Arp offers an evolutionary account of the unique human ability to solve nonroutine vision-related problems. He argues that by the close of the Pleistocene epoch, humans evolved a conscious creative problem-solving capacity, which he terms scenario visualization, that enabled them to outlive other hominid species and populate the planet. Arp shows that the evidence for scenario visualization—by which images are selected, integrated, and then transformed and projected into visual scenarios—can be found in the kinds of complex tools our hominid ancestors invented in order to survive in the ever-changing environments of the Pleistocene world.
Arp also argues that this conscious capacity shares an analogous affinity with neurobiological processes of selectivity and integration in the visual system, and that similar processes can be found in the activities of organisms in general. The evolution of these processes, he writes, helps account for the modern-day conscious ability of humans to use visual information to solve nonroutine problems creatively in their environments. Arp’s account of scenario visualization and its emergence in evolutionary history suggests an answer to two basic questions asked by philosophers and biologists concerning human nature: why we are unique; and how we got that way.
Robert Arp is Postdoctoral Research Fellow at the National Center for Biomedical Ontology. His areas of specialization include philosophy of biology and philosophy of mind. He is the author of numerous articles and the forthcoming An Integrated Approach to the Philosophy of Mind.
The explanatory power of economic theory is tested by the phenomenon of irrational consumption, examples of which include such addictive behaviors as disordered and pathological gambling. Midbrain Mutiny examines different economic models of disordered gambling, using the frameworks of neuroeconomics (which analyzes decision making in the brain) and picoeconomics (which analyzes patterns of consumption behavior), and drawing on empirical evidence about behavior and the brain.
The book describes addiction in neuroeconomic terms as chronic disruption of the balance between the midbrain dopamine system and the prefrontal and frontal serotonergic system, and reviews recent evidence from trials testing the effectiveness of antiaddiction drugs. The authors argue that the best way to understand disordered and addictive gambling is with a hybrid picoeconomic-neuroeconomic model.
The "hard problem" of today's consciousness studies is subjective experience: understanding why some brain processing is accompanied by an experienced inner life. Recent scientific advances offer insights for understanding the physiological and chemical phenomenology of consciousness. But by leaving aside the internal experiential nature of consciousness in favor of mapping neural activity, such science leaves many questions unanswered. In Ontology of Consciousness, scholars from a range of disciplines—from neurophysiology to parapsychology, from mathematics to anthropology and indigenous non-Western modes of thought—go beyond these limits of current neuroscience research to explore insights offered by other intellectual approaches to consciousness.
These scholars focus their attention on such philosophical approaches to consciousness as Tibetan Tantric Buddhism, North American Indian insights, pre-Columbian Mesoamerican civilization, and the Byzantine Empire. Some draw on artifacts and ethnographic data to make their point. Others translate cultural concepts of consciousness into modern scientific language using models and mathematical mappings. Many consider individual experiences of sentience and existence, as seen in African communalism, Hindu psychology, Zen Buddhism, Indian vibhuti phenomena, existentialism, philosophical realism, and modern psychiatry. Some reveal current views and conundrums in neurobiology to comprehend sentient intellection.
Contributors: Karim Akerma, Matthijs Cornelissen, Antoine Courban, Mario Crocco, Christian de Quincey, Thomas B. Fowler, Erlendur Haraldsson, David J. Hufford, Pavel B. Ivanov, Heinz Kimmerle, Stanley Krippner, Armand J. Labbé, James Maffie, Hubert Markl, Graham Parkes, Michael Polemis, E. Richard Sorenson, Mircea Steriade, Thomas Szasz, Mariela Szirko, Robert A. F. Thurman, Edith L. B. Turner, Julia Watkin, and Helmut Wautischer
Surveys show that our growing concern over protecting the environment is accompanied by a diminishing sense of human contact with nature. Many people have little commonsense knowledge about nature—are unable, for example, to identify local plants and trees or describe how these plants and animals interact. Researchers report dwindling knowledge of nature even in smaller, nonindustrialized societies. In The Native Mind and the Cultural Construction of Nature, Scott Atran and Douglas Medin trace the cognitive consequences of this loss of knowledge. Drawing on nearly two decades of cross-cultural and developmental research, they examine the relationship between how people think about the natural world and how they act on it and how these are affected by cultural differences.
These studies, which involve a series of targeted comparisons among cultural groups living in the same environment and engaged in the same activities, reveal critical universal aspects of mind as well as equally critical cultural differences. Atran and Medin find that, despite a base of universal processes, the cultural differences in understandings of nature are associated with significant differences in environmental decision making as well as intergroup conflict and stereotyping stemming from these differences. The book includes two intensive case studies, one focusing on agro-forestry among Maya Indians and Spanish speakers in Mexico and Guatemala and the other on resource conflict between Native-American and European-American fishermen in Wisconsin. The Native Mind and the Cultural Construction of Nature offers new perspectives on general theories of human categorization, reasoning, decision making, and cognitive development.
Emergence, largely ignored just thirty years ago, has become one of the liveliest areas of research in both philosophy and science. Fueled by advances in complexity theory, artificial life, physics, psychology, sociology, and biology and by the parallel development of new conceptual tools in philosophy, the idea of emergence offers a way to understand a wide variety of complex phenomena in ways that are intriguingly different from more traditional approaches. This reader collects for the first time in one easily accessible place classic writings on emergence from contemporary philosophy and science. The chapters, by such prominent scholars as John Searle, Steven Weinberg, William Wimsatt, Thomas Schelling, Jaegwon Kim, Robert Laughlin, Daniel Dennett, Herbert Simon, Stephen Wolfram, Jerry Fodor, Philip Anderson, and David Chalmers, cover the major approaches to emergence. Each of the three sections ("Philosophical Perspectives," "Scientific Perspectives," and "Background and Polemics") begins with an introduction putting the chapters into context and posing key questions for further exploration. A bibliography lists more specialized material, and an associated website (http://mitpress.mit.edu/emergence) links to downloadable software and to other sites and publications about emergence.
Contributors: P. W. Anderson, Andrew Assad, Nils A. Baas, Mark A. Bedau, Mathieu S. Capcarrère, David Chalmers, James P. Crutchfield, Daniel C. Dennett, J. Doyne Farmer, Jerry Fodor, Carl Hempel, Paul Humphreys, Jaegwon Kim, Robert B. Laughlin, Bernd Mayer, Brian P. McLaughlin, Ernest Nagel, Martin Nillson, Paul Oppenheim, Norman H. Packard, David Pines, Steen Rasmussen, Edmund M. A. Ronald, Thomas Schelling, John Searle, Robert S. Shaw, Herbert Simon, Moshe Sipper, Steven Weinberg, William Wimsatt, and Stephen Wolfram
The idea of intelligent machines has become part of popular culture, and tracing the history of the actual science of machine intelligence reveals a rich network of cross-disciplinary contributions—the unrecognized origins of ideas now central to artificial intelligence, artificial life, cognitive science, and neuroscience. In The Mechanical Mind in History, scientists, artists, historians, and philosophers discuss the multidisciplinary quest to formalize and understand the generation of intelligent behavior in natural and artificial systems as a wholly mechanical process.
The contributions illustrate the diverse and interacting notions that chart the evolution of the idea of the mechanical mind. They describe the mechanized mind as, among other things, an analogue system, an organized suite of chemical interactions, a self-organizing electromechanical device, an automated general-purpose information processor, and an integrated collection of symbol manipulating mechanisms. They investigate the views of pivotal figures that range from Descartes and Heidegger to Alan Turing and Charles Babbage, and they emphasize such frequently overlooked areas as British cybernetic and pre-cybernetic thinkers. The volume concludes with the personal insights of five highly influential figures in the field: John Maynard Smith, John Holland, Oliver Selfridge, Horace Barlow, and Jack Cowan.
Contributors: Peter Asaro, Horace Barlow, Andy Beckett, Margaret Boden, Jon Bird, Paul Brown, Seth Bullock, Roberto Cordeschi, Jack Cowan, Ezequiel Di Paolo, Hubert Dreyfus, Andrew Hodges, Owen Holland, Jana Horáková, Philip Husbands, Jozef Kelemen, John Maynard Smith, Donald Michie, Oliver Selfridge, and Michael Wheeler.
Established wisdom in cognitive science holds that the everyday folk psychological abilities of humans—our capacity to understand intentional actions performed for reasons—are inherited from our evolutionary forebears. In Folk Psychological Narratives, Daniel Hutto challenges this view (held in somewhat different forms by the two dominant approaches, "theory theory" and simulation theory) and argues for the sociocultural basis of this familiar ability. He makes a detailed case for the idea that the way we make sense of intentional actions essentially involves the construction of narratives about particular persons. Moreover, he argues that children acquire this practical skill only by being exposed to and engaging in a distinctive kind of narrative practice.
Hutto calls this developmental proposal the narrative practice hypothesis (NPH). Its core claim is that direct encounters with stories about persons who act for reasons (that is, folk psychological narratives) supply children with both the basic structure of folk psychology and the norm-governed possibilities for wielding it in practice. In making a strong case for the as yet underexamined idea that our understanding of reasons may be socioculturally grounded, Hutto not only advances and explicates the claims of the NPH, but he also challenges certain widely held assumptions. In this way, Folk Psychological Narratives both clears conceptual space around the dominant approaches for an alternative and offers a groundbreaking proposal.
For much of the twentieth century, philosophy and science went their separate ways. In moral philosophy, fear of the so-called naturalistic fallacy kept moral philosophers from incorporating developments in biology and psychology. Since the 1990s, however, many philosophers have drawn on recent advances in cognitive psychology, brain science, and evolutionary psychology to inform their work. This collaborative trend is especially strong in moral philosophy, and these three volumes bring together some of the most innovative work by both philosophers and psychologists in this emerging interdisciplinary field. The contributors to volume 2 discuss recent empirical research that uses the diverse methods of cognitive science to investigate moral judgments, emotions, and actions. Each chapter includes an essay, comments on the essay by other scholars, and a reply by the author(s) of the original essay. Topics include moral intuitions as a kind of fast and frugal heuristics, framing effects in moral judgments, an analogy between Chomsky’s universal grammar and moral principles, the role of emotions in moral beliefs, moral disagreements, the semantics of moral language, and moral responsibility.
Walter Sinnott-Armstrong is Professor of Philosophy and Hardy Professor of Legal Studies at Dartmouth College.
Contributors to volume 2: Fredrik Bjorklund, James Blair, Paul Bloomfield, Fiery Cushman, Justin D'Arms, John Deigh, John Doris, Julia Driver, Ben Fraser, Gerd Gigerenzer, Michael Gill, Jonathan Haidt, Marc Hauser, Daniel Jacobson, Joshua Knobe, Brian Leiter, Don Loeb, Ron Mallon, Darcia Narvaez, Shaun Nichols, Alexandra Plakias, Jesse Prinz, Geoffrey Sayre-McCord, Russ Shafer-Landau, Walter Sinnott-Armstrong, Cass Sunstein, William Tolhurst, Liane Young
How human musical experience emerges from the audition of organized tones is a riddle of long standing. In The Musical Representation, Charles Nussbaum offers a philosophical naturalist's solution. Nussbaum founds his naturalistic theory of musical representation on the collusion between the physics of sound and the organization of the human mind-brain. He argues that important varieties of experience afforded by Western tonal art music since 1650 arise through the feeling of tone, the sense of movement in musical space, cognition, emotional arousal, and the engagement, by way of specific emotional responses, of deeply rooted human ideals.
Construing the art music of the modern West as representational, as a symbolic system that carries extramusical content, Nussbaum attempts to make normative principles of musical representation explicit and bring them into reflective equilibrium with the intuitions of competent listeners. The human mind-brain, writes Nussbaum, is a living record of its evolutionary history; relatively recent cognitive acquisitions derive from older representational functions of which we are hardly aware. Consideration of musical art can help bring to light the more ancient cognitive functions that underlie modern human cognition.
As mind-brain identity theories lost their dominance in the philosophy of mind during the 1960s, scientific materialists turned to functionalism, the view that the identity of any mental state depends on its function in the cognitive system of which it is a part. The philosopher Hilary Putnam was one of the primary architects of functionalism and was the first to propose computational functionalism, which views the human mind as a computer or an information processor. But in the early 1970s, Putnam began to have doubts about functionalism, and in his masterwork Representation and Reality (MIT Press, 1988), he advanced four powerful arguments against his own doctrine of computational functionalism.
In Gödel, Putnam, and Functionalism, Jeff Buechner systematically examines Putnam’s arguments against functionalism and contends that they are unsuccessful. Putnam’s first argument uses Gödel’s incompleteness theorem to refute the view that there is a computational description of human reasoning and rationality; his second, the “triviality argument,” demonstrates that any computational description can be attributed to any physical system; his third, the multirealization argument, shows that there are infinitely many computational realizations of an arbitrary intentional state; his fourth argument buttresses this assertion by showing that there cannot be local computational reductions because there is no computable partitioning of the infinity of computational realizations of an arbitrary intentional state into a single package or small set of packages (equivalence classes). Buechner analyzes these arguments and the important inferential connections among them—for example, the use of both the Gödel and triviality arguments in the argument against local computational reductions—and argues that none of Putnam’s four arguments succeeds in refuting functionalism. Gödel, Putnam, and Functionalism will inspire renewed discussion of Putnam’s influential book and will confirm Representation and Reality as a major work by a major philosopher.
If consciousness is "the hard problem" in mind science—explaining how the amazing private world of consciousness emerges from neuronal activity—then "the really hard problem," writes Owen Flanagan in this provocative book, is explaining how meaning is possible in the material world. How can we make sense of the magic and mystery of life naturalistically, without an appeal to the supernatural? How do we say truthful and enchanting things about being human if we accept the fact that we are finite material beings living in a material world, or, in Flanagan’s description, short-lived pieces of organized cells and tissue?
Flanagan's answer is both naturalistic and enchanting. We all wish to live in a meaningful way, to live a life that really matters, to flourish, to achieve eudaimonia—to be a "happy spirit." Flanagan calls his "empirical-normative" inquiry into the nature, causes, and conditions of human flourishing eudaimonics. Eudaimonics, systematic philosophical investigation that is continuous with science, is the naturalist's response to those who say that science has robbed the world of the meaning that fantastical, wishful stories once provided.
Flanagan draws on philosophy, neuroscience, evolutionary biology, and psychology, as well as on transformative mindfulness and self-cultivation practices that come from such nontheistic spiritual traditions as Buddhism, Confucianism, Aristotelianism, and Stoicism, in his quest. He gathers from these disciplines knowledge that will help us understand the nature, causes, and constituents of well-being and advance human flourishing. Eudaimonics can help us find out how to make a difference, how to contribute to the accumulation of good effects—how to live a meaningful life.
For much of the twentieth century, philosophy and science went their separate ways. In moral philosophy, fear of the so-called naturalistic fallacy kept moral philosophers from incorporating developments in biology and psychology. Since the 1990s, however, many philosophers have drawn on recent advances in cognitive psychology, brain science, and evolutionary psychology to inform their work. This collaborative trend is especially strong in moral philosophy, and these volumes bring together some of the most innovative work by both philosophers and psychologists in this emerging interdisciplinary field. The contributors to volume 1 discuss recent work on the evolution of moral beliefs, attitudes, and emotions. Each chapter includes an essay, comments on the essay by other scholars, and a reply by the author(s) of the original essay. Topics include a version of naturalism that avoids supposed fallacies, distinct neurocomputational systems for deontic reasoning, the evolutionary psychology of moral sentiments regarding incest, the sexual selection of moral virtues, the evolution of symbolic thought, and arguments both for and against innate morality. Taken together, the chapters demonstrate the value for both philosophy and psychology of collaborative efforts to understand the many complex aspects of morality.
Contributors: William Casebeer, Leda Cosmides, Oliver Curry, Michael Dietrich, Catherine Driscoll, Susan Dwyer, Owen Flanagan, Jerry Fodor, Gilbert Harman, Richard Joyce, Debra Lieberman, Ron Mallon, John Mikhail, Geoffrey Miller, Jesse Prinz, Peter Railton, Michael Ruse, Hagop Sarkissian, Walter Sinnott-Armstrong, Chandra Sekhar Sripada, Valerie Tiberius, John Tooby, Peter Tse, Kathleen Wallace, Arthur Wolf, David Wong
Walter Sinnott-Armstrong is Professor of Philosophy and Hardy Professor of Legal Studies at Dartmouth College.
Human beings, like other organisms, are the products of evolution. Like other organisms, we exhibit traits that are the product of natural selection. Our psychological capacities are evolved traits as much as are our gait and posture. This much few would dispute. Evolutionary psychology goes further than this, claiming that our psychological traits—including a wide variety of traits, from mate preference and jealousy to language and reason—can be understood as specific adaptations to ancestral Pleistocene conditions. In Evolutionary Psychology as Maladapted Psychology, Robert Richardson takes a critical look at evolutionary psychology by subjecting its ambitious and controversial claims to the same sorts of methodological and evidential constraints that are broadly accepted within evolutionary biology.
The claims of evolutionary psychology may pass muster as psychology; but what are their evolutionary credentials? Richardson considers three ways adaptive hypotheses can be evaluated, using examples from the biological literature to illustrate what sorts of evidence and methodology would be necessary to establish specific evolutionary and adaptive explanations of human psychological traits. He shows that existing explanations within evolutionary psychology fall woefully short of accepted biological standards. The theories offered by evolutionary psychologists may identify traits that are, or were, beneficial to humans. But gauged by biological standards, there is inadequate evidence: evolutionary psychologists are largely silent on the evolutionary evidence relevant to assessing their claims, including such matters as variation in ancestral populations, heritability, and the advantage offered to our ancestors. As evolutionary claims they are unsubstantiated. Evolutionary psychology, Richardson concludes, may offer a program of research, but it lacks the kind of evidence that is generally expected within evolutionary biology. It is speculation rather than sound science—and we should treat its claims with skepticism.