Understanding speech in our native tongue seems natural and effortless; listening to speech in a nonnative language is a different experience. In this book, Anne Cutler argues that listening to speech is a process of native listening because so much of it is exquisitely tailored to the requirements of the native language. Her cross-linguistic study (drawing on experimental work in languages that range from English and Dutch to Chinese and Japanese) documents what is universal and what is language specific in the way we listen to spoken language.
Cutler describes the formidable range of mental tasks we carry out, all at once, with astonishing speed and accuracy, when we listen. These include evaluating probabilities arising from the structure of the native vocabulary, tracking information to locate the boundaries between words, paying attention to the way the words are pronounced, and assessing not only the sounds of speech but prosodic information that spans sequences of sounds. She describes infant speech perception, the consequences of language-specific specialization for listening to other languages, the flexibility and adaptability of listening in our native language, and how language specificity and universality fit together in our language processing system.
Drawing on her four decades of work as a psycholinguist, Cutler documents the recent growth in our knowledge about how spoken-word recognition works and the role of language structure in this process. Her book is a significant contribution to a vibrant and rapidly developing field.
In this book, David Pesetsky argues that the peculiarities of Russian nominal phrases provide significant clues concerning the syntactic side of morphological case. Pesetsky argues against the traditional view that case categories such as nominative or genitive have a special status in the grammar of human languages. Supporting his argument with a detailed analysis of a complex array of morpho-syntactic phenomena in the Russian noun phrase (with brief excursions to other languages), he proposes instead that the case categories are just part-of-speech features copied as morphology from head to dependent as syntactic structure is built.
Pesetsky presents a careful investigation of one of the thorniest topics in Russian grammar, the morpho-syntax of noun phrases with numerals (including those traditionally called the paucals). He argues that these bewilderingly complex facts can be explained if case categories are viewed simply as parts of speech, assigned as morphology. Pesetsky’s analysis is notable for offering a new theoretical perspective on some of the most puzzling areas of Russian grammar, a highly original account of nominal case that significantly affects our understanding of an important property of language.
Dark Tongues constitutes a sustained exploration of a perplexing fact that has never received the attention it deserves. Wherever human beings share a language, they also strive to make from it something new: a cryptic idiom, built from the grammar that they know, which will allow them to communicate in secrecy. Such hidden languages come in many shapes. They may be playful or serious, children’s games or adults’ work. They may be as impenetrable as foreign tongues, or slightly different from the idioms from which they spring, or barely perceptible, their existence being the subject of uncertain, even unlikely, suppositions.
The first recorded jargons date to the time of the Renaissance, when writers across Europe noted that obscure languages had suddenly come into use. A varied cast of characters — lawyers, grammarians, and theologians — denounced these new forms of speech, arguing that they were tools of crime, plotted in tongues that honest people could not understand. Before the emergence of these modern jargons, however, the artificial twisting of languages served a different purpose. In epochs and regions as diverse as archaic Greece and Rome and medieval Provence and Scandinavia, singers and scribes also invented opaque varieties of speech. They did so not to defraud, but to reveal and record a divine thing: the language of the gods, which poets and priests alone were said to master.
Dark Tongues moves among these various artificial and hermetic tongues. From criminal jargons to sacred idioms, from Saussure’s work on anagrams to Jakobson’s theory of subliminal patterns in poetry, from the arcane arts of the Druids and Biblical copyists to the secret procedure that Tristan Tzara, founder of Dada, believed he had uncovered in Villon’s songs and ballads, Dark Tongues explores the common crafts of rogues and riddlers, which play sound and sense against each other.
This accessible, hands-on textbook not only introduces students to the important topics in historical linguistics but also shows them how to apply the methods described and how to think about the issues. Abundant examples and exercises allow students to focus on how to do historical linguistics. The book is distinctive for its integration of the standard topics with others now considered important to the field, including syntactic change, grammaticalization, sociolinguistic contributions to linguistic change, distant genetic relationships, areal linguistics, and linguistic prehistory. It also offers a defense of the family tree model, a response to recent claims on lexical diffusion/frequency, and a section on why languages diversify and spread. Examples are taken from a broad range of languages; those from the more familiar English, French, German, and Spanish make the topics more accessible, while those from non-Indo-European languages show the depth and range of the concepts they illustrate.
This third edition includes new material based on the latest developments in the field, increased coverage of computational approaches, and additional exercises. Many of the chapters have been revised or expanded, with new coverage of such topics as morphological change, language families, language isolates, language diversity, the Romani migration case, and misconceptions in recent work about historical linguistics. New for this edition is a downloadable instructor’s manual with answers to exercises.
Scholars have long been captivated by the parallels between birdsong and human speech and language. In this book, leading scholars draw on the latest research to explore what birdsong can tell us about the biology of human speech and language and the consequences for evolutionary biology. They examine the cognitive and neural similarities between birdsong learning and speech and language acquisition, considering vocal imitation, auditory learning, an early vocalization phase ("babbling"), the structural properties of birdsong and human language, and the striking similarities between the neural organization of learning and vocal production in birdsong and human speech.
After outlining the basic issues involved in the study of both language and evolution, the contributors compare birdsong and language in terms of acquisition, recursion, and core structural properties, and then examine the neurobiology of song and speech, genomic factors, and the emergence and evolution of language.
Contributors: Hermann Ackermann, Gabriël J.L. Beckers, Robert C. Berwick, Johan J. Bolhuis, Noam Chomsky, Frank Eisner, Martin Everaert, Michale S. Fee, Olga Fehér, Simon E. Fisher, W. Tecumseh Fitch, Jonathan B. Fritz, Sharon M.H. Gobes, Riny Huijbregts, Eric Jarvis, Robert Lachlan, Ann Law, Michael A. Long, Gary F. Marcus, Carolyn McGettigan, Daniel Mietchen, Richard Mooney, Sanne Moorman, Kazuo Okanoya, Christophe Pallier, Irene M. Pepperberg, Jonathan F. Prather, Franck Ramus, Eric Reuland, Constance Scharff, Sophie K. Scott, Neil Smith, Ofer Tchernichovski, Carel ten Cate, Christopher K. Thompson, Frank Wijnen, Moira Yip, Wolfram Ziegler, Willem Zuidema
Willard Van Orman Quine begins this influential work by declaring, "Language is a social art. In acquiring it we have to depend entirely on intersubjectively available cues as to what to say and when." As Patricia Smith Churchland notes in her foreword to this new edition, with Word and Object Quine challenged the tradition of conceptual analysis as a way of advancing knowledge. The book signaled twentieth-century philosophy's turn away from metaphysics and what Churchland calls the "phony precision" of conceptual analysis.
In the course of his discussion of meaning and the linguistic mechanisms of objective reference, Quine considers the indeterminacy of translation, brings to light the anomalies and conflicts implicit in our language's referential apparatus, clarifies semantic problems connected with the imputation of existence, and marshals reasons for admitting or repudiating each of various categories of supposed objects. In addition to Churchland's foreword, this edition offers a new preface by Quine's student and colleague Dagfinn Follesdal that describes the never-realized plans for a second edition of Word and Object, in which Quine would offer a more unified treatment of the public nature of meaning, modalities, and propositional attitudes.
How is the information we gather from the world through our sensory and motor apparatus converted into language? It is obvious that there is an interface between language and sensorimotor cognition because we can talk about what we see and do. In this book, Alistair Knott argues that this interface is more direct than commonly assumed. He proposes that the syntax of a concrete sentence—a sentence that reports a direct sensorimotor experience—closely reflects the sensorimotor processes involved in the experience. In fact, he argues, the syntax of the sentence can be interpreted as a description of these sensorimotor processes.
Knott focuses on a simple concrete episode: a man grabbing a cup. He presents detailed models of the sensorimotor processes involved in experiencing this episode (drawing on research in psychology and neuroscience) and of the syntactic structure of the transitive sentence reporting the episode (drawing on Chomskyan Minimalist syntactic theory). He proposes that these two independently motivated models are closely linked—that the logical form of the sentence can be given a detailed sensorimotor characterization and that, more generally, many of the syntactic principles understood in Minimalism as encoding innate linguistic knowledge are actually sensorimotor in origin.
Knott's sensorimotor reinterpretation of Chomsky opens the way for a psychological account of sentence processing that is compatible with a Chomskyan account of syntactic universals, suggesting a way to reconcile Chomsky's theory of syntax with the empiricist models of language often viewed as Minimalism's competitors.
The pioneering linguist Benjamin Whorf (1897–1941) grasped the relationship between human language and human thinking: how language can shape our innermost thoughts. His basic thesis is that our perception of the world and our ways of thinking about it are deeply influenced by the structure of the languages we speak. The writings collected in this volume include important papers on the Maya, Hopi, and Shawnee languages, as well as more general reflections on language and meaning.
Whorf's ideas about the relation of language and thought have always appealed to a wide audience, but their reception in expert circles has alternated between dismissal and applause. Recently the language sciences have headed in directions that give Whorf's thinking a renewed relevance. Hence this new edition of Whorf's classic work is especially timely.
The second edition includes all the writings from the first edition as well as John Carroll's original introduction, a new foreword by Stephen Levinson of the Max Planck Institute for Psycholinguistics that puts Whorf's work in historical and contemporary context, and new indexes. In addition, this edition offers Whorf's "Yale Report," an important work from Whorf's mature oeuvre.
Since it was introduced to the English-speaking world in 1962, Lev Vygotsky's Thought and Language has become recognized as a classic foundational work of cognitive science. Its 1962 English translation must certainly be considered one of the most important and influential books ever published by the MIT Press. In this highly original exploration of human mental development, Vygotsky analyzes the relationship between words and consciousness, arguing that speech is social in its origins and that only as children develop does it become internalized verbal thought.
In 1986, the MIT Press published a new edition of the original translation by Eugenia Hanfmann and Gertrude Vakar, edited by Vygotsky scholar Alex Kozulin, that restored the work's complete text and added materials to help readers better understand Vygotsky's thought. Kozulin also contributed an introductory essay that offered new insight into Vygotsky's life, intellectual milieu, and research methods. This expanded edition offers Vygotsky's text, Kozulin's essay, a subject index, and a new foreword by Kozulin that maps the ever-growing influence of Vygotsky's ideas.
Stanley Kubrick’s 1968 film 2001: A Space Odyssey famously featured HAL, a computer with the ability to hold lengthy conversations with his fellow space travelers. More than forty years later, we have advanced computer technology that Kubrick never imagined, but we do not have computers that talk and understand speech as HAL did. Is it a failure of our technology that we have not gotten much further than an automated voice that tells us to “say or press 1”? Or is there something fundamental in human language and speech that we do not yet understand deeply enough to be able to replicate in a computer? In The Voice in the Machine, Roberto Pieraccini examines six decades of work in science and technology to develop computers that can interact with humans using speech, as well as the industry that has arisen around the quest for these technologies. He shows that although today’s speech-understanding computers may not have HAL’s capacity for conversation, they have capabilities that already make them usable in many applications and are on a fast track of improvement and innovation.
Pieraccini describes the evolution of speech recognition and speech understanding processes from waveform methods to artificial intelligence approaches to statistical learning and modeling of human speech based on a rigorous mathematical framework, specifically hidden Markov models (HMMs). He details the development of dialog systems, the ability to produce speech, and the process of bringing talking machines to the market. Finally, he asks a question that only the future can answer: will we end up with HAL-like computers or something completely unexpected?