The Brain and the Enigma of Impossible Languages

In The Boundaries of Babel, Andrea Moro tells the story of an encounter between two cultures: contemporary theoretical linguistics and the cognitive neurosciences. The study of language within a biological context has been ongoing for more than fifty years. The development of neuroimaging technology offers new opportunities to enrich the "biolinguistic perspective" and extend it beyond an abstract framework for inquiry. As a leading theoretical linguist in the generative tradition and also a cognitive scientist schooled in the new imaging technology, Moro is uniquely equipped to explore this encounter.

Moro examines what he calls the "hidden" revolution in contemporary science: the discovery that the number of possible grammars is not infinite but biologically limited. This radical but little-discussed change in the way we look at language, he claims, will require us to rethink not just the fundamentals of linguistics and the neurosciences but also our view of the human mind. Moro searches for neurobiological correlates of "the boundaries of Babel"—the constraints on the apparently chaotic variation in human languages—using an original experimental design based on artificial languages. He offers a critical overview of some of the fundamental results from linguistics over the last fifty years, in particular regarding syntax, then uses these essential aspects of language to examine two neuroimaging experiments in which he took part. He describes the two neuroimaging techniques used (positron emission tomography, or PET, and functional magnetic resonance imaging, or fMRI), but makes it clear that techniques and machines do not provide interesting data without a sound theoretical framework. Finally, he discusses some speculative aspects of modern research in biolinguistics regarding the impact of the linear structure of linguistic expressions on grammar and, more generally, some core aspects of language acquisition, genetics, and evolution.

Companion to Linguistics, Sixth Edition

A Linguistics Workbook is a supplement to Linguistics: An Introduction to Language and Communication, sixth edition. It can also be used with other introductory and intermediate linguistics texts. Whereas most of the examples in the textbook are based on English, the workbook provides exercises in morphology, phonetics, phonology, syntax, and semantics, drawn from a wide variety of languages. This new edition has been updated, with new exercises added.

Downloadable instructor resources available for this title: instructor’s manual

An Introduction to Language and Communication

This popular introductory linguistics text is unique for its integration of themes. Rather than treat morphology, phonetics, phonology, syntax, and semantics as completely separate fields, the book shows how they interact. It provides a sound introduction to linguistic methodology while encouraging students to consider why people are intrinsically interested in language, the ultimate puzzle of the human mind.

The text first treats such structural and interpretive parts of language as morphology, phonology, syntax, and semantics, then takes a cognitive perspective and covers such topics as pragmatics, psychology of language, language acquisition, and language and the brain. For this sixth edition, all chapters have been revised. New material includes updated examples, new special topics sections, and new discussions of the minimalist program, semantic minimalism, human genetic relationships and historical relationships among languages, Gricean theories, experimental pragmatics, and language acquisition.

The organization of the book gives instructors flexibility in designing their courses. Chapters have numerous subsections with core material presented first and additional material following as special topics. The accompanying workbook supplements the text with exercises drawn from a variety of languages. The goal is to teach basic conceptual foundations of linguistics and the methods of argumentation, justification, and hypothesis testing within the field. By presenting the most fundamental linguistics concepts in detail, the text allows students to get a feeling for how real work in different areas of linguistics is done.

Downloadable instructor resources available for this title: instructor's manual

Using Complex Lexical Descriptions in Natural Language Processing

The last decade has seen computational implementations of large hand-crafted natural language grammars in formal frameworks such as Tree-Adjoining Grammar (TAG), Combinatory Categorial Grammar (CCG), Head-Driven Phrase Structure Grammar (HPSG), and Lexical Functional Grammar (LFG). Grammars in these frameworks typically associate linguistically motivated rich descriptions (Supertags) with words. With the availability of parse-annotated corpora, grammars in the TAG and CCG frameworks have also been automatically extracted while maintaining the linguistic relevance of the extracted Supertags. In these frameworks, Supertags are designed so that complex linguistic constraints are localized to operate within the domain of those descriptions. While this localization increases local ambiguity, the process of disambiguation (Supertagging) provides a unique way of combining linguistic and statistical information.
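To make the idea concrete, here is a minimal sketch, not drawn from the volume, that treats Supertagging as per-word statistical disambiguation: each word carries a set of rich candidate descriptions, and a probability model selects among them before full parsing. The CCG-style categories, the toy lexicon, and the probabilities are all invented for illustration; real Supertaggers score candidates with trained sequence models rather than a fixed lookup table.

```python
# A minimal, hypothetical sketch of Supertagging as per-word statistical
# disambiguation. The lexicon and probabilities below are invented for
# illustration; real Supertaggers use trained sequence models.

# Each word maps to candidate Supertags (here, CCG-style categories)
# paired with made-up model probabilities.
LEXICON = {
    "John": [("NP", 0.9), ("N", 0.1)],
    "saw":  [("(S\\NP)/NP", 0.7), ("N", 0.2), ("(S\\NP)/PP", 0.1)],
    "Mary": [("NP", 0.95), ("N", 0.05)],
}

def supertag(sentence):
    """Assign each word its most probable Supertag, pruning local
    ambiguity so that a full parser has far fewer analyses to consider."""
    tagged = []
    for word in sentence:
        best_tag, _ = max(LEXICON[word], key=lambda pair: pair[1])
        tagged.append((word, best_tag))
    return tagged

print(supertag(["John", "saw", "Mary"]))
# [('John', 'NP'), ('saw', '(S\\NP)/NP'), ('Mary', 'NP')]
```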

This volume investigates the theme of employing statistical approaches with linguistically motivated representations and its impact on Natural Language Processing tasks. In particular, the contributors describe research in which words are associated with Supertags that are the primitives of different grammar formalisms including Lexicalized Tree-Adjoining Grammar (LTAG).

Contributors: Jens Bäcker, Srinivas Bangalore, Akshar Bharati, Pierre Boullier, Tomas By, John Chen, Stephen Clark, Berthold Crysmann, James R. Curran, Kilian Foth, Robert Frank, Karin Harbusch, Mary Harper, Saša Hasan, Aravind Joshi, Vincenzo Lombardo, Takuya Matsuzaki, Alessandro Mazzei, Wolfgang Menzel, Yusuke Miyao, Richard Moot, Alexis Nasr, Günter Neumann, Martha Palmer, Owen Rambow, Rajeev Sangal, Anoop Sarkar, Giorgio Satta, Libin Shen, Patrick Sturt, Jun’ichi Tsujii, K. Vijay-Shanker, Wen Wang, Fei Xia

In Language and Equilibrium, Prashant Parikh offers a new account of meaning for natural language. He argues that equilibrium, or balance among multiple interacting forces, is a key attribute of language and meaning, and he shows how to derive the meaning of an utterance from first principles by modeling it as a system of interdependent games.

His account results in a novel view of semantics and pragmatics and describes how both may be integrated with syntax. It considers many aspects of meaning—including literal meaning and implicature—and advances a detailed theory of definite descriptions as an application of the framework.

Language and Equilibrium is intended for a wide readership in the cognitive sciences, including philosophers, linguists, and artificial intelligence researchers as well as neuroscientists, psychologists, and economists interested in language and communication.

Unifying Agreement-Based and Discourse-Configurational Languages

An unusual property of human language is the existence of movement operations. Modern syntactic theory from its inception has dealt with the puzzle of why movement should occur. In this monograph, Shigeru Miyagawa combines this question with another, that of the occurrence of agreement systems. Using data from a wide range of languages, he argues that movement and agreement work in tandem to achieve a specific goal: to imbue natural language with enormous expressive power. Without movement and agreement, he contends, human language would be merely a shadow of itself, with severe limitations on what can be expressed. Miyagawa investigates a variety of languages, including English, Japanese, Bantu languages, Romance languages, Finnish, and Chinese. He finds that every language manifests some kind of agreement, some in the form of the familiar person/number/gender system and others in the form of what Katalin É. Kiss calls “discourse configurational” features such as topic and focus. A key proposal of his argument is that the computational system in syntax deals with the wide range of agreement types uniformly, as if there were just one system, and that an integral part of this computation turns out to be movement. Why Agree? Why Move? is unique in proposing a unified system for movement and agreement across vastly diverse language groups: Bantu languages, East Asian languages, Indo-European languages, and others.

Syntax is arguably the most human-specific aspect of language. Despite the proto-linguistic capacities of some animals, syntax appears to be the last major evolutionary transition in humans that has some genetic basis. Yet what are the elements of a scenario that can explain such a transition? In this book, experts from linguistics, neurology and neurobiology, cognitive psychology, ecology and evolutionary biology, and computer modeling address this question. Unlike most previous work on the evolution of language, Biological Foundations and Origin of Syntax follows through on a growing consensus among researchers that language can be profitably separated into a number of related and interacting but largely autonomous functions, each of which may have a distinguishable evolutionary history and neurological base. The contributors argue that syntax is such a function. The book describes the current state of research on syntax in different fields, with special emphasis on areas in which the findings of particular disciplines might shed light on problems faced by other disciplines. It defines areas where consensus has been established with regard to the nature, infrastructure, and evolution of the syntax of natural languages; summarizes and evaluates contrasting approaches in areas that remain controversial; and suggests lines for future research to resolve at least some of these disputed issues.

Contributors
Andrea Baronchelli, Derek Bickerton, Dorothy V. M. Bishop, Denis Bouchard, Robert Boyd, Jens Brauer, Ted Briscoe, David Caplan, Nick Chater, Morten H. Christiansen, Terrence W. Deacon, Francesco d’Errico, Anna Fedor, Julia Fischer, Angela D. Friederici, Tom Givón, Thomas Griffiths, Balázs Gulyás, Peter Hagoort, Austin Hilliard, James R. Hurford, Péter Ittzés, Gerhard Jäger, Herbert Jäger, Edith Kaan, Simon Kirby, Natalia L. Komarova, Tatjana Nazir, Frederick Newmeyer, Kazuo Okanoya, Csaba Pléh, Peter J. Richerson, Luigi Rizzi, Wolf Singer, Mark Steedman, Luc Steels, Szabolcs Számadó, Eörs Szathmáry, Maggie Tallerman, Jochen Triesch, Stephanie Ann White

Essays on Mental Structure

Ray Jackendoff's Language, Consciousness, Culture represents a breakthrough in developing an integrated theory of human cognition. It will be of interest to a broad spectrum of cognitive scientists, including linguists, philosophers, psycholinguists, neuroscientists, cognitive anthropologists, and evolutionary psychologists.

Jackendoff argues that linguistics has become isolated from the other cognitive sciences at least partly because of the syntax-based architecture assumed by mainstream generative grammar. He proposes an alternative parallel architecture for the language faculty that permits a greater internal integration of the components of language and connects far more naturally to such larger issues in cognitive neuroscience as language processing, the connection of language to vision, and the evolution of language.

Extending this approach beyond the language capacity, Jackendoff proposes sharper criteria for a satisfactory theory of consciousness, examines the structure of complex everyday actions, and investigates the concepts involved in an individual's grasp of society and culture. Each of these domains is used to reflect back on the question of what is unique about human language and what follows from more general properties of the mind.

Language, Consciousness, Culture extends Jackendoff's pioneering theory of conceptual semantics to two of the most important domains of human thought: social cognition and theory of mind. Jackendoff's formal framework allows him to draw new connections among a large variety of literatures and to uncover new distinctions and generalizations not previously recognized. The breadth of the approach will foster cross-disciplinary conversation; the vision is to develop a richer understanding of human nature.

Studies in Honor of Paul Kiparsky

Paul Kiparsky's work in linguistics has been wide-ranging and fundamental. His contributions as a scholar and teacher have transformed virtually every subfield of contemporary linguistics, from generative phonology to poetic theory. This collection of essays on the word—the fundamental entity of language—by Kiparsky's colleagues, students, and teachers reflects the distinctive focus of his own attention and his influence in the field.

As the editors of the volume observe, Kiparsky approaches words much as a botanist approaches plants, fascinated equally by their beauty, their structure, and their evolution. The essays in this volume reflect these multiple perspectives. The contributors discuss phonology, morphology, syntax, and semantics bearing on the formal composition of the word; historical linguistic developments emphasizing the word's simultaneous idiosyncratic character and participation in a system; and metrical and poetic forms showing the significance of Kiparsky's ideas for literary theory. Collectively they develop the overarching idea that the nature of the word is not directly observable but nonetheless inferable.

Contributors:
Stephen R. Anderson, Arto Anttila, Juliette Blevins, Geert Booij, Young-mee Yu Cho, Cleo Condoravdi, B. Elan Dresher, Andrew Garrett, Carlos Gussenhoven, Morris Halle, Kristin Hanson, Bruce Hayes, Larry M. Hyman, Sharon Inkelas, S. D. Joshi, René Kager, Ellen Kaisse, Aditi Lahiri, K. P. Mohanan, Tara Mohanan, Cemil Orhan Orgun, Christopher Piñón, William J. Poser, Douglas Pulleyblank, J. A. F. Roodbergen, Háj Ross, Patricia Shaw, Galen Sibanda, Donca Steriade, John Stonham, Stephen Wechsler, Dieter Wunderlich, Draga Zec.

A Neural Theory of Language

In From Molecule to Metaphor, Jerome Feldman proposes a theory of language and thought that treats language not as an abstract symbol system but as a human biological ability that can be studied as a function of the brain, as vision and motor control are studied. This theory, he writes, is a "bridging theory" that works from extensive knowledge at two ends of a causal chain to explicate the links between them. Although the cognitive sciences are revealing much about how our brains produce language and thought, we do not yet know exactly how words are understood, nor do we have any methodology for finding out. Feldman develops his theory in computer simulations—formal models that suggest ways that language and thought may be realized in the brain. Combining key findings and theories from biology, computer science, linguistics, and psychology, Feldman synthesizes a theory by exhibiting programs that demonstrate the required behavior while remaining consistent with the findings from all disciplines.

After presenting the essential results on language, learning, neural computation, the biology of neurons and neural circuits, and the mind/brain, Feldman introduces specific demonstrations and formal models of such topics as how children learn their first words, words for abstract and metaphorical concepts, understanding stories, and grammar (including "hot-button" issues surrounding the innateness of human grammar). With this accessible, comprehensive book, Feldman offers readers who want to understand how our brains create thought and language a theory that is intuitively plausible and also consistent with existing scientific data at all levels.
