Computational Linguistics

For the past forty years, linguistics has been dominated by the idea that language is categorical and linguistic competence discrete. It has become increasingly clear, however, that many levels of representation, from phonemes to sentence structure, show probabilistic properties, as does the language faculty. Probabilistic linguistics conceptualizes categories as distributions and views knowledge of language not as a minimal set of categorical constraints but as a set of gradient rules that may be characterized by a statistical distribution.
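
To make the contrast concrete, here is a toy sketch (not from the book) of a category treated as a distribution: two phoneme categories are modeled as overlapping Gaussians over voice onset time, so a token's category membership is gradient rather than discrete. The specific means and standard deviations are illustrative assumptions.

```python
# Toy illustration: a phoneme category as a probability distribution
# rather than a discrete class. Voice onset time (VOT, in ms) for /b/
# and /p/ is modeled with Gaussians; the parameters are made up.
from statistics import NormalDist

CATEGORIES = {
    "/b/": NormalDist(mu=10, sigma=15),   # short-lag VOT
    "/p/": NormalDist(mu=60, sigma=20),   # long-lag VOT
}

def classify(vot_ms: float) -> dict:
    """Return P(category | VOT) assuming equal priors."""
    likelihoods = {c: d.pdf(vot_ms) for c, d in CATEGORIES.items()}
    total = sum(likelihoods.values())
    return {c: lk / total for c, lk in likelihoods.items()}

# A token at 30 ms is genuinely ambiguous: membership is gradient,
# not categorical (roughly 0.63 /b/ vs 0.37 /p/ here).
print(classify(30.0))
```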

The Role of Geometric Constraints

With contributions from Tomás Lozano-Pérez and Daniel P. Huttenlocher.

Within the field of logic programming there have been numerous attempts to transform grammars into logic programs. This book describes a complementary approach that views logic programs as grammars and shows how this new presentation of the foundations of logic programming, based on the notion of proof trees, can enrich the field.

Parsing and Type Inference for Natural and Computer Languages

Intentions in Communication brings together major theorists from artificial intelligence and computer science, linguistics, philosophy, and psychology whose work develops the foundations for an account of the role of intentions in a comprehensive theory of communication. It demonstrates, for the first time, the emerging cooperation among disciplines concerned with the fundamental role of intention in communication.

Interpreting and Responding to Questions in Context

The Integration of Habits and Rules

Using sentence comprehension as a case study for all of cognitive science, David Townsend and Thomas Bever offer an integration of two major approaches, the symbolic-computational and the associative-connectionist. The symbolic-computational approach emphasizes the formal manipulation of symbols that underlies creative aspects of language behavior. The associative-connectionist approach captures the intuition that most behaviors consist of accumulated habits.

In this book Christian Jacquemin shows how the power of natural language processing (NLP) can be used to advance text indexing and information retrieval (IR). Jacquemin's novel tool is FASTR, a parser that normalizes terms and recognizes term variants. Since there are more meanings in a language than there are words, FASTR uses a metagrammar composed of shallow linguistic transformations that describe the morphological, syntactic, semantic, and pragmatic variations of words and terms. The parsed terms acquired in this way can then be used for precise retrieval and assembly of information.
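
As a rough illustration of the idea (not FASTR's actual metagrammar), the sketch below recognizes term variants with two shallow transformations: a naive suffix stripper for morphological variants, and reordering plus a small insertion window for syntactic variants. The stemmer, the window size, and the function names are invented for this example.

```python
# A minimal sketch of term-variant recognition in the spirit of FASTR:
# canonical terms are matched against text via shallow transformations
# (naive stemming for morphological variants; word reordering and an
# insertion window for syntactic variants). All rules here are
# illustrative assumptions, not Jacquemin's.
import re

def stem(word: str) -> str:
    """Very naive suffix stripping (illustration only)."""
    for suffix in ("ations", "ation", "ing", "als", "al", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def find_variants(term: str, text: str, window: int = 2):
    """Yield spans of `text` whose stemmed words cover the stemmed
    words of `term`, allowing reordering and up to `window` inserted
    words (e.g. 'retrieval of stored information' as a variant of
    'information retrieval')."""
    target = {stem(w) for w in term.lower().split()}
    words = re.findall(r"[a-z]+", text.lower())
    max_len = len(target) + window
    for i in range(len(words)):
        for j in range(i + len(target), min(i + max_len, len(words)) + 1):
            span = words[i:j]
            if target <= {stem(w) for w in span}:
                yield " ".join(span)
                break  # report the shortest span starting at i

text = "Efficient retrieval of stored information is central to IR."
print(list(find_variants("information retrieval", text)))
# -> ['retrieval of stored information']
```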

Papers from the First Mind Articulation Project Symposium

Recent attempts to unify linguistic theory and brain science have grown out of the recognition that a proper understanding of language in the brain must reflect the steady advances in linguistic theory of the last forty years. The first Mind Articulation Project Symposium addressed two main questions: How can our understanding of language from linguistic research be transformed through the study of the biological basis of language? And how can our understanding of the brain be transformed through this same research? The best model so far of such mutual constraint is research on vision.

Parallel texts (bitexts) are a goldmine of linguistic knowledge, because the translation of a text into another language can be viewed as a detailed annotation of what that text means. Knowledge about translational equivalence, which can be gleaned from bitexts, is of central importance for applications such as manual and machine translation, cross-language information retrieval, and corpus linguistics. The availability of bitexts has increased dramatically since the advent of the Web, making their study an exciting new area of research in natural language processing.
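
As a minimal illustration of gleaning translational equivalence from a bitext (a toy sketch, not a method from the book), one can count word co-occurrences across sentence-aligned pairs and rank candidate translations with the Dice coefficient. The two-sentence corpus below is invented; real systems go much further.

```python
# Count word co-occurrences across aligned sentence pairs and score
# candidate translation pairs with the Dice coefficient:
#   dice(s, t) = 2 * cooc(s, t) / (freq(s) + freq(t))
from collections import Counter
from itertools import product

bitext = [
    ("the house is small", "das haus ist klein"),
    ("the house is old", "das haus ist alt"),
]

src_counts, tgt_counts, pair_counts = Counter(), Counter(), Counter()
for src, tgt in bitext:
    src_words, tgt_words = set(src.split()), set(tgt.split())
    src_counts.update(src_words)
    tgt_counts.update(tgt_words)
    pair_counts.update(product(src_words, tgt_words))

def dice(s: str, t: str) -> float:
    return 2 * pair_counts[(s, t)] / (src_counts[s] + tgt_counts[t])

# With only two sentence pairs many pairs tie, but contrasts already
# emerge: dice('small', 'klein') == 1.0, dice('small', 'das') ~= 0.67,
# and dice('small', 'alt') == 0.
print(sorted(((dice(s, t), s, t) for (s, t) in pair_counts),
             reverse=True)[:5])
```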