Computational Linguistics

The Role of Geometric Constraints

With contributions from Tomás Lozano-Pérez and Daniel P. Huttenlocher.

An intelligent system must know what the objects are and where they are in its environment. Examples of this ubiquitous problem in computer vision arise in tasks involving hand-eye coordination (such as assembling or sorting), inspection tasks, gauging operations, and in navigation and localization of mobile robots. This book describes an extended series of experiments into the role of geometry in the critical area of object recognition.

Interpreting and Responding to Questions in Context

While much has been written about the areas of text generation, text planning, discourse modeling, and user modeling, Johanna Moore's book is one of the first to tackle modeling the complex dynamics of explanatory dialogues. It describes an explanation-planning architecture that enables a computational system to participate in an interactive dialogue with its users, focusing on the knowledge structures that a system must build in order to elaborate or clarify prior utterances, or to answer follow-up questions in the context of an ongoing dialogue.

The Integration of Habits and Rules

Using sentence comprehension as a case study for all of cognitive science, David Townsend and Thomas Bever offer an integration of two major approaches, the symbolic-computational and the associative-connectionist. The symbolic-computational approach emphasizes the formal manipulation of symbols that underlies creative aspects of language behavior. The associative-connectionist approach captures the intuition that most behaviors consist of accumulated habits.

In this book Christian Jacquemin shows how the power of natural language processing (NLP) can be used to advance text indexing and information retrieval (IR). Jacquemin's novel tool is FASTR, a parser that normalizes terms and recognizes term variants. Since there are more meanings in a language than there are words, FASTR uses a metagrammar composed of shallow linguistic transformations that describe the morphological, syntactic, semantic, and pragmatic variations of words and terms. The acquired parsed terms can then be applied for precise retrieval and assembly of information.

Papers from the First Mind Articulation Project Symposium

Recent attempts to unify linguistic theory and brain science have grown out of the recognition that a proper understanding of language in the brain must reflect the steady advances in linguistic theory of the last forty years. The first Mind Articulation Project Symposium addressed two main questions: How can our understanding of language from linguistic research be transformed through the study of the biological basis of language? And how can our understanding of the brain be transformed through this same research? The best model so far of such mutual constraint is research on vision.

Parallel texts (bitexts) are a goldmine of linguistic knowledge, because the translation of a text into another language can be viewed as a detailed annotation of what that text means. Knowledge about translational equivalence, which can be gleaned from bitexts, is of central importance for applications such as manual and machine translation, cross-language information retrieval, and corpus linguistics. The availability of bitexts has increased dramatically since the advent of the Web, making their study an exciting new area of research in natural language processing.

Until now, most discourse researchers have assumed that full semantic understanding is necessary to derive the discourse structure of texts. This book documents the first serious attempt to automatically construct and use nonsemantic computational structures for text summarization. Daniel Marcu develops a semantics-free theoretical framework that is both general enough to be applicable to naturally occurring texts and concise enough to facilitate an algorithmic approach to discourse analysis.

Language for Knowledge and Knowledge for Language

Natural language (NL) refers to human language: complex, irregular, and diverse, with all its philosophical problems of meaning and context. Setting a new direction in AI research, this book explores the development of knowledge representation and reasoning (KRR) systems that simulate the role of NL in human information and knowledge processing. Traditionally, KRR systems have incorporated NL as an interface to an expert system or knowledge base that performed tasks separate from NL processing.

Statistical approaches to processing natural language text have become dominant in recent years. This foundational text is the first comprehensive introduction to statistical natural language processing (NLP) to appear. The book contains all the theory and algorithms needed for building NLP tools. It provides broad but rigorous coverage of mathematical and linguistic foundations, as well as detailed discussion of statistical methods, allowing students and researchers to construct their own implementations.

The Resource Logic Approach
Edited by Mary Dalrymple

A new, deductive approach to the syntax-semantics interface integrates two mature and successful lines of research: logical deduction for semantic composition and the Lexical Functional Grammar (LFG) approach to the analysis of linguistic structure. It is often referred to as the "glue" approach because of the role of logic in "gluing" meanings together.
