
Computational Linguistics

Parsing and Type Inference for Natural and Computer Languages

Intentions in Communication brings together major theorists from artificial intelligence and computer science, linguistics, philosophy, and psychology whose work develops the foundations for an account of the role of intentions in a comprehensive theory of communication. It demonstrates, for the first time, the emerging cooperation among disciplines concerned with the fundamental role of intention in communication.

Interpreting and Responding to Questions in Context

Sentence Comprehension: The Integration of Habits and Rules

Using sentence comprehension as a case study for all of cognitive science, David Townsend and Thomas Bever offer an integration of two major approaches, the symbolic-computational and the associative-connectionist. The symbolic-computational approach emphasizes the formal manipulation of symbols that underlies creative aspects of language behavior. The associative-connectionist approach captures the intuition that most behaviors consist of accumulated habits.

Spotting and Discovering Terms through Natural Language Processing

In this book Christian Jacquemin shows how the power of natural language processing (NLP) can be used to advance text indexing and information retrieval (IR). Jacquemin's novel tool is FASTR, a parser that normalizes terms and recognizes term variants. Because a language expresses more meanings than it has words, terms surface in many variant forms; FASTR therefore uses a metagrammar composed of shallow linguistic transformations that describe the morphological, syntactic, semantic, and pragmatic variations of words and terms. The parsed terms can then be used for precise retrieval and assembly of information.
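To give a flavor of what such shallow transformations look like, here is a minimal Python sketch. It is an illustration only, not FASTR's actual formalism or API: it hand-codes three syntactic variant patterns for a two-word controlled term and matches them in running text. Names such as `variant_patterns` are invented for the example, and real FASTR also covers morphological and semantic variation, which this toy ignores.

```python
import re

# Toy illustration of shallow term-variant rules in the spirit of FASTR
# (not its actual rule formalism): each pattern maps a two-word controlled
# term onto one syntactic variant that may appear in running text.

def variant_patterns(term: str) -> list[re.Pattern]:
    mod, head = term.lower().split()  # e.g. "information", "retrieval"
    return [
        re.compile(rf"\b{mod}\s+{head}\b"),                   # base form
        re.compile(rf"\b{head}\s+of\s+{mod}\b"),              # permutation: "retrieval of information"
        re.compile(rf"\b{mod}\s+(?:\w+\s+){{1,2}}{head}\b"),  # insertion: "information storage and retrieval"
    ]

def find_variants(term: str, text: str) -> list[str]:
    """Return every substring of text matched by some variant rule."""
    text = text.lower()
    return [m.group(0) for p in variant_patterns(term) for m in p.finditer(text)]

doc = ("Efficient retrieval of information requires good indexing; "
       "modern information storage and retrieval systems depend on it.")
print(find_variants("information retrieval", doc))
# ['retrieval of information', 'information storage and retrieval']
```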

Image, Language, Brain: Papers from the First Mind Articulation Project Symposium

Recent attempts to unify linguistic theory and brain science have grown out of the recognition that a proper understanding of language in the brain must reflect the steady advances in linguistic theory of the last forty years. The first Mind Articulation Project Symposium addressed two main questions: How can our understanding of language from linguistic research be transformed through the study of the biological basis of language? And how can our understanding of the brain be transformed through this same research? The best model so far of such mutual constraint is research on vision.

Empirical Methods for Exploiting Parallel Texts

Parallel texts (bitexts) are a goldmine of linguistic knowledge, because the translation of a text into another language can be viewed as a detailed annotation of what that text means. Knowledge about translational equivalence, which can be gleaned from bitexts, is of central importance for applications such as manual and machine translation, cross-language information retrieval, and corpus linguistics. The availability of bitexts has increased dramatically since the advent of the Web, making their study an exciting new area of research in natural language processing.
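As a concrete illustration of how translational equivalence can be gleaned from a bitext, the sketch below runs the classic IBM Model 1 EM estimator over a tiny invented sentence-aligned corpus. This is the standard textbook technique rather than the book's own models, and the data is a toy; the point is only that word-level equivalences emerge from sentence-level alignment alone.

```python
from collections import defaultdict

# Toy sketch: estimate word-translation probabilities t[f][e] ~ P(f | e)
# from a sentence-aligned bitext with IBM Model 1 EM. Standard textbook
# estimator, not Melamed's specific models; the bitext is invented.
bitext = [
    ("the house", "la maison"),
    ("the book", "le livre"),
    ("a book", "un livre"),
]
pairs = [(e.split(), f.split()) for e, f in bitext]

t = defaultdict(lambda: defaultdict(lambda: 1.0))  # uniform initialization

for _ in range(10):
    count = defaultdict(lambda: defaultdict(float))  # expected counts
    total = defaultdict(float)
    for e_sent, f_sent in pairs:
        for f in f_sent:  # E-step: distribute each f over candidate e's
            z = sum(t[f][e] for e in e_sent)
            for e in e_sent:
                c = t[f][e] / z
                count[f][e] += c
                total[e] += c
    for f in count:       # M-step: renormalize into probabilities
        for e in count[f]:
            t[f][e] = count[f][e] / total[e]

print(max(t["maison"], key=t["maison"].get))  # -> 'house'
```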

The Theory and Practice of Discourse Parsing and Summarization

Until now, most discourse researchers have assumed that full semantic understanding is necessary to derive the discourse structure of texts. This book documents the first serious attempt to automatically construct and use nonsemantic computational structures for text summarization. Daniel Marcu develops a semantics-free theoretical framework that is both general enough to be applicable to naturally occurring texts and concise enough to facilitate an algorithmic approach to discourse analysis.
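The cue-phrase intuition behind such semantics-free analysis can be suggested with a deliberately naive sketch, which is not Marcu's actual algorithm: a discourse connective alone is taken to signal a rhetorical relation between two text spans, with no interpretation of what the spans mean. The cue inventory, relation labels, and function names below are all illustrative.

```python
import re

# Naive illustration of semantics-free discourse segmentation (not Marcu's
# algorithm): a connective by itself hypothesizes a rhetorical relation
# between two spans, without semantically interpreting either span.
CUES = {
    "because": "EVIDENCE",
    "although": "CONCESSION",
    "however": "CONTRAST",
}

def split_on_cue(sentence: str):
    """Return (relation, left_span, right_span) for the first cue found."""
    for cue, relation in CUES.items():
        m = re.search(rf"[,;]?\s*\b{cue}\b,?\s*", sentence, re.IGNORECASE)
        if m:
            return relation, sentence[:m.start()].strip(), sentence[m.end():].strip()
    return None

print(split_on_cue("The parser is fast because it uses only shallow rules."))
# ('EVIDENCE', 'The parser is fast', 'it uses only shallow rules.')
```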

Natural Language Processing and Knowledge Representation: Language for Knowledge and Knowledge for Language

Natural language (NL) refers to human language: complex, irregular, diverse, with all its philosophical problems of meaning and context. Setting a new direction in AI research, this book explores the development of knowledge representation and reasoning (KRR) systems that simulate the role of NL in human information and knowledge processing. Traditionally, KRR systems have incorporated NL only as an interface to an expert system or knowledge base that performed tasks separate from NL processing.
