
Computational Linguistics

From Neural Computation to Optimality-Theoretic Grammar Volume I: Cognitive Architecture
From Neural Computation to Optimality-Theoretic Grammar Volume II: Linguistic and Philosophical Implications

Despite their apparently divergent accounts of higher cognition, cognitive theories based on neural computation and those employing symbolic computation can in fact strengthen one another. To substantiate this controversial claim, this landmark work develops in depth a cognitive architecture based in neural computation but supporting formally explicit higher-level symbolic descriptions, including new grammar formalisms.

Detailed studies in both phonology and syntax provide arguments that these grammatical theories and their neural network realizations enable deeper explanations of early acquisition, processing difficulty, cross-linguistic typology, and the possibility of genomically encoding universal principles of grammar. Foundational questions concerning the explanatory status of symbols for central problems such as the unbounded productivity of higher cognition are also given proper treatment.
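As a rough illustration of the optimality-theoretic evaluation the title refers to (a generic sketch, not the book's own formalism; the constraint names and violation counts below are invented), candidate outputs compete on ranked constraints, and the candidate with the least serious violation profile wins:

```python
# Minimal OT-style evaluation: each candidate is scored by how many times it
# violates each constraint; constraints are ranked, and the winner is the
# candidate with the lexicographically smallest violation profile.
def ot_winner(candidates, ranking):
    """candidates: {form: {constraint: violation_count}}
    ranking: constraints ordered from highest- to lowest-ranked."""
    def profile(form):
        return tuple(candidates[form].get(c, 0) for c in ranking)
    return min(candidates, key=profile)

candidates = {
    "ta":  {"MAX": 1},                  # deletes a segment
    "tap": {"NOCODA": 1},               # keeps a coda
    "apa": {"ONSET": 1},                # lacks an onset
}
ranking = ["ONSET", "MAX", "NOCODA"]    # ONSET outranks MAX outranks NOCODA
print(ot_winner(candidates, ranking))   # -> "tap"
```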

The work is made accessible to scholars in different fields of cognitive science through tutorial chapters and numerous expository boxes providing background material from several disciplines. Examples common to different chapters facilitate the transition from more basic to more sophisticated treatments. Details of method, formalism, and foundation are presented in later chapters, offering a wealth of new results to specialists in psycholinguistics, language acquisition, theoretical linguistics, computational linguistics, computational neuroscience, connectionist modeling, and philosophy of mind.

The nature of the interplay between language learning and the evolution of a language over generational time is subtle. We can observe the learning of language by children and marvel at the phenomenon of language acquisition; the evolution of a language, however, is not so directly experienced. Language learning by children is robust and reliable, but it cannot be perfect or languages would never change--and English, for example, would not have evolved from the language of the Anglo-Saxon Chronicles. In this book Partha Niyogi introduces a framework for analyzing the precise nature of the relationship between learning by the individual and evolution of the population.

Learning is the mechanism by which language is transferred from old speakers to new. Niyogi shows that the evolution of language over time will depend upon the learning procedure--that different learning algorithms may have different evolutionary consequences. He finds that the dynamics of language evolution are typically nonlinear, with bifurcations that can be seen as the natural explanatory construct for the dramatic patterns of change observed in historical linguistics. Niyogi investigates the roles of natural selection, communicative efficiency, and learning in the origin and evolution of language--in particular, whether natural selection is necessary for the emergence of shared languages.

Over the years, historical linguists have postulated several accounts of documented language change. Additionally, biologists have postulated accounts of the evolution of communication systems in the animal world. This book creates a mathematical and computational framework within which to embed those accounts, offering a research tool to aid analysis in an area in which data is often sparse and speculation often plentiful.
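As a toy illustration of how a learning procedure induces nonlinear population dynamics (a generic majority-vote learner over two hypothetical grammars, not Niyogi's actual model; the parameters are invented), consider iterating the following one-generation update:

```python
# Toy illustration only: a population splits between two grammars, G1 and G2.
# Each child hears n example sentences drawn from the current population mix
# and adopts G1 if a majority of them are G1-compatible. The resulting
# one-step update of the G1 share is nonlinear in that share.
import random

def next_generation_share(p, n=3, trials=100_000):
    """p: fraction of the parent population speaking G1; returns the estimated
    fraction of children acquiring G1 under a majority-vote learner."""
    adopted = 0
    for _ in range(trials):
        g1_examples = sum(random.random() < p for _ in range(n))
        adopted += g1_examples > n / 2
    return adopted / trials

p = 0.4
for generation in range(10):
    p = next_generation_share(p)
    print(f"generation {generation + 1}: G1 share = {p:.3f}")
# Under this learner the mix is driven toward 0 or 1 (a shared language); a
# different learning algorithm, e.g. probability matching, would instead leave
# any initial mix unchanged -- the dynamics depend on the learning procedure.
```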

The Core Language Engine presents the theoretical and engineering advances embodied in one of the most comprehensive natural language processing systems designed to date. Recent research results from different areas of computational linguistics are integrated into a single elegant design with potential for application to tasks ranging from machine translation to information system interfaces.

Bridging the gap between theoretical and implementation-oriented literature, The Core Language Engine describes novel analyses and techniques developed by the contributors at SRI International's Cambridge Computer Science Research Centre. It spans topics that include a wide-coverage unification grammar for English syntax and semantics, context-dependent and contextually disambiguated logical form representations, interactive translation, efficient algorithms for parsing and generation, and mechanisms for quantifier scoping, reference resolution, and lexical acquisition.
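As a rough sketch of the unification idea underlying unification-based syntactic analysis (a generic illustration, not the CLE's actual grammar machinery; the feature structures below are invented), two descriptions combine only when their feature values are compatible:

```python
# Generic feature-structure unification over nested dictionaries: merge two
# partial descriptions, failing if any atomic feature values clash.
def unify(fs1, fs2):
    """Return the merged feature structure, or None if unification fails."""
    if not isinstance(fs1, dict) or not isinstance(fs2, dict):
        return fs1 if fs1 == fs2 else None
    result = dict(fs1)
    for feature, value in fs2.items():
        if feature in result:
            merged = unify(result[feature], value)
            if merged is None:
                return None          # feature clash, unification fails
            result[feature] = merged
        else:
            result[feature] = value
    return result

np = {"cat": "NP", "agr": {"num": "sg", "per": "3"}}
vp = {"cat": "VP", "agr": {"num": "sg"}}
# Subject-verb agreement check: unify the two AGR values.
print(unify(np["agr"], vp["agr"]))          # {'num': 'sg', 'per': '3'}
print(unify({"num": "sg"}, {"num": "pl"}))  # None (number clash)
```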

Contents: Introduction to the CLE. Logical Forms. Categories and Rules. Unification Based Syntactic Analysis. Semantic Rules for English. Lexical Analysis. Syntactic and Semantic Processing. Quantifier Scoping. Sortal Restrictions. Resolving Quasi Logical Forms. Lexical Acquisition. The CLE in Application Development. Ellipsis, Comparatives, and Generation. Swedish-English QLF Translation.

Interface Strategies: Optimal and Costly Computations

In this monograph Tanya Reinhart discusses strategies enabling the interface of different cognitive systems, which she identifies as the systems of concepts, inference, context, and sound. Her point of departure is Noam Chomsky's hypothesis that language is optimally designed--namely, that in many cases, the bare minimum needed for constructing syntactic derivations is sufficient for the full needs of the interface. Deviations from this principle are viewed as imperfections.

The book covers in depth four areas of the interface: quantifier scope, focus, anaphora resolution, and implicatures. The first question in each area is what makes the computational system (CS, syntax) legible to the other systems at the interface--how much of the information needed for the interface is coded already in the CS, and how it is coded. Next Reinhart argues that in each of these areas there are certain aspects of meaning and use that cannot be coded in the CS formal language, on both conceptual and empirical grounds. This residue is governed by interface strategies that can be viewed as repair of imperfections. They require constructing and comparing a reference set of alternative derivations to determine whether a repair operation is indeed the only way to meet the interface requirements.

Evidence that reference-set computation applies in these four areas comes from language acquisition. The required computation poses a severe load on working memory. While adults can cope with this load, children, whose working memory is less developed, fail in tasks requiring this computation.

This book addresses a fundamental software engineering issue, applying formal techniques and rigorous analysis to a practical problem of great current interest: the incorporation of language-specific knowledge in interactive programming environments. It makes a basic contribution in this area by proposing an attribute-grammar framework for incremental semantic analysis and establishing its algorithmic foundations. The results are theoretically important while having immediate practical utility for implementing environment-generating systems.

The book's principal technical results include: optimal-time algorithms for incrementally maintaining a consistent attributed tree under several subclasses of attribute grammars, allowing an optimizing environment-generator to select the most efficient applicable algorithm; a general method for sharing storage among attributes whose values are complex data structures; and two algorithms that carry out attribute evaluation while reducing the number of intermediate attribute values retained. While others have worked on this last problem, Reps's algorithms are the first to achieve sublinear worst-case behavior. One algorithm is optimal, achieving the O(log n) lower bound on space in nonlinear time, while the second uses as much as O(√n) space but runs in linear time.
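As a minimal sketch of the change-propagation idea behind incremental attribute evaluation (this is not Reps's optimal algorithm; the attributes and rules below are invented), an editor re-evaluates an attribute only when one of its inputs has actually changed:

```python
# Sketch: attributes are computed from other attributes; after an edit, only
# attributes whose inputs changed are re-evaluated, and only real changes
# propagate further.
from collections import deque

# attribute -> (evaluation function over current values, names it depends on),
# listed in dependency order so the initial exhaustive pass works.
rules = {
    "decl_type": (lambda env: env["ast_type"], ["ast_type"]),
    "use_ok":    (lambda env: env["decl_type"] == env["expected"],
                  ["decl_type", "expected"]),
}

values = {"ast_type": "int", "expected": "int"}
for name, (fn, _) in rules.items():
    values[name] = fn(values)                # initial exhaustive evaluation

def incremental_update(changed_attr, new_value):
    """Propagate one edited value, re-evaluating only affected attributes."""
    values[changed_attr] = new_value
    worklist = deque([changed_attr])
    while worklist:
        dirty = worklist.popleft()
        for name, (fn, deps) in rules.items():
            if dirty in deps:
                new = fn(values)
                if new != values[name]:      # propagate only real changes
                    values[name] = new
                    worklist.append(name)

incremental_update("ast_type", "float")      # simulate editing a declaration
print(values["use_ok"])                      # False: the use is now ill-typed
```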

The field of machine translation (MT)—the automation of translation between human languages—has existed for more than fifty years. MT helped to usher in the field of computational linguistics and has influenced methods and applications in knowledge representation, information theory, and mathematical statistics.

This valuable resource offers the most historically significant English-language articles on MT. The book is organized in three sections. The historical section contains articles from MT's beginnings through the late 1960s. The second section, on theoretical and methodological issues, covers sublanguage and controlled input, the role of humans in machine-aided translation, the impact of certain linguistic approaches, the transfer versus interlingua question, and the representation of meaning and knowledge. The third section, on system design, covers knowledge-based, statistical, and example-based approaches to multilevel analysis and representation, as well as computational issues.

For the past forty years, linguistics has been dominated by the idea that language is categorical and linguistic competence discrete. It has become increasingly clear, however, that many levels of representation, from phonemes to sentence structure, show probabilistic properties, as does the language faculty. Probabilistic linguistics conceptualizes categories as distributions and views knowledge of language not as a minimal set of categorical constraints but as a set of gradient rules that may be characterized by a statistical distribution. Whereas categorical approaches focus on the endpoints of distributions of linguistic phenomena, probabilistic approaches focus on the gradient middle ground. Probabilistic linguistics integrates all the progress made by linguistics thus far with a probabilistic perspective.

This book presents a comprehensive introduction to probabilistic approaches to linguistic inquiry. It covers the application of probabilistic techniques to phonology, morphology, semantics, syntax, language acquisition, psycholinguistics, historical linguistics, and sociolinguistics. It also includes a tutorial on elementary probability theory and probabilistic grammars.
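As a small example of the probabilistic grammars the tutorial covers (a toy grammar with invented rule probabilities, not taken from the book), the probability of a derivation is simply the product of the probabilities of the rules it uses:

```python
# Toy probabilistic context-free grammar: well-formedness becomes gradient
# because every derivation receives a probability rather than a yes/no verdict.
from functools import reduce

rule_prob = {
    ("S",  ("NP", "VP")): 1.0,
    ("NP", ("she",)):      0.4,
    ("NP", ("the", "N")):  0.6,
    ("N",  ("dog",)):      1.0,
    ("VP", ("sleeps",)):   0.7,
    ("VP", ("VP", "PP")):  0.3,
}

def derivation_probability(rules_used):
    """Multiply the probabilities of the rules used in one derivation."""
    return reduce(lambda p, r: p * rule_prob[r], rules_used, 1.0)

# "she sleeps": S -> NP VP, NP -> she, VP -> sleeps
print(derivation_probability([
    ("S", ("NP", "VP")), ("NP", ("she",)), ("VP", ("sleeps",)),
]))  # 1.0 * 0.4 * 0.7, i.e. about 0.28
```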

Object Recognition by Computer: The Role of Geometric Constraints

With contributions from Tomás Lozano-Pérez and Daniel P. Huttenlocher.

An intelligent system must know what the objects are and where they are in its environment. Examples of this ubiquitous problem in computer vision arise in tasks involving hand-eye coordination (such as assembling or sorting), inspection tasks, gauging operations, and in navigation and localization of mobile robots. This book describes an extended series of experiments into the role of geometry in the critical area of object recognition. It provides precise definitions of the recognition and localization problems, describes the methods used to address them, analyzes the solutions to these problems, and addresses the implications of this analysis.
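As a much-simplified sketch of recognition as constrained correspondence search (in the spirit of the search-based approach described here, but with invented two-dimensional point data and only a pairwise-distance constraint), candidate matches are discarded whenever they violate the model's geometry:

```python
# Match model features to sensed features: keep only correspondences whose
# pairwise distances are consistent with the model (clutter is rejected).
from itertools import permutations
from math import dist

model = [(0, 0), (4, 0), (0, 3)]                  # known object features
scene = [(10, 10), (10, 13), (14, 10), (7, 2)]    # sensed features + clutter
TOL = 0.1

def consistent(assignment):
    """All pairwise model distances must be preserved in the scene."""
    return all(
        abs(dist(model[i], model[j]) - dist(assignment[i], assignment[j])) < TOL
        for i in range(len(assignment)) for j in range(i)
    )

matches = [p for p in permutations(scene, len(model)) if consistent(p)]
print(matches)   # the correspondences consistent with the model's geometry
```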

Solutions to the object recognition problem are of fundamental importance in many real applications, and versions of the techniques described here are already being used in industrial settings. Although a number of questions remain open, the authors provide a valuable framework for understanding both the strengths and limitations of using object shape to guide recognition.

W. Eric L. Grimson is Matsushita Associate Professor in the Department of Electrical Engineering and Computer Science at MIT.

Contents: Introduction. Recognition as a Search Problem. Searching for Correspondences. Two-Dimensional Constraints. Three-Dimensional Constraints. Verifying Hypotheses. Controlling the Search Explosion. Selecting Subspaces of the Search Space. Empirical Testing. The Combinatorics of the Matching Process. The Combinatorics of Hough Transforms. The Combinatorics of Verification. The Combinatorics of Indexing. Evaluating the Methods. Recognition from Libraries. Parameterized Objects. The Role of Grouping. Sensing Strategies. Applications. The Next Steps.

Participating in Explanatory Dialogues: Interpreting and Responding to Questions in Context


While much has been written about the areas of text generation, text planning, discourse modeling, and user modeling, Johanna Moore's book is one of the first to tackle modeling the complex dynamics of explanatory dialogues. It describes an explanation-planning architecture that enables a computational system to participate in an interactive dialogue with its users, focusing on the knowledge structures that a system must build in order to elaborate or clarify prior utterances, or to answer follow-up questions in the context of an ongoing dialogue.

Moore develops a model of explanation generation and describes a fully implemented natural-language system that is embedded in an existing expert system and that includes a generation component. Her main thesis is that shallow approaches to explanation, such as paraphrasing the expert system's line of reasoning or filling in an explanation "schema," are not adequate for supporting dialogue; a more flexible approach is needed, one that is adaptive to context and aware both of what is being said and of what has gone before in the user's dialogue with the expert system. She argues that the problem with prior approaches is that they do not provide a representation of the intended effects of the components of an explanation, nor of how these intentions are related to one another or to the rhetorical structure of the text. She proposes a computational solution to the question of how explanations can be synthesized in such a way that a system can later reason about the explanations it has produced to affect its subsequent utterances.
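As a schematic illustration of the kind of record such an approach requires (this is not Moore's system's representation; the node structure and example content below are invented), a plan node that stores an utterance's intended effect and rhetorical role lets the system later reason about why something was said:

```python
# Hypothetical explanation-plan node: recording intended effects and
# rhetorical roles makes prior explanations available for later reasoning,
# e.g. when the user asks a follow-up question.
from dataclasses import dataclass, field

@dataclass
class ExplanationNode:
    intended_effect: str        # e.g. "user believes rule-12 applies"
    rhetorical_relation: str    # e.g. "NUCLEUS", "EVIDENCE", "BACKGROUND"
    text: str
    children: list = field(default_factory=list)

    def find_supporting(self, effect):
        """Recover which prior utterances were meant to achieve a given effect,
        enabling elaboration or clarification rather than mere repetition."""
        hits = [self] if effect in self.intended_effect else []
        for child in self.children:
            hits += child.find_supporting(effect)
        return hits

plan = ExplanationNode(
    "user believes the recommendation is justified", "NUCLEUS",
    "You should replace the valve.",
    [ExplanationNode("user believes rule-12 applies", "EVIDENCE",
                     "The measured pressure exceeds the safety threshold.")],
)
for node in plan.find_supporting("rule-12"):
    print(node.text)   # utterance to elaborate on when the user asks "Why?"
```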

ACL-MIT Series in Natural Language Processing

