The use of computers to understand words continues to be an area of burgeoning research. Electric Words is the first general survey of and introduction to the entire range of work in lexical linguistics and corpora—the study of such on-line resources as dictionaries and other texts—in the broader fields of natural-language processing and artificial intelligence. The authors integrate and synthesize the goals and methods of computational lexicons in relation to AI's sister disciplines of philosophy, linguistics, and psychology.
Language in Action demonstrates the viability of mathematical research into the foundations of categorial grammar, a topic at the border between logic and linguistics. Since its initial publication it has become the classic work in the foundations of categorial grammar. A new introduction to this paperback edition updates the open research problems and records relevant results through pointers to the literature.
Cognitive Models of Speech Processing presents extensive reviews of current thinking on psycholinguistic and computational topics in speech recognition and natural-language processing, along with a substantial body of new experimental data and computational simulations. Topics range from lexical access and the recognition of words in continuous speech to syntactic processing and the relationship between syntactic and intonational structure.
A Bradford Book. ACL-MIT Press Series in Natural Language Processing
Today, large corpora consisting of hundreds of millions or even billions of words, along with new empirical and statistical methods for organizing and analyzing these data, promise new insights into the use of language. Already, the data extracted from these large corpora reveal that language use is more flexible and complex than most rule-based systems have been able to account for, providing a basis for progress in the performance of Natural Language Processing systems.
Although the theory of object-oriented programming languages is far from complete, this book brings together the most important contributions to its development to date, focusing in particular on how advances in type systems and semantic models can contribute to new language designs.
This book describes a novel, cross-linguistic approach to machine translation that solves certain classes of syntactic and lexical divergences by means of a lexical conceptual structure that can be composed and decomposed in language-specific ways. This approach allows the translator to operate uniformly across many languages, while still accounting for knowledge that is specific to each language. The translation model can be used to map a source-language sentence to a target-language sentence in a principled fashion.
Risto Miikkulainen draws on recent connectionist work in language comprehension to create a model that can understand natural language. Using the DISCERN system as an example, he describes a general approach to building high-level cognitive models from distributed neural networks and shows how the special properties of such networks are useful in modeling human performance. In this approach, connectionist networks are not only plausible models of isolated cognitive phenomena but also sufficient constituents for complete artificial intelligence systems.