Two psychologists, a computer scientist, and a philosopher have collaborated to present a framework for understanding processes of inductive reasoning and learning in organisms and machines. Theirs is the first major effort to bring the ideas of several disciplines to bear on a subject that has been a topic of investigation since the time of Socrates. The result is an integrated account that treats problem solving and induction in terms of rule-based mental models.
Computational Complexity and Natural Language heralds an entirely new way of looking at grammatical systems. It applies complexity theory, a recently developed tool of computer science, to the study of natural language. What emerges is a unified and coherent account of how complexity theory can probe the information-processing structure of grammars, revealing why a grammar is easy or difficult to process and suggesting where to look for additional grammatical constraints.