
Theory of Computation

Distributed Algorithms: An Intuitive Approach

This book offers students and researchers a guide to distributed algorithms that emphasizes examples and exercises rather than the intricacies of mathematical models. It avoids mathematical argumentation, often a stumbling block for students, teaching algorithmic thought rather than proofs and logic. This approach allows the student to learn a large number of algorithms within a relatively short span of time. Algorithms are explained through brief, informal descriptions, illuminating examples, and practical exercises.
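To give a taste of the subject matter (this sketch is ours, not the book's), here is Chang-Roberts leader election on a unidirectional ring, a classic introductory distributed algorithm, simulated sequentially in Python; the ring size and ids are invented, and ids are assumed distinct.

```python
# Chang-Roberts leader election on a unidirectional ring (toy, sequential
# simulation). Each process forwards ids larger than its own and swallows
# smaller ones; an id that travels all the way around elects its owner.

def chang_roberts(ids):
    n = len(ids)
    # Round 0: every process sends its own id to its clockwise neighbour.
    inbox = [[] for _ in range(n)]
    for i in range(n):
        inbox[(i + 1) % n].append(ids[i])
    while True:
        nxt = [[] for _ in range(n)]
        for i in range(n):
            for msg in inbox[i]:
                if msg == ids[i]:
                    return ids[i]                 # own id came back: elected
                if msg > ids[i]:
                    nxt[(i + 1) % n].append(msg)  # forward larger ids
                # ids smaller than our own are discarded
        inbox = nxt

print(chang_roberts([3, 7, 2, 9, 4]))  # 9, the maximum id, wins
```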

Some books on algorithms are rigorous but incomplete; others cover masses of material but lack rigor. Introduction to Algorithms uniquely combines rigor and comprehensiveness. The book covers a broad range of algorithms in depth, yet makes their design and analysis accessible to all levels of readers. Each chapter is relatively self-contained and can be used as a unit of study. The algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming.

Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, or request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications.
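At its core, checking an invariant by model checking is exhaustive state-space search. The following minimal Python sketch (our illustration, not any real tool; the counter model and the invariant are made up) explores all reachable states breadth-first and returns a counterexample trace when the invariant fails.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Explore all reachable states breadth-first; return None if the
    invariant holds everywhere, else a trace to a violating state."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            # Reconstruct the path from the initial state to the violation.
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

# Toy model: a counter mod 4; the invariant claims it never reaches 3.
def successors(state):
    (x,) = state
    return [((x + 1) % 4,)]

print(check_invariant((0,), successors, lambda s: s[0] != 3))
# -> [(0,), (1,), (2,), (3,)] : a counterexample trace to the violation
```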

Logic-based formalizations of argumentation, which assume a set of formulae and then lay out arguments and counterarguments that can be obtained from these formulae, have been refined in recent years in an attempt to capture more closely real-world practical argumentation. In Elements of Argumentation, Philippe Besnard and Anthony Hunter introduce techniques for formalizing deductive argumentation in artificial intelligence, emphasizing emerging formalizations for practical argumentation.
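In the deductive setting, an argument is a minimal consistent set of premises entailing a claim, and a counterargument entails a conflicting claim. Here is a toy Python rendering of that definition, with propositional formulae encoded as truth-assignment predicates of our own devising (the penguin example is ours, not the book's):

```python
from itertools import combinations, product

ATOMS = ["b", "p", "f"]   # b: is a bird, p: is a penguin, f: flies

def models(formulas):
    """All truth assignments over ATOMS satisfying every formula."""
    return [dict(zip(ATOMS, bits))
            for bits in product([False, True], repeat=len(ATOMS))
            if all(f(dict(zip(ATOMS, bits))) for f in formulas)]

def argument_for(kb, claim):
    """Smallest consistent subset of kb that entails the claim, if any."""
    for r in range(1, len(kb) + 1):
        for subset in combinations(kb, r):
            fs = [f for _, f in subset]
            if models(fs) and all(claim(m) for m in models(fs)):
                return [name for name, _ in subset]
    return None

kb = [
    ("bird",      lambda m: m["b"]),
    ("penguin",   lambda m: m["p"]),
    ("birds fly", lambda m: (not m["b"]) or m["f"]),
    ("penguins are flightless birds",
                  lambda m: (not m["p"]) or (m["b"] and not m["f"])),
]

print(argument_for(kb, lambda m: m["f"]))      # argument that it flies
print(argument_for(kb, lambda m: not m["f"]))  # counterargument: it doesn't
```

Note that the knowledge base as a whole is inconsistent, which is precisely the situation in which arguments and counterarguments coexist.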

The minimum description length (MDL) principle is a powerful method of inductive inference, the basis of statistical modeling, pattern recognition, and machine learning. It holds that the best explanation, given a limited set of observed data, is the one that permits the greatest compression of the data. MDL methods are particularly well-suited for dealing with model selection, prediction, and estimation problems in situations where the models under consideration can be arbitrarily complex, and overfitting the data is a serious concern.
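As a crude illustration (ours, with invented data), two-part MDL scores a model by the bits needed to state its parameters plus the bits needed to encode the data given the model. Under a Gaussian residual code this reduces to the BIC-like quantity below, and the total is minimized near the true polynomial degree rather than at the best-fitting one.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.2, x.size)  # true degree: 2

def description_length(x, y, degree):
    """Crude two-part MDL: bits for the residuals under a Gaussian code
    plus (k/2) * log2(n) bits for k fitted parameters."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, degree + 1
    sigma2 = max(np.mean(resid**2), 1e-12)
    data_bits = 0.5 * n * math.log2(2 * math.pi * math.e * sigma2)
    model_bits = 0.5 * k * math.log2(n)
    return data_bits + model_bits

for d in range(6):
    print(d, round(description_length(x, y, d), 1))
# The total is smallest near degree 2: richer models compress the residuals
# slightly more, but their parameter cost outweighs the gain (overfitting).
```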


Investigating meta-programming within the logic programming paradigm, Meta-Logics and Logic Programming presents original research on an important extension of logic programming that makes it more amenable for knowledge representation and programming in general. The 12 contributions, many written especially for this book, explore the foundations, language design issues, and applications of meta-programming in logic programming.
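The canonical entry point to meta-programming is the "vanilla" meta-interpreter: the object-level program is data, and a meta-level procedure proves goals from it, so proof strategies can be inspected or changed without touching the program. A propositional Python sketch of that idea (the usual presentations are in Prolog; this rendering and the sample program are our own):

```python
# Object level: propositional Horn clauses, written as (head, [body atoms]).
# Meta level: an interpreter that treats the program itself as data.

def solve(goal, program, depth=0):
    """Backward chaining over Horn clauses, printing the proof trace."""
    print("  " * depth + "?- " + goal)
    for head, body in program:
        if head == goal and all(solve(b, program, depth + 1) for b in body):
            return True
    return False

program = [
    ("mortal", ["human"]),   # rule: human => mortal
    ("human",  ["greek"]),   # rule: greek => human
    ("greek",  []),          # fact
]

print(solve("mortal", program))  # True, with an indented proof trace
```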

NETL: A System for Representing and Using Real-World Knowledge

"Consider for a moment the layers of structure and meaning that are attached to concepts like lawsuit, birthday party, fire, mother, walrus, cabbage, or king.... If I tell you that a house burned down, and that the fire started at a child's birthday party, you will think immediately of the candles on the cake and perhaps of the many paper decorations. You will not, In all probability, find yourself thinking about playing pin-the-tail-on-the-donkey or about the color of the cake's icing or about the fact that birthdays come once a year.

Although state variable concepts are a part of modern control theory, they have not been extensively applied in communication theory. The purpose of this book is to demonstrate how the concepts and methods of state variables can be used advantageously in analyzing a variety of communication theory problems.
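For example (our sketch, not the book's), modeling an analog message as a first-order state equation observed in channel noise leads directly to a recursive Kalman-type estimator; all the numbers below are invented.

```python
import random
random.seed(1)

# State-variable model of an analog message observed in noise:
#   x[k+1] = a * x[k] + w[k]   (message dynamics, process noise w)
#   y[k]   = x[k] + v[k]       (received signal, channel noise v)
a, q, r = 0.95, 0.1, 0.5       # dynamics, process-noise var, channel-noise var

def kalman(ys):
    """Recursive estimate of the state x from noisy observations y."""
    xhat, p = 0.0, 1.0          # state estimate and its error variance
    out = []
    for y in ys:
        xhat, p = a * xhat, a * a * p + q              # predict
        k = p / (p + r)                                # Kalman gain
        xhat, p = xhat + k * (y - xhat), (1 - k) * p   # update
        out.append(xhat)
    return out

xs, ys = [0.0], []
for _ in range(50):
    xs.append(a * xs[-1] + random.gauss(0, q ** 0.5))
    ys.append(xs[-1] + random.gauss(0, r ** 0.5))

est = kalman(ys)
mse_raw = sum((y - x) ** 2 for x, y in zip(xs[1:], ys)) / 50
mse_est = sum((e - x) ** 2 for x, e in zip(xs[1:], est)) / 50
print(f"raw MSE {mse_raw:.3f}  filtered MSE {mse_est:.3f}")  # filtering helps
```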

Randomization is an important tool in the design of algorithms, and the ability of randomization to provide enhanced power is a major research topic in complexity theory. Noam Nisan continues the investigation into the power of randomization and the relationships between randomized and deterministic complexity classes by pursuing the idea of emulating randomness, or pseudorandom generation.
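The flavor of such constructions can be shown in miniature. The recursive generator below is a toy in the spirit of Nisan's generator for space-bounded computation, with parameters and hash family simplified by us: each level re-reads the seed through a fresh (roughly) pairwise-independent hash function, doubling the output length while adding only one hash description to the seed.

```python
import random

M = 16       # block length in bits
P = 65537    # prime just above 2**M, for the linear hash family
random.seed(0)

def hash_fn():
    """Draw h(x) = (a*x + b) mod P, truncated to M bits: an (almost)
    pairwise-independent function, the only randomness that gets reused."""
    a, b = random.randrange(1, P), random.randrange(P)
    return lambda x: ((a * x + b) % P) % (1 << M)

def nisan(x, hs):
    """G(x; h_1..h_k) = G(x; h_1..h_{k-1}) || G(h_k(x); h_1..h_{k-1}).
    One seed block plus k hash functions yields 2**k output blocks."""
    if not hs:
        return [x]
    return nisan(x, hs[:-1]) + nisan(hs[-1](x), hs[:-1])

seed = random.randrange(1 << M)
hs = [hash_fn() for _ in range(4)]
blocks = nisan(seed, hs)
print(len(blocks), [format(b, "04x") for b in blocks])
# 16 blocks (256 bits) from a short seed; in the real construction, with
# suitable parameters, a small-space algorithm reading the blocks in order
# cannot distinguish them from truly random ones.
```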


There is increasing interest in genetic programming by both researchers and professional software developers. These twenty-two invited contributions show how a wide variety of problems across disciplines can be solved using this new paradigm.
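In outline, genetic programming evolves a population of program trees under a fitness measure. Here is a minimal symbolic-regression sketch of our own (mutation-only for brevity; real GP systems also use subtree crossover), evolving expressions toward the invented target x**2 + x:

```python
import random
random.seed(42)

# Expression trees as nested tuples: (op, left, right), "x", or a constant.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def rand_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.uniform(-2, 2)])
    return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == "x":
        return x
    if isinstance(t, tuple):
        return OPS[t[0]](evaluate(t[1], x), evaluate(t[2], x))
    return t  # constant leaf

XS = [i / 10 for i in range(-10, 11)]
def fitness(t):  # mean squared error against the target x**2 + x
    return sum((evaluate(t, x) - (x * x + x)) ** 2 for x in XS) / len(XS)

def mutate(t):
    """Replace a randomly chosen subtree with a fresh one."""
    if not isinstance(t, tuple) or random.random() < 0.2:
        return rand_tree(2)
    if random.random() < 0.5:
        return (t[0], mutate(t[1]), t[2])
    return (t[0], t[1], mutate(t[2]))

pop = [rand_tree() for _ in range(200)]
for gen in range(30):
    pop.sort(key=fitness)                      # truncation selection
    pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]

best = min(pop, key=fitness)
print(round(fitness(best), 4), best)           # near-zero error, evolved tree
```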
