
Computational Intelligence

How to Build a Person: A Prolegomenon

Building a person has been an elusive goal in artificial intelligence. This failure, John Pollock argues, arises because the problems involved are essentially philosophical: what is needed for the construction of a person is a physical system that mimics human rationality. Pollock describes an exciting theory of rationality and its partial implementation in OSCAR, a computer system whose descendants will literally be persons.

In developing the philosophical superstructure for this bold undertaking, Pollock defends the conception of man as an intelligent machine and argues that mental states are physical states and persons are physical objects, as described in the fable of Oscar, the self-conscious machine.

Pollock brings a unique blend of philosophy and artificial intelligence to bear on the vexing problem of how to construct a physical system that thinks, is self-conscious, and has desires, fears, intentions, and a full range of mental states. He brings together an impressive array of technical work in philosophy to drive theory construction in AI. The result is described in his final chapter on "cognitive carpentry."

A Bradford Book

Constraint Logic Programming: Selected Research

Constraint logic programming, the notion of computing with partial information, is becoming recognized as a way of dramatically improving on the current generation of programming languages. This collection presents the best of current work on all aspects of constraint logic programming languages, from theory through language implementation.

Beginning in the mid-1980s, constraint logic programming became a powerful and essential theoretical concept whose first practical application was the development of efficient programming languages based on Prolog. Benhamou and Colmerauer have taken care to illustrate the strong links between current research and existing CLP languages. The first part of the book focuses on significant theoretical studies that propose general models for constraint programming, and the two parts that follow develop current ideas on themes derived from these languages (numerical constraints, Booleans, and other finite domains). The concluding part, on CLP language design, gathers work on original constraints and on top-level implementation.
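The flavor of computing with partial information over finite domains can be suggested with a small, language-agnostic sketch. The Python stand-in below is not drawn from the book (whose subject is CLP languages built on Prolog); it simply enumerates assignments over finite domains and filters them by constraints, and every name in it is illustrative.

```python
# Minimal sketch (not from the book): finite-domain constraint solving by
# exhaustive search. Real CLP systems interleave constraint propagation with
# logic-programming search; this toy only illustrates the idea of computing
# with partial information over variable domains.
from itertools import product

def solve(domains, constraints):
    """Enumerate assignments and keep those satisfying every constraint."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            yield assignment

# Example: X + Y = 10 and X < Y over the finite domain 0..9.
domains = {"X": range(10), "Y": range(10)}
constraints = [
    lambda a: a["X"] + a["Y"] == 10,
    lambda a: a["X"] < a["Y"],
]
print(list(solve(domains, constraints)))
```

A real CLP engine would prune variable domains by propagation long before complete assignments are formed, rather than enumerating them wholesale as this sketch does.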

Gödel, Putnam, and Functionalism: A New Reading of 'Representation and Reality'

With mind-brain identity theories no longer dominant in philosophy of mind in the late 1950s, scientific materialists turned to functionalism, the view that the identity of any mental state depends on its function in the cognitive system of which it is a part. The philosopher Hilary Putnam was one of the primary architects of functionalism and was the first to propose computational functionalism, which views the human mind as a computer or an information processor. But, in the early 1970s, Putnam began to have doubts about functionalism, and in his masterwork Representation and Reality (MIT Press, 1988), he advanced four powerful arguments against his own doctrine of computational functionalism.

In Gödel, Putnam, and Functionalism, Jeff Buechner systematically examines Putnam’s arguments against functionalism and contends that they are unsuccessful. Putnam’s first argument uses Gödel’s incompleteness theorem to refute the view that there is a computational description of human reasoning and rationality; his second, the “triviality argument,” demonstrates that any computational description can be attributed to any physical system; his third, the multirealization argument, shows that there are infinitely many computational realizations of an arbitrary intentional state; his fourth argument buttresses this assertion by showing that there cannot be local computational reductions because there is no computable partitioning of the infinity of computational realizations of an arbitrary intentional state into a single package or small set of packages (equivalence classes). Buechner analyzes these arguments and the important inferential connections among them—for example, the use of both the Gödel and triviality arguments in the argument against local computational reductions—and argues that none of Putnam’s four arguments succeeds in refuting functionalism. Gödel, Putnam, and Functionalism will inspire renewed discussion of Putnam’s influential book and will confirm Representation and Reality as a major work by a major philosopher.

Advances in Neural Information Processing Systems 18: Proceedings of the 2005 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications. Only twenty-five percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains the papers presented at the December 2005 meeting, held in Vancouver.

In What Is Thought? Eric Baum proposes a computational explanation of thought. Just as Erwin Schrödinger in his classic 1944 work What Is Life? argued ten years before the discovery of DNA's structure that life must be explainable at a fundamental level by physics and chemistry, Baum contends that the present-day inability of computer science to explain thought and meaning is no reason to doubt that there can be such an explanation. Baum argues that the complexity of mind is the outcome of evolution, which has built thought processes that act unlike the standard algorithms of computer science, and that to understand the mind we need to understand these thought processes and the evolutionary process that produced them in computational terms.

Baum proposes that underlying mind is a complex but compact program that corresponds to the underlying structure of the world. He argues further that the mind is essentially programmed by DNA. We learn more rapidly than computer scientists have so far been able to explain because the DNA code has programmed the mind to deal only with meaningful possibilities. Thus the mind understands by exploiting semantics, or meaning, for the purposes of computation; constraints are built in so that although there are myriad possibilities, only a few make sense. Evolution discovered corresponding subroutines or shortcuts to speed up its processes and to construct creatures whose survival depends on making the right choice quickly. Baum argues that the structure and nature of thought, meaning, sensation, and consciousness therefore arise naturally from the evolution of programs that exploit the compact structure of the world.

Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.

The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support vector machines, neural networks, splines, regularization networks, relevance vector machines, and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.
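As a taste of the machinery the book covers, here is a minimal NumPy sketch of GP regression with a squared-exponential covariance function, computing the standard posterior mean and variance. Hyperparameters are fixed by hand, and the code is an illustration under those simplifying assumptions rather than anything taken from the book or its accompanying software.

```python
# Minimal sketch of Gaussian-process regression with a squared-exponential
# (RBF) covariance, using only NumPy. Hyperparameters are fixed by hand;
# the book treats model selection and large-dataset approximations in depth.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance k(x, x') = s^2 exp(-(x-x')^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance at x_test given noisy 1-D observations."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)                       # stable inversion of K
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                            # posterior mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)    # posterior variance
    return mean, var

x = np.array([-2.0, -1.0, 0.5, 1.5])
y = np.sin(x)
xs = np.linspace(-3, 3, 7)
mean, var = gp_posterior(x, y, xs)
print(np.round(mean, 3), np.round(var, 3))
```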

Conceptual Spaces: The Geometry of Thought

Within cognitive science, two approaches currently dominate the problem of modeling representations. The symbolic approach views cognition as computation involving symbolic manipulation. Connectionism, a special case of associationism, models associations using artificial neuron networks. Peter Gärdenfors offers his theory of conceptual representations as a bridge between the symbolic and connectionist approaches.

Symbolic representation is particularly weak at modeling concept learning, which is paramount for understanding many cognitive phenomena. Concept learning is closely tied to the notion of similarity, which is also poorly served by the symbolic approach. Gärdenfors's theory of conceptual spaces presents a framework for representing information at the conceptual level. A conceptual space is built up from geometrical structures based on a number of quality dimensions. The main applications of the theory are on the constructive side of cognitive science: as a constructive model, the theory can be applied to the development of artificial systems capable of solving cognitive tasks. Gärdenfors also shows how conceptual spaces can serve as an explanatory framework for a number of empirical theories, in particular those concerning concept formation, induction, and semantics. His aim is to present a coherent research program that can be used as a basis for more detailed investigations.
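A toy rendering of the idea, not Gärdenfors's own formalism: treat concepts as prototype points in a space of quality dimensions, measure similarity by distance, and categorize new points by their nearest prototype (a Voronoi-style partition of the space). The dimensions and prototype names below are invented purely for illustration.

```python
# Toy sketch (not Gardenfors's formalism): concepts as prototype points in a
# space of quality dimensions, similarity as inverse Euclidean distance, and
# categorization by nearest prototype (a Voronoi-style partition).
import math

# Hypothetical quality dimensions for color-like concepts: (hue, brightness).
prototypes = {
    "warm-dark":  (0.9, 0.2),
    "warm-light": (0.9, 0.8),
    "cool-dark":  (0.1, 0.2),
    "cool-light": (0.1, 0.8),
}

def categorize(point):
    """Assign a point in the conceptual space to its nearest prototype."""
    return min(prototypes, key=lambda name: math.dist(point, prototypes[name]))

print(categorize((0.8, 0.3)))   # -> 'warm-dark'
print(categorize((0.2, 0.75)))  # -> 'cool-light'
```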

Reasoning about knowledge—particularly the knowledge of agents who reason about the world and each other's knowledge—was once the exclusive province of philosophers and puzzle solvers. More recently, this type of reasoning has been shown to play a key role in a surprising number of contexts, from understanding conversations to the analysis of distributed computer algorithms.

Reasoning About Knowledge is the first book to provide a general discussion of approaches to reasoning about knowledge and its applications to distributed systems, artificial intelligence, and game theory. It brings eight years of work by the authors into a cohesive framework for understanding and analyzing reasoning about knowledge that is intuitive, mathematically well founded, useful in practice, and widely applicable. The book is almost completely self-contained and should be accessible to readers in a variety of disciplines, including computer science, artificial intelligence, linguistics, philosophy, cognitive science, and game theory. Each chapter includes exercises and bibliographic notes.
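A minimal sketch of the possible-worlds (Kripke) semantics on which the book's framework rests: an agent knows a fact at a world exactly when the fact holds in every world the agent considers possible there. The toy model below, with its worlds, facts, and accessibility sets, is invented for illustration and is not the authors' notation or code.

```python
# Toy Kripke model: agent i knows a fact at world w iff the fact holds in
# every world i considers possible at w.

# Worlds and the atomic facts true at each world.
facts = {
    "w1": {"p"},
    "w2": {"p"},
    "w3": set(),
}

# For each agent, the worlds it cannot distinguish from a given world.
possible = {
    "alice": {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}, "w3": {"w3"}},
    "bob":   {"w1": {"w1", "w3"}, "w2": {"w2"}, "w3": {"w1", "w3"}},
}

def knows(agent, fact, world):
    """True iff `fact` holds in every world `agent` considers possible at `world`."""
    return all(fact in facts[w] for w in possible[agent][world])

print(knows("alice", "p", "w1"))   # True: p holds in w1 and w2
print(knows("bob", "p", "w1"))     # False: bob cannot rule out w3, where p fails
```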

The Psychology of Proof: Deductive Reasoning in Human Thinking

In this provocative book, Lance Rips describes a unified theory of natural deductive reasoning and fashions a working model of deduction, with strong experimental support, that is capable of playing a central role in mental life.

Rips argues that certain inference principles are so central to our notion of intelligence and rationality that they deserve serious psychological investigation to determine their role in individuals' beliefs and conjectures. Asserting that cognitive scientists should consider deductive reasoning as a basis for thinking, Rips develops a theory of natural reasoning abilities and shows how it predicts mental successes and failures in a range of cognitive tasks.

In parts I and II of the book, Rips builds insights from cognitive psychology, logic, and artificial intelligence into a unified theoretical structure. He defends the idea that deduction depends on the ability to construct mental proofs: actual memory units that link given information to the conclusions it warrants. From this base Rips develops a computational model of deduction based on two cognitive skills: the ability to make suppositions or assumptions and the ability to posit subgoals for conclusions. A wide variety of original experiments support this model, including studies of human subjects evaluating logical arguments as well as following and remembering proofs. Unlike previous theories of mental proof, this one handles names and variables in a general way. This capability enables deduction to play a crucial role in other thought processes, such as classifying and problem solving.

In part III, Rips compares the theory to earlier approaches in psychology that confined the study of deduction to a small group of tasks, and examines whether the theory is too rational or too irrational in its mode of thought.

Lance J. Rips is Professor of Psychology at Northwestern University.
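As a very rough illustration of subgoaling, one of the two skills on which Rips's model rests, the sketch below does naive backward chaining over atomic premises and conditional rules: to establish a goal, either find it among the premises or posit the antecedent of a matching conditional as a new subgoal. Rips's actual system is far richer, handling suppositions, names, and variables; everything here is an invented toy, not the book's model.

```python
# Much-simplified stand-in for subgoaling in mental proofs: backward chaining
# over atomic premises and (antecedent, consequent) conditional rules.
premises = {"a", "b"}
rules = [("a", "c"), ("c", "d"), ("b", "e")]

def prove(goal, seen=frozenset()):
    """Return True if `goal` follows from the premises via the conditionals."""
    if goal in premises:
        return True
    if goal in seen:              # avoid circular subgoals
        return False
    return any(consequent == goal and prove(antecedent, seen | {goal})
               for antecedent, consequent in rules)

print(prove("d"))  # True: d follows from a via a -> c and c -> d
print(prove("f"))  # False: no proof of f from these premises
```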

Explanation and Interaction: The Computer Generation of Explanatory Dialogues

Explanation and Interaction describes the problems and issues involved in generating interactive, user-sensitive explanations. It presents a particular computational system that generates tutorial, interactive explanations of how simple electronic circuits work. However, the approaches and ideas in the book can be applied to a wide range of computer applications where complex explanations are provided, such as documentation, advisory, and expert systems. The approach presented is based on an analysis of human explanatory discourse, and simple techniques for text planning, dialogue management, and user modeling are developed and used in the system.

Cawsey describes in detail the issues involved in text planning, dialogue management, and user modeling, and presents a particular approach in enough detail that practical systems may be developed based on the ideas. Because the book addresses a wide range of issues in a single system, it is appropriate as a general introduction to discourse processing and user-adapted interaction.

Alison Cawsey is Lecturer in Artificial Intelligence in the Department of Computing Science at the University of Glasgow.
