Computational Intelligence

How to Build a Person: A Prolegomenon

Building a person has been an elusive goal in artificial intelligence. This failure, John Pollock argues, reflects the fact that the problems involved are essentially philosophical: what is needed for the construction of a person is a physical system that mimics human rationality. Pollock describes an exciting theory of rationality and its partial implementation in OSCAR, a computer system whose descendants will literally be persons.

Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation and machine learning. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists—interested in theoretical and applied aspects of modeling, simulating, and building neural-like or intelligent systems.

Interest in developing an effective communication interface connecting the human brain and a computer has grown rapidly over the past decade. The brain-computer interface (BCI) would allow humans to operate computers, wheelchairs, prostheses, and other devices, using brain signals only.

Reliable Reasoning: Induction and Statistical Learning Theory

In Reliable Reasoning, Gilbert Harman and Sanjeev Kulkarni—a philosopher and an engineer—argue that philosophy and cognitive science can benefit from statistical learning theory (SLT), the theory that lies behind recent advances in machine learning. The philosophical problem of induction, for example, is in part about the reliability of inductive reasoning, where the reliability of a method is measured by its statistically expected percentage of errors—a central topic in SLT.
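
To make the notion of expected error concrete, here is a minimal sketch (not code from the book; the one-dimensional data source and the threshold rule are made up for illustration) that fits a simple inductive rule to a training sample and then estimates its statistically expected error rate on a large held-out sample from the same source.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data source: one feature, binary label, noisy near the boundary.
def sample(n):
    x = rng.uniform(-1.0, 1.0, size=n)
    y = (x + rng.normal(scale=0.3, size=n) > 0).astype(int)
    return x, y

# "Induction": choose the threshold that minimizes training error.
x_train, y_train = sample(200)
candidates = np.linspace(-1.0, 1.0, 201)
train_err = [np.mean((x_train > t).astype(int) != y_train) for t in candidates]
threshold = candidates[int(np.argmin(train_err))]

# Reliability in the SLT sense: the expected error on fresh data from the same
# source, estimated here with a large held-out sample.
x_test, y_test = sample(100_000)
expected_error = np.mean((x_test > threshold).astype(int) != y_test)
print(f"threshold = {threshold:.2f}, estimated expected error = {expected_error:.3f}")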

Advances in Neural Information Processing Systems 18: Proceedings of the 2005 Conference

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications.

Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.
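
As a minimal illustration of the approach (a sketch of standard GP regression, not code from the book; the squared-exponential kernel, its hand-picked hyperparameters, and the toy sine data are assumptions), the following computes the GP posterior mean and variance at test inputs via a Cholesky factorization.

import numpy as np

def rbf_kernel(a, b, lengthscale=0.5, variance=1.0):
    # Squared-exponential (RBF) covariance between 1-D input arrays a and b.
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# Toy data: noisy observations of a sine function.
rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, size=10)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=10)
x_test = np.linspace(-4, 4, 200)

noise_var = 0.1 ** 2
K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
K_s = rbf_kernel(x_train, x_test)
prior_var = np.full(len(x_test), 1.0)  # k(x, x) for the RBF kernel with variance 1.0

# Posterior mean = K_s^T K^{-1} y; posterior variance = k(x, x) - K_s^T K^{-1} K_s,
# computed through the Cholesky factor of K for numerical stability.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
mean = K_s.T @ alpha
v = np.linalg.solve(L, K_s)
var = np.maximum(prior_var - np.sum(v**2, axis=0), 0.0)

print(mean[:3], np.sqrt(var[:3]))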

Conceptual Spaces: The Geometry of Thought

Within cognitive science, two approaches currently dominate the problem of modeling representations. The symbolic approach views cognition as computation involving symbolic manipulation. Connectionism, a special case of associationism, models associations using artificial neural networks. Peter Gärdenfors offers his theory of conceptual representations as a bridge between the symbolic and connectionist approaches.
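
One concrete device in Gärdenfors's theory is categorization by nearest prototype in a geometric quality space, which partitions the space into a Voronoi tessellation of concept regions. A minimal sketch, with made-up dimensions and prototypes:

import numpy as np

# Hypothetical 2-D quality space for taste: (sweetness, acidity), each in [0, 1].
prototypes = {
    "lemon": np.array([0.2, 0.9]),
    "apple": np.array([0.6, 0.5]),
    "banana": np.array([0.8, 0.2]),
}

def categorize(point):
    # Assign the point to the concept whose prototype is nearest (Euclidean metric);
    # the regions this rule carves out form a Voronoi tessellation of the space.
    return min(prototypes, key=lambda name: np.linalg.norm(point - prototypes[name]))

print(categorize(np.array([0.7, 0.3])))  # -> 'banana' under this made-up geometry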

Reasoning about knowledge—particularly the knowledge of agents who reason about the world and each other's knowledge—was once the exclusive province of philosophers and puzzle solvers. More recently, this type of reasoning has been shown to play a key role in a surprising number of contexts, from understanding conversations to the analysis of distributed computer algorithms.
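
A minimal sketch of this style of reasoning (not code from the book; the worlds, agents, and facts are invented for illustration): in a possible-worlds model, an agent knows a fact at a world exactly when the fact holds at every world the agent cannot distinguish from it.

# Hypothetical two-agent model with three possible worlds.
facts = {"w1": {"p"}, "w2": {"p"}, "w3": set()}  # atomic facts holding at each world

# Indistinguishability: each agent partitions the worlds it cannot tell apart.
indist = {
    "alice": [{"w1", "w2"}, {"w3"}],
    "bob": [{"w1"}, {"w2", "w3"}],
}

def knows(agent, fact, world):
    # The agent knows `fact` at `world` iff `fact` holds throughout the agent's
    # indistinguishability cell containing `world`.
    cell = next(c for c in indist[agent] if world in c)
    return all(fact in facts[w] for w in cell)

print(knows("alice", "p", "w1"))  # True: p holds at both w1 and w2
print(knows("bob", "p", "w2"))    # False: bob cannot rule out w3, where p fails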

In What Is Thought? Eric Baum proposes a computational explanation of thought. Just as Erwin Schrödinger in his classic 1944 work What Is Life? argued, nearly a decade before the structure of DNA was determined, that life must be explainable at a fundamental level by physics and chemistry, Baum contends that the present-day inability of computer science to explain thought and meaning is no reason to doubt there can be such an explanation.

Analogy-Making as Perception: A Computer Model

The psychologist William James observed that "a native talent for perceiving analogies is ... the leading fact in genius of every order." The centrality and the ubiquity of analogy in creative thought have been noted again and again by scientists, artists, and writers, and understanding and modeling analogical thought have emerged as two of the most important challenges for cognitive science.