
Computational Intelligence

The Scope and Limits of Computational Psychology

In this engaging book, Jerry Fodor argues against the widely held view that mental processes are largely computations, that the architecture of cognition is massively modular, and that the explanation of our innate mental structure is basically Darwinian. Although Fodor has praised the computational theory of mind as the best theory of cognition that we have got, he considers it to be only a fragment of the truth. In fact, he claims, cognitive scientists do not really know much yet about how the mind works (the book's title refers to Steven Pinker's How the Mind Works).

Fodor's primary aim is to explore the relationships among computational and modular theories of mind, nativism, and evolutionary psychology. Along the way, he explains how Chomsky's version of nativism differs from that of the widely received New Synthesis approach. He concludes that although we have no grounds to suppose that most of the mind is modular, we have no idea how nonmodular cognition could work. Thus, according to Fodor, cognitive science has hardly gotten started.

In this book Simon Parsons describes qualitative methods for reasoning under uncertainty, "uncertainty" being a catch-all term for various types of imperfect information. The advantage of qualitative methods is that they do not require precise numerical information. Instead, they work with abstractions such as interval values and information about how values change. The author does not invent completely new methods for reasoning under uncertainty but provides the means to create qualitative versions of existing methods. To illustrate this, he develops qualitative versions of probability theory, possibility theory, and the Dempster-Shafer theory of evidence.

According to Parsons, these theories are best considered complementary rather than exclusive. Thus the book supports the contention that rather than search for the one best method to handle all imperfect information, one should use whichever method best fits the problem. This approach leads naturally to the use of several different methods in the solution of a single problem, and to the problem of integrating their results, a problem to which qualitative methods themselves provide a solution.
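To make the idea of working with interval values rather than precise numbers concrete, here is a minimal sketch of interval-valued probability. It is my own illustration, not Parsons's notation: the `ProbInterval` class and its combination rules (the classical Fréchet bounds, which hold without any independence assumptions) stand in for the kind of abstraction the book develops systematically.

```python
# Interval-valued probabilities combined via the Fréchet bounds,
# which require no assumptions about dependence between events.
from dataclasses import dataclass

@dataclass
class ProbInterval:
    lo: float  # lower bound on the probability
    hi: float  # upper bound on the probability

    def and_(self, other):
        # P(A and B) lies in [max(0, lA + lB - 1), min(uA, uB)]
        return ProbInterval(max(0.0, self.lo + other.lo - 1.0),
                            min(self.hi, other.hi))

    def or_(self, other):
        # P(A or B) lies in [max(lA, lB), min(1, uA + uB)]
        return ProbInterval(max(self.lo, other.lo),
                            min(1.0, self.hi + other.hi))

rain = ProbInterval(0.3, 0.6)
wind = ProbInterval(0.5, 0.8)
rain.and_(wind)  # ProbInterval(lo=0.0, hi=0.6)
```

Note that the conjunction's lower bound collapses to zero here: with imprecise inputs and no dependence information, the qualitative machinery tells you honestly how little can be concluded.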

By the mid-1980s researchers from artificial intelligence, computer science, brain and cognitive science, and psychology realized that the idea of computers as intelligent machines was inappropriate. The brain does not run "programs"; it does something entirely different. But what? Evolutionary theory says that the brain has evolved not to do mathematical proofs but to control our behavior, to ensure our survival. Researchers now agree that intelligence always manifests itself in behavior—thus it is behavior that we must understand. An exciting new field has grown around the study of behavior-based intelligence, also known as embodied cognitive science, "new AI," and "behavior-based AI."

This book provides a systematic introduction to this new way of thinking. After discussing concepts and approaches such as subsumption architecture, Braitenberg vehicles, evolutionary robotics, artificial life, self-organization, and learning, the authors derive a set of principles and a coherent framework for the study of naturally and artificially intelligent systems, or autonomous agents. This framework is based on a synthetic methodology whose goal is understanding by designing and building.

The book includes all the background material required to understand the principles underlying intelligence, as well as enough detailed information on intelligent robotics and simulated agents so readers can begin experiments and projects on their own. The reader is guided through a series of case studies that illustrate the design principles of embodied cognitive science.

Proceedings of the 2000 Conference

The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. The conference is interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, vision, speech and signal processing, reinforcement learning and control, implementations, and diverse applications. Only about 30 percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. These proceedings contain all of the papers that were presented at the 2000 conference.

The idea of knowledge bases lies at the heart of symbolic, or "traditional," artificial intelligence. A knowledge-based system decides how to act by running formal reasoning procedures over a body of explicitly represented knowledge—a knowledge base. The system is not programmed for specific tasks; rather, it is told what it needs to know and expected to infer the rest.

This book is about the logic of such knowledge bases. It describes in detail the relationship between symbolic representations of knowledge and abstract states of knowledge, exploring along the way the foundations of knowledge, knowledge bases, knowledge-based systems, and knowledge representation and reasoning. Assuming some familiarity with first-order predicate logic, the book offers a new mathematical model of knowledge that is general and expressive yet more workable in practice than previous models. The book presents a style of semantic argument and formal analysis that would be cumbersome or completely impractical with other approaches. It also shows how to treat a knowledge base as an abstract data type, completely specified in an abstract way by the knowledge-level operations defined over it.
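The tell-and-infer abstraction described above can be sketched in a few lines. This toy propositional Horn-clause store is my own illustration of the idea of a knowledge base as an abstract data type with knowledge-level operations; it is far simpler than the first-order model the book develops.

```python
# A toy knowledge base as an abstract data type: tell() adds facts and
# Horn rules; ask() answers queries by forward-chaining inference.
class KB:
    def __init__(self):
        self.facts = set()
        self.rules = []  # (frozenset of premises, conclusion) pairs

    def tell(self, conclusion, premises=()):
        if premises:
            self.rules.append((frozenset(premises), conclusion))
        else:
            self.facts.add(conclusion)

    def ask(self, query):
        # Derive everything entailed by the KB, then check the query.
        derived = set(self.facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return query in derived

kb = KB()
kb.tell("bird")
kb.tell("flies", premises=("bird",))
kb.ask("flies")  # True, although "flies" was never stated directly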

Among the many approaches to formal reasoning about programs, Dynamic Logic enjoys the singular advantage of being strongly related to classical logic. Its variants constitute natural generalizations and extensions of classical formalisms. For example, Propositional Dynamic Logic (PDL) can be described as a blend of three complementary classical ingredients: propositional calculus, modal logic, and the algebra of regular events. In First-Order Dynamic Logic (DL), the propositional calculus is replaced by classical first-order predicate calculus. Dynamic Logic is a system of remarkable unity that is theoretically rich as well as of practical value. It can be used for formalizing correctness specifications and proving rigorously that those specifications are met by a particular program. Other uses include determining the equivalence of programs, comparing the expressive power of various programming constructs, and synthesizing programs from specifications.

This book provides the first comprehensive introduction to Dynamic Logic. It is divided into three parts. The first part reviews the appropriate fundamental concepts of logic and computability theory and can stand alone as an introduction to these topics. The second part discusses PDL and its variants, and the third part discusses DL and its variants. Examples are provided throughout, and exercises and a short historical section are included at the end of each chapter.
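To give a flavor of the notation involved, here are standard PDL encodings of structured programming constructs and a correctness specification; these are textbook-style examples, not excerpts from this volume.

```latex
\begin{align*}
\textbf{if } \varphi \textbf{ then } \alpha \textbf{ else } \beta
  &\;\equiv\; (\varphi?;\alpha) \cup (\neg\varphi?;\beta) \\
\textbf{while } \varphi \textbf{ do } \alpha
  &\;\equiv\; (\varphi?;\alpha)^{*};\neg\varphi? \\
\text{partial correctness:}\qquad
  \psi_{\text{pre}} &\rightarrow [\alpha]\,\psi_{\text{post}}
\end{align*}
```

The last formula reads: every terminating execution of program $\alpha$ begun in a state satisfying the precondition ends in a state satisfying the postcondition, illustrating how correctness claims become ordinary formulas of the logic.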

Knowledge discovery and data mining (KDD) deals with the problem of extracting interesting associations, classifiers, clusters, and other patterns from data. The emergence of network-based distributed computing environments has introduced an important new dimension to this problem—distributed sources of data. Traditional centralized KDD typically requires central aggregation of distributed data, which may not always be feasible because of limited network bandwidth, security concerns, scalability problems, and other practical issues. Distributed knowledge discovery (DKD) embraces this merger of communication and computation, analyzing data in a distributed fashion. This technology is particularly useful for large heterogeneous distributed environments such as the Internet, intranets, mobile computing environments, and sensor networks.

When the data sets are large, scaling up the speed of the KDD process is crucial. Parallel knowledge discovery (PKD) techniques address this problem by using high-performance multiprocessor machines. This book presents introductions to DKD and PKD, extensive reviews of the field, and state-of-the-art techniques.
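A simple instance of the distributed approach is count distribution for association mining: each site counts candidate itemsets over its local transactions, and only the small count vectors cross the network, never the raw data. The sketch below is a generic illustration of that idea, not an algorithm from this volume.

```python
# Distributed support counting: sum per-site itemset counts so that
# only count vectors, not raw transactions, leave each site.
from collections import Counter
from itertools import combinations

def local_counts(transactions, size=2):
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), size):
            counts[itemset] += 1
    return counts

def global_frequent(sites, min_support, size=2):
    total = Counter()
    for transactions in sites:      # one count-vector message per site
        total.update(local_counts(transactions, size))
    return {s for s, c in total.items() if c >= min_support}

site_a = [{"bread", "milk"}, {"bread", "milk", "eggs"}]
site_b = [{"bread", "milk"}, {"eggs"}]
global_frequent([site_a, site_b], min_support=3)
# {("bread", "milk")}
```

The same sum-of-local-counts pattern addresses exactly the constraints named above: bandwidth (counts are tiny compared with transactions) and security (raw records never leave their site).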

Rakesh Agrawal, Khaled AlSabti, Stuart Bailey, Philip Chan, David Cheung, Vincent Cho, Joydeep Ghosh, Robert Grossman, Yi-ke Guo, John Hale, John Hall, Daryl Hershberger, Ching-Tien Ho, Erik Johnson, Chris Jones, Chandrika Kamath, Hillol Kargupta, Charles Lo, Balinder Malhi, Ron Musick, Vincent Ng, Byung-Hoon Park, Srinivasan Parthasarathy, Andreas Prodromidis, Foster Provost, Jian Pun, Ashok Ramu, Sanjay Ranka, Mahesh Sreenivas, Salvatore Stolfo, Ramesh Subramonian, Janjao Sutiwaraphun, Kagan Tummer, Andrei Turinsky, Beat Wüthrich, Mohammed Zaki, Joshua Zhang.

Proceedings of the Seventeenth National Conference on Artificial Intelligence and The Twelfth Annual Conference on Innovative Applications of Artificial Intelligence

The annual AAAI National Conference provides a forum for information exchange and interaction among researchers from all disciplines of AI. Contributions include theoretical, experimental, and empirical results. Topics cover principles of cognition, perception, and action; the design, application, and evaluation of AI algorithms and systems; architectures and frameworks for classes of AI systems; and analyses of tasks and domains in which intelligent systems perform.

Distributed for AAAI Press

Proceedings of the 1999 Conference

The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. It draws preeminent academic researchers from around the world and is widely considered to be a showcase conference for new developments in network algorithms and architectures. The broad range of interdisciplinary research areas represented includes computer science, neuroscience, statistics, physics, cognitive science, and many branches of engineering, including signal processing and control theory. Only about 30 percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. These proceedings contain all of the papers that were presented.

Proceedings of the 1999 International Conference on Logic Programming

The International Conference on Logic Programming, sponsored by the Association for Logic Programming, includes tutorials, lectures, and refereed papers on all aspects of logic programming, including theoretical foundations, constraints, concurrency and parallelism, deductive databases, language design and implementation, nonmonotonic reasoning, and logic programming and the Internet.
