
Artificial Intelligence


Computer science, and artificial intelligence in particular, has no curriculum in research methods, as other sciences do. This book presents empirical methods for studying complex computer programs: exploratory tools to help find patterns in data, experiment designs and hypothesis-testing tools to help data speak convincingly, and modeling tools to help explain data. Although many of these techniques are statistical, the book discusses statistics in the context of the broader empirical enterprise. The first three chapters introduce empirical questions, exploratory data analysis, and experiment design. The blunt interrogation of statistical hypothesis testing is postponed until chapters 4 and 5, which present classical parametric methods and computer-intensive (Monte Carlo) resampling methods, respectively. This is one of the few books to present these new, flexible resampling techniques in an accurate, accessible manner.
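
The resampling idea is easy to illustrate. The Python sketch below is not from the book (the book's examples are worked by hand, with commercial statistics packages, or with CLASP); it computes a bootstrap confidence interval for the difference between two hypothetical sets of program run times.

```python
# Minimal sketch of a bootstrap (Monte Carlo resampling) estimate of a
# confidence interval for the difference between two sample means.
# Illustrative only; the data and names here are hypothetical, not the book's.
import random

def bootstrap_diff_ci(a, b, n_resamples=10_000, alpha=0.05):
    """Resample both groups with replacement and collect the mean difference."""
    diffs = []
    for _ in range(n_resamples):
        resampled_a = [random.choice(a) for _ in a]
        resampled_b = [random.choice(b) for _ in b]
        diffs.append(sum(resampled_a) / len(a) - sum(resampled_b) / len(b))
    diffs.sort()
    lower = diffs[int((alpha / 2) * n_resamples)]
    upper = diffs[int((1 - alpha / 2) * n_resamples) - 1]
    return lower, upper

# Example: run times (seconds) of two program variants on the same test suite.
variant_a = [4.1, 3.8, 5.0, 4.6, 4.4, 3.9]
variant_b = [5.2, 4.9, 5.5, 5.1, 4.8, 5.3]
print(bootstrap_diff_ci(variant_a, variant_b))
```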

Much of the book is devoted to research strategies and tactics, introducing new methods in the context of case studies. Chapter 6 covers performance assessment, chapter 7 shows how to identify interactions and dependencies among several factors that explain performance, and chapter 8 discusses predictive models of programs, including causal models. The final chapter asks what counts as a theory in AI, and how empirical methods—which deal with specific systems—can foster general theories.

Mathematical details are confined to appendixes, and no prior knowledge of statistics or probability theory is assumed. All of the examples can be analyzed by hand or with commercially available statistics packages.

The Common Lisp Analytical Statistics Package (CLASP), developed in the author's laboratory for Unix and Macintosh computers, is available from The MIT Press.

A Bradford Book

Downloadable instructor resources available for this title: instructor’s manual

Recent decades have produced a blossoming of research in artificial systems that exhibit important properties of mind. But what exactly is this dramatic new work and how does it change the way we think about the mind, or even about who or what has mind?

Stan Franklin is the perfect tour guide through the contemporary interdisciplinary matrix of artificial intelligence, cognitive science, cognitive neuroscience, artificial neural networks, artificial life, and robotics that is producing a new paradigm of mind. Leisurely and informal, but always informed, his tour touches on all of the major facets of mechanisms of mind.

Along the way, Franklin makes the case for a perspective that rejects a rigid distinction between mind and non-mind in favor of a continuum from less to more mind, and for the role of mind as a control structure with the essential task of choosing the next action. Selected stops include the best of the work in these different fields, with the key concepts and results explained in just enough detail to allow readers to decide for themselves why the work is significant.

Major attractions include animal minds, Allen Newell's SOAR, the three artificial intelligence debates, John Holland's genetic algorithms, Wilson's Animat, Brooks's subsumption architecture, Jackson's pandemonium theory, Ornstein's multimind, Marvin Minsky's society of mind, Pattie Maes's behavior networks, Gerald Edelman's neural Darwinism, Drescher's schema mechanisms, Pentti Kanerva's sparse distributed memory, Douglas Hofstadter and Melanie Mitchell's Copycat, and Agre and Chapman's deictic representations.

A Bradford Book

Eugene Charniak breaks new ground in artificial intelligence research by presenting statistical language processing from an artificial intelligence point of view in a text for researchers and scientists with a traditional computer science background.

New, exacting empirical methods are needed to break the deadlock in such areas of artificial intelligence as robotics, knowledge representation, machine learning, machine translation, and natural language processing (NLP). It is time, Charniak observes, to switch paradigms. This text introduces statistical language processing techniques—word tagging, parsing with probabilistic context-free grammars, grammar induction, syntactic disambiguation, semantic word classes, word-sense disambiguation—along with the underlying mathematics and chapter exercises.

Charniak points out that as a method of attacking NLP problems, the statistical approach has several advantages. It is grounded in real text and therefore promises to produce usable results, and it offers an obvious way to approach learning: "one simply gathers statistics."
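
As a toy illustration of "gathering statistics," and not of any code from the book, the Python sketch below builds a most-frequent-tag word tagger from a tiny hypothetical tagged corpus; the techniques Charniak actually covers go further by conditioning on context, for example with tag sequences and probabilistic context-free grammars.

```python
# Minimal sketch of the "gather statistics" idea behind statistical word
# tagging: assign each word the part-of-speech tag it was seen with most
# often in a tagged corpus. The tiny corpus below is hypothetical.
from collections import Counter, defaultdict

tagged_corpus = [
    [("the", "DET"), ("can", "NOUN"), ("rusted", "VERB")],
    [("they", "PRON"), ("can", "VERB"), ("swim", "VERB")],
    [("the", "DET"), ("fish", "NOUN"), ("can", "VERB"), ("swim", "VERB")],
]

counts = defaultdict(Counter)
for sentence in tagged_corpus:
    for word, tag_label in sentence:
        counts[word][tag_label] += 1

def tag(word, default="NOUN"):
    """Return the most frequent tag for a word, or a default for unseen words."""
    return counts[word].most_common(1)[0][0] if word in counts else default

print([(w, tag(w)) for w in ["the", "can", "swim"]])
# A real statistical tagger would also condition on surrounding tags.
```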

Language, Speech, and Communication

On the Programming of Computers by Means of Natural Selection

Genetic programming may be more powerful than neural networks and other machine learning techniques, able to solve problems in a wider range of disciplines. In this ground-breaking book, John Koza shows how this remarkable paradigm works and provides substantial empirical evidence that solutions to a great variety of problems from many different fields can be found by genetically breeding populations of computer programs. Genetic Programming contains a great many worked examples and includes sample computer code that will allow readers to run their own programs.

In getting computers to solve problems without being explicitly programmed, Koza stresses two points: that seemingly different problems from a variety of fields can be reformulated as problems of program induction, and that the recently developed genetic programming paradigm provides a way to search the space of possible computer programs for a highly fit individual computer program to solve the problems of program induction. Good programs are found by evolving them in a computer against a fitness measure instead of by sitting down and writing them.
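
The flavor of that search can be sketched briefly. The following Python fragment is an illustration under simplifying assumptions, not Koza's code: it evolves small expression trees against a fitness measure for a hypothetical symbolic-regression target, and for brevity it uses only subtree mutation rather than the subtree crossover that is central to Koza's paradigm.

```python
# Simplified sketch of an evolutionary search over expression trees.
# Hypothetical target: x**2 + x. Mutation-only; Koza's paradigm uses crossover.
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_tree(depth=3):
    """Grow a random expression tree over {+, -, *} and terminals {x, 1.0, 2.0}."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1.0, 2.0])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Lower is better: summed absolute error against the target x**2 + x."""
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def mutate(tree):
    """Replace a random subtree with a freshly grown one."""
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(depth=2)
    op, left, right = tree
    return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))

population = [random_tree() for _ in range(200)]
for _ in range(40):
    population.sort(key=fitness)
    survivors = population[:50]  # truncation selection against the fitness measure
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=fitness)
print(best, fitness(best))
```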

John R. Koza is Consulting Associate Professor in the Computer Science Department at Stanford University.

The Very Idea

"Machines who think—how utterly preposterous," huff beleaguered humanists, defending their dwindling turf. "Artificial Intelligence—it's here and about to surpass our own," crow techno-visionaries, proclaiming dominion. It's so simple and obvious, each side maintains, only a fanatic could disagree.

Deciding where the truth lies between these two extremes is the main purpose of John Haugeland's marvelously lucid and witty book on what artificial intelligence is all about. Although presented entirely in non-technical terms, it neither oversimplifies the science nor evades the fundamental philosophical issues. Far from ducking the really hard questions, it takes them on, one by one.

Artificial intelligence, Haugeland notes, is based on a very good idea, which might well be right, and just as well might not. That idea, the idea that human thinking and machine computing are "radically the same," provides the central theme for his illuminating and provocative book about this exciting new field. After a brief but revealing digression into intellectual history, Haugeland systematically tackles such basic questions as: What is a computer, really? How can a physical object "mean" anything? What are the options for computational organization? And what structures have been proposed and tried as actual scientific models for intelligence?

In a concluding chapter he takes up several outstanding problems and puzzles—including intelligence in action, imagery, feelings and personality—and their enigmatic prospects for solution.

A Contemporary Introduction to the Philosophy of Mind

In Matter and Consciousness, Paul Churchland clearly presents the advantages and disadvantages of such competing positions in the philosophy of mind as behaviorism, reductive materialism, functionalism, and eliminative materialism. This new edition incorporates the striking developments that have taken place in neuroscience, cognitive science, and artificial intelligence and notes their expanding relevance to philosophical issues.

Churchland organizes and clarifies the new theoretical and experimental results of the natural sciences for a wider philosophical audience, observing that this research bears directly on questions concerning the basic elements of cognitive activity and their implementation in real physical systems. (How is it, he asks, that living creatures perform some cognitive tasks so swiftly and easily, while computers do them only badly or not at all?) Most significant for philosophy, Churchland asserts, is the support these results tend to give to the reductive and the eliminative versions of materialism.

A Bradford Book

Explorations in the Microstructure of Cognition: Psychological and Biological Models

What makes people smarter than computers? These volumes by a pioneering neurocomputing group suggest that the answer lies in the massively parallel architecture of the human mind. They describe a new theory of cognition called connectionism, which challenges the idea of symbolic computation that has traditionally been at the center of debate in theoretical discussions about the mind.

The authors' theory assumes the mind is composed of a great number of elementary units connected in a neural network. Mental processes are interactions between these units, which excite and inhibit one another in parallel rather than in sequential operations. In this context, knowledge can no longer be thought of as stored in localized structures; instead, it consists of the connections between pairs of units distributed throughout the network.
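
A minimal sketch, not drawn from the volumes themselves, may make the picture concrete: the Python fragment below updates a few units in parallel, each excited or inhibited by the others through weighted connections; the weights and inputs are hypothetical.

```python
# Minimal sketch of units interacting through weighted connections.
# Positive weights excite, negative weights inhibit; all units update in
# parallel from the current activation vector. Weights and inputs are made up.
import math

weights = [            # weights[i][j]: influence of unit j on unit i
    [0.0,  0.8, -0.4],
    [0.5,  0.0,  0.6],
    [-0.7, 0.3,  0.0],
]
external_input = [1.0, 0.0, 0.5]
activation = [0.0, 0.0, 0.0]

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

for step in range(10):
    # Every unit computes its net input from the same current state,
    # then the whole activation vector is replaced at once ("in parallel").
    net = [sum(w * a for w, a in zip(row, activation)) + ext
           for row, ext in zip(weights, external_input)]
    activation = [logistic(n) for n in net]

print([round(a, 3) for a in activation])
```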

Volume 1 lays the foundations of this exciting theory of parallel distributed processing, while Volume 2 applies it to a number of specific issues in cognitive science and neuroscience, with chapters describing models of aspects of perception, memory, language, and thought.

Explorations in the Microstructure of Cognition: Foundations

What makes people smarter than computers? These volumes by a pioneering neurocomputing group suggest that the answer lies in the massively parallel architecture of the human mind. They describe a new theory of cognition called connectionism, which challenges the idea of symbolic computation that has traditionally been at the center of debate in theoretical discussions about the mind.

The authors' theory assumes the mind is composed of a great number of elementary units connected in a neural network. Mental processes are interactions between these units, which excite and inhibit one another in parallel rather than in sequential operations. In this context, knowledge can no longer be thought of as stored in localized structures; instead, it consists of the connections between pairs of units distributed throughout the network.

Volume 1 lays the foundations of this exciting theory of parallel distributed processing, while Volume 2 applies it to a number of specific issues in cognitive science and neuroscience, with chapters describing models of aspects of perception, memory, language, and thought.

A Bradford Book
