There is increasing interest in genetic programming by both researchers and professional software developers. These twenty-two invited contributions show how a wide variety of problems across disciplines can be solved using this new paradigm.
This book presents, within a conceptually unified theoretical framework, a body of methods that have been developed over the past fifteen years for building and simulating qualitative models of physical systems—bathtubs, tea kettles, automobiles, the physiology of the body, chemical processing plants, control systems, electrical systems—where knowledge of that system is incomplete. The primary tool for this work is the author's QSIM algorithm, which is discussed in detail.
The Simulation of Adaptive Behavior Conference brings together researchers from ethology, psychology, ecology, artificial intelligence, artificial life, robotics, computer science, engineering, and related fields to further understanding of the behaviors and underlying mechanisms that allow adaptation and survival in uncertain environments.
The effort to explain the imitative abilities of humans and other animals draws on fields as diverse as animal behavior, artificial intelligence, computer science, comparative psychology, neuroscience, primatology, and linguistics. This volume represents a first step toward integrating research from those studying imitation in humans and other animals, and those studying imitation through the construction of computer software and robots.
Einstein said that "the whole of science is nothing more than a refinement of everyday thinking." David Klahr suggests that we now know enough about cognition—and hence about everyday thinking—to advance our understanding of scientific thinking. In this book he sets out to describe the cognitive and developmental processes that have enabled scientists to make the discoveries that constitute the body of information we call "scientific knowledge."
Linear classifiers in kernel spaces have emerged as a major topic within the field of machine learning. The kernel technique takes the linear classifier—a limited, but well-established and comprehensively studied model—and extends its applicability to a wide range of nonlinear pattern-recognition tasks such as natural language processing, machine vision, and biological sequence analysis. This book provides the first comprehensive overview of both the theory and algorithms of kernel classifiers, including the most recent developments.
In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use kernels, a central concept of SVMs, for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics.
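The modularity described above can be illustrated with a minimal sketch: a kernel perceptron whose base algorithm is fixed while the kernel function is swapped in as a parameter. The dataset and kernel choices here are illustrative assumptions, not examples from the book.

```python
import math

def linear_kernel(x, z):
    # Ordinary dot product: recovers the plain linear classifier.
    return sum(a * b for a, b in zip(x, z))

def rbf_kernel(x, z, gamma=1.0):
    # Gaussian (RBF) kernel: enables nonlinear decision boundaries.
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq)

def train_kernel_perceptron(X, y, kernel, epochs=20):
    # Base algorithm (the perceptron update) never changes;
    # only the kernel argument does.
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            pred = sum(alpha[j] * y[j] * kernel(X[j], xi)
                       for j in range(len(X)))
            if yi * pred <= 0:          # misclassified: strengthen this example
                alpha[i] += 1.0
    return alpha

def predict(X, y, alpha, kernel, x):
    s = sum(alpha[j] * y[j] * kernel(X[j], x) for j in range(len(X)))
    return 1 if s >= 0 else -1

# XOR-like data: not linearly separable, but separable under the RBF kernel.
X = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
y = [1, 1, -1, -1]
alpha = train_kernel_perceptron(X, y, rbf_kernel)
preds = [predict(X, y, alpha, rbf_kernel, x) for x in X]
```

Swapping `rbf_kernel` for `linear_kernel` (or a string or graph kernel) adapts the same base algorithm to a different domain, which is the sense in which kernel machines are modular.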