Michael Brady

Michael Brady is Senior Research Scientist at MIT's Artificial Intelligence Laboratory.

  • Robotics Science

    Robotics Science

    Michael Brady

    These 16 contributions provide a field guide to robotics science today.

    These 16 contributions provide a field guide to robotics science today. Each takes up current work, the problems addressed, and future directions in the areas of perception, planning, control, design, and actuation. In a substantial introduction, Michael Brady summarizes a personal list of 30 problems, problem areas, and issues that lie on the path to development of a science of robotics. These involve sensing, vision, mobility, design, control, manipulation, reasoning, geometric reasoning, and systems integration.

    Contents The Problems of Robotics, Michael Brady • Perception. A Few Steps Toward Artificial 3-D Vision, Olivier D. Faugeras • Contact Sensing for Robot Active Touch, Paolo Dario • Learning and Recognition in Natural Environments, Alex Pentland and Robert Bolles • 3-D Vision for Outdoor Navigation by an Autonomous Vehicle, Martial Hebert and Takeo Kanade • Planning. Geometric Issues in Planning Robot Tasks, Tomás Lozano-Pérez and Russell Taylor • Robotic Manipulation: Mechanics and Planning, Matthew Mason • Control. A Survey of Manipulation and Assembly: Development of the Field and Open Research Issues, Daniel Whitney • Control, Suguru Arimoto • Kinematics and Dynamics for Control, John Hollerbach • The Whole Iguana, Rodney Brooks • Design and Actuation. Design and Kinematics for Force and Velocity Control of Manipulators and End Effectors, Bernard Roth • Arm Design, Haruhiko Asada • Behavior Based Design of Robot Effectors, Stephen Jacobsen, Craig Smith, Klaus Biggers, and Edwin Iversen • Using an Articulated Hand to Manipulate Objects, Kenneth Salisbury, David Brock and Patrick O'Donnell • Legged Robots, Marc Raibert

    Robotics Science is included in the System Development Foundation Benchmark series. System Development Foundation grants have contributed significantly to the development of robotics in the United States during the 1980s.

    • Hardcover $80.00 £55.95
  • Robotics Research

    Robotics Research

    The First International Symposium

    Michael Brady and Richard P. Paul

    The fifty-three contributions collected in this book present leading current research in one of the fastest moving fields of artificial intelligence. Organized around a view of robotics as "the intelligent connection of perception to action," they convey the excitement of cross-disciplinary discussion by scholars from the United States, Japan, France, the United Kingdom, West Germany, and Australia.

    Chapters in the book's first part explore the connection between perception and action in three sections that deal with task level programming, integrated systems, and walking machines. The second part reports recent progress on the perceptual basis of robotics, with chapters grouped in sections on visual inspection, three-dimensional vision, and (nonvisual) local sensing. The third part focuses on systems that facilitate action, with sections that discuss mechanisms, kinematics and dynamics, and feedback control. A final part considers the application of robot systems to manufacturing, with chapters divided into two sections: on systems for manufacture and on robots and manufacture. The editors have written introductions to each of the book's four major parts and eleven sections.

    • Hardcover $105.00
    • Paperback $110.00
  • Computational Models of Discourse

    Computational Models of Discourse

    Michael Brady and Robert C. Berwick

    As the contributions to this book make clear, a fundamental change is taking place in the study of computational linguistics analogous to that which has taken place in the study of computer vision over the past few years and indicative of trends that are likely to affect future work in artificial intelligence generally. The first wave of efforts on machine translation and the formal mathematical study of parsing yielded little real insight into how natural language could be understood by computers or how computers could lead to an understanding of natural language. The current wave of research seeks both to include a wider and more realistic range of features found in human languages and to limit the dimensions of program goals. Some of the new programs embody for the first time constraints on human parsing which Chomsky has uncovered, for example. The isolation of constraints and the representations for their expression, rather than the design of mechanisms and ideas about process organization, is central to the work reported in this volume. And if present goals are somewhat less ambitious, they are also more realistic and more realizable.

    Contents Computational Aspects of Discourse, Robert Berwick • Recognizing Intentions from Natural Language Utterances, James Allen • Cooperative Responses from a Portable Natural Language Data Base Query System, Jerrold Kaplan • Natural Language Generation as a Computational Problem: An Introduction, David McDonald • Focusing in the Comprehension of Definite Anaphora, Candace Sidner • So What Can We Talk About Now? Bonnie Webber • A Preface by David Israel relates these chapters to the general considerations of philosophers and psycholinguists

    • Hardcover $55.00
    • Paperback $45.00 £35.00
  • Robot Motion

    Robot Motion

    Planning and Control

    Michael Brady, John Hollerbach, Timothy L. Johnson, Tomás Lozano-Pérez, and Matthew T. Mason

    The book brings together nineteen papers of fundamental importance to the development of a science of robotics.

    The present surge of interest in robotics can be expected to continue through the 1980s. Major research efforts are springing up throughout industry and in the universities. Senior and graduate level courses are being developed or planned in many places to prepare students to contribute to the development of the field and its industrial applications. Robot Motion will serve this emerging audience as a single source of information on current research in the field.

    The book brings together nineteen papers of fundamental importance to the development of a science of robotics. These are grouped in five sections: Dynamics; Trajectory Planning; Compliance and Force Control; Feedback Control; and Spatial Planning. Each section is introduced by a substantial analytical survey that lays out the problems that arise in that area of robotics and the approaches and solutions that have been tried, with an evaluation of their strengths and shortcomings. In addition, there is an overall introduction that relates robotics research to general trends in the development of artificial intelligence.

    Individual papers are the work of H. Hanafusa, H. Asada, N. Hogan, M. T. Mason, R. Paul, B. Shimano, M. H. Raibert, J. J. Craig, R. H. Taylor, D. E. Whitney, J. M. Hollerbach, J. Luh, M. Walker, R. J. Popplestone, A. P. Ambler, I. M. Bellos, T. Lozano-Pérez, E. Freund, D. F. Golla, S. C. Garg, P. C. Hughes, and K. D. Young.

    • Hardcover $80.00 £62.00

Contributor

  • A Robot Ping-Pong Player

    A Robot Ping-Pong Player

    Experiments in Real-Time Intelligent Control

    Russell L. Andersson

    This tour de force in experimental robotics describes the first robot able to play, and even beat, human ping-pong players.

    This tour de force in experimental robotics paves the way toward understanding dynamic environments in vision and robotics. It describes the first robot able to play, and even beat, human ping-pong players. Constructing a machine to play ping-pong was proposed years ago as a particularly difficult problem requiring fast, accurate sensing and actuation, and the intelligence to play the game. The research reported here began as a series of experiments in building a true real-time vision system. The ping-pong machine incorporates sensor and processing techniques as well as the techniques needed to intelligently plan the robot's response in the fraction of a second available. It thrives on a constant stream of new data. Subjectively evaluating and improving its motion plan as the data arrives, it presages future robot systems with many joints and sensors that must do the same, no matter what the task.

    Contents Introduction • Robot Ping-Pong • System Design • Real-Time Vision System • Robot Controller • Expert Controller Preliminaries • Expert Controller • Robot Ping-Pong Application • Conclusion

    A Robot Ping-Pong Player is included in the Artificial Intelligence Series, edited by Patrick Winston and Michael Brady.

    • Paperback $35.00 £27.00
  • Nonmonotonic Reasoning

    Nonmonotonic Reasoning

    Grigoris Antoniou

    Nonmonotonic reasoning provides formal methods that enable intelligent systems to operate adequately when faced with incomplete or changing information. In particular, it provides rigorous mechanisms for taking back conclusions that, in the presence of new information, turn out to be wrong and for deriving new, alternative conclusions instead. Nonmonotonic reasoning methods provide rigor similar to that of classical reasoning; they form a base for validation and verification and therefore increase confidence in intelligent systems that work with incomplete and changing information. Following a brief introduction to the concepts of predicate logic that are needed in the subsequent chapters, this book presents an in-depth treatment of default logic. Other subjects covered include the major approaches of autoepistemic logic and circumscription, belief revision and its relationship to nonmonotonic inference, and briefly, the stable and well-founded semantics of logic programs.
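
    To make the retraction behavior concrete, here is a minimal Python sketch (not from the book; the bird/penguin example and all names are invented for illustration) of a single default rule whose conclusion is withdrawn when contradicting information arrives:

    ```python
    # A toy default rule: birds fly unless they are known to be abnormal.
    # Adding new information can remove a previously derived conclusion,
    # which is the nonmonotonic behavior described above.

    def conclusions(facts):
        derived = set(facts)
        for x in {s for (s, p) in facts if p == "bird"}:
            if (x, "abnormal") not in facts:      # consistency check of the default
                derived.add((x, "flies"))
        return derived

    kb = {("tweety", "bird")}
    print(("tweety", "flies") in conclusions(kb))   # True: derived by default

    kb.add(("tweety", "abnormal"))                  # e.g., Tweety turns out to be a penguin
    print(("tweety", "flies") in conclusions(kb))   # False: the conclusion is retracted
    ```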

    • Hardcover $55.00 £43.00
  • Solving the Frame Problem

    Solving the Frame Problem

    A Mathematical Investigation of the Common Sense Law of Inertia

    Murray Shanahan

    In 1969, John McCarthy and Pat Hayes uncovered a problem that has haunted the field of artificial intelligence ever since—the frame problem. The problem arises when logic is used to describe the effects of actions and events. Put simply, it is the problem of representing what remains unchanged as a result of an action or event. Many researchers in artificial intelligence believe that its solution is vital to the realization of the field's goals. Solving the Frame Problem presents the various approaches to the frame problem that have been proposed over the years. The author presents the material chronologically—as an unfolding story rather than as a body of theory to be learned by rote. There are lessons to be learned even from the dead ends researchers have pursued, for they deepen our understanding of the issues surrounding the frame problem. In the book's concluding chapters, the author offers his own work on event calculus, which he claims comes very close to a complete solution to the frame problem. Artificial Intelligence series
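
    For readers new to the area, one common formulation of the persistence assumption in a simple event calculus (a sketch in standard notation, not necessarily Shanahan's exact axioms) looks like this:

    ```latex
    % A fluent f holds at t2 if some event initiated it earlier and nothing has
    % terminated ("clipped") it in between -- the common-sense law of inertia.
    \begin{align*}
    \mathit{HoldsAt}(f, t_2) \;\leftarrow\;& \mathit{Happens}(e, t_1) \wedge \mathit{Initiates}(e, f, t_1)
                                             \wedge t_1 < t_2 \wedge \neg \mathit{Clipped}(t_1, f, t_2) \\
    \mathit{Clipped}(t_1, f, t_2) \;\equiv\;& \exists e, t \,[\, \mathit{Happens}(e, t) \wedge t_1 < t < t_2 \wedge \mathit{Terminates}(e, f, t) \,]
    \end{align*}
    ```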

    • Hardcover $15.75 £12.99
  • The Art of Causal Conjecture

    The Art of Causal Conjecture

    Glenn Shafer

    In The Art of Causal Conjecture, Glenn Shafer lays out a new mathematical and philosophical foundation for probability and uses it to explain concepts of causality used in statistics, artificial intelligence, and philosophy.

    The various disciplines that use causal reasoning differ in the relative weight they put on security and precision of knowledge as opposed to timeliness of action. The natural and social sciences seek high levels of certainty in the identification of causes and high levels of precision in the measurement of their effects. The practical sciences—medicine, business, engineering, and artificial intelligence—must act on causal conjectures based on more limited knowledge. Shafer's understanding of causality contributes to both of these uses of causal reasoning. His language for causal explanation can guide statistical investigation in the natural and social sciences, and it can also be used to formulate assumptions of causal uniformity needed for decision making in the practical sciences.

    Causal ideas permeate the use of probability and statistics in all branches of industry, commerce, government, and science. The Art of Causal Conjecture shows that causal ideas can be equally important in theory. It does not challenge the maxim that causation cannot be proven from statistics alone, but by bringing causal ideas into the foundations of probability, it allows causal conjectures to be more clearly quantified, debated, and confronted by statistical evidence.

    • Hardcover $17.75 £14.95
  • Computational Theories of Interaction and Agency

    Computational Theories of Interaction and Agency

    Philip E. Agre and Stanley J. Rosenschein

    Over time the field of artificial intelligence has developed an "agent perspective," expanding its focus from thought to action, from search spaces to physical environments, and from problem-solving to long-term activity. Originally published as a special double volume of the journal Artificial Intelligence, this book brings together fundamental work by the top researchers in artificial intelligence, neural networks, computer science, robotics, and cognitive science on the themes of interaction and agency. It identifies recurring themes and outlines a methodology built around the concept of "agency." The seventeen contributions cover the construction of principled characterizations of interactions between agents and their environments, as well as the use of these characterizations to guide analysis of existing agents and the synthesis of artificial agents. Artificial Intelligence series. Special Issues of Artificial Intelligence

    • Paperback $80.00 £62.00
  • Qualitative Reasoning

    Qualitative Reasoning

    Modeling and Simulation with Incomplete Knowledge

    Benjamin Kuipers

    A body of methods that have been developed for building and simulating qualitative models of physical systems where knowledge of that system is incomplete.

    This book presents, within a conceptually unified theoretical framework, a body of methods that have been developed for building and simulating qualitative models of physical systems—bathtubs, tea kettles, automobiles, the physiology of the body, chemical processing plants, control systems, electrical systems—where knowledge of that system is incomplete. The primary tool for this work is the author's QSIM algorithm, which is discussed in detail. Qualitative models are better able than traditional models to express states of incomplete knowledge about continuous mechanisms. Qualitative simulation guarantees to find all possible behaviors consistent with the knowledge in the model. This expressive power and coverage is important in problem solving for diagnosis, design, monitoring, explanation, and other applications of artificial intelligence. The framework is built around the QSIM algorithm for qualitative simulation and the QSIM representation for qualitative differential equations, both of which are carefully grounded in continuous mathematics. Qualitative simulation draws on a wide range of mathematical methods to keep a complete set of predictions tractable, including the use of partial quantitative information. Compositional modeling and component-connection methods for building qualitative models are also discussed in detail.
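
    As a flavor of what qualitative values buy you, here is a small Python sketch (not the QSIM implementation; the bathtub example and names are illustrative) of the sign arithmetic used to filter qualitative states under incomplete knowledge:

    ```python
    # Qualitative magnitudes reduced to signs; qualitative addition/subtraction
    # returns the set of signs consistent with what is known.

    SIGNS = ("-", "0", "+")

    def qadd(a, b):
        if a == "0":
            return {b}
        if b == "0":
            return {a}
        return {a} if a == b else set(SIGNS)   # opposite signs: result is ambiguous

    def qsub(a, b):
        neg = {"-": "+", "0": "0", "+": "-"}
        return qadd(a, neg[b])

    # Bathtub-like model: sign(d level/dt) = sign(inflow - outflow).
    # Which inflow/outflow sign pairs are consistent with the level rising?
    rising = [(i, o) for i in SIGNS for o in SIGNS if "+" in qsub(i, o)]
    print(rising)
    ```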

    Qualitative Reasoning is primarily intended for advanced students and researchers in AI or its applications. Scientists and engineers who have had a solid introduction to AI, however, will be able to use this book for self-instruction in qualitative modeling and simulation methods.

    • Hardcover $75.00 £51.95
    • Paperback $38.00 £30.00
  • Rules of Encounter

    Rules of Encounter

    Designing Conventions for Automated Negotiation among Computers

    Jeffrey S. Rosenschein and Gilad Zlotkin

    Provides a unified, coherent account of machine interaction at the level of the machine designers (the society of designers) and the level of the machine interaction itself (the resulting artificial society).

    Rules of Encounter applies the general approach and the mathematical tools of game theory in a formal analysis of rules (or protocols) governing the high-level behavior of interacting heterogeneous computer systems. It describes a theory of high-level protocol design that can be used to constrain manipulation and harness the potential of automated negotiation and coordination strategies to attain more effective interaction among machines that have been programmed by different entities to pursue different goals. While game theoretic ideas have been used to answer the question of how a computer should be programmed to act in a given specific interaction, here they are used in a new way, to address the question of how to design the rules of interaction themselves for automated agents. Rules of Encounter provides a unified, coherent account of machine interaction at the level of the machine designers (the society of designers) and the level of the machine interaction itself (the resulting artificial society). Taking into account such attributes of the artificial society as efficiency, and the self-interest of each member in the society of designers, it analyzes what kinds of rules should be instituted to govern interaction among these autonomous agents. The authors point out that adjusting the rules of public behavior—or the rules of the game—by which the programs must interact can influence the private strategies that designers set up in their machines, shaping design choices and run-time behavior, as well as social behavior. Artificial Intelligence series

    • Hardcover $50.00 £40.00
  • Contemplating Minds

    Contemplating Minds

    A Forum for Artificial Intelligence

    William J. Clancey, Stephen Smoliar, and Mark J. Stefik

    Contemplating Minds brings together a selection of reviews from Artificial Intelligence in a form suitable for the general scientific reader, seminar organizer, or student wanting a critical introduction that synthesizes and compares some of the most important and influential books and ideas to have emerged in AI over the past decade.

    The book review column in Artificial Intelligence has evolved from simple reviews to a forum where reviewers and authors debate in essays, even tutorial presentations, the latest, often competing, theories of human and artificial intelligence. Contemplating Minds brings together a selection of these reviews in a form suitable for the general scientific reader, seminar organizer, or student wanting a critical introduction that synthesizes and compares some of the most important and influential books and ideas to have emerged in AI over the past decade.

    Contemplating Minds is divided into four parts, each with a brief introduction, that address the major themes in artificial intelligence, human intelligence, and cognitive science research: Symbolic Models of Mind, Situated Action, Architectures of Interaction, and Memory and Consciousness. The books being debated include those by such influential authors as Allen Newell (Unified Theories of Cognition), Terry Winograd and F. Flores (Understanding Computers and Cognition: A New Foundation for Design), Herbert Simon (The Sciences of the Artificial, second edition), Lucy Suchman (Plans and Situated Actions: The Problem of Human-Machine Communication), Marvin Minsky (The Society of Mind), Gerald Edelman (Neural Darwinism: The Theory of Neuronal Group Selection, The Remembered Present: A Biological Theory of Consciousness, Bright Air, Brilliant Fire: On the Matter of the Mind), and Daniel Dennett (Consciousness Explained). The list of reviewers is equally distinguished.

    • Paperback $80.00 £62.00
  • Thinking Between the Lines

    Thinking Between the Lines

    Computers and the Comprehension of Causal Descriptions

    Gary C. Borchardt

    Thinking Between the Lines targets a challenge at the heart of the artificial intelligence enterprise: the design of programs that can read and reason on the basis of written causal descriptions such as those that appear in encyclopedias, user manuals, and related sources. This capability of "thinking between the lines"—codified in terms of a task called "causal reconstruction"—bears directly on the larger question of how computers can usefully exploit the vast repertory of human knowledge concerning causal phenomena. Central to the approach presented is a cognitively inspired representation called "transition space," implemented in a program called PATHFINDER. The transition space representation embodies a conceptual shift from viewing the world primarily in terms of states—or instantaneous snapshots of activity—to viewing it primarily in terms of transitions—ensembles of changes that can be articulated in language. Transitions, according to this view, serve as antecedents and consequents of causality, and the space of all possible transitions—or transition space—serves as an arena for working out paths of association between the events mentioned within particular causal descriptions. Thinking Between the Lines provides a computational framework and approach for realizing the significant opportunities that arise for intelligent, automated handling of technical material—in routing information, answering questions, elaborating or summarizing information to meet the needs of particular individuals, and performing other useful tasks. Artificial Intelligence series

    • Hardcover $50.00 £34.95
  • Building Problem Solvers

    Building Problem Solvers

    Kenneth D. Forbus and Johan de Kleer

    For nearly two decades, Kenneth Forbus and Johan de Kleer have accumulated a substantial body of knowledge about the principles and practice of creating problem solvers. In some cases they are the inventors of the ideas or techniques described, and in others, participants in their development.

    Building Problem Solvers communicates this knowledge in a focused, cohesive manner. It is unique among standard artificial intelligence texts in combining science and engineering, theory and craft to describe the construction of AI reasoning systems, and it includes code illustrating the ideas.

    After working through Building Problem Solvers, readers should have a deep understanding of pattern-directed inference systems, constraint languages, and truth maintenance systems. The diligent reader will have worked through several substantial examples, including systems that perform symbolic algebra, natural deduction, resolution, qualitative reasoning, planning, diagnosis, scene analysis, and temporal reasoning.

    • Hardcover $90.00 £62.95
    • Paperback $92.00 £71.00
  • Three-Dimensional Computer Vision

    Three-Dimensional Computer Vision

    A Geometric Viewpoint

    Olivier Faugeras

    This monograph by one of the world's leading vision researchers provides a thorough, mathematically rigorous exposition of a broad and vital area in computer vision: the problems and techniques related to three-dimensional (stereo) vision and motion. The emphasis is on using geometry to solve problems in stereo and motion, with examples from navigation and object recognition. Faugeras takes up such important problems in computer vision as projective geometry, camera calibration, edge detection, stereo vision (with many examples on real images), different kinds of representations and transformations (especially 3-D rotations), uncertainty and methods of addressing it, and object representation and recognition. His theoretical account is illustrated with the results of actual working programs. Three-Dimensional Computer Vision proposes solutions to problems arising from a specific robotics scenario in which a system must perceive and act. Moving about an unknown environment, the system has to avoid static and mobile obstacles, build models of objects and places in order to be able to recognize and locate them, and characterize its own motion and that of moving objects, by providing descriptions of the corresponding three-dimensional motions. The ideas generated, however, can be used in different settings, resulting in a general book on computer vision that reveals the fascinating relationship of three-dimensional geometry and the imaging process.
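
    As a reminder of the geometric machinery involved, here is a small NumPy sketch (illustrative only, with made-up intrinsic parameters rather than anything from the book) of the pinhole projection that underlies camera calibration and stereo:

    ```python
    # Project a 3-D world point into the image with a pinhole camera model.
    import numpy as np

    K = np.array([[800.0,   0.0, 320.0],      # assumed intrinsics: focal lengths, principal point
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    R = np.eye(3)                              # camera rotation (identity for simplicity)
    t = np.array([0.0, 0.0, 5.0])              # camera translation

    X = np.array([0.2, -0.1, 1.0])             # a point in world coordinates
    x_h = K @ (R @ X + t)                      # homogeneous image coordinates
    u, v = x_h[:2] / x_h[2]                    # perspective division
    print(u, v)
    ```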

    • Hardcover $160.00 £124.00
  • The Soar Papers

    The Soar Papers

    Research on Integrated Intelligence

    John E. Laird, Allen Newell, and Paul S. Rosenbloom

    Soar is a state-of-the-art computational theory of the mind that has had a significant impact in both artificial intelligence and cognitive science. Begun by John E. Laird, Allen Newell, and Paul S. Rosenbloom at Carnegie Mellon in the early 1980s, the Soar Project is an investigation into the architecture underlying intelligent behavior with the goal of developing and applying a unified theory of natural and artificial intelligence. The Soar Papers - sixty-three articles in all - provide in one place the important ideas that have emerged from this project. The book is organized chronologically, with an introduction that provides multiple organizations according to major topics. Readers interested in the entire effort can read the articles in publication order, while readers interested only in a specific topic can go directly to a logical sequence of papers to read on that topic. Major topics covered in this volume include: the direct precursors of Soar; the Soar architecture; implementation issues; intelligent capabilities (such as problem solving and planning, learning, and external interaction); domains of application; psychological modeling; perspectives on Soar; and using Soar.

    • Hardcover $199.95
    • Paperback $90.00
  • Machine Translation

    Machine Translation

    A View from the Lexicon

    Bonnie Jean Dorr

    This book describes a novel, cross-linguistic approach to machine translation that solves certain classes of syntactic and lexical divergences by means of a lexical conceptual structure that can be composed and decomposed in language-specific ways. This approach allows the translator to operate uniformly across many languages, while still accounting for knowledge that is specific to each language. The translation model can be used to map a source-language sentence to a target-language sentence in a principled fashion. It is built on the basis of a parametric approach, making it easy to change from one language to another (by setting syntactic switches for each language and providing lexical descriptions for each language) without having to write a whole new processor for each language. Dorr's approach advances the field of machine translation in a number of important ways: it provides a uniform processor in which the same syntactic and lexical-semantic processing modules are used for each language; it is interlingual, able to derive an underlying language-independent form of the source language that allows any of the three target languages—Spanish, English, or German—to be produced from this form; and it describes a systematic mapping between the lexical-semantic level and the syntactic level that allows the appropriate target-language words to be selected and realized, despite the potential for syntactic and lexical divergences.

    • Hardcover $75.00 £51.95
    • Paperback $33.00 £26.00
  • Active Vision

    Active Vision

    Andrew Blake and Alan L. Yuille

    Active Vision explores important themes emerging from the active vision paradigm, which has only recently become an established area of machine vision. In four parts the contributions look in turn at tracking, control of vision heads, geometric and task planning, and architectures and applications, presenting research that marks a turning point for both the tasks and the processes of computer vision. The eighteen chapters in Active Vision draw on traditional work in computer vision over the last two decades, particularly in the use of concepts of geometrical modeling and optical flow; however, they also concentrate on relatively new areas such as control theory, recursive statistical filtering, and dynamical modeling. Active Vision documents a change in emphasis, one that is based on the premise that an observer (human or computer) may be able to understand a visual environment more effectively and efficiently if the sensor interacts with that environment, moving through and around it, culling information selectively, and analyzing visual sensory data purposefully in order to answer specific queries posed by the observer. This method is in marked contrast to the more conventional, passive approach to computer vision where the camera is supposed to take in the whole scene, attempting to make sense of all that it sees.

    • Hardcover $70.00 £54.00
    • Paperback $34.00 £27.00
  • Recent Advances in Qualitative Physics

    Recent Advances in Qualitative Physics

    Boi Faltings and Peter Struss

    These twenty-eight contributions report advances in one of the most active research areas in artificial intelligence. Qualitative modeling techniques are an essential part of building second generation knowledge-based systems. This book provides a timely overview of the field while also giving some indications about applications that appear to be feasible now or in the near future. Chapters are organized into sections covering modeling and simulation, ontologies, computational issues, and qualitative analysis. Modeling a physical system in order to simulate it or solve particular problems regarding the system is an important motivation of qualitative physics, involving formal procedures and concepts. The chapters in the section on modeling address the problem of how to set up and structure qualitative models, particularly for use in simulation. Ontology, or the science of being, is the basis for all modeling. Accordingly, chapters on ontologies discuss problems fundamental for finding representational formalism and inference mechanisms appropriate for different aspects of reasoning about physical systems. Computational issues arising from attempts to turn qualitative theories into practical software are then taken up. In addition to simulation and modeling, qualitative physics can be used to solve particular problems dealing with physical systems, and the concluding chapters present techniques for tasks ranging from the analysis of behavior to conceptual design.

    • Hardcover $62.00 £51.95
    • Paperback $55.00 £43.00
  • HANDEY

    HANDEY

    A Robot Task Planner

    Joseph L. Jones, Tomás Lozano-Pérez, Emmanuel Mazer, and Patrick A. O'Donnell

    HANDEY is a task-level robot system that requires only a geometric description of a pick-and-place task rather than the specific robot motions necessary to carry out the task. The system-building process this book describes is an important step toward eliminating the current programming bottleneck that is keeping robots from fulfilling their scientific and economic potential. The HANDEY system, the state-of-the-art technologies for developing it, and the problems encountered are clearly presented, aided by numerous marginal illustrations. The development of HANDEY is part of the authors' long-term goal of achieving systems that can manipulate a variety of objects in different environments using a wide class of robots. HANDEY has been tested on numerous pick-and-place tasks, including parts ranging from wooden cubes to electric motors; it can be used to generate commands for different types of industrial robots, can coordinate two arms working in the same workspace, and has been tested with a module that locates the position of a specific part in a jumble of other parts. The first three chapters introduce the HANDEY system and task-level robot programming systems in general, address the problem of planning pick-and-place tasks, review areas of geometric modeling and kinematics required for subsequent chapters, and introduce the concept of configuration space, which plays a prominent role in HANDEY. The next four chapters describe how HANDEY operates.

    • Hardcover $45.00 £31.95
  • Geometric Invariance in Computer Vision

    Joseph L. Mundy and Andrew Zisserman

    These twenty-three contributions focus on the most recent developments in the rapidly evolving field of geometric invariants and their application to computer vision. The introduction summarizes the basics of invariant theory, discusses how invariants are related to problems in computer vision, and looks at the future possibilities, particularly the notion that invariant analysis might provide a solution to the elusive problem of recognizing general curved 3D objects from an arbitrary viewpoint. The remaining chapters consist of original papers that present important developments as well as tutorial articles that provide useful background material. These chapters are grouped into categories covering algebraic invariants, nonalgebraic invariants, invariants of multiple views, and applications. An appendix provides an extensive introduction to projective geometry and its applications to basic problems in computer vision.
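
    A classic example of the kind of quantity studied here is the cross-ratio of four collinear points, which is unchanged by projective transformations of the line; a short Python sketch (with illustrative values, not taken from the book):

    ```python
    # The cross-ratio of four collinear points is a projective invariant:
    # it is preserved by any nondegenerate 1-D projective (Mobius) map.

    def cross_ratio(a, b, c, d):
        return ((a - c) * (b - d)) / ((a - d) * (b - c))

    def projective(x, m=(2.0, 1.0, 0.5, 3.0)):   # an arbitrary map (p*x + q)/(r*x + s)
        p, q, r, s = m
        return (p * x + q) / (r * x + s)

    pts = [0.0, 1.0, 2.0, 4.0]
    before = cross_ratio(*pts)
    after = cross_ratio(*[projective(x) for x in pts])
    print(before, after)    # the two values agree (up to floating-point error)
    ```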

    • Hardcover $70.00
  • Solving Geometric Constraint Systems

    Solving Geometric Constraint Systems

    A Case Study in Kinematics

    Glenn A. Kramer

    Solving Geometric Constraint Systems records and explains the formal basis for graphical analysis techniques that have been used for decades in engineering disciplines. It describes a novel computer implementation of a 3D graphical analysis method - degrees of freedom analysis - for solving geometric constraint problems of the type encountered in the kinematic analysis of mechanical linkages, providing the best computational bounds yet achieved for this class of problems. The technique allows for the design of algorithms that provide significant speed increases and will foster the development of interactive software tools for the simulation, optimization, and design of complex mechanical devices as well as provide leverage in other geometric domains. Kramer formalizes symbolic geometry, including explicit reasoning about degrees of freedom, as an alternative to symbolic algebraic or iterative numerical techniques for solving geometric constraint satisfaction problems. He discusses both the theoretical and practical advantages of degrees of freedom analysis, including a correctness proof of the procedure, and clearly defines its scope. He covers all nondegenerate cases and handles several classes of degeneracy, giving examples that are practical and of representative complexity.

    Glenn A. Kramer is Research Scientist at the Schlumberger Laboratory for Computer Science.

    • Hardcover $60.00 £41.95
    • Paperback $30.00 £24.00
  • KAM

    A System for Intelligently Guiding Numerical Experimentation by Computer

    Kenneth Man-Kam Yip

    In a cross-disciplinary study that has important implications for research in artificial intelligence and complex nonlinear dynamics, Yip shows how to automate key aspects of this style of reasoning.

    Scientists and engineers routinely use graphical representations to organize their thoughts and as parts of the process of solving verbally presented problems. In a cross-disciplinary study that has important implications for research in artificial intelligence and complex nonlinear dynamics, Yip shows how to automate key aspects of this style of reasoning. He demonstrates the basic feasibility of intelligently guided numerical experimentation in a computational theory and a system for implementing the theory. The system, called KAM, is the first computer system that can intelligently guide numerical experimentation and interpret the numerical results in high-level, conceptual terms. KAM's ability to steer numerical experiments arises from the fact that it not only produces images but also looks at the pictures it draws to guide its own actions. By combining techniques from computer vision with sophisticated dynamical invariants, KAM is able to exploit mathematical knowledge, encoded as visual consistency constraints on the phase space and parameter space, to constrain its search for interesting behaviors. The approach is applied to Hamiltonian systems with two degrees of freedom, an area that is currently of great physical interest, and its power is tested in a difficult problem in hydrodynamics, for which KAM helps derive previously unknown publishable results.

    • Hardcover $37.50
  • Vision, Instruction, and Action

    David Chapman

    Vision, Instruction, and Action clearly and cleverly describes a sophisticated integrated system called Sonja that takes instruction, can interpret its environment visually, and can play games (in this case the video game Amazon) on its own. Sonja integrates advances in intermediate visual processing, interactive activity, and natural language pragmatics. In demonstrating that such systems, rare in artificial intelligence, are possible, David Chapman shows how discoveries in visual psychophysics can be incorporated into AI, how complex activity can result from participation rather than plan following, and how physical context can be used to interpret indexical instructions. Sonja is able to play a competent beginner's game of Amazon autonomously and at the same time can also make flexible use of human instructions in knowing how to kill off monsters, pick up and use tools, and find its way in a dungeon maze. It extends the author's previous work in developing a new theory of activity by addressing linguistic issues and providing a better understanding of the architecture underlying activity, incorporating many technical improvements. Sonja also models several pragmatic issues in computational linguistics, focusing on external reference and including linguistic repair processing, and the use of temporal and spatial expressions. It connects language use with more detailed and realistic theories of vision and activity. In the field of vision research, Sonja provides an implementation of a unified visual architecture, demonstrating that this architecture can support a serious theory of activity. It demonstrates for the first time that various visual mechanisms previously proposed on psychophysical, neurophysiological, and speculative computational grounds can be made useful by connecting them with a natural task domain.

    David Chapman is a Computer Scientist with Teleos Research in Palo Alto.

    • Paperback $47.95
  • Do the Right Thing

    Do the Right Thing

    Studies in Limited Rationality

    Stuart Russell and Eric H. Wefald

    The authors argue that a new theoretical foundation for artificial intelligence can be constructed in which rationality is a property of "programs" within a finite architecture, and their behavior over time in the task environment, rather than a property of individual decisions.

    Like Mooki, the hero of Spike Lee's film "Do the Right Thing," artificially intelligent systems have a hard time knowing what to do in all circumstances. Classical theories of perfect rationality prescribe the "right thing" for any occasion, but no finite agent can compute their prescriptions fast enough. In Do the Right Thing, the authors argue that a new theoretical foundation for artificial intelligence can be constructed in which rationality is a property of "programs" within a finite architecture, and their behavior over time in the task environment, rather than a property of individual decisions. Do the Right Thing suggests that the rich structure that seems to be exhibited by humans, and ought to be exhibited by AI systems, is a necessary result of the pressure for optimal behavior operating within a system of strictly limited resources. It provides an outline for the design of new intelligent systems and describes theoretical and practical tools for bringing about intelligent behavior in finite machines. The tools are applied to game playing and real-time problem solving, with surprising results.

    The book builds on important philosophical and technical work by Russell's coauthor, the late Eric Wefald.

    • Hardcover $40.00 £27.95
    • Paperback $20.00 £14.99
  • Made-Up Minds

    Made-Up Minds

    A Constructivist Approach to Artificial Intelligence

    Gary L. Drescher

    Made-Up Minds addresses fundamental questions of learning and concept invention by means of an innovative computer program that is based on the cognitive-developmental theory of psychologist Jean Piaget. Drescher uses Piaget's theory as a source of inspiration for the design of an artificial cognitive system called the schema mechanism, and then uses the system to elaborate and test Piaget's theory. The approach is original enough that readers need not have extensive knowledge of artificial intelligence, and a chapter summarizing Piaget assists readers who lack a background in developmental psychology. The schema mechanism learns from its experiences, expressing discoveries in its existing representational vocabulary, and extending that vocabulary with new concepts. A novel empirical learning technique, marginal attribution, can find results of an action that are obscure because each occurs rarely in general, although reliably under certain conditions. Drescher shows that several early milestones in the Piagetian infant's invention of the concept of persistent object can be replicated by the schema mechanism.

    • Hardcover $48.00 £33.95
    • Paperback $24.00 £18.99
  • Object Recognition by Computer

    Object Recognition by Computer

    The Role of Geometric Constraints

    W. Eric L. Grimson

    This book describes an extended series of experiments into the role of geometry in the critical area of object recognition.

    With contributions from Tomás Lozano-Pérez and Daniel P. Huttenlocher. An intelligent system must know what the objects are and where they are in its environment. Examples of this ubiquitous problem in computer vision arise in tasks involving hand-eye coordination (such as assembling or sorting), inspection tasks, gauging operations, and in navigation and localization of mobile robots. This book describes an extended series of experiments into the role of geometry in the critical area of object recognition. It provides precise definitions of the recognition and localization problems, describes the methods used to address them, analyzes the solutions to these problems, and addresses the implications of this analysis. Solutions to problems of object recognition are of fundamental importance in many real applications, and versions of the techniques described here are already being used in industrial settings. Although a number of questions remain to be solved, the authors provide a valuable framework for understanding both the strengths and limitations of using object shape to guide recognition.
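
    The flavor of constraint-based matching can be seen in a tiny Python sketch (not the book's code; the point sets and tolerance are invented) that prunes model-to-data pairings with a pairwise distance constraint:

    ```python
    # Toy interpretation-style matching: keep only model-to-data assignments in
    # which every pair of matched features preserves inter-point distance.
    import itertools, math

    model = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (0.0, 3.0)}   # hypothetical model features
    data  = {1: (10.0, 10.0), 2: (14.0, 10.0), 3: (10.0, 13.0)}   # sensed features (model, translated)

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def consistent(assignment, tol=1e-6):
        pairs = itertools.combinations(assignment.items(), 2)
        return all(abs(dist(model[m1], model[m2]) - dist(data[d1], data[d2])) <= tol
                   for (m1, d1), (m2, d2) in pairs)

    candidates = (dict(zip(model, perm)) for perm in itertools.permutations(data))
    matches = [m for m in candidates if consistent(m)]
    print(matches)    # only the correct correspondence survives the pairwise constraint
    ```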

    Contents Introduction • Recognition as a Search Problem • Searching for Correspondences • Two-Dimensional Constraints • Three-Dimensional Constraints • Verifying Hypotheses • Controlling the Search Explosion • Selecting Subspaces of the Search Space • Empirical Testing • The Combinatorics of the Matching Process • The Combinatorics of Hough Transforms • The Combinatorics of Verification • The Combinatorics of Indexing • Evaluating the Methods • Recognition from Libraries • Parameterized Objects • The Role of Grouping • Sensing Strategies • Applications • The Next Steps

    • Hardcover $55.00
    • Paperback $56.00 £44.00
  • Representing and Reasoning with Probabilistic Knowledge

    A Logical Approach to Probabilities

    Fahiem Bacchus

    This book explores logical formalisms for representing and reasoning with probabilistic information that will be of particular value to researchers in nonmonotonic reasoning, applications of probabilities, and knowledge representation.

    Probabilistic information has many uses in an intelligent system. This book explores logical formalisms for representing and reasoning with probabilistic information that will be of particular value to researchers in nonmonotonic reasoning, applications of probabilities, and knowledge representation. It demonstrates that probabilities are not limited to particular applications, like expert systems; they have an important role to play in the formal design and specification of intelligent systems in general. Fahiem Bacchus focuses on two distinct notions of probabilities: one propositional, involving degrees of belief, the other proportional, involving statistics. He constructs distinct logics with different semantics for each type of probability that are a significant advance in the formal tools available for representing and reasoning with probabilities. These logics can represent an extensive variety of qualitative assertions, eliminating requirements for exact point-valued probabilities, and they can represent first-order logical information. The logics also have proof theories which give a formal specification for a class of reasoning that subsumes and integrates most of the probabilistic reasoning schemes so far developed in AI. Using the new logical tools to connect statistical with propositional probability, Bacchus also proposes a system of direct inference in which degrees of belief can be inferred from statistical knowledge and demonstrates how this mechanism can be applied to yield a powerful and intuitively satisfying system of defeasible or default reasoning.
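
    Schematically (in notation chosen here for illustration, not necessarily the book's own), direct inference moves from a statistical statement to a degree of belief about an individual:

    ```latex
    % From "90% of birds fly" plus "Tweety is a bird", and absent a more specific
    % reference class, direct inference assigns a degree of belief of 0.9.
    \[
      [\,\mathrm{Fly}(x) \mid \mathrm{Bird}(x)\,]_x = 0.9
      \;\wedge\; \mathrm{Bird}(\mathrm{tweety})
      \;\;\Longrightarrow\;\;
      \Pr\!\big(\mathrm{Fly}(\mathrm{tweety})\big) = 0.9
    \]
    ```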

    Contents Introduction • Propositional Probabilities • Statistical Probabilities • Combining Statistical and Propositional Probabilities • Default Inferences from Statistical Knowledge

    • Hardcover $39.95 £27.95
  • Experiments in the Machine Interpretation of Visual Motion

    Experiments in the Machine Interpretation of Visual Motion

    David W. Murray and Bernard Buxton

    This book describes experimental advances made in the interpretation of visual motion over the last few years that have moved researchers closer to emulating the way in which we recover information about the surrounding world.

    If robots are to act intelligently in everyday environments, they must have a perception of motion and its consequences. This book describes experimental advances made in the interpretation of visual motion over the last few years that have moved researchers closer to emulating the way in which we recover information about the surrounding world. It describes algorithms that form a complete, implemented, and tested system developed by the authors to measure two-dimensional motion in an image sequence, then to compute three-dimensional structure and motion, and finally to recognize the moving objects.

    The authors develop algorithms to interpret visual motion around four principal constraints. The first and simplest allows the scene structure to be recovered on a pointwise basis. The second constrains the scene to a set of connected straight edges. The third makes the transition between edge and surface representations by demanding that the wireframe recovered is strictly polyhedral. And the final constraint assumes that the scene is comprised of planar surfaces, and recovers them directly.

    Contents Image, Scene, and Motion • Computing Image Motion • Structure from Motion of Points • The Structure and Motion of Edges • From Edges to Surfaces • Structure and Motion of Planes • Visual Motion Segmentation • Matching to Edge Models • Matching to Planar Surfaces

    • Hardcover $48.00 £33.95
    • Paperback $29.00 £23.00
  • Vector Models for Data-Parallel Computing

    Guy E. Blelloch

    Vector Models for Data-Parallel Computing describes a model of parallelism that extends and formalizes the Data-Parallel model on which the Connection Machine and other supercomputers are based. It presents many algorithms based on the model, ranging from graph algorithms to numerical algorithms, and argues that data-parallel models are not only practical and can be applied to a surprisingly wide variety of problems, they are also well suited for very-high-level languages and lead to a concise and clear description of algorithms and their complexity. Many of the author's ideas have been incorporated into the instruction set and into algorithms currently running on the Connection Machine. The book includes the definition of a parallel vector machine; an extensive description of the uses of the scan (also called parallel-prefix) operations; the introduction of segmented vector operations; parallel data structures for trees, graphs, and grids; many parallel computational-geometry, graph, numerical and sorting algorithms; techniques for compiling nested parallelism; a compiler for Paralation Lisp; and details on the implementation of the scan operations.
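
    The scan (parallel-prefix) operation mentioned above has simple sequential semantics; here is a minimal Python sketch (illustrative only, not Blelloch's code) of an exclusive plus-scan and one classic use, with the understanding that the book's point is that the same operation admits fast parallel implementations:

    ```python
    # Exclusive plus-scan: out[i] = xs[0] + ... + xs[i-1].

    def plus_scan(xs):
        out, running = [], 0
        for x in xs:
            out.append(running)
            running += x
        return out

    segment_lengths = [3, 1, 4, 2]
    offsets = plus_scan(segment_lengths)
    print(offsets)        # [0, 3, 4, 8]: where each segment starts in a flat array
    ```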

    Contents Introduction • Parallel Vector Models • The Scan Primitives • Computational-Geometry Algorithms • Graph Algorithms • Numerical Algorithms • Languages and Compilers • Collection-Oriented Languages • Flattening Nested Parallelism • A Compiler for Paralation Lisp • Paralation-Lisp Code • The Scan Vector Model • Data Structures • Implementing Parallel Vector Models • Implementing the Scan Operations • Conclusions • Glossary

    • Hardcover $42.00
  • Artificial Intelligence at MIT, 2-vol. set

    Expanding Frontiers

    Patrick Henry Winston and Sarah A. Shellard

    This collection of over 40 milestone contributions presents the latest state-of-the-art research emerging from one of the world's foremost centers for Artificial Intelligence. The topics range from immediately applicable, demonstrated advances to theoretical proposals. They include robotics, vision, natural language, learning and commonsense problem solving, model-based reasoning systems, engineering problem solving, programmer's apprentice, mixed symbolic and numerical computation, ultraconcurrent systems, and basic theory. Each new area is introduced and linked together with an overview by Patrick Winston.

    Contributors Harold Abelson, Gul Agha, Chae H. An, Christopher G. Atkeson, David J. Bennett, Robert C. Berwick, David Brock, Rodney A. Brooks, William J. Dally, Randall Davis, Bonnie J. Dorr, Brian Eberman, Michael Eisenberg, Sandiway Fong, W. Eric L. Grimson, Matthew Halfant, Walter C. Hamscher, Carl Hewitt, Jessica Hodgins, John M. Hollerbach, Berthold K. P. Horn, Joseph L. Jones, Boris Katz, Jacob Katzenelson, Christof Koch, Tomás Lozano-Pérez, Michael Levin, Matthew T. Mason, Emmanuel Mazer, David A. McAllester, Marvin Minsky, Patrick A. O'Donnell, Tomaso Poggio, Marc H. Raibert, Sajit Rao, David J. Reinkensmeyer, Charles Rich, Elisha Sacks, Kenneth Salisbury, Warren P. Seering, Neil C. Singer, Gerald J. Sussman, Russell H. Taylor, Vincent Torre, William Townsend, Shimon Ullman, Karl T. Ulrich, Richard C. Waters, E. J. Weldon Jr., Brian Williams, Linda Wills, Patrick H. Winston, Jack Wisdom, and Kenneth Yip

    Artificial Intelligence at MIT is included in the Artificial Intelligence Series, edited by Michael Brady, Daniel Bobrow, and Randall Davis.

    • Hardcover $115.00 £79.95
  • Artificial Intelligence at MIT, Volume 1

    Artificial Intelligence at MIT, Volume 1

    Expanding Frontiers

    Patrick Henry Winston and Sarah Alexandra Shellard

    The broad range of material included in these volumes suggests to the newcomer the nature of the field of artificial intelligence, while those with some background in AI will appreciate the detailed coverage of the work being done at MIT. The results presented are related to the underlying methodology. Each chapter is introduced by a short note outlining the scope of the problem being taken up or placing it in its historical context.

    Contents, Volume I Expert Problem Solving: Qualitative and Quantitative Reasoning in Classical Mechanics • Problem Solving About Electrical Circuits • Explicit Control of Reasoning • A Glimpse of Truth Maintenance • Design of a Programmer's Apprentice • Natural Language Understanding and Intelligent Computer Coaches: A Theory of Syntactic Recognition for Natural Language • Disambiguating References and Interpreting Sentence Purpose in Discourse • Using Frames in Scheduling • Developing Support Systems for Information Analysis • Planning and Debugging in Elementary Programming • Representation and Learning: Learning by Creating and Justifying Transfer Frames • Descriptions and the Specialization of Concepts • The Society Theory of Thinking • Representing and Using Real-World Knowledge

    • Hardcover $60.00 £41.95
    • Paperback $55.00 £43.00
  • Artificial Intelligence at MIT, Volume 2

    Artificial Intelligence at MIT, Volume 2

    Expanding Frontiers

    Patrick Henry Winston and Sarah Alexandra Shellard

    The broad range of material included in these volumes suggests to the newcomer the nature of the field of artificial intelligence, while those with some background in AI will appreciate the detailed coverage of the work being done at MIT. The results presented are related to the underlying methodology. Each chapter is introduced by a short note outlining the scope of the problem being taken up or placing it in its historical context.

    Contents, Volume II Understanding Vision: Representing and Computing Visual Information • Visual Detection of Light Sources • Representing and Analyzing Surface Orientation • Registering Real Images Using Synthetic Images • Analyzing Curved Surfaces Using Reflectance Map Techniques • Analysis of Scenes from a Moving Viewpoint • Manipulation and Productivity Technology: Force Feedback in Precise Assembly Tasks • A Language for Automatic Mechanical Assembly • Kinematics, Statics, and Dynamics of Two-Dimensional Manipulators • Understanding Manipulator Control by Synthesizing Human Handwriting • Computer Design and Symbol Manipulation: The LISP Machine • Shallow Binding in LISP 1.5 • Optimizing Allocation and Garbage Collection of Spaces • Compiler Optimization Based on Viewing LAMBDA as RENAME Plus GOTO • Control Structure as Patterns of Passing Messages

    • Hardcover $60.00 £41.95
  • Theories of Comparative Analysis

    Daniel S. Weld

    Theories of Comparative Analysis provides a detailed examination of comparative analysis, the problem of predicting how a system will react to perturbations in its parameters, and why. It clearly formalizes the problem and presents two novel techniques - differential qualitative (DQ) analysis and exaggeration - that solve many comparative analysis problems, providing explanations suitable for use by design systems, automated diagnosis, intelligent tutoring systems, and explanation-based generalization. Weld first places comparative analysis within the context of qualitative physics and artificial intelligence. He then explains the theoretical basis for each technique and describes how they are implemented. He shows that they are essentially complementary: DQ analysis is sound, while exaggeration is a heuristic method; exaggeration, however, solves a wider variety of problems. Weld summarizes their similarities and differences and introduces a hybrid architecture that takes advantage of the strengths of each technique.

    Theories of Comparative Analysis is included in the Artificial Intelligence Series, edited by Michael Brady, Daniel Bobrow, and Randall Davis.

    • Hardcover $32.50
  • Solid Shape

    Solid Shape

    Jan J. Koenderink

    Solid Shape gives engineers and applied scientists access to the extensive mathematical literature on three-dimensional shapes. Drawing on the author's deep and personal understanding of three-dimensional space, it adopts an intuitive visual approach designed to develop heuristic tools of real use in applied contexts. Increasing activity in such areas as computer-aided design and robotics calls for sophisticated methods to characterize solid objects. A wealth of mathematical research exists that can greatly facilitate this work, yet engineers have continued to "reinvent the wheel" as they grapple with problems in three-dimensional geometry. Solid Shape bridges the gap that now exists between technical and modern geometry and shape theory or computer vision, offering engineers a new way to develop the intuitive feel for behavior of a system under varying situations without learning the mathematicians' formal proofs. Reliance on descriptive geometry rather than analysis and on representations most easily implemented on microcomputers reinforces this emphasis on transforming the theoretical to the practical. Chapters cover shape and space, Euclidean space, curved submanifolds, curves, local patches, global patches, applications in ecological optics, morphogenesis, shape in flux, and flux models. A final chapter on literature research and an appendix on how to draw and use diagrams invite readers to follow their own pursuits in three-dimensional shape.

    Solid Shape is included in the Artificial Intelligence series, edited by Patrick Winston, Michael Brady, and Daniel Bobrow.

    • Hardcover $19.75 £14.99
  • Automated Deduction in Nonclassical Logics

    Efficient Matrix Proof Methods for Modal and Intuitionistic Logics

    Lincoln A. Wallen

    This book develops and demonstrates efficient matrix proof methods for automated deduction within an important and comprehensive class of first-order modal and intuitionistic logics. Traditional techniques for the design of efficient proof systems are abstracted from their original setting, which allows their application to a wider class of mathematical logics. The logics discussed are used throughout computer science and artificial intelligence.

    Contents Introduction • I. Automated Deduction in Classical Logic • Proof search in classical sequent calculi • A matrix characterization of classical validity • II. Automated Deduction in Modal Logics • The semantics and proof theory of modal logics • Proof search in modal sequent calculi • Matrix characterizations of modal validity • Alternative proof methods for modal logics • Matrix based proof search • III. Automated Deduction in Intuitionistic Logic • A matrix proof method • Conclusions

    Automated Deduction in Nonclassical Logics is included in the Artificial Intelligence series, edited by Patrick Winston, Michael Brady, and Daniel Bobrow.

    • Hardcover $45.00
  • ONTIC

    A Knowledge Representation System for Mathematics

    David A. McAllester

    ONTIC, the interactive system for verifying "natural" mathematical arguments that David McAllester describes in this book, represents a significant change of direction in the field of mechanical deduction, a key area in computer science and artificial intelligence. ONTIC is an interactive theorem prover based on novel forward chaining inference techniques. It is an important advance over such earlier systems for checking mathematical arguments as Automath, Nuprl, and the Boyer-Moore system. The first half of the book provides a high-level description of the ONTIC system and compares it with these and other automated theorem proving and verification systems. The second half presents a complete formal specification of the inference mechanisms used. McAllester's is the only semi-automated verification system based on classical Zermelo-Fraenkel set theory. It uses object-oriented inference, a unique automated inference mechanism for a syntactic variant of first-order predicate calculus. The book shows how the ONTIC system can be used to check such serious proofs as the proof of the Stone representation theorem without expanding them to excessive detail.

    ONTIC: A Knowledge Representation System for Mathematics is included in the Artificial Intelligence series, edited by Patrick Henry Winston and Michael J. Brady.

    • Hardcover $29.95
  • Shape From Shading

    Shape From Shading

    Berthold K.P. Horn and Michael J. Brooks

    Understanding how the shape of a three-dimensional object may be recovered from shading in a two-dimensional image of the object is one of the most important—and still unresolved—problems in machine vision. Although this important subfield is now in its second decade, this book is the first to provide a comprehensive review of shape from shading. It brings together all of the seminal papers on the subject, shows how recent work relates to more traditional approaches, and provides a comprehensive annotated bibliography.
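
    To make the forward direction of the problem concrete, the short sketch below renders the shading of a known surface from a Lambertian reflectance map (the map R(p, q) discussed in several of the chapters); shape from shading is the much harder inverse, recovering the surface from such an image. The discretization, light direction, and test surface are illustrative choices, not taken from any particular chapter.

    ```python
    import numpy as np

    def reflectance_map(p, q, ps=0.2, qs=0.3):
        """Lambertian reflectance in gradient space for a light source in
        direction (ps, qs, 1)."""
        num = 1.0 + p * ps + q * qs
        den = np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + ps**2 + qs**2)
        return np.clip(num / den, 0.0, None)   # clamp self-shadowed points to zero

    # Test surface: a hemisphere z = sqrt(r^2 - x^2 - y^2).
    r = 1.0
    x, y = np.meshgrid(np.linspace(-0.9, 0.9, 64), np.linspace(-0.9, 0.9, 64))
    z = np.sqrt(np.maximum(r**2 - x**2 - y**2, 1e-6))
    p, q = -x / z, -y / z                      # surface gradients dz/dx, dz/dy
    image = reflectance_map(p, q)              # the shading an SFS method would invert
    print(image.shape, float(image.min()), float(image.max()))
    ```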

    The book's 17 chapters cover: Surface Descriptions from Stereo and Shading. Shape and Source from Shading. The Eikonal Equation: Some Results Applicable to Computer Vision. A Method for Enforcing Integrability in Shape from Shading Algorithms. Obtaining Shape from Shading Information. The Variational Approach to Shape from Shading. Calculating the Reflectance Map. Numerical Shape from Shading and Occluding Boundaries. Photometric Invariants Related to Solid Shape. Improved Methods of Estimating Shape from Shading Using the Light Source Coordinate System. A Provably Convergent Algorithm for Shape from Shading. Recovering Three-Dimensional Shape from a Single Image of Curved Objects. Perception of Solid Shape from Shading. Local Shading Analysis. Radarclinometry for the Venus Radar Mapper. Photometric Method for Determining Surface Orientation from Multiple Images.

    Shape from Shading is included in the Artificial Intelligence series, edited by Michael Brady, Daniel Bobrow, and Randall Davis.

    • Hardcover $85.00 £66.00
    • Paperback $51.00 £40.00
  • The Paralation Model

    Architecture-Independent Parallel Programming

    Gary Sabot

    The Paralation Model introduces a way of programming parallel computers that is easy to use for general problem solving and will work for many different parallel computer architectures with any number of processors, from one to billions. The book includes working LISP source code for a mini compiler, along with many programming examples. Parallel computers can often be impossibly hard to program. The paralation model that Gary Sabot describes is a breakthrough in its simplicity and well-defined semantics, and can serve as a useful and stable semantic staging point for parallel language research. Consisting of a new data structure and a small, irreducible set of operators, the model has a number of useful features: it can be combined with any base language to produce a concrete parallel language; it makes explicit and transparent the cost of both processing and communication, often ignored by shared-memory and dataflow models; and it serves as a precise tool for a programmer while simultaneously supplying a compiler with an abundance of useful information for a variety of target architectures. This decouples advances and changes in parallel computer design from the design of application programs: old paralation programs can take advantage of new computers without sacrificing efficiency.

    The Paralation Model is included in the Artificial Intelligence series, edited by Patrick Winston, Michael Brady, and Daniel Bobrow.
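
    A rough, plain-Python analogue may help convey the flavor described above: a collection of sites, fields that assign one value per site, elementwise work applied at every site, and explicit (hence visibly costed) data movement between collections. The class and function names here are illustrative stand-ins, not Sabot's Lisp operators.

    ```python
    from typing import Callable, List

    class Paralation:
        """A collection of sites (illustrative stand-in for the model's structure)."""
        def __init__(self, n_sites: int):
            self.n = n_sites

        def field(self, values: List) -> "Field":
            assert len(values) == self.n
            return Field(self, list(values))

    class Field:
        """One value per site of a paralation."""
        def __init__(self, shape: Paralation, values: List):
            self.shape, self.values = shape, values

    def elwise(fn: Callable, *fields: "Field") -> "Field":
        """Apply fn at every site; conceptually all sites work in parallel."""
        shape = fields[0].shape
        return shape.field([fn(*vals) for vals in zip(*(f.values for f in fields))])

    def move(src: "Field", mapping: List[int], dst: Paralation, combine: Callable) -> "Field":
        """Explicit communication: send each source value to site mapping[i] of dst,
        resolving collisions with `combine`, so the cost of movement stays visible."""
        out = [None] * dst.n
        for i, v in zip(mapping, src.values):
            out[i] = v if out[i] is None else combine(out[i], v)
        return dst.field(out)

    # Square eight numbers elementwise, then sum them into a one-site paralation.
    p8, p1 = Paralation(8), Paralation(1)
    squares = elwise(lambda v: v * v, p8.field(list(range(8))))
    total = move(squares, [0] * 8, p1, combine=lambda a, b: a + b)
    print(total.values)   # [140]
    ```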

    • Hardcover $35.00
  • Robotics Research

    The Fourth International Symposium

    Robert Bolles and Bernard Roth

    This book collects 52 contributions by prominent researchers from Japan, Europe, and the United States. The topics covered include kinematics and dynamics, mobile robots, design and mechanisms, and perception.

    The Fourth International Symposium on Robotics Research was held in the summer of 1987 in Santa Cruz, California. This book collects 52 contributions by prominent researchers from Japan, Europe, and the United States. The topics covered include kinematics and dynamics, mobile robots, design and mechanisms, and perception.

    Selected contents: ROBOTWORLD: A Multiple Robot Vision Guided Assembly System (Scheinman). Investigating Fast, Intelligent Systems with a Ping-Pong Playing Robot (Andersson). Behavior Based Design of Robot Effectors (Jacobsen et al.). Intrinsic Tactile Sensing for Artificial Hands (Bicchi, Dario). Whole Arm Manipulation (Salisbury). Grasping as a Contact Sport (Cutkosky et al.). MEISTER: A Model Enhanced Intelligent and Skillful Teleoperational Robot System (Sato and Hirai). Optical Range Finding Methods for Robotics (Idesawa). 4D-Dynamic Scene Analysis with Integral Spatio-Temporal Models (Dickmanns). Model-Based Object Motion Tracking (Mundy, Thompson). Sensor-Based Manipulation Planning as a Game with Nature (Taylor, Mason, Goldberg). Issues in the Design of Off-Line Programming Systems (Craig). HANDEY: A Task-Level Robot System (Lozano-Perez et al.). Design and Sensor-Based Robotic Assembly in the "Design to Product" Project (Fehrenbach, Smithers). Collision Detection among Moving Objects in Simulation (Kawabe, Okano, Shimada). An Automated Guided Vehicle with Map Building and Path Finding Capabilities (Jarvis, Byrne).

    Robotics Research: The Fourth International Symposium is included in the Artificial Intelligence Series edited by Patrick Winston and Michael Brady.

    • Hardcover $75.00
  • Model-Based Control of a Robot Manipulator

    Model-Based Control of a Robot Manipulator

    Chae H. An, Christopher G. Atkeson, and John Hollerbach

    Model-Based Control of a Robot Manipulator presents the first integrated treatment of many of the most important recent developments in using detailed dynamic models of robots to improve their control. The authors' work on automatic identification of kinematic and dynamic parameters, feedforward position control, stability in force control, and trajectory learning has significant implications for improving performance in future robot systems. All of the main ideas discussed in this book have been validated by experiments on a direct-drive robot arm. The book addresses the issues of building accurate robot models and of applying them for high-performance control. It first describes how three sets of models - the kinematic model of the links and the inertial models of the links and of rigid-body loads - can be obtained automatically using experimental data. These models are then incorporated into position control, single trajectory learning, and force control. The MIT Serial Link Direct Drive Arm, on which these models were developed and applied to control, is one of the few manipulators currently suitable for testing such concepts.
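
    The feedforward idea the book validates experimentally can be sketched for a single joint: use the identified model to cancel the known dynamics, then stabilize the remaining error with a PD term. The parameters, gains, and names below are illustrative, not those identified for the MIT Serial Link Direct Drive Arm.

    ```python
    import math

    M, L, G = 2.0, 0.5, 9.81        # assumed link mass [kg], COM distance [m], gravity
    INERTIA = M * L**2               # rigid-body inertia of the single link about its joint
    KP, KV = 100.0, 20.0             # PD gains on the tracking error

    def gravity_torque(q):
        return M * G * L * math.sin(q)

    def computed_torque(q, qd, q_des, qd_des, qdd_des):
        """tau = I*(qdd_des + KV*ev + KP*ep) + gravity(q): the model cancels the
        known dynamics, leaving simple linear error dynamics."""
        ep, ev = q_des - q, qd_des - qd
        return INERTIA * (qdd_des + KV * ev + KP * ep) + gravity_torque(q)

    # Track a slow sine with a crude Euler simulation of the same single-link model.
    q, qd, dt = 0.0, 0.0, 0.001
    for step in range(2000):
        t = step * dt
        q_des, qd_des, qdd_des = math.sin(t), math.cos(t), -math.sin(t)
        tau = computed_torque(q, qd, q_des, qd_des, qdd_des)
        qdd = (tau - gravity_torque(q)) / INERTIA   # plant dynamics (model is exact here)
        qd += qdd * dt
        q += qd * dt
    print(f"tracking error after {t:.2f}s: {abs(q - q_des):.4f} rad")
    ```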

    Contents Introduction • Direct Drive Arms • Kinematic Calibration • Estimation of Load Inertial Parameters • Estimation of Link Inertial Parameters • Feedforward and Computed Torque Control • Model-Based Robot Learning • Dynamic Stability Issues in Force Control • Kinematic Stability Issues in Force Control • Conclusion

    Model-Based Control of a Robot Manipulator is included in the Artificial Intelligence Series edited by Patrick Winston and Michael Brady.

    • Hardcover $45.00
    • Paperback $32.00 £25.00
  • Reasoning About Change

    Time and Causation from the Standpoint of Artificial Intelligence

    Yoav Shoham

    A comprehensive approach to temporal reasoning in artificial intelligence.

    The notions of time and change are central to the way we think about the world. Not surprisingly, both play a prominent role in artificial intelligence research, in diverse areas such as medical diagnosis, circuit debugging, naive physics, and robot planning. Reasoning About Change presents a comprehensive approach to temporal reasoning in artificial intelligence. Using techniques from temporal, nonmonotonic and epistemic logics, the author investigates issues that arise when one adopts a formal approach to temporal reasoning in artificial intelligence that is at once rigorous, efficient, and intuitive. Shoham develops a temporal logic that is based on temporal intervals rather than points in time, and presents a mathematical apparatus that simplifies and clarifies notions of nonmonotonic logic and the modal logic of knowledge. He constructs a specific logic, called Chronological Ignorance, and discusses both its practical utility and philosophical importance. In particular, he offers a new account of the concept of causation, and of its central role in commonsense reasoning.

    Reasoning About Change is included in the Artificial Intelligence Series, edited by Michael Brady and Patrick Henry Winston.

    • Hardcover $39.95 £27.95
  • Visual Reconstruction

    Visual Reconstruction

    Andrew Blake and Andrew Zisserman

    Visual Reconstruction presents a unified and highly original approach to the treatment of continuity in vision. It introduces, analyzes, and illustrates two new concepts. The first—the weak continuity constraint—is a concise, computational formalization of piecewise continuity. It is a mechanism for expressing the expectation that visual quantities such as intensity, surface color, and surface depth vary continuously almost everywhere, but with occasional abrupt changes. The second concept—the graduated nonconvexity algorithm—arises naturally from the first. It is an efficient, deterministic (nonrandom) algorithm for fitting piecewise continuous functions to visual data. The book first illustrates the breadth of application of reconstruction processes in vision with results that the authors' theory and program yield for a variety of problems. The mathematics of weak continuity and the graduated nonconvexity (GNC) algorithm are then developed carefully and progressively.
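
    In one dimension the weak continuity constraint can be written down directly. The sketch below evaluates the "weak string" energy (following the notation of the book's early chapters): a data term, a smoothness term that is switched off across declared breaks, and a fixed penalty for each break. Only the energy is shown; the GNC algorithm, which the book develops to minimize such nonconvex energies efficiently, is not reproduced. Parameter values are illustrative.

    ```python
    def weak_string_energy(u, d, breaks, lam=2.0, alpha=1.0):
        """E(u, l) = sum_i (u_i - d_i)^2
                   + lam^2 * sum_i (u_{i+1} - u_i)^2 * (1 - l_i)
                   + alpha * sum_i l_i,   with l_i = 1 marking a discontinuity."""
        data_term = sum((ui - di) ** 2 for ui, di in zip(u, d))
        smooth_term = sum(lam**2 * (u[i + 1] - u[i]) ** 2 * (1 - breaks[i])
                          for i in range(len(u) - 1))
        return data_term + smooth_term + alpha * sum(breaks)

    # Noisy data with one genuine step: declaring a single break at the step
    # costs alpha but removes the large smoothness penalty, so it wins.
    d = [0.10, -0.05, 0.00, 1.00, 1.05, 0.95]
    u = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]          # piecewise-constant reconstruction
    print(weak_string_energy(u, d, breaks=[0, 0, 0, 0, 0]))   # continuous fit: ~4.02
    print(weak_string_energy(u, d, breaks=[0, 0, 1, 0, 0]))   # one break:      ~1.02
    ```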

    Contents Modeling Piecewise Continuity • Applications of Piecewise Continuous Reconstruction • Introducing Weak Continuity Constraints • Properties of the Weak String and Membrane • Properties of Weak Rod and Plate • The Discrete Problem • The Graduated Nonconvexity (GNC) Algorithm • Appendixes: Energy Calculations for the String and Membrane • Noise Performance of the Weak Elastic String • Energy Calculations for the Rod and Plate • Establishing Convexity • Analysis of the GNC Algorithm

    Visual Reconstruction is included in the Artificial Intelligence series, edited by Michael Brady and Patrick Winston.

    • Hardcover $34.00
    • Paperback $30.00 £24.00
  • Knowledge-Based Tutoring

    The GUIDON Program

    William Clancey

    Knowledge-Based Tutoring describes the advantages and difficulties of adapting an expert system for use in teaching and problem solving. In this case the well-known rule-based expert system, MYCIN, which has been widely used in medical artificial intelligence to do infectious disease diagnosis and therapy selection, is used as a base for the instructional program GUIDON. MYCIN's rules are interpreted by GUIDON in order to evaluate a student's problem solving and provide assistance as the student gathers information about a patient and makes a diagnosis. The book describes what GUIDON does, how it is constructed, and the benefits and limitations of its design. This is the first attempt to adapt a rule base for tutoring and opens the door to what will most likely be a dramatic growth in interest in the use of expert systems for teaching. Clancey points out that it is easy to build an expert system that "works," but difficult to build one that makes knowledge explicit so that it can be taught. His dramatic demonstration of the separation of tutoring from subject matter knowledge will be of particular interest to researchers who are developing traditional computer-aided instruction programs. Clancey's program will also prove useful to cognitive science researchers in psychology and education who are interested in learning about AI techniques for explanation and student modeling, and to the many people who are currently developing computer-aided instruction programs. The book contains enough technical details for the work to be replicated, but has been generalized so the methods and lessons can be applied to other knowledge representations.

    Knowledge-Based Tutoring is included in The MIT Press Series in Artificial Intelligence, edited by Patrick Henry Winston and Michael Brady.

    • Hardcover $47.50
  • AI in the 1980s and Beyond

    AI in the 1980s and Beyond

    An MIT Survey

    W. Eric L. Grimson and Ramesh S. Patil

    This collection of essays by 12 members of the MIT staff provides an inside report on the scope and expectations of current research in one of the world's major AI centers. The chapters on artificial intelligence, expert systems, vision, robotics, and natural language provide both a broad overview of current areas of activity and an assessment of the field at a time of great public interest and rapid technological progress.

    Contents Artificial Intelligence, Patrick H. Winston and Karen Prendergast • Knowledge-Based Systems, Randall Davis • Expert-System Tools and Techniques, Peter Szolovits • Medical Diagnosis: Evolution of Systems Building Expertise, Ramesh S. Patil • Artificial Intelligence and Software Engineering, Charles Rich and Richard C. Waters • Intelligent Natural Language Processing, Robert C. Berwick • Automatic Speech Recognition and Understanding, Victor Zue • Robot Programming and Artificial Intelligence, Tomas Lozano-Perez • Robot Hands and Tactile Sensing, John M. Hollerbach • Intelligent Vision, Michael Brady • Making Robots See, W. Eric L. Grimson • Autonomous Mobile Robots, Rodney A. Brooks

    AI in the 1980s and Beyond is included in the Artificial Intelligence Series, edited by Patrick H. Winston and Michael Brady.

    • Hardcover $30.00
    • Paperback $43.00 £34.00
  • Robotics Research

    The Third International Symposium

    Olivier Faugeras and George Giralt

    The Third International Symposium on Robotics Research was held in the fall of 1985 in Gouvieux-Chantilly, France. This book collects 45 papers presented by the 66 participants from the U.S., Japan, and Europe, reflecting the wide variety of research issues currently under exploration. The topics covered include perception (visual and local), action and control, robot mechanisms, and modeling and systems.

    Selected Contents Two Sensors Are Better Than One: Examples of Integration of Vision and Touch (Bajcsy and Allen) • A Layered Intelligent Control System for a Mobile Robot (R. A. Brooks) • Design of a CAD/CAM System for Robotics on a Microcomputer (E. Dombre, A. Fournier, C. Quard, R. Zapata) • Tackling Uncertainty and Imprecision in Robotics (A. Farreny and A. Prade) • Robot Learning and Teach-In Based on Sensory Feedback (G. Hirzinger) • Estimation of Inertial Parameters of Manipulator Loads and Links (J. Hollerbach) • Parallel Manipulator (H. Inoue, Y. Tsusaka, T. Fukuizumi) • The Operational Space Formulation in the Analysis, Design, and Control of Manipulators (O. Khatib) • Symmetry in Running (M. Raibert) • Integrated Language, Sensing, and Control for a Robot Hand (K. Salisbury) • A Stereo Method Using Disparity Histograms of Multi-Resolution Channels (Y. Shirai)

    Robotics Research is eighteenth in the Artificial Intelligence Series, edited by Patrick Winston and Michael Brady.

    • Hardcover $65.00
  • Machine Interpretation of Line Drawings

    Kokichi Sugihara

    This book solves a long-standing problem in computer vision, the interpretation of line drawings, and, in doing so, answers many of the concerns raised by this problem, particularly with regard to errors in the placement of lines and vertices in the images. Sugihara presents a computational mechanism that functionally mimics human perception in being able to generate three-dimensional descriptions of objects from two-dimensional line drawings. The objects considered are polyhedrons or solid objects bounded by planar faces, and the line drawings are single-view pictures of these objects. Sugihara's mechanism has several potential applications. It can facilitate man-machine communication by extracting object structures automatically from pictures drawn by a designer, which can be particularly useful in the computer-aided design of geometric objects, such as mechanical parts and buildings. It can also be used in the intermediate stage of computer vision systems used to obtain and analyze images in the outside world. The computational mechanism itself is not accompanied by a large database but is composed of several simple procedures based on linear algebra and combinatorial theory.

    Contents Introduction • Candidates for Spatial Interpretation • Discrimination between Correct and Incorrect Pictures • Correctness of Hidden-Part-Drawn Pictures • Algebraic Structures of Line Drawings • Combinatorial Structures of Line Drawings • Overcoming Superstrictness • Algorithmic Aspects of Generic Reconstructibility • Specification of Unique Shapes • Recovery of Shape from Surface Information • Polyhedrons and Rigidity

    Machine Interpretation of Line Drawings is included in The MIT Press Series in Artificial Intelligence, edited by Patrick Henry Winston and Michael Brady.

    • Hardcover $35.00
  • Legged Robots That Balance

    Marc Raibert

    This book, by a leading authority on legged locomotion, presents exciting engineering and science, along with fascinating implications for theories of human motor control. It lays fundamental groundwork in legged locomotion, one of the least developed areas of robotics, addressing the possibility of building useful legged robots that run and balance. The book describes the study of physical machines that run and balance on just one leg, including analysis, computer simulation, and laboratory experiments. Contrary to expectations, it reveals that control of such machines is not particularly difficult. It describes how the principles of locomotion discovered with one leg can be extended to systems with several legs and reports preliminary experiments with a quadruped machine that runs using these principles. Raibert's work is unique in its emphasis on dynamics and active balance, aspects of the problem that have played a minor role in most previous work. His studies focus on the central issues of balance and dynamic control, while avoiding several problems that have dominated previous research on legged machines.

    Legged Robots That Balance is fifteenth in the Artificial Intelligence Series, edited by Patrick Winston and Michael Brady.

    • Hardcover $35.00
    • Paperback $45.00 £35.00
  • The Connection Machine

    The Connection Machine

    W. Daniel Hillis

    The Connection Machine describes a fundamentally different kind of computer. It offers a preview of a parallel processing computer that Daniel Hillis and others are now developing to perform tasks that no conventional, sequential machine can solve in a reasonable time.

    The Connection Machine is included in the Artificial Intelligence series, edited by Patrick Winston, Michael Brady, and Daniel Bobrow.

    • Hardcover $27.50
    • Paperback $25.00
  • The Acquisition of Syntactic Knowledge

    The Acquisition of Syntactic Knowledge

    Robert C. Berwick

    This landmark work in computational linguistics is of great importance both theoretically and practically because it shows that much of English grammar can be learned by a simple program. The Acquisition of Syntactic Knowledge investigates the central questions of human and machine cognition: How do people learn language? How can we get a machine to learn language? It first presents an explicit computational model of language acquisition which can actually learn rules of English syntax given a sequence of grammatical, but otherwise unprepared, sentences. It shows that natural languages are designed to be easily learned and easily processed - an exciting breakthrough from the point of view of artificial intelligence and the design of expert systems because it shows how extensive knowledge might be acquired automatically, without outside intervention. Computationally, the book demonstrates how constraints that may be reasonably assumed to aid sentence processing also aid language acquisition. Chapters in the book's second part apply computational methods to the general problem of developmental growth, particularly the thorny problem of the interaction between innate genetic endowment and environmental input, with the intent of uncovering the constraints on the acquisition of syntactic knowledge. A number of "mini-theories" of learning are incorporated in this study of syntax with results that should appeal to a wide range of scholarly interests. These include how lexical categories, phonological rule systems, and phrase structure rules are learned; the role of semantic-syntactic interaction in language acquisition; how a "parameter setting" model may be formalized as a learning procedure; how multiple constraints (from syntax, thematic knowledge, or phrase structure) interact to aid acquisition; how transformational-type rules may be learned; and the role of lexical ambiguity in language acquisition.

    The Acquisition of Syntactic Knowledge is sixteenth in the Artificial Intelligence Series, edited by Patrick Winston and Michael Brady.

    • Hardcover $13.75 £10.50
  • Robot Hands and the Mechanics of Manipulation

    Matthew T. Mason and J. Kenneth Salisbury

    Robot Hands and the Mechanics of Manipulation explores several aspects of the basic mechanics of grasping, pushing, and, in general, manipulating objects. It makes a significant contribution to the understanding of the motion of objects in the presence of friction, and to the development of fine position and force controlled articulated hands capable of doing useful work. In the book's first section, kinematic and force analysis is applied to the problem of designing and controlling articulated hands for manipulation. The analysis of the interface between fingertip and grasped object then becomes the basis for the specification of acceptable hand kinematics. A practical result of this work has been the development of the Stanford/JPL robot hand - a tendon-actuated, 9 degree-of-freedom hand which is being used at various laboratories around the country to study the associated control and programming problems aimed at improving robot dexterity. Chapters in the second section study the characteristics of object motion in the presence of friction. Systematic exploration of the mechanics of pushing leads to a model of how an object moves under the combined influence of the manipulator and the forces of sliding friction. The results of these analyses are then used to demonstrate verification and automatic planning of some simple manipulator operations.

    Robot Hands and the Mechanics of Manipulation is fourteenth in the Artificial Intelligence Series, edited by Patrick Henry Winston and Michael Brady.

    • Hardcover $13.75 £10.50
  • Robotics Research

    The Second International Symposium

    Hideo Hanafusa and Hirochika Inoue

    The sixty-two contributions in this book, by the world's leading researchers, provide a unique opportunity to view the future shape of robotics in such areas as arm and hand design, dynamics, image understanding, locomotion, touch and compliance, systems, kinematics, visual inspection, control, assembly, and sensing.

    The sixty-two contributions in this book are by the world's leading researchers from Japan, the United States, France, the United Kingdom, Australia, and West Germany. They provide a unique opportunity to view the future shape of robotics in such areas as arm and hand design, dynamics, image understanding, locomotion, touch and compliance, systems, kinematics, visual inspection, control, assembly, and sensing. In five parts the book covers visual perception (including topics about representation and recognition of three-dimensional objects, sensory interaction, and vision processors), the computational aspect of manipulator control (with chapters on kinematics and design and control theory), implementation of action (covering manipulators and end effectors and mobile robots), task level planning and theory of manipulation, and discussions of industrial applications of robots and key issues of robotics research. The first international symposium on robotics research was organized around a view of robotics as the "intelligent connection of perception to action." Edited by Michael Brady and Richard Paul, it was published by The MIT Press in 1984.

    This book is thirteenth in The MIT Press Series in Artificial Intelligence, edited by Patrick Henry Winston and Michael Brady.

    • Hardcover $70.00
  • In-Depth Understanding

    In-Depth Understanding

    A Computer Model of Integrated Processing for Narrative Comprehension

    Michael George Dyer

    This book describes a theory of memory representation, organization, and processing for understanding complex narrative texts. The theory is implemented as a computer program called BORIS which reads and answers questions about divorce, legal disputes, personal favors, and the like. The system is unique in attempting to understand stories involving emotions and in being able to deduce adages and morals, in addition to answering fact and event based questions about the narratives it has read. BORIS also manages the interaction of many different knowledge sources such as goals, plans, scripts, physical objects, settings, interpersonal relationships, social roles, emotional reactions, and empathetic responses. The book makes several original technical contributions as well. In particular, it develops a class of knowledge constructs called Thematic Abstraction Units (TAUs) which share similarities with other representational systems such as Schank's Thematic Organization Packets and Lehnert's Plot Units. TAUs allow BORIS to represent situations which are more abstract than those captured by scripts, plans, and goals. They contain processing knowledge useful in dealing with the kinds of planning and expectation failures that characters often experience in narratives; and, they often serve as episodic memory structures, organizing events which involve similar kinds of planning failures and divergent domains. An appendix contains a detailed description of a demon-based parser, a kernel of the BORIS system, as well as the actual LISP code of a microversion of this parser and a number of exercises for expanding it into a full-fledged story-understander.

    In-Depth Understanding is included in The MIT Press Artificial Intelligence Series.

    • Hardcover $52.50
    • Paperback $54.00 £42.00
  • From Images to Surfaces

    From Images to Surfaces

    A Computational Study of the Human Early Visual System

    W. Eric L. Grimson

    The projection of light rays onto the retina of the eye forms a two-dimensional image, but through combining the stereoscopic aspect of vision with other optical clues by means of some remarkably effective image-processing procedures, the viewer is able to perceive three-dimensional representations of scenes. From Images to Surfaces proposes and examines a specific image-processing procedure to account for this remarkable effect—a computational approach that provides a framework for understanding the transformation of a set of images into a representation of the shapes of surfaces visible in a scene. Although much of the analysis is applicable to any visual information processing system—biological or artificial—Grimson constrains his final choice of computational algorithms to those that are biologically feasible and consistent with what is known about the human visual system. In order to clarify the analysis, the approach distinguishes three independent levels: the computational theory itself, the algorithms employed, and the underlying implementation of the computation, in this case through the human neural mechanisms. This separation into levels facilitates the generation of specific models from general concepts. This research effort had its origin in a theory of human stereo vision recently developed by David Marr and Tomaso Poggio. Grimson presents a computer implementation of this theory that serves to test its adequacy and provide feedback for the identification of unsuspected problems embedded in it. The author then proceeds to apply and extend the theory in his analysis of surface interpolation through the computational methodology. This methodology allows the activity of the human early visual system to be followed through several stages: the Primal Sketch, in which intensity changes at isolated points on a surface are noted; the Raw 2.5-D Sketch, in which surface values at these points are computed; and the Full 2.5-D Sketch, in which these values—including stereo and motion perception—are interpolated over the entire surface. These stages lead to the final 3-D Model, in which the three-dimensional shapes of objects, in object-centered coordinates, are made explicit.

    • Hardcover $35.00
    • Paperback $34.00 £27.00
  • Turtle Geometry

    Turtle Geometry

    The Computer as a Medium for Exploring Mathematics

    Harold Abelson and Andrea diSessa

    Turtle Geometry presents an innovative program of mathematical discovery that demonstrates how the effective use of personal computers can profoundly change the nature of a student's contact with mathematics. Using this book and a few simple computer programs, students can explore the properties of space by following an imaginary turtle across the screen. The concept of turtle geometry grew out of the Logo Group at MIT. Directed by Seymour Papert, author of Mindstorms, this group has done extensive work with preschool children, high school students and university undergraduates.
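
    The flavor of the book's Logo explorations carries over directly to Python's standard turtle module, as in the short sketch below (a modern stand-in, not the book's own code; running it opens a graphics window). The POLY-style procedure also illustrates one of the book's recurring observations: the figure closes exactly when the total turning reaches a multiple of 360 degrees.

    ```python
    import turtle

    def poly(t, side, angle):
        """Keep going FORWARD side, LEFT angle until the accumulated turning
        is a multiple of 360 degrees, at which point the figure closes."""
        total_turning = 0
        while True:
            t.forward(side)
            t.left(angle)
            total_turning += angle
            if total_turning % 360 == 0:
                break

    t = turtle.Turtle()
    poly(t, side=80, angle=144)   # five segments of 144 degrees: 5 * 144 = 720
    turtle.done()
    ```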

    • Hardcover $75.00
    • Paperback $45.00 £35.00
  • The Interpretation of Visual Motion

    The Interpretation of Visual Motion

    Shimon Ullman

    This book uses the methodology of artificial intelligence to investigate the phenomena of visual motion perception: how the visual system constructs descriptions of the environment in terms of objects, their three-dimensional shape, and their motion through space, on the basis of the changing image that reaches the eye. The author has analyzed the computations performed in the course of visual motion analysis. Workable schemes able to perform certain tasks performed by the visual system have been constructed and used as vehicles for investigating the problems faced by the visual system and its methods for solving them. Two major problems are treated: first, the correspondence problem, which concerns the identification of image elements that represent the same object at different times, thereby maintaining the perceptual identity of the object in motion or in change. The second problem is the three-dimensional interpretation of the changing image once a correspondence has been established. The author's computational approach to visual theory makes the work unique, and it should be of interest to psychologists working in visual perception and readers interested in cognitive studies in general, as well as computer scientists interested in machine vision, theoretical neurophysiologists, and philosophers of science.

    • Hardcover
    • Paperback $30.00 £24.00
  • NETL

    NETL

    A System for Representing and Using Real-World Knowledge

    Scott Fahlman

    The system presented in this book consists of two more-or-less independent parts. The first is the system's parallel network memory scheme; the second part of the knowledge-base system presented here is NETL, "a vocabulary of conventions and processing algorithms—in some sense, a language—for representing various kinds of knowledge as nodes and links in the network...."

    "Consider for a moment the layers of structure and meaning that are attached to concepts like lawsuit, birthday party, fire, mother, walrus, cabbage, or king.... If I tell you that a house burned down, and that the fire started at a child's birthday party, you will think immediately of the candles on the cake and perhaps of the many paper decorations. You will not, In all probability, find yourself thinking about playing pin-the- tall-on-the-donkey or about the color of the cake's icing or about the fact that birthdays come once a year. These concepts are there when you need them, but they do not seem to slow down the search for a link between fires and birthday parties." The human mind can do many remarkable things. One of the most remarkable Is its ability to store an enormous quantity and variety of knowledge and to locate and retrieve whatever part of it is relevant in a particular context quickly and in most cases almost without effort. "If we are ever to create an artificial intelligence with human-like abilities," Fahlman writes, "we will have to endow it with a comparable knowledge-handling facility; current knowledge-base systems fall far short of this goal. This report describes an approach to the problem of representing and using realworld knowledge in a computer." The system developed by Fahlman and presented in this book consists of two more-or-less independent parts. The first is the system's parallel network memory scheme: "Knowledge Is stored as a pattern of interconnections of very simple parallel processing elements: node units that can store a dozen or so distinct marker-bits, and link units that can propagate those markers from node to node, in parallel through the network. Using these marker-bit movements, the parallel network system can perform searches and many common deductions very quickly." The second (and more traditional) part of the knowledge-base system presented here is NETL, "a vocabulary of conventions and processing algorithms—in some sense, a language—for representing various kinds of knowledge as nodes and links in the network.... NETL incorporates a number of representational techniques—new ideas and new combinations of old ideas-which allow it to represent certain real-world concepts more precisely and more efficiently than earlier systems.... NETL has been designed to operate efficiently on the parallel network machine described above, and to exploit this machine's special abilities. Most of the ideas in NETL are applicable to knowledge-base systems on serial machines as well."

    • Hardcover $40.00
    • Paperback $35.00 £27.00