Mobile robots range from the Mars Pathfinder mission's teleoperated Sojourner to the cleaning robots in the Paris Metro. This text offers students and other interested readers an introduction to the fundamentals of mobile robotics, spanning the mechanical, motor, sensory, perceptual, and cognitive layers the field comprises. The text focuses on mobility itself, offering an overview of the mechanisms that allow a mobile robot to move through a real world environment to perform its tasks, including locomotion, sensing, localization, and motion planning. It synthesizes material from such fields as kinematics, control theory, signal analysis, computer vision, information theory, artificial intelligence, and probability theory.
The book presents the techniques and technology that enable mobility in a series of interacting modules. Each chapter treats a different aspect of mobility, as the book moves from low-level to high-level details. It covers all aspects of mobile robotics, including software and hardware design considerations, related technologies, and algorithmic techniques. This second edition has been revised and updated throughout, with 130 pages of new material on such topics as locomotion, perception, localization, and planning and navigation. Problem sets have been added at the end of each chapter. Bringing together all aspects of mobile robotics into one volume, Introduction to Autonomous Mobile Robots can serve as a textbook or a working tool for beginning practitioners.
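The locomotion and localization topics the book surveys rest on simple kinematic models; as one illustration (a sketch of standard differential-drive odometry, not code taken from the book), the pose of a two-wheeled robot can be dead-reckoned by Euler-integrating its wheel speeds:

```python
import math

def diffdrive_odometry(x, y, theta, v_left, v_right, axle_len, dt):
    """One Euler-integration step of differential-drive odometry.

    x, y, theta      -- current pose (m, m, rad)
    v_left, v_right  -- wheel ground speeds (m/s)
    axle_len         -- distance between the wheels (m)
    dt               -- time step (s)
    """
    v = (v_left + v_right) / 2.0           # forward speed of the axle midpoint
    omega = (v_right - v_left) / axle_len  # rotation rate about the midpoint
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Driving both wheels at 0.5 m/s for 1 s moves the robot 0.5 m along its heading:
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = diffdrive_odometry(*pose, 0.5, 0.5, 0.3, 0.01)
```

Odometry of this kind drifts without bound, which is why the book pairs it with the sensing and localization machinery covered in later chapters.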
Curriculum developed by Dr. Robert King, Colorado School of Mines, and Dr. James Conrad, University of North Carolina at Charlotte, accompanies the National Instruments LabVIEW Robotics Starter Kit. Included are 13 laboratory exercises (6 by Dr. King and 7 by Dr. Conrad) for using the LabVIEW Robotics Starter Kit to teach mobile robotics concepts.
Downloadable instructor resources available for this title: a file of the figures in the book.
In The Allure of Machinic Life, John Johnston examines new forms of nascent life that emerge through technical interactions within human-constructed environments—"machinic life"—in the sciences of cybernetics, artificial life, and artificial intelligence. With the development of such research initiatives as the evolution of digital organisms, computer immune systems, artificial protocells, evolutionary robotics, and swarm systems, Johnston argues, machinic life has achieved a complexity and autonomy worthy of study in its own right.
Drawing on the publications of scientists as well as a range of work in contemporary philosophy and cultural theory, but always with the primary focus on the "objects at hand"—the machines, programs, and processes that constitute machinic life—Johnston shows how they come about, how they operate, and how they are already changing. This understanding is a necessary first step, he further argues, that must precede speculation about the meaning and cultural implications of these new forms of life.
Developing the concept of the "computational assemblage" (a machine and its associated discourse) as a framework to identify both resemblances and differences in form and function, Johnston offers a conceptual history of each of the three sciences. He considers the new theory of machines proposed by cybernetics from several perspectives, including Lacanian psychoanalysis and "machinic philosophy." He examines the history of the new science of artificial life and its relation to theories of evolution, emergence, and complex adaptive systems (as illustrated by a series of experiments carried out on various software platforms). He describes the history of artificial intelligence as a series of unfolding conceptual conflicts—decodings and recodings—leading to a "new AI" that is strongly influenced by artificial life. Finally, in examining the role played by neuroscience in several contemporary research initiatives, he shows how further success in the building of intelligent machines will most likely result from progress in our understanding of how the human brain actually works.
Online decision making under uncertainty and time constraints represents one of the most challenging problems for robust intelligent agents. In an increasingly dynamic, interconnected, and real-time world, intelligent systems must adapt dynamically to uncertainties, update existing plans to accommodate new requests and events, and produce high-quality decisions under severe time constraints. Such online decision-making applications are becoming increasingly common: ambulance dispatching and emergency city-evacuation routing, for example, are inherently online decision-making problems; other applications include packet scheduling for Internet communications and reservation systems. This book presents a novel framework, online stochastic optimization, to address this challenge.
This framework assumes that the distribution of future requests, or an approximation thereof, is available for sampling, as is the case in many applications that make either historical data or predictive models available. It assumes additionally that the distribution of future requests is independent of current decisions, which is also the case in a variety of applications and holds significant computational advantages. The book presents several online stochastic algorithms implementing the framework, provides performance guarantees, and demonstrates a variety of applications. It discusses how to relax some of the assumptions in using historical sampling and machine learning and analyzes different underlying algorithmic problems. And finally, the book discusses the framework's possible limitations and suggests directions for future research.
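The sampling assumption above admits a simple "expectation"-style decision rule: draw scenarios of future requests from the distribution, score each candidate decision against every scenario with an offline evaluator, and commit to the decision with the best average score. The sketch below shows only that skeleton; the function names and the toy demand model are invented for illustration and are not taken from the book.

```python
import random

def choose_decision(decisions, sample_future, evaluate, num_samples=100):
    """Expectation-style online stochastic decision step (a sketch).

    decisions     -- feasible decisions available right now
    sample_future -- draws one scenario of future requests (independent
                     of the current decision, per the framework's assumption)
    evaluate      -- offline value of taking `decision` under `scenario`
    """
    scenarios = [sample_future() for _ in range(num_samples)]
    def expected_value(decision):
        return sum(evaluate(decision, s) for s in scenarios) / num_samples
    return max(decisions, key=expected_value)

# Toy example: commit to serving the request type most likely to recur.
random.seed(0)
demand = lambda: random.choices(["A", "B"], weights=[0.8, 0.2])[0]
best = choose_decision(["A", "B"], demand,
                       lambda d, s: 1.0 if d == s else 0.0)
```

Note that the same sampled scenarios are reused across all candidate decisions, which is valid precisely because of the independence assumption discussed above.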
Today, when computing is pervasive and deployed over a range of devices by a multiplicity of users, we need to develop computer software to interact with both the ever-increasing complexity of the technical world and the growing fluidity of social organizations. The Art of Agent-Oriented Modeling presents a new conceptual model for developing software systems that are open, intelligent, and adaptive. It describes an approach for modeling complex systems that consist of people, devices, and software agents in a changing environment (sometimes known as distributed sociotechnical systems). The authors take an agent-oriented view, as opposed to the more common object-oriented approach. Thinking in terms of agents (which they define as the human and man-made components of a system), they argue, can change the way people think of software and the tasks it can perform. The book offers an integrated and coherent set of concepts and models, presenting the models at three levels of abstraction corresponding to a motivation layer (where the purpose, goals, and requirements of the system are described), a design layer, and an implementation layer. It compares platforms by implementing the same models in four different languages; compares methodologies by using a common example; includes extensive case studies; and offers exercises suitable for either class use or independent study.
Learning to perform complex action strategies is an important problem in the fields of artificial intelligence, robotics, and machine learning. Filled with interesting new experimental results, Learning in Embedded Systems explores algorithms that learn efficiently from trial-and-error experience with an external world. It is the first detailed exploration of the problem of learning action strategies in the context of designing embedded systems that adapt their behavior to a complex, changing environment; such systems include mobile robots, factory process controllers, and long-term software databases.
Kaelbling investigates a rapidly expanding branch of machine learning known as reinforcement learning, including the important problems of controlled exploration of the environment, learning in highly complex environments, and learning from delayed reward. She reviews past work in this area and presents a number of significant new results. These include the interval-estimation algorithm for exploration, the use of biases to make learning more efficient in complex environments, a generate-and-test algorithm that combines symbolic and statistical processing into a flexible learning method, and some of the first reinforcement-learning experiments with a real robot.
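The interval-estimation idea can be sketched for the simplest setting, actions with Bernoulli rewards: keep success counts per action and act greedily on the upper end of a confidence interval around each action's success rate, so that poorly explored actions remain attractive. The sketch below uses a Wilson score upper bound as the interval; Kaelbling's exact interval construction may differ.

```python
import math

def ie_action(successes, trials, z=1.96):
    """Choose an action by interval estimation: act greedily with respect
    to the UPPER bound of a confidence interval on each action's success
    probability, so under-sampled actions look optimistically good.
    (A sketch using the Wilson score upper bound.)"""
    def upper_bound(s, n):
        if n == 0:
            return 1.0  # untried actions are maximally optimistic
        p = s / n
        denom = 1 + z * z / n
        centre = p + z * z / (2 * n)
        margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
        return (centre + margin) / denom
    bounds = [upper_bound(s, n) for s, n in zip(successes, trials)]
    return max(range(len(bounds)), key=bounds.__getitem__)

# An untried action beats a well-sampled mediocre one, forcing exploration:
assert ie_action([0, 30], [0, 60]) == 0
```

As counts grow the intervals shrink, and the rule smoothly shifts from exploration to exploitation.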
New approaches to artificial intelligence spring from the idea that intelligence emerges as much from cells, bodies, and societies as it does from evolution, development, and learning. Traditionally, artificial intelligence has been concerned with reproducing the abilities of human brains; newer approaches take inspiration from a wider range of biological structures that are capable of autonomous self-organization. Examples of these new approaches include evolutionary computation and evolutionary electronics, artificial neural networks, immune systems, biorobotics, and swarm intelligence—to mention only a few. This book offers a comprehensive introduction to the emerging field of biologically inspired artificial intelligence that can be used as an upper-level text or as a reference for researchers. Each chapter presents computational approaches inspired by a different biological system; each begins with background information about the biological system and then proceeds to develop computational models that make use of biological concepts. The chapters cover evolutionary computation and electronics; cellular systems; neural systems, including neuromorphic engineering; developmental systems; immune systems; behavioral systems, including several approaches to robotics (behavior-based, bio-mimetic, epigenetic, and evolutionary robots); and collective systems, including swarm robotics as well as cooperative and competitive co-evolving systems. Chapters end with a concluding overview and suggested reading.
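As a taste of the evolutionary-computation material such chapters cover, the classic selection/crossover/mutation loop can be demonstrated on the toy OneMax problem (maximize the number of 1-bits in a bitstring). This sketch is purely illustrative and is not drawn from the book.

```python
import random

def evolve_onemax(length=20, pop_size=30, generations=60, seed=1):
    """A minimal generational genetic algorithm on OneMax
    (fitness = number of 1-bits in the bitstring)."""
    rng = random.Random(seed)
    fitness = sum
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):          # per-bit mutation, rate 1/length
                if rng.random() < 1.0 / length:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve_onemax()
```

The same loop, with a different genotype and fitness function, underlies the evolutionary electronics and evolutionary robotics work the book describes.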
Building a person has been an elusive goal in artificial intelligence. This failure, John Pollock argues, is because the problems involved are essentially philosophical; what is needed for the construction of a person is a physical system that mimics human rationality. Pollock describes an exciting theory of rationality and its partial implementation in OSCAR, a computer system whose descendants will literally be persons.
In developing the philosophical superstructure for this bold undertaking, Pollock defends the conception of man as an intelligent machine, arguing that mental states are physical states and persons are physical objects, as described in the fable of Oscar, the self-conscious machine.
Pollock brings a unique blend of philosophy and artificial intelligence to bear on the vexing problem of how to construct a physical system that thinks, is self-conscious, and has desires, fears, intentions, and a full range of mental states. He brings together an impressive array of technical work in philosophy to drive theory construction in AI. The result is described in his final chapter on "cognitive carpentry."
The Core Language Engine presents the theoretical and engineering advances embodied in one of the most comprehensive natural language processing systems designed to date. Recent research results from different areas of computational linguistics are integrated into a single elegant design with potential for application to tasks ranging from machine translation to information system interfaces.
Bridging the gap between theoretical and implementation-oriented literature, The Core Language Engine describes novel analyses and techniques developed by the contributors at SRI International's Cambridge Computer Science Research Centre. It spans topics that include a wide-coverage unification grammar for English syntax and semantics, context-dependent and contextually disambiguated logical form representations, interactive translation, efficient algorithms for parsing and generation, and mechanisms for quantifier scoping, reference resolution, and lexical acquisition.
Contents: Introduction to the CLE. Logical Forms. Categories and Rules. Unification Based Syntactic Analysis. Semantic Rules for English. Lexical Analysis. Syntactic and Semantic Processing. Quantifier Scoping. Sortal Restrictions. Resolving Quasi Logical Forms. Lexical Acquisition. The CLE in Application Development. Ellipsis, Comparatives, and Generation. Swedish-English QLF Translation.
Constraint logic programming, the notion of computing with partial information, is becoming recognized as a way of dramatically improving on the current generation of programming languages. This collection presents the best of current work on all aspects of constraint logic programming languages, from theory through language implementation.
Beginning in the mid-1980s, constraint logic programming became a powerful and essential theoretical concept whose first practical application was the development of efficient programming languages based on Prolog. Benhamou and Colmerauer have taken care to illustrate the strong links between current research and existing CLP languages. The first part of the book focuses on significant theoretical studies that propose general models for constraint programming, and the following two parts develop current ideas on themes derived from these languages (numerical constraints, Booleans, and other finite domains). The concluding part on CLP language design gathers work on original constraints and on top-level implementation.
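To make "computing with partial information" concrete: a finite-domain constraint problem pairs each variable with a domain of candidate values, and a solution is any assignment satisfying every constraint. Real CLP systems interleave constraint propagation with search far more cleverly; the brute-force sketch below (written in Python rather than a CLP language, purely for illustration) shows only the shape of the problem.

```python
from itertools import product

def solve(domains, constraints):
    """Generate-and-test finite-domain constraint solving (a toy sketch).

    domains     -- {variable: iterable of candidate values}
    constraints -- callables over the assignment dict, True if satisfied
    """
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            yield assignment

# X + Y = 5 and X < Y, with X, Y ranging over 0..5:
sols = list(solve({"X": range(6), "Y": range(6)},
                  [lambda a: a["X"] + a["Y"] == 5,
                   lambda a: a["X"] < a["Y"]]))
```

In a CLP language the same problem is stated declaratively and the solver prunes domains as constraints are posted, rather than enumerating every combination.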
Logic-based formalizations of argumentation, which assume a set of formulae and then lay out arguments and counterarguments that can be obtained from these formulae, have been refined in recent years in an attempt to capture more closely real-world practical argumentation. In Elements of Argumentation, Philippe Besnard and Anthony Hunter introduce techniques for formalizing deductive argumentation in artificial intelligence, emphasizing emerging formalizations for practical argumentation. Besnard and Hunter discuss how arguments can be constructed, how key intrinsic and extrinsic factors can be identified, and how these analyses can be harnessed for formalizing argumentation for use in real-world problem analysis and decision making. The book focuses on a monological approach to argumentation, in which there is a set of possibly conflicting pieces of information (each represented by a formula) that has been collated by an agent or a pool of agents. The role of argumentation is to construct a collection of arguments and counterarguments pertaining to some particular claim of interest to be used for analysis or presentation. Elements of Argumentation is the first book to elucidate and formalize key elements of deductive argumentation. It will be a valuable reference for researchers in computer science and artificial intelligence and of interest to scholars in such fields as logic, philosophy, linguistics, and cognitive science.

Philippe Besnard is CNRS (Centre National de la Recherche Scientifique) Research Director in the Logic, Interaction, Language, and Computation Group of the Institut de Recherche en Informatique de Toulouse at Université Paul Sabatier. Anthony Hunter is Reader in Intelligent Systems and Head of the Intelligent Systems Research Group in the Department of Computer Science at University College London.
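The monological setup described above can be made concrete for tiny propositional knowledge bases: a deductive argument is a pair <support, claim> in which the support is a consistent, subset-minimal set of knowledge-base formulae that entails the claim. The brute-force sketch below (my illustration by truth-table checking, not the authors' machinery) enumerates such arguments, representing each formula as a truth function over an assignment dict.

```python
from itertools import combinations, product

def arguments(kb, claim, atoms):
    """Enumerate deductive arguments <support, claim>: each support is a
    consistent, subset-minimal set of kb formulae entailing the claim.
    Brute-force over truth assignments; only for tiny knowledge bases."""
    models = [dict(zip(atoms, vals))
              for vals in product([False, True], repeat=len(atoms))]
    def entails(support):
        sat = [m for m in models if all(f(m) for f in support)]
        # consistent (some model) and every such model satisfies the claim
        return bool(sat) and all(claim(m) for m in sat)
    found = []
    for size in range(len(kb) + 1):          # smallest supports first
        for support in combinations(kb, size):
            if entails(support) and not any(set(s) < set(support) for s in found):
                found.append(support)        # keep only subset-minimal supports
    return found

# KB: {p, p -> q, not q}; the only argument for q is <{p, p -> q}, q>.
p = lambda m: m["p"]
p_implies_q = lambda m: (not m["p"]) or m["q"]
not_q = lambda m: not m["q"]
args = arguments([p, p_implies_q, not_q], lambda m: m["q"], ["p", "q"])
```

The conflicting formula not_q never joins a support for q (it would make the support inconsistent), but it grounds a counterargument against q, which is exactly the interplay the book's formal machinery analyzes.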