Sarah Buss

Sarah Buss is Associate Professor of Philosophy at the University of Michigan.

  • The Contours of Agency

    Essays on Themes from Harry Frankfurt

    Sarah Buss and Lee Overton

    A wide range of philosophical essays informed by the work of Harry Frankfurt, who offers a response to each essay.

    The original essays in this book address Harry Frankfurt's influential writing on personal identity, love, value, moral responsibility, and the freedom and limits of the human will. Many of Frankfurt's deepest insights come from exploring the self-reflective nature of human agents and the psychic conflicts that self-reflection often produces. His work has informed discussions in metaphysics, metaethics, normative ethics, and action theory.

    The authors, recognized for their own contributions to the understanding of human agency, defend their original philosophical positions at the same time that they respond to Frankfurt's. Each essay is followed by a response from Frankfurt, in which he clarifies and elaborates on his views.

    • Hardcover $53.00
    • Paperback $30.00

Contributor

  • Reinforcement Learning, Second Edition

    An Introduction

    Richard S. Sutton and Andrew G. Barto

    The significantly expanded and updated new edition of a widely used text on reinforcement learning, one of the most active research areas in artificial intelligence.

    Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded and updated, adding new topics and broadening the coverage of existing ones.

    Like the first edition, this second edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
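    The tabular methods of Part I can be illustrated with a short sketch. The following is a minimal, illustrative implementation of tabular Q-learning with epsilon-greedy action selection; the three-state chain environment, its reward structure, and all parameter values are made-up examples, not taken from the book.

```python
import random

def step(state, action):
    """Toy deterministic chain (an illustrative environment, not from the
    book): action 1 moves right, action 0 stays put; reaching state 2
    yields reward 1 and ends the episode."""
    next_state = state + 1 if action == 1 else state
    reward = 1.0 if next_state == 2 else 0.0
    return next_state, reward, next_state == 2

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: update each visited (state, action) value
    toward reward plus the discounted best value of the next state."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(3) for a in (0, 1)}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                action = rng.choice((0, 1))
            else:
                action = max((0, 1), key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            target = reward if done else reward + gamma * max(
                q[(next_state, a)] for a in (0, 1))
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
    return q

q = q_learning()
# After training, the greedy policy prefers moving right in both
# non-terminal states.
```

    After enough episodes, the action values for moving right dominate in each non-terminal state, so the learned greedy policy reaches the rewarding terminal state directly.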

    • Hardcover $80.00
  • Reinforcement Learning

    An Introduction

    Richard S. Sutton and Andrew G. Barto

    Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications.

    Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability.

    The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
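    The temporal-difference learning of Part II can be sketched in a few lines. Below is a minimal, illustrative TD(0) policy-evaluation example in the spirit of the classic random-walk prediction task; the environment size, step size, and episode count here are made-up choices, not values from the book.

```python
import random

def td0_random_walk(n_states=5, episodes=2000, alpha=0.05, seed=0):
    """TD(0) value prediction for a random walk over states
    0..n_states-1 under an equiprobable left/right policy: stepping off
    the right end yields reward 1, off the left end reward 0.  Each
    transition nudges the current state's value toward the one-step
    bootstrapped target (gamma = 1)."""
    rng = random.Random(seed)
    v = [0.5] * n_states  # initial value estimates for non-terminal states
    for _ in range(episodes):
        s = n_states // 2  # start in the middle
        while True:
            s_next = s + (1 if rng.random() < 0.5 else -1)
            if s_next < 0:                       # fell off the left end
                v[s] += alpha * (0.0 - v[s]); break
            if s_next >= n_states:               # fell off the right end
                v[s] += alpha * (1.0 - v[s]); break
            v[s] += alpha * (v[s_next] - v[s])   # TD(0) update
            s = s_next
    return v

values = td0_random_walk()
# Estimates increase from left to right, approaching the true values
# 1/6, 2/6, ..., 5/6 for this five-state walk.
```

    The estimates converge toward the true state values without waiting for episode outcomes, which is the distinguishing feature of temporal-difference methods relative to Monte Carlo methods.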

    • Hardcover $75.00
  • Neural Networks for Control

    W. Thomas Miller, III, Richard S. Sutton, and Paul J. Werbos

    Neural Networks for Control highlights key issues in learning control and identifies research directions that could lead to practical solutions for control problems in critical application domains. It addresses general issues of neural network-based control and neural network learning with regard to specific problems of motion planning and control in robotics, and takes up application domains well suited to the capabilities of neural network controllers. The appendix describes seven benchmark control problems.

    Contributors Andrew G. Barto, Ronald J. Williams, Paul J. Werbos, Kumpati S. Narendra, L. Gordon Kraft, III, David P. Campagna, Mitsuo Kawato, Bartlett W. Mel, Christopher G. Atkeson, David J. Reinkensmeyer, Derrick Nguyen, Bernard Widrow, James C. Houk, Satinder P. Singh, Charles Fisher, Judy A. Franklin, Oliver G. Selfridge, Arthur C. Sanderson, Lyle H. Ungar, Charles C. Jorgensen, C. Schley, Martin Herman, James S. Albus, Tsai-Hong Hong, Charles W. Anderson, W. Thomas Miller, III

    • Hardcover $95.00
    • Paperback $11.75