Marvin Minsky

Marvin Minsky (1927–2016) was Toshiba Professor of Media Arts and Sciences and Donner Professor of Electrical Engineering and Computer Science at MIT. He was a cofounder of the MIT Media Lab and a consultant for the One Laptop Per Child project.

  • Inventive Minds

    Marvin Minsky on Education

    Marvin Minsky, Cynthia Solomon, and Xiao Xiao

    Six essays by artificial intelligence pioneer Marvin Minsky on how education can foster inventiveness, paired with commentary by Minsky's former colleagues and students.

    Marvin Minsky was a pioneering researcher in artificial intelligence whose work led to both theoretical and practical advances. His work was motivated not only by technological advancement but also by the desire to understand the workings of our own minds. Minsky's insights about the mind provide fresh perspectives on education and how children learn. This book collects for the first time six essays by Minsky on children, learning, and the potential of computers in school to enrich children's development. In these essays Minsky discusses the shortcomings of conventional education (particularly in mathematics) and considers alternative approaches; reflects on the role of mentors; describes higher-level strategies for thinking across domains; and suggests projects for children to pursue. Each essay is paired with commentary by one of Minsky's former colleagues or students, which identifies Minsky's key ideas and connects his writings to current research. Minsky once observed that in traditional teaching, “instead of promoting inventiveness, we focus on preventing mistakes.” These essays offer Minsky's unique insights into how education can foster inventiveness.

    Commentary by Hal Abelson, Walter Bender, Alan Kay, Margaret Minsky, Brian Silverman, Gary Stager, Mike Travers, and Patrick Henry Winston

    • Hardcover $32.00
  • Perceptrons, Reissue of the 1988 Expanded Edition with a New Foreword by Léon Bottou

    An Introduction to Computational Geometry

    Marvin Minsky and Seymour A. Papert

    The first systematic study of parallelism in computation by two pioneers in the field.

    Reissue of the 1988 Expanded Edition with a new foreword by Léon Bottou

    In 1969, ten years after the discovery of the perceptron—which showed that a machine could be taught to perform certain tasks using examples—Marvin Minsky and Seymour Papert published Perceptrons, their analysis of the computational capabilities of perceptrons for specific tasks. As Léon Bottou writes in his foreword to this edition, “Their rigorous work and brilliant technique does not make the perceptron look very good.” Perhaps as a result, research turned away from the perceptron. Then the pendulum swung back, and machine learning became the fastest-growing field in computer science. Minsky and Papert's insistence on its theoretical foundations is newly relevant.

    Perceptrons—the first systematic study of parallelism in computation—marked a historic turn in artificial intelligence, returning to the idea that intelligence might emerge from the activity of networks of neuron-like entities. Minsky and Papert provided mathematical analysis that showed the limitations of a class of computing machines that could be considered as models of the brain. Minsky and Papert added a new chapter in 1987 in which they discuss the state of parallel computers, and note a central theoretical challenge: reaching a deeper understanding of how “objects” or “agents” with individuality can emerge in a network. Progress in this area would link connectionism with what the authors have called “society theories of mind.”

  • Perceptrons, Expanded Edition

    An Introduction to Computational Geometry

    Marvin Minsky and Seymour A. Papert

    Perceptrons—the first systematic study of parallelism in computation—has remained a classic work on threshold automata networks for nearly two decades. It marked a historic turn in artificial intelligence, and it is required reading for anyone who wants to understand the connectionist counterrevolution going on today.

    Artificial-intelligence research, which for a time concentrated on the programming of von Neumann computers, is swinging back to the idea that intelligence might emerge from the activity of networks of neuronlike entities. Minsky and Papert's book was the first example of a mathematical analysis carried far enough to show the exact limitations of a class of computing machines that could seriously be considered as models of the brain. Now new developments in mathematical tools, the recent interest of physicists in the theory of disordered matter, new insights into how the brain works and new psychological models of it, and the evolution of fast computers that can simulate networks of automata have given Perceptrons new importance.

    Witnessing the swing of the intellectual pendulum, Minsky and Papert have added a new chapter in which they discuss the current state of parallel computers, review developments since the appearance of the 1972 edition, and identify new research directions related to connectionism. They note a central theoretical challenge facing connectionism: reaching a deeper understanding of how “objects” or “agents” with individuality can emerge in a network. Progress in this area would link connectionism with what the authors have called “society theories of mind.”
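
    To give a concrete flavor of the “exact limitations” mentioned above: a single linear threshold unit can compute only linearly separable functions of its inputs, and exclusive-or (the two-input case of parity, one of the book's central examples) is not linearly separable. The brief Python sketch below is purely illustrative and not drawn from the book; it brute-forces a grid of weights and thresholds and finds none that reproduces XOR.

        # Illustrative only: no single linear threshold unit computes XOR.
        import itertools

        def unit(x1, x2, w1, w2, theta):
            # Fire (output 1) when the weighted evidence exceeds the threshold.
            return int(w1 * x1 + w2 * x2 > theta)

        # Exclusive-or: true exactly when the two inputs differ.
        xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

        # Search a coarse grid of weights and thresholds (-2.0 .. 2.0).
        grid = [i / 4 for i in range(-8, 9)]
        matches = [
            (w1, w2, theta)
            for w1, w2, theta in itertools.product(grid, repeat=3)
            if all(unit(x1, x2, w1, w2, theta) == y
                   for (x1, x2), y in xor_table.items())
        ]
        print(matches)  # [] -- the search always comes up empty

    The empty result is no accident of the grid: reproducing XOR with this unit would require theta >= 0, w1 > theta, w2 > theta, and w1 + w2 <= theta, which no real weights satisfy.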

  • Perceptrons

    An Introduction to Computational Geometry

    Marvin Minsky and Seymour A. Papert

    It is the authors' view that although the time is not yet ripe for developing a really general theory of automata and computation, it is now possible and desirable to move more explicitly in this direction. This can be done by studying, in an extremely thorough way, well-chosen particular situations that embody the basic concepts. This is the aim of the present book, which seeks general results from the close study of abstract versions of devices known as perceptrons.

    A perceptron is a parallel computer containing a number of readers that scan a field independently and simultaneously, and it makes decisions by linearly combining the local and partial data gathered, weighing the evidence, and deciding if events fit a given “pattern,” abstract or geometric.

    The rigorous and systematic study of the perceptron undertaken here convincingly demonstrates the authors' contention that there is both a real need for a more basic understanding of computation and little hope of imposing one from the top, as opposed to working up such an understanding from the detailed consideration of a limited but important class of concepts, such as those underlying perceptron operations.

    “Computer science,” the authors suggest, is beginning to learn more and more just how little it really knows. Not only does science not know much about how brains compute thoughts or how the genetic code computes organisms, it also has no very good idea of how computers compute, in terms of such basic principles as how much computation a problem of a given degree of complexity requires. Even the language in which the questions are formulated is imprecise, including, for example, the exact nature of the opposition or complementarity implicit in the distinctions “analogue” vs. “digital,” “local” vs. “global,” “parallel” vs. “serial,” and “addressed” vs. “associative.” Minsky and Papert strive to bring these concepts into sharper focus insofar as they apply to the perceptron.

    They also question past work in the field, which too facilely assumed that perceptron-like devices would, almost automatically, evolve into universal “pattern recognizing,” “learning,” or “self-organizing” machines. The work fully recognizes the inherent impracticalities, and proves certain impossibilities, in various system configurations. At the same time, the real and lively prospects for future advance are accentuated.
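
    As a concrete reading of the decision rule just described (weighted local evidence compared against a threshold), here is a minimal sketch in Python. The detectors, weights, and threshold are hypothetical, chosen only to illustrate the mechanism:

        # Minimal sketch of a perceptron's decision rule: each "reader"
        # reports a local feature phi_i of the scene, and the device fires
        # when the weighted sum of that evidence exceeds a threshold.
        def perceptron_decides(features, weights, threshold):
            evidence = sum(w * phi for w, phi in zip(weights, features))
            return evidence > threshold

        # Hypothetical example: three local detectors vote on a pattern.
        features = [1, 0, 1]        # local predicates phi_i(x), each 0 or 1
        weights = [0.5, -0.2, 0.8]  # evidence weights w_i
        print(perceptron_decides(features, weights, threshold=1.0))  # True (1.3 > 1.0)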

    The book divides in a natural way into three parts: the first part is “algebraic” in character, since it considers the general properties of linear predicate families that apply to all perceptrons, independently of the kinds of patterns involved; the second part is “geometric” in that it looks more narrowly at various interesting geometric patterns and derives theorems that are sharper than those of Part One, if thereby less general; and the third part views perceptrons as practical devices and considers the general questions of pattern recognition and learning by artificial systems.

    • Hardcover $17.50
    • Paperback $10.95
  • Semantic Information Processing

    Marvin Minsky

    This book collects a group of experiments directed toward making intelligent machines. Each of the programs described here demonstrates some aspect of behavior that anyone would agree requires some intelligence, and each program solves its own kinds of problems. These include resolving ambiguities in word meanings, finding analogies between things, making logical and nonlogical inferences, resolving inconsistencies in information, engaging in coherent discourse with a person, and building internal models for the representation of newly acquired information. Each of the programs has serious limitations, but the chapter authors provide clear perspectives for viewing both the achievements and the limitations of their programs. But much more important than what these particular programs achieve are the methods they use to achieve it.

    • Hardcover $59.95
    • Paperback $30.00