Michael J. Kearns

Michael J. Kearns is Professor of Computer and Information Science at the University of Pennsylvania.

  • Advances in Neural Information Processing Systems 11

    Proceedings of the 1998 Conference

    Michael J. Kearns, Sara A. Solla, and David A. Cohn

    The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. It draws preeminent academic researchers from around the world and is widely considered to be a showcase conference for new developments in network algorithms and architectures. The broad range of interdisciplinary research areas represented includes computer science, neuroscience, statistics, physics, cognitive science, and many branches of engineering, including signal processing and control theory. Only about 30 percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. These proceedings contain all of the papers that were presented.

    • Hardcover $20.75 £16.99
  • Advances in Neural Information Processing Systems 10

    Proceedings of the 1997 Conference

    Michael I. Jordan, Michael J. Kearns, and Sara A. Solla

    The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. It draws preeminent academic researchers from around the world and is widely considered to be a showcase conference for new developments in network algorithms and architectures. The broad range of interdisciplinary research areas represented includes computer science, neuroscience, statistics, physics, cognitive science, and many branches of engineering, including signal processing and control theory. Only about 30 percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. These proceedings contain all of the papers that were presented.

    • Hardcover $20.75 £16.99
  • An Introduction to Computational Learning Theory

    Michael J. Kearns and Umesh Vazirani

    Emphasizing issues of computational efficiency, Michael Kearns and Umesh Vazirani introduce a number of central topics in computational learning theory for researchers and students in artificial intelligence, neural networks, theoretical computer science, and statistics. Computational learning theory is a new and rapidly expanding area of research that examines formal models of induction with the goals of discovering the common methods underlying efficient learning algorithms and identifying the computational impediments to learning. Each topic in the book has been chosen to elucidate a general principle, which is explored in a precise formal setting. Intuition has been emphasized in the presentation to make the material accessible to the nontheoretician while still providing precise arguments for the specialist. This balance is the result of new proofs of established theorems and new presentations of the standard proofs. The topics covered include the motivation, definitions, and fundamental results, both positive and negative, for the widely studied L. G. Valiant model of Probably Approximately Correct Learning; Occam's Razor, which formalizes a relationship between learning and data compression; the Vapnik-Chervonenkis dimension; the equivalence of weak and strong learning; efficient learning in the presence of noise by the method of statistical queries; relationships between learning and cryptography, and the resulting computational limitations on efficient learning; reducibility between learning problems; and algorithms for learning finite automata from active experimentation. (A brief formal statement of the PAC criterion appears after this list of titles.)

    • Hardcover $70.00 £58.00
  • Computational Learning Theory and Natural Learning Systems, Volume 2

    Intersections between Theory and Experiment

    Stephen José Hanson, Michael J. Kearns, Thomas Petsche, and Ronald L. Rivest

    As with Volume 1, this second volume represents a synthesis of issues in three historically distinct areas of learning research: computational learning theory, neural network research, and symbolic machine learning. While the first volume provided a forum for building a science of computational learning across fields, this volume attempts to define plausible areas of joint research: the contributions are concerned with finding constraints for theory while at the same time interpreting theoretical results in the context of experiments with actual learning systems. Subsequent volumes will focus on areas identified as research opportunities.

    Computational learning theory, neural networks, and AI machine learning appear to be disparate fields; in fact, they share the same goal: to build a machine or program that can learn from its environment. Accordingly, many of the papers in this volume deal with the problem of learning from examples. In particular, they are intended to encourage discussion between those trying to build learning algorithms and those trying to analyze them; for instance, the algorithms addressed by learning-theoretic analyses are quite different from those used by neural network or machine-learning researchers.

    The first section provides theoretical explanations for the learning systems addressed, the second section focuses on issues in model selection and inductive bias, the third section presents new learning algorithms, the fourth section explores the dynamics of learning in feedforward neural networks, and the final section focuses on the application of learning algorithms.

    A Bradford Book

    • Paperback $55.00
  • The Computational Complexity of Machine Learning

    Michael J. Kearns

    The Computational Complexity of Machine Learning is a mathematical study of the possibilities for efficient learning by computers. It works within recently introduced models for machine inference that are based on the theory of computational complexity and that place an explicit emphasis on efficient and general algorithms for learning. Theorems are presented that help elucidate the boundary of what is efficiently learnable from examples. These results take the form of both algorithms with proofs of their performance and hardness results demonstrating the intractability of learning in certain natural settings. In addition, the book contains lower bounds on the resources required for learning, an extensive study of learning in the presence of errors in the sample data, and several theorems demonstrating reducibilities between learning problems.

    Contents: Definitions, Notations, and Motivation • Overview of Recent Research in Computational Learning Theory • Useful Tools for Distribution-Free Learning • Learning in the Presence of Errors • Lower Bounds on Sample Complexity • Cryptographic Limitations on Polynomial-Time Learning • Distribution-Specific Learning in Polynomial Time • Equivalence of Weak Learning and Group Learning

    • Hardcover $8.75 £6.99
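
The Kearns and Vazirani title above centers on the L. G. Valiant model of Probably Approximately Correct (PAC) learning. As a point of reference, the criterion can be stated as follows; this is an informal sketch in standard notation, not text drawn from the book itself.

% PAC learning criterion (illustrative sketch; standard notation, not quoted from the book).
% An algorithm A is said to PAC-learn a concept class C if, for every target concept c in C,
% every distribution D over the instance space, and every epsilon, delta in (0, 1), after
% drawing m labeled examples S from D it outputs a hypothesis h_S such that
\[
  \Pr_{S \sim D^{m}}\bigl[\operatorname{err}_D(h_S) \le \epsilon\bigr] \ge 1 - \delta,
  \qquad \text{where}\quad
  \operatorname{err}_D(h) = \Pr_{x \sim D}\bigl[h(x) \neq c(x)\bigr],
\]
% with the sample size m and the running time of A polynomial in 1/epsilon, 1/delta,
% and the size of the target concept c.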