Thomas Petsche

  • Advances in Neural Information Processing Systems 9

    Proceedings of the 1996 Conference

    Michael C. Mozer, Michael I. Jordan, and Thomas Petsche

    The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. It draws preeminent academic researchers from around the world and is widely considered to be a showcase conference for new developments in network algorithms and architectures. The broad range of interdisciplinary research areas represented includes neural networks and genetic algorithms, cognitive science, neuroscience and biology, computer science, AI, applied mathematics, physics, and many branches of engineering. Only about 30% of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. All of the papers presented appear in these proceedings.

    • Hardcover $20.75 £16.99
  • Computational Learning Theory and Natural Learning Systems, Volume 4

    Making Learning Systems Practical

    Russell Greiner, Thomas Petsche, and Stephen José Hanson

    This is the fourth and final volume of papers from a series of workshops called "Computational Learning Theory and 'Natural' Learning Systems." The purpose of the workshops was to explore the emerging intersection of theoretical learning research and natural learning systems. The workshops drew researchers from three historically distinct styles of learning research: computational learning theory, neural networks, and machine learning (a subfield of AI).

    Volume I of the series introduces the general focus of the workshops. Volume II examines specific areas of interaction between theory and experiment. Volumes III and IV turn to key areas of learning systems research that have developed recently: Volume III addresses the problem of "Selecting Good Models," while the present volume, Volume IV, looks at ways of "Making Learning Systems Practical." The editors divide the twenty-one contributions into four sections. The first three cover critical problem areas: 1) scaling up from small problems to realistic ones with large input dimensions, 2) increasing the efficiency and robustness of learning methods, and 3) developing strategies to obtain good generalization from limited or small data samples. The fourth section discusses examples of real-world learning systems.

    Contributors: Klaus Abraham-Fuchs, Yasuhiro Akiba, Hussein Almuallim, Arunava Banerjee, Sanjay Bhansali, Alvis Brazma, Gustavo Deco, David Garvin, Zoubin Ghahramani, Mostefa Golea, Russell Greiner, Mehdi T. Harandi, John G. Harris, Haym Hirsh, Michael I. Jordan, Shigeo Kaneda, Marjorie Klenin, Pat Langley, Yong Liu, Patrick M. Murphy, Ralph Neuneier, E. M. Oblow, Dragan Obradovic, Michael J. Pazzani, Barak A. Pearlmutter, Nageswara S. V. Rao, Peter Rayner, Stephanie Sage, Martin F. Schlang, Bernd Schürmann, Dale Schuurmans, Leon Shklar, V. Sundareswaran, Geoffrey Towell, Johann Uebler, Lucia M. Vaina, Takefumi Yamazaki, Anthony M. Zador

    • Paperback $55.00 £45.00
  • Computational Learning Theory and Natural Learning Systems, Volume 3

    Selecting Good Models

    Thomas Petsche, Stephen José Hanson, and Jude Shavlik

    This is the third in a series of edited volumes exploring the evolving landscape of learning systems research, which spans theory and experiment, symbols and signals. It continues the exploration of the synthesis of the machine learning subdisciplines begun in Volumes I and II. The nineteen contributions cover learning theory, empirical comparisons of learning algorithms, the use of prior knowledge, probabilistic concepts, and the effect of variations over time in the concepts and feedback from the environment.

    The goal of this series is to explore the intersection of three historically distinct areas of learning research: computational learning theory, neural networks, and AI machine learning. Although each field has its own conferences, journals, language, research results, and directions, there is a growing intersection and effort to bring these fields into closer coordination.

    Can the various communities learn anything from one another? These volumes present research of interest to practitioners across the subdisciplines of machine learning, addressing questions that arise across the range of learning approaches, comparing approaches on specific problems, and extending the theory to cover more realistic cases.

    A Bradford Book

    • Paperback $50.00 £40.00
  • Computational Learning Theory and Natural Learning Systems, Volume 2

    Intersections between Theory and Experiment

    Stephen José Hanson, Michael J. Kearns, Thomas Petsche, and Ronald L. Rivest

    As with Volume I, this second volume represents a synthesis of issues in three historically distinct areas of learning research: computational learning theory, neural network research, and symbolic machine learning. While the first volume provided a forum for building a science of computational learning across fields, this volume attempts to define plausible areas of joint research: the contributions are concerned with finding constraints for theory while at the same time interpreting theoretical results in the context of experiments with actual learning systems. Subsequent volumes will focus on areas identified as research opportunities.

    Computational learning theory, neural networks, and AI machine learning appear to be disparate fields; in fact, they share the same goal: to build a machine or program that can learn from its environment. Accordingly, many of the papers in this volume deal with the problem of learning from examples. In particular, they are intended to encourage discussion between those trying to build learning algorithms (for instance, algorithms addressed by learning-theoretic analyses are quite different from those used by neural network or machine learning researchers) and those trying to analyze them.

    The first section provides theoretical explanations for the learning systems addressed, the second section focuses on issues in model selection and inductive bias, the third section presents new learning algorithms, the fourth section explores the dynamics of learning in feedforward neural networks, and the final section focuses on the application of learning algorithms.

    A Bradford Book

    • Paperback $55.00