Stephen José Hanson

Stephen José Hanson is Professor of Psychology (Newark Campus) and Member of the Cognitive Science Center (New Brunswick Campus) at Rutgers University.

  • Foundational Issues in Human Brain Mapping

    Stephen José Hanson and Martin Bunzl

    Neuroimagers and philosophers of mind explore critical issues and controversies that have arisen from the use of brain mapping in cognitive neuroscience and cognitive science.

    The field of neuroimaging has reached a watershed. Brain imaging research has been the source of many advances in cognitive neuroscience and cognitive science over the last decade, but recent critiques and emerging trends are raising foundational issues of methodology, measurement, and theory. Indeed, concerns over interpretation of brain maps have created serious controversies in social neuroscience, and, more important, point to a larger set of issues that lie at the heart of the entire brain mapping enterprise. In this volume, leading scholars—neuroimagers and philosophers of mind—reexamine these central issues and explore current controversies that have arisen in cognitive science, cognitive neuroscience, computer science, and signal processing. The contributors address both statistical and dynamical analysis and modeling of neuroimaging data and interpretation, discussing localization, modularity, and neuroimagers' tacit assumptions about how these two phenomena are related; controversies over correlation of fMRI data and social attributions (recently characterized for good or ill as "voodoo correlations"); and the standard inferential design approach in neuroimaging. Finally, the contributors take a more philosophical perspective, considering the nature of measurement in brain imaging, and offer a framework for novel neuroimaging data structures (effective and functional connectivity—"graphs").

    Contributors William Bechtel, Bharat Biswal, Matthew Brett, Martin Bunzl, Max Coltheart, Karl J. Friston, Joy J. Geng, Clark Glymour, Kalanit Grill-Spector, Stephen José Hanson, Trevor Harley, Gilbert Harman, James V. Haxby, Rik N. Henson, Nancy Kanwisher, Colin Klein, Richard Loosemore, Sébastien Meriaux, Chris Mole, Jeanette A. Mumford, Russell A. Poldrack, Jean-Baptiste Poline, Richard C. Richardson, Alexis Roche, Adina L. Roskies, Pia Rotshtein, Rebecca Saxe, Philipp Sterzer, Bertrand Thirion, Edward Vul

    • Hardcover $16.75 £13.99
    • Paperback $19.75 £15.99
  • Computational Learning Theory and Natural Learning Systems, Volume 4

    Making Learning Systems Practical

    Russell Greiner, Thomas Petsche, and Stephen José Hanson

    This is the fourth and final volume of papers from a series of workshops called "Computational Learning Theory and `Natural' Learning Systems." The purpose of the workshops was to explore the emerging intersection of theoretical learning research and natural learning systems. The workshops drew researchers from three historically distinct styles of learning research: computational learning theory, neural networks, and machine learning (a subfield of AI).

    Volume I of the series introduces the general focus of the workshops. Volume II looks at specific areas of interaction between theory and experiment. Volumes III and IV focus on key areas of learning systems that have developed recently. Volume III looks at the problem of "Selecting Good Models." The present volume, Volume IV, looks at ways of "Making Learning Systems Practical." The editors divide the twenty-one contributions into four sections. The first three cover critical problem areas: 1) scaling up from small problems to realistic ones with large input dimensions, 2) increasing efficiency and robustness of learning methods, and 3) developing strategies to obtain good generalization from limited or small data samples. The fourth section discusses examples of real-world learning systems.

    Contributors Klaus Abraham-Fuchs, Yasuhiro Akiba, Hussein Almuallim, Arunava Banerjee, Sanjay Bhansali, Alvis Brazma, Gustavo Deco, David Garvin, Zoubin Ghahramani, Mostefa Golea, Russell Greiner, Mehdi T. Harandi, John G. Harris, Haym Hirsh, Michael I. Jordan, Shigeo Kaneda, Marjorie Klenin, Pat Langley, Yong Liu, Patrick M. Murphy, Ralph Neuneier, E. M. Oblow, Dragan Obradovic, Michael J. Pazzani, Barak A. Pearlmutter, Nageswara S. V. Rao, Peter Rayner, Stephanie Sage, Martin F. Schlang, Bernd Schürmann, Dale Schuurmans, Leon Shklar, V. Sundareswaran, Geoffrey Towell, Johann Uebler, Lucia M. Vaina, Takefumi Yamazaki, Anthony M. Zador

    • Paperback $55.00 £45.00
  • Computational Learning Theory and Natural Learning Systems, Volume 3

    Selecting Good Models

    Thomas Petsche, Stephen José Hanson, and Jude Shavlik

This is the third in a series of edited volumes exploring the evolving landscape of learning systems research, which spans theory and experiment, symbols and signals. It continues the exploration of the synthesis of the machine learning subdisciplines begun in Volumes I and II. The nineteen contributions cover learning theory, empirical comparisons of learning algorithms, the use of prior knowledge, probabilistic concepts, and the effect of variations over time in the concepts and feedback from the environment.

The goal of this series is to explore the intersection of three historically distinct areas of learning research: computational learning theory, neural networks, and AI machine learning. Although each field has its own conferences, journals, language, research results, and directions, there is a growing intersection and effort to bring these fields into closer coordination.

    Can the various communities learn anything from one another? These volumes present research that should be of interest to practitioners of the various subdisciplines of machine learning, addressing questions that are of interest across the range of machine learning approaches, comparing various approaches on specific problems and expanding the theory to cover more realistic cases.

    A Bradford Book

    • Paperback $50.00 £40.00
  • Computational Learning Theory and Natural Learning Systems, Volume 2

    Intersections between Theory and Experiment

    Stephen José Hanson, Michael J. Kearns, Thomas Petsche, and Ronald L. Rivest

    As with Volume I, this second volume represents a synthesis of issues in three historically distinct areas of learning research: computational learning theory, neural network research, and symbolic machine learning. While the first volume provided a forum for building a science of computational learning across fields, this volume attempts to define plausible areas of joint research: the contributions are concerned with finding constraints for theory while at the same time interpreting theoretic results in the context of experiments with actual learning systems. Subsequent volumes will focus on areas identified as research opportunities.

    Computational learning theory, neural networks, and AI machine learning appear to be disparate fields; in fact, they share the same goal: to build a machine or program that can learn from its environment. Accordingly, many of the papers in this volume deal with the problem of learning from examples. In particular, they are intended to encourage discussion between those trying to build learning algorithms (for instance, algorithms addressed by learning theoretic analyses are quite different from those used by neural network or machine-learning researchers) and those trying to analyze them.

    The first section provides theoretical explanations for the learning systems addressed, the second section focuses on issues in model selection and inductive bias, the third section presents new learning algorithms, the fourth section explores the dynamics of learning in feedforward neural networks, and the final section focuses on the application of learning algorithms.

    A Bradford Book

    • Paperback $55.00
  • Computational Learning Theory and Natural Learning Systems, Volume 1

    Constraints and Prospects

    George Drastal, Stephen José Hanson, and Ronald L. Rivest

    These original contributions converge on an exciting and fruitful intersection of three historically distinct areas of learning research: computational learning theory, neural networks, and symbolic machine learning. Bridging theory and practice, computer science and psychology, they consider general issues in learning systems that could provide constraints for theory and at the same time interpret theoretical results in the context of experiments with actual learning systems.

    In all, nineteen chapters address questions such as: What is a natural system? How should learning systems gain from prior knowledge? If prior knowledge is important, how can we quantify its importance? What makes a learning problem hard? How are neural network and symbolic machine learning approaches similar? Is there a fundamental difference between the kinds of tasks a neural network can easily solve and those a symbolic algorithm can easily solve?

    • Paperback $65.00
  • Connectionist Modeling and Brain Function

    A Developing Interface

    Stephen José Hanson and Carl R. Olson

    Bringing together contributions in biology, neuroscience, computer science, physics, and psychology, this book offers a solid tutorial on current research activity in connectionist-inspired, biology-based modeling. It describes specific experimental approaches and also confronts general issues related to learning, associative memory, and sensorimotor development. Introductory chapters by editors Hanson and Olson, along with Terrence Sejnowski, Christof Koch, and Patricia S. Churchland, provide an overview of computational neuroscience, establish the distinction between "realistic" brain models and "simplified" brain models, provide specific examples of each, and explain why each approach might be appropriate in a given context. The remaining chapters are organized so that material on the anatomy and physiology of a specific part of the brain precedes the presentation of modeling studies. The modeling itself ranges from simplified models to more realistic models and provides examples of constraints arising from known brain detail, as well as the choices modelers face when including or excluding such constraints. There are three sections, each focused on a key area where biology and models have converged.

    Connectionist Modeling and Brain Function is included in the Network Modeling and Connectionism series, edited by Jeffrey Elman.

    • Hardcover $57.00