
Stephen José Hanson

Stephen José Hanson is Professor of Psychology (Newark Campus) and Member of the Cognitive Science Center (New Brunswick Campus) at Rutgers University.

Titles by This Editor

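Foundational Issues in Human Brain Mapping
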
The field of neuroimaging has reached a watershed. Brain imaging research has been the source of many advances in cognitive neuroscience and cognitive science over the last decade, but recent critiques and emerging trends are raising foundational issues of methodology, measurement, and theory. Indeed, concerns over interpretation of brain maps have created serious controversies in social neuroscience, and, more important, point to a larger set of issues that lie at the heart of the entire brain mapping enterprise. In this volume, leading scholars—neuroimagers and philosophers of mind—reexamine these central issues and explore current controversies that have arisen in cognitive science, cognitive neuroscience, computer science, and signal processing. The contributors address both statistical and dynamical analysis and modeling of neuroimaging data and interpretation, discussing localization, modularity, and neuroimagers' tacit assumptions about how these two phenomena are related; controversies over correlation of fMRI data and social attributions (recently characterized for good or ill as "voodoo correlations"); and the standard inferential design approach in neuroimaging. Finally, the contributors take a more philosophical perspective, considering the nature of measurement in brain imaging, and offer a framework for novel neuroimaging data structures (effective and functional connectivity—"graphs").

Contributors: William Bechtel, Bharat Biswal, Matthew Brett, Martin Bunzl, Max Coltheart, Karl J. Friston, Joy J. Geng, Clark Glymour, Kalanit Grill-Spector, Stephen José Hanson, Trevor Harley, Gilbert Harman, James V. Haxby, Rik N. Henson, Nancy Kanwisher, Colin Klein, Richard Loosemore, Sébastien Meriaux, Chris Mole, Jeanette A. Mumford, Russell A. Poldrack, Jean-Baptiste Poline, Richard C. Richardson, Alexis Roche, Adina L. Roskies, Pia Rotshtein, Rebecca Saxe, Philipp Sterzer, Bertrand Thirion, Edward Vul

Making Learning Systems Practical

This is the fourth and final volume of papers from a series of workshops called "Computational Learning Theory and 'Natural' Learning Systems." The purpose of the workshops was to explore the emerging intersection of theoretical learning research and natural learning systems. The workshops drew researchers from three historically distinct styles of learning research: computational learning theory, neural networks, and machine learning (a subfield of AI).

Volume I of the series introduces the general focus of the workshops. Volume II examines specific areas of interaction between theory and experiment. Volumes III and IV focus on key areas of learning systems that have developed recently: Volume III addresses the problem of "Selecting Good Models," and the present volume, Volume IV, takes up ways of "Making Learning Systems Practical." The editors divide the twenty-one contributions into four sections. The first three cover critical problem areas: 1) scaling up from small problems to realistic ones with large input dimensions, 2) increasing the efficiency and robustness of learning methods, and 3) developing strategies to obtain good generalization from limited or small data samples. The fourth section presents examples of real-world learning systems.

Contributors: Klaus Abraham-Fuchs, Yasuhiro Akiba, Hussein Almuallim, Arunava Banerjee, Sanjay Bhansali, Alvis Brazma, Gustavo Deco, David Garvin, Zoubin Ghahramani, Mostefa Golea, Russell Greiner, Mehdi T. Harandi, John G. Harris, Haym Hirsh, Michael I. Jordan, Shigeo Kaneda, Marjorie Klenin, Pat Langley, Yong Liu, Patrick M. Murphy, Ralph Neuneier, E. M. Oblow, Dragan Obradovic, Michael J. Pazzani, Barak A. Pearlmutter, Nageswara S. V. Rao, Peter Rayner, Stephanie Sage, Martin F. Schlang, Bernd Schürmann, Dale Schuurmans, Leon Shklar, V. Sundareswaran, Geoffrey Towell, Johann Uebler, Lucia M. Vaina, Takefumi Yamazaki, Anthony M. Zador

Selecting Good Models

This is the third in a series of edited volumes exploring the evolving landscape of learning systems research, which spans theory and experiment, symbols and signals. It continues the exploration, begun in Volumes I and II, of the synthesis of the machine learning subdisciplines. The nineteen contributions cover learning theory, empirical comparisons of learning algorithms, the use of prior knowledge, probabilistic concepts, and the effect of variations over time in the concepts and feedback from the environment.

The goal of the series is to explore the intersection of three historically distinct areas of learning research: computational learning theory, neural networks, and AI machine learning. Although each field has its own conferences, journals, language, research results, and directions, there is a growing intersection among them and an effort to bring the fields into closer coordination.

Can the various communities learn anything from one another? These volumes present research of interest to practitioners of the various subdisciplines of machine learning, addressing questions that matter across the range of machine learning approaches, comparing approaches on specific problems, and extending the theory to cover more realistic cases.

A Bradford Book