Gail A. Carpenter

Gail A. Carpenter is Professor of Mathematics and Cognitive and Neural Systems and Director of the CNS Technology Lab at Boston University.

  • Neural Networks for Vision and Image Processing

    Gail A. Carpenter and Stephen Grossberg

    This interdisciplinary survey brings together recent models and experiments on how the brain sees and learns to recognize objects. It shows how to use these insights in technology and describes how neural networks provide a unifying computational framework for reaching these goals. Several chapters describe experiments in neurobiology and visual perception that clarify properties of biological vision and key conceptual issues that biological models need to address. Other chapters describe neural and computational models of biological vision that address such issues and clarify processes whereby biological vision derives its remarkable flexibility and power. Still other chapters use biologically derived models or heuristics to suggest neural network solutions to challenging technological problems in computer vision. Topics range from analyses of motion, depth, color and form to new concepts about learning, attention, pattern recognition, and hardware implementation.

    • Paperback $14.75 £11.99
  • Pattern Recognition by Self-Organizing Neural Networks

    Gail A. Carpenter and Stephen Grossberg

    Pattern Recognition by Self-Organizing Neural Networks presents the most recent advances in an area of research that is becoming vitally important in the fields of cognitive science, neuroscience, artificial intelligence, and neural networks in general. The 19 articles take up developments in competitive learning and computational maps, adaptive resonance theory, and specialized architectures and biological connections.

    Introductory survey articles provide a framework for understanding the many models involved in various approaches to studying neural networks. These are followed in Part 2 by articles that form the foundation for models of competitive learning and computational mapping, along with recent articles by Kohonen, applying these models to problems in speech recognition, and by Hecht-Nielsen, applying them to the design of adaptive lookup tables. Articles in Part 3 focus on adaptive resonance theory (ART) networks, self-organizing pattern recognition systems whose top-down template feedback signals guarantee stable learning in response to arbitrary sequences of input patterns. In Part 4, articles describe the embedding of ART modules into larger architectures and provide experimental evidence from neurophysiology, event-related potentials, and psychology supporting the prediction that ART mechanisms exist in the brain.

    Contributors J.-P. Banquet, G.A. Carpenter, S. Grossberg, R. Hecht-Nielsen, T. Kohonen, B. Kosko, T.W. Ryan, N.A. Schmajuk, W. Singer, D. Stork, C. von der Malsburg, C.L. Winter

    • Hardcover $100.00 £82.00
    • Paperback $70.00 £58.00