Although William James declared in 1890, "Everyone knows what attention is," today there are many different and sometimes opposing views on the subject. This fragmented theoretical landscape may be because most of the theories and models of attention offer explanations in natural language or in a pictorial manner rather than providing a quantitative and unambiguous statement of the theory. They focus on the manifestations of attention instead of its rationale. In this book, John Tsotsos develops a formal model of visual attention with the goal of providing a theoretical explanation for why humans (and animals) must have the capacity to attend. He takes a unique approach to the theory, using the full breadth of the language of computation—rather than simply the language of mathematics—as the formal means of description. The result, the Selective Tuning model of vision and attention, explains attentive behavior in humans and provides a foundation for building computer systems that see with human-like characteristics. The overarching conclusion is that human vision is based on a general-purpose processor that can be dynamically tuned to the task and the scene viewed on a moment-by-moment basis.
Tsotsos offers a comprehensive, up-to-date overview of attention theories and models and a full description of the Selective Tuning model, confining the formal elements to two chapters and two appendixes. The text is accompanied by more than 100 illustrations in black and white and color; additional color illustrations and movies are available on the book's Web site.
Can a blind person see? The very idea seems paradoxical. And yet, if we conceive of "seeing" as the ability to generate internal mental representations that may contain visual details, the idea of blind vision becomes a concept subject to investigation. In this book, Zaira Cattaneo and Tomaso Vecchi examine the effects of blindness and other types of visual deficit on the development and functioning of the human cognitive system. Drawing on behavioral and neurophysiological data, Cattaneo and Vecchi analyze research on mental imagery, spatial cognition, and compensatory mechanisms at the sensorial, cognitive, and cortical levels in individuals with complete or profound visual impairment. They find that our brain does not need our eyes to "see."
Cattaneo and Vecchi address critical questions of broad importance: the relationship of visual perception to imagery and working memory and the extent to which mental imagery depends on normal vision; the functional and neural relationships between vision and the other senses; the specific aspects of the visual experience that are crucial to cognitive development or specific cognitive mechanisms; and the extraordinary plasticity of the brain—as illustrated by the way that, in the blind, the visual cortex may be reorganized to support other perceptual or cognitive functions. In the absence of vision, the other senses work as functional substitutes and are often improved. With Blind Vision, Cattaneo and Vecchi take on the "tyranny of the visual," pointing to the importance of the other senses in cognition.
This book breaks with the conventional model of perception that views vision as a mere inference to an objective reality on the basis of "inverse optics." The authors offer the alternative view that perception is an expressive and awareness-generating process. Perception creates semantic information in such a way as to enable the observer to deal efficaciously with the chaotic and meaningless structure present at the physical boundary between the body and its surroundings. Vision is intentional by its very nature; visual qualities are essential and real, providing an aesthetic and meaningful interface to the structures of physics and the state of the brain. This view brings perception firmly in line with ethology and modern evolutionary biology and suggests new approaches in all disciplines that study, or require an understanding of, the ontology of mind.
The book is the joint effort of a multidisciplinary group of authors. Topics covered include the relationships among stimuli, neuronal processes, and visual awareness. After considering the mind-dependent generation of information, the book treats time and dynamics; color, shape, and space; language and perception; and perception, art, and design.
In Things and Places, Zenon Pylyshyn argues that the process of incrementally constructing perceptual representations, solving the binding problem (determining which properties go together), and, more generally, grounding perceptual representations in experience arise from the nonconceptual capacity to pick out and keep track of a small number of sensory individuals. He proposes a mechanism in early vision that allows us to select a limited number of sensory objects, to reidentify each of them under certain conditions as the same individual seen before, and to keep track of their enduring individuality despite radical changes in their properties—all without the machinery of concepts, identity, and tenses. This mechanism, which he calls FINSTs (for "Fingers of Instantiation"), is responsible for our capacity to individuate and track several independently moving sensory objects—an ability that we exercise every waking minute, and one that can be understood as fundamental to the way we see and understand the world and to our sense of space.
Pylyshyn examines certain empirical phenomena of early vision in light of the FINST mechanism, including tracking and attentional selection. He argues provocatively that the initial selection of perceptual individuals is our primary nonconceptual contact with the perceptual world (a contact that does not depend on prior encoding of any properties of the thing selected) and then draws upon a wide range of empirical data to support a radical externalist theory of spatial representation that grows out of his indexing theory.
The recognition of faces is a fundamental visual function with importance for social interaction and communication. Scientific interest in facial recognition has increased dramatically over the last decade. Researchers in such fields as psychology, neurophysiology, and functional imaging have published more than 10,000 studies on face processing. Almost all of these studies focus on the processing of static pictures of faces, however, with little attention paid to the recognition of dynamic faces, faces as they change over time—a topic in neuroscience that is also relevant for a variety of technical applications, including robotics, animation, and human-computer interfaces. This volume offers a state-of-the-art, interdisciplinary overview of recent work on dynamic faces from both biological and computational perspectives.
The chapters cover a broad range of topics, including the psychophysics of dynamic face perception, results from electrophysiology and imaging, clinical deficits in patients with impairments of dynamic face processing, and computational models that provide insights about the brain mechanisms for the processing of dynamic faces. The book offers neuroscientists and biologists an essential reference for designing new experiments, and provides computer scientists with knowledge that will help them improve technical systems for the recognition, processing, synthesizing, and animating of dynamic faces.
The uniqueness of shape as a perceptual property lies in the fact that it is both complex and structured. Shapes are perceived veridically—perceived as they really are in the physical world, regardless of the orientation from which they are viewed. The constancy of the shape percept is the sine qua non of shape perception; you are not actually studying shape if constancy cannot be achieved with the stimulus you are using. Shape is the only perceptual attribute of an object that allows unambiguous identification. In this first book devoted exclusively to the perception of shape by humans and machines, Zygmunt Pizlo describes how we perceive shapes and how to design machines that can see shapes as we do. He reviews the long history of the subject, allowing the reader to understand why it has taken so long to understand shape perception, and offers a new theory of shape.
Until recently, shape was treated in combination with such other perceptual properties as depth, motion, speed, and color. This resulted in apparently contradictory findings, which made a coherent theoretical treatment of shape impossible. Pizlo argues that once shape is understood to be unique among visual attributes and the perceptual mechanisms underlying shape are seen to be different from other perceptual mechanisms, the research on shape becomes coherent and experimental findings no longer seem to contradict each other. A single theory of shape perception is thus possible, and Pizlo offers a theoretical treatment that explains how a three-dimensional shape percept is produced from a two-dimensional retinal image, assuming only that the image has been organized into two-dimensional shapes.
Pizlo focuses on discussion of the main concepts, telling the story of shape without interruption. Appendixes provide the basic mathematical and computational information necessary for a technical understanding of the argument. References point the way to more in-depth reading in geometry and computational vision.
David Marr's posthumously published Vision (1982) influenced a generation of brain and cognitive scientists, inspiring many to enter the field. In Vision, Marr describes a general framework for understanding visual perception and touches on broader questions about how the brain and its functions can be studied and understood. Researchers from a range of brain and cognitive sciences have long valued Marr’s creativity, intellectual power, and ability to integrate insights and data from neuroscience, psychology, and computation. This MIT Press edition makes Marr's influential work available to a new generation of students and scientists.
In Marr's framework, the process of vision constructs a set of representations, starting from a description of the input image and culminating with a description of three-dimensional objects in the surrounding environment. A central theme, and one that has had far-reaching influence in both neuroscience and cognitive science, is the notion of different levels of analysis—in Marr's framework, the computational level, the algorithmic level, and the hardware implementation level.
Now, thirty years later, the main problems that occupied Marr remain fundamental open problems in the study of perception. Vision provides inspiration for the continuing efforts to integrate knowledge from cognition and computation to understand vision and the brain.
Seeing has puzzled scientists and philosophers for centuries and it continues to do so. This new edition of a classic text offers an accessible but rigorous introduction to the computational approach to understanding biological visual systems. The authors of Seeing, taking as their premise David Marr’s statement that “to understand vision by studying only neurons is like trying to understand bird flight by studying only feathers,” make use of Marr’s three different levels of analysis in the study of vision: the computational level, the algorithmic level, and the hardware implementation level. Each chapter applies this approach to a different topic in vision by examining the problems the visual system encounters in interpreting retinal images and the constraints available to solve these problems; the algorithms that can realize the solution; and the implementation of these algorithms in neurons.
Seeing has been thoroughly updated for this edition and expanded to more than three times its original length. It is designed to lead the reader through the problems of vision, from the common (but mistaken) idea that seeing consists just of making pictures in the brain to the minutiae of how neurons collectively encode the visual features that underpin seeing. Although it assumes no prior knowledge of the field, some chapters present advanced material. This makes it the only textbook suitable for both undergraduate and graduate students that takes a consistently computational perspective, offering a firm conceptual basis for tackling the vast literature on vision. It covers a wide range of topics, including aftereffects, the retina, receptive fields, object recognition, brain maps, Bayesian perception, motion, color, and stereopsis. MATLAB code, including a simple demonstration of image convolution, is available on the book’s Web site.
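For readers unfamiliar with the image-convolution demonstration mentioned above, the operation can be sketched in a few lines. This is a generic illustration in Python with NumPy, not the book's MATLAB code; the function name `convolve2d` and the toy image are our own, and the example simply averages each pixel with its neighbors (a blur), the simplest case of the filtering that models receptive fields.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2D convolution: slide the flipped kernel over the image."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # weighted sum of the patch under the kernel
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# A bright square on a dark background, smoothed by a 3x3 averaging kernel.
image = np.array([[0, 0, 0, 0],
                  [0, 9, 9, 0],
                  [0, 9, 9, 0],
                  [0, 0, 0, 0]], dtype=float)
blur = np.ones((3, 3)) / 9.0
print(convolve2d(image, blur))
```

Replacing the averaging kernel with a center-surround (Laplacian-like) kernel turns the same loop into an edge detector, which is how convolution connects to the receptive-field material the blurb lists.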
This classic work in vision science, written by a leading figure in Germany's Gestalt movement in psychology and first published in 1936, addresses topics that remain of major interest to vision researchers today. Wolfgang Metzger's main argument, drawn from Gestalt theory, is that the objects we perceive in visual experience are not the objects themselves but perceptual effigies of those objects constructed by our brain according to natural rules. Gestalt concepts are currently being increasingly integrated into mainstream neuroscience by researchers proposing network processing beyond the classical receptive field. Metzger's discussion of such topics as ambiguous figures, hidden forms, camouflage, shadows and depth, and three-dimensional representations in paintings will interest anyone working in the field of vision and perception, including psychologists, biologists, neurophysiologists, and researchers in computational vision—and artists, designers, and philosophers.
Each chapter is accompanied by compelling visual demonstrations of the phenomena described; the book includes 194 illustrations, drawn from visual science, art, and everyday experience, that invite readers to verify Metzger's observations for themselves. Today's researchers may find themselves pondering the intriguing question of what effect Metzger's theories might have had on vision research if Laws of Seeing and its treasure trove of perceptual observations had been available to the English-speaking world at the time of its writing.
Recent years have seen a burst of studies on the mouse eye and visual system, fueled in large part by the relatively recent ability to produce mice with precisely defined changes in gene sequence. Mouse models have contributed to a wide range of scientific breakthroughs for a number of ocular and neurological diseases and have allowed researchers to address fundamental issues that were difficult to approach with other experimental models. This comprehensive guide to current research captures the first wave of studies in the field, with fifty-nine chapters by leading scholars that demonstrate the usefulness of mouse models as a bridge between experimental and clinical research.
The opening chapters introduce the mouse as a species and research model, discussing such topics as the mouse's evolutionary history and the mammalian visual system. Subsequent sections explore more specialized subjects, considering optics, psychophysics, and the visual behaviors of mice; the organization of the adult mouse eye and central visual system; the development of the mouse eye (including comparisons to human development); the development and plasticity of retinal projections and visuotopic maps; mouse models for human eye disease (including glaucoma and cataracts); and the application of advanced genomic technologies (including gene therapy and genetic knockouts) to the mouse visual system. Readers of this reference will see that the study of mouse models has already demonstrated real translational prowess in vision research.