The psychologist William James observed that "a native talent for perceiving analogies is ... the leading fact in genius of every order." The centrality and the ubiquity of analogy in creative thought have been noted again and again by scientists, artists, and writers, and understanding and modeling analogical thought have emerged as two of the most important challenges for cognitive science.
Rethinking Innateness asks the question, "What does it really mean to say that a behavior is innate?" The authors describe a new framework in which interactions, occurring at all levels, give rise to emergent forms and behaviors. These outcomes may often be highly constrained and universal, yet they are not themselves directly contained in the genes in any domain-specific way.
Drawing on ideas from cognitive linguistics, connectionism, and perception, The Human Semantic Potential describes a connectionist model that learns perceptually grounded semantics for natural language in spatial terms. Languages differ in the ways in which they structure space, and Regier's aim is to have the model perform its learning task for terms from any natural language. The system has so far succeeded in learning spatial terms from English, German, Russian, Japanese, and Mixtec.
Risto Miikkulainen draws on recent connectionist work in language comprehension to create a model that can understand natural language. Using the DISCERN system as an example, he describes a general approach to building high-level cognitive models from distributed neural networks and shows how the special properties of such networks are useful in modeling human performance. In this approach, connectionist networks are not only plausible models of isolated cognitive phenomena but also sufficient constituents for complete artificial intelligence systems.
What do people learn when they do not know that they are learning? Until recently all of the work in the area of implicit learning focused on empirical questions and methods. In this book, Axel Cleeremans explores unintentional learning from an information-processing perspective. He introduces a theoretical framework that unifies existing data and models on implicit learning, along with a detailed computational model of human performance in sequence-learning situations.
Using the tools of complexity theory, Stephen Judd develops a formal description of associative learning in connectionist networks. He rigorously exposes the computational difficulties in training neural networks and explores how certain design principles will or will not make the problems easier. Judd looks beyond the scope of any one particular learning rule, at a level above the details of neurons.