The increased sophistication and availability of massively parallel supercomputers have had two major impacts on research in artificial intelligence, both of which are addressed in this collection of exciting new AI theories and experiments. Massively parallel computers have been used to push forward research in traditional AI topics such as vision, search, and speech. More importantly, these machines allow AI to expand in exciting new ways by taking advantage of research in neuroscience and developing new models and paradigms, among them associative memory, neural networks, genetic algorithms, artificial life, society-of-mind models, and subsumption architectures. A number of chapters show that massively parallel computing enables AI researchers to handle significantly larger amounts of data in real time, which changes the way AI systems can be built and, in turn, makes memory-based reasoning and neural-network-based vision systems practical. Other chapters present the contrasting view that massively parallel computing provides a platform for modeling and building intelligent systems by simulating the (massively parallel) processes that occur in nature.