A distinguishing feature of video games is their interactivity, and sound plays an important role in this: a player’s actions can trigger dialogue, sound effects, ambient sound, and music. And yet game sound has been neglected in the growing literature on game studies. This book fills that gap, introducing readers to the many complex aspects of game audio, from its development in early games to theoretical discussions of immersion and realism. In Game Sound, Karen Collins draws on a range of sources (composers, sound designers, voice-over actors and other industry professionals, Internet articles, fan sites, industry conferences, magazines, patent documents, and, of course, the games themselves) to offer a broad overview of the history, theory, and production practice of video game audio. Game Sound has two underlying themes: how and why games are different from or similar to film and other linear audiovisual media, and the constraints that technology has placed on the production of game audio. Collins focuses first on the historical development of game audio, from penny arcades through the rise of home games to the recent rapid developments in the industry. She then examines the production process for a contemporary game at a large game company, discussing the roles of composers, sound designers, voice talent, and audio programmers; considers the growing presence of licensed intellectual property (particularly popular music and films) in games; and explores the function of audio in games in theoretical terms. Finally, she discusses the difficulties that nonlinearity and interactivity pose for the composer of game music.
The art of sound organization, also known as electroacoustic music, uses sounds not available to traditional music making, including pre-recorded, synthesized, and processed sounds. The body of work of such sound-based music (which includes electroacoustic art music, turntable composition, computer games, and acoustic and digital sound installations) has developed more rapidly than its musicology. Understanding the Art of Sound Organization proposes the first general foundational framework for the study of the art of sound organization, defining terms, discussing relevant forms of music, categorizing works, and setting sound-based music in interdisciplinary contexts.
Leigh Landy's goal in this book is not only to create a theoretical framework but also to make sound-based music more accessible—to give a listener what he terms "something to hold on to," for example, by connecting elements in a work to everyday experience. Landy considers the difficulties of categorizing works and discusses such types of works as sonic art and electroacoustic music, pointing out where they overlap and how they are distinctive. He proposes a "sound-based music paradigm" that transcends such traditional categories as art and pop music. Landy defines patterns that suggest a general framework and places the study of sound-based music in interdisciplinary contexts, from acoustics to semiotics, proposing a holistic research approach that considers the interconnectedness of a given work's history, theory, technological aspects, and social impact.
The author's ElectroAcoustic Resource Site (EARS, www.ears.dmu.ac.uk), the architecture of which parallels this book's structure, offers updated bibliographic resource abstracts and related information.
Digital media handles music as encoded physical energy, but humans consider music in terms of beliefs, intentions, interpretations, experiences, evaluations, and significations. In this book, drawing on work in computer science, psychology, brain science, and musicology, Marc Leman proposes an embodied cognition approach to music research that will help bridge this gap. Assuming that the body plays a central role in all musical activities, and basing his approach on a hypothesis about the relationship between musical experience (mind) and sound energy (matter), Leman argues that the human body is a biologically designed mediator that transfers physical energy to a mental level—engaging experiences, values, and intentions—and, reversing the process, transfers mental representation into material form. He suggests that this idea of the body as mediator offers a promising framework for thinking about music mediation technology. Leman proposes that, under certain conditions, the natural mediator (the body) can be extended with artificial technology-based mediators. He explores the necessary conditions and analyzes ways in which they can be studied. Leman outlines his theory of embodied music cognition, introducing a model that describes the relationship between a human subject and its environment, analyzing the coupling of action and perception, and exploring different degrees of the body's engagement with music. He then examines possible applications in two core areas: interaction with music instruments and music search and retrieval in a database or digital library. The embodied music cognition approach, Leman argues, can help us develop tools that integrate artistic expression and contemporary technology.
Early Western music and the art music of the non-Western world both lack highly specified, standardized systems of notation. A serious impediment to the systematic study of early and non-Western music arises when a repertory has no extensive notational system, or multiple, non-standardized ones. In different ways, these conditions pertain to medieval and Renaissance music in the West, and to the art music of Asia, which has traditionally depended on oral tradition rather than notation. Computers hold great potential for the analysis of early music repertories and for the study of music that lies outside the Western tradition. This volume of Computing in Musicology considers approaches to the computer representation, interchange, and analysis of music that predates Western European art music, lies outside the bounds of Western European art music, or both. It describes efforts to provide new tools that may make such work more practical in the future, and it brings fresh insights to the repertories themselves. Initial articles in this issue also treat current work on data interchange involving XML, since interchangeability remains an important ingredient of representational designs for all kinds of music.
Contributors come from the fields of musicology and ethnomusicology, audio and software engineering, and mathematics and computer science. They include Parag Chordia, Sachiko Deguchi, Annalisa Doneda, Michael Good, Christine Jeanneret, Arvindh Krishnaswamy, Panayotis Mavromatis, Laurent Pugin, Craig Stuart Sapp, Eleanor Selfridge-Field, Katsuhiko Shirai, Iman S. H. Suyoto, Alexandra L. Uitdenbogerd, and Joshua Veltman.
In this original and provocative study of computational creativity in music, David Cope asks whether computer programs can effectively model creativity—and whether computer programs themselves can create. Defining musical creativity, and distinguishing it from creativity in other arts, Cope presents a series of experimental models that illustrate salient features of musical creativity. He makes the case that musical creativity results from a process that he calls inductive association, and he contends that such a computational process can in fact produce music creatively. Drawing on the work of many other scholars and musicians—including Douglas Hofstadter, Margaret Boden, Selmer Bringsjord, and Kathleen Lennon—Cope departs from the views expressed by most with his contentions that computer programs can create and that those who do not believe this have probably defined creativity so narrowly that even humans could not be said to create.
After examining the foundations of creativity and musical creativity, Cope describes a number of possible models for computationally imitating human creativity in music. He discusses such issues as recombinance and pattern matching, allusions, learning, inference, analogy, musical hierarchy, and influence, and finds that these experimental models solve only selected aspects of creativity. He then describes a model that integrates these different aspects—an inductive-association computational process that can create music. Cope's writing style is lively and nontechnical; the reader needs neither knowledge of computer programming nor specialized computer hardware or software to follow the text.
The computer programs discussed in the text, along with MP3 versions of all the musical examples, are available at the author's website, http://arts.ucsc.edu/faculty/cope.
The field of music query has grown from tentative beginnings in bibliographical systems of earlier decades to a substantial area of interdisciplinary studies in little more than a decade. This volume assembles recent studies from Europe and North America concerned with the query and analysis of musical data. Among these, methods for the synchronization of sound and symbolic data, for automatic analysis through perceptual rules, and for computing a "transportation" distance for thematic comparison are described. The modeling of rhythmic motifs, of melodic traits, and of cognitive distance are discussed. User studies report on human preferences in modes of query (humming vs. tapping, etc.) and on the comparative success rates of more than two dozen proposed metrics for melodic comparison.
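Among the simplest melodic-comparison metrics surveyed in such studies is string edit distance computed over pitch intervals rather than absolute pitches, which makes the comparison transposition-invariant. The following is a minimal illustrative sketch; the function names are our own and do not come from any of the studies in the volume:

```python
def intervals(pitches):
    """Convert a melody (MIDI pitch numbers) to its interval sequence,
    making the comparison transposition-invariant."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def edit_distance(a, b):
    """Classic Levenshtein distance between two interval sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

# Two transpositions of the same theme compare as identical:
theme   = [60, 62, 64, 65, 67]   # C D E F G
shifted = [62, 64, 66, 67, 69]   # D E F# G A
print(edit_distance(intervals(theme), intervals(shifted)))  # 0
```

Metrics evaluated in such user studies are typically refinements of this idea, weighting substitutions by interval size or combining pitch with rhythmic information.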
In this book, David Temperley addresses a fundamental question about music cognition: how do we extract basic kinds of musical information, such as meter, phrase structure, counterpoint, pitch spelling, harmony, and key from music as we hear it? Taking a computational approach, Temperley develops models for generating these aspects of musical structure. The models he proposes are based on preference rules, which are criteria for evaluating a possible structural analysis of a piece of music. A preference rule system evaluates many possible interpretations and chooses the one that best satisfies the rules.
After an introductory chapter, Temperley presents preference rule systems for generating six basic kinds of musical structure: meter, phrase structure, contrapuntal structure, harmony, key, and pitch spelling (the labeling of pitch events with spellings such as A flat or G sharp). He suggests that preference rule systems not only show how musical structures are inferred but also shed light on other aspects of music. He substantiates this claim with discussions of musical ambiguity, retrospective revision, expectation, and music outside the Western canon (rock and traditional African music). He proposes a framework for the description of musical styles based on preference rule systems and explores the relevance of preference rule systems to higher-level aspects of music, such as musical schemata, narrative and drama, and musical tension.
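The selection mechanism a preference rule system uses, scoring candidate analyses against weighted criteria and keeping the best, can be caricatured in a few lines. The key-finding "rule" and weights below are invented placeholders for illustration, not Temperley's actual rules:

```python
def evaluate(analysis, rules):
    """Total weighted score of one candidate analysis."""
    return sum(weight * rule(analysis) for rule, weight in rules)

def best_analysis(candidates, rules):
    """Return the candidate that best satisfies the preference rules."""
    return max(candidates, key=lambda a: evaluate(a, rules))

# Toy key-finding task: candidates are keys, and the single invented
# "rule" prefers keys whose scale contains more of the notes heard.
notes = [60, 62, 64, 65, 67]  # C D E F G (MIDI numbers)
candidates = [
    {"key": "C major", "scale": {0, 2, 4, 5, 7, 9, 11}},
    {"key": "G major", "scale": {0, 2, 4, 6, 7, 9, 11}},
]
scale_fit = lambda a: sum(n % 12 in a["scale"] for n in notes)
rules = [(scale_fit, 1.0)]

print(best_analysis(candidates, rules)["key"])  # C major (F rules out G major)
```

A real preference rule system differs in scale rather than in kind: many interacting rules, search over an enormous space of candidate analyses, and dynamic programming to make the search tractable.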
Below the level of the musical note lies the realm of microsound, of sound particles lasting less than one-tenth of a second. Recent technological advances allow us to probe and manipulate these pinpoints of sound, dissolving the traditional building blocks of music—notes and their intervals—into a more fluid and supple medium. The sensations of point, pulse (series of points), line (tone), and surface (texture) emerge as particle density increases. Sounds coalesce, evaporate, and mutate into other sounds. Composers have used theories of microsound in computer music since the 1950s. Distinguished practitioners include Karlheinz Stockhausen and Iannis Xenakis. Today, with the increased interest in computer and electronic music, many young composers and software synthesis developers are exploring its advantages. Covering all aspects of composition with sound particles, Microsound offers composition theory, historical accounts, technical overviews, acoustical experiments, descriptions of musical works, and aesthetic reflections. The book is accompanied by an audio CD of examples.
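The basic mechanism of composing with sound particles, generating many brief windowed grains and scattering them in time so that density controls whether we hear isolated points or a continuous texture, can be sketched as follows. This is a minimal illustration under our own arbitrary parameter choices, not any particular system described in the book:

```python
import math
import random

SR = 44100  # sample rate in Hz

def grain(freq, dur, sr=SR):
    """One sound particle: a sinusoid under a Hann window, dur < 0.1 s."""
    n = int(dur * sr)
    return [math.sin(2 * math.pi * freq * i / sr) *
            0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))
            for i in range(n)]

def granular_cloud(length_s, density, sr=SR, seed=1):
    """Scatter `density` grains per second at random onsets and pitches.
    As density rises, isolated points fuse into a continuous texture."""
    rng = random.Random(seed)
    out = [0.0] * int(length_s * sr)
    for _ in range(int(density * length_s)):
        g = grain(rng.uniform(200, 2000), rng.uniform(0.01, 0.05))
        start = rng.randrange(len(out) - len(g))
        for i, s in enumerate(g):
            out[start + i] += s
    return out

cloud = granular_cloud(1.0, density=50)  # a one-second cloud of 50 grains
```

Varying `density` from a handful of grains per second to thousands traces exactly the perceptual progression the text describes, from point to pulse to continuous surface.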
Virtual Music is about artificial creativity. Focusing on David Cope's Experiments in Musical Intelligence computer music composing program, the author and a distinguished group of experts discuss many of the issues surrounding the program, including artificial intelligence, music cognition, and aesthetics.
The book is divided into four parts. The first part provides a historical background to Experiments in Musical Intelligence, including examples of historical antecedents, followed by an overview of the program by Douglas Hofstadter. The second part follows the composition of an Experiments in Musical Intelligence work, from the creation of a database to the completion of a new work in the style of Mozart. It includes, in sophisticated lay terms, relatively detailed explanations of how each step in the process contributes to the final composition. The third part consists of perspectives and analyses by Jonathan Berger, Daniel Dennett, Bernard Greenberg, Douglas R. Hofstadter, Steve Larson, and Eleanor Selfridge-Field. The fourth part presents the author's responses to these commentaries, as well as his thoughts on the implications of artificial creativity.
The book (and corresponding Web site) includes an appendix providing extended musical examples referred to and discussed in the book, by composers including Scarlatti, Bach, Mozart, Beethoven, Schubert, Chopin, Puccini, Rachmaninoff, Prokofiev, Debussy, Bartók, and others. It is also accompanied by a CD containing performances of the music in the text.
Musicians begin formal training by acquiring a body of musical concepts commonly known as musicianship. These concepts underlie the musical skills of listening, performance, and composition. Like humans, computer music programs can benefit from a systematic foundation of musical knowledge. This book explores the technology of implementing musical processes such as segmentation, pattern processing, and interactive improvisation in computer programs. It shows how the resulting applications can be used to accomplish tasks ranging from the solution of simple musical problems to the live performance of interactive compositions and the design of musically responsive installations and Web sites.

Machine Musicianship is both a programming tutorial and an exploration of the foundational concepts of musical analysis, performance, and composition. The theoretical foundations are derived from the fields of music theory, computer music, music cognition, and artificial intelligence. The book will be of interest to practitioners of those fields, as well as to performers and composers.

The concepts are programmed using C++ and Max. The accompanying CD-ROM includes working versions of the examples, as well as source code and a hypertext document showing how the code leads to the program's musical functionality.
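The book's examples are written in C++ and Max; as a flavor of the kind of musical process involved, here is a simple segmentation heuristic sketched in Python that splits a note stream at unusually large inter-onset gaps. This is our own Gestalt-style illustration, and the `gap_ratio` threshold is an invented placeholder, not a parameter drawn from the book:

```python
def segment(onsets, gap_ratio=1.5):
    """Split a stream of note-onset times (in seconds) into phrases
    wherever an inter-onset gap is much larger than the average gap,
    a simple Gestalt-style grouping heuristic. Returns lists of note
    indices, one list per phrase."""
    gaps = [b - a for a, b in zip(onsets, onsets[1:])]
    mean = sum(gaps) / len(gaps)
    phrases, current = [], [0]
    for i, g in enumerate(gaps):
        if g > gap_ratio * mean:   # a pause: close the current phrase
            phrases.append(current)
            current = []
        current.append(i + 1)
    phrases.append(current)
    return phrases

# A pause after the third note splits the stream into two phrases:
print(segment([0.0, 0.5, 1.0, 3.0, 3.5, 4.0]))  # [[0, 1, 2], [3, 4, 5]]
```

An interactive system would run such a segmenter incrementally on live input, feeding the resulting phrases to pattern processors and improvisation modules.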