In the 1960s, a team of Stanford musicians, engineers, computer scientists, and psychologists used computing in an entirely novel way: to produce and manipulate sound and create the sonic basis of new musical compositions. This group of interdisciplinary researchers at the nascent Center for Computer Research in Music and Acoustics (CCRMA, pronounced “karma”) helped to develop computer music as an academic field, invent the technologies that underlie it, and usher in the age of digital music.
Music in video games is often sophisticated and complex, composed to engage the player, set the pace of play, and aid interactivity. Composers of video game music must master an array of specialized skills not taught in the conservatory, including the creation of linear loops, music chunks for horizontal resequencing, and compositional fragments for use within a generative framework.
A decade ago, the customizable ringtone was ubiquitous. Almost any crowd of cell phone owners could produce a carillon of tinkly, beeping, synthy, musicalized ringer signals. Ringtones quickly became a multi-billion-dollar global industry and almost as quickly faded away. In The Ringtone Dialectic, Sumanth Gopinath charts the rise and fall of the ringtone economy and assesses its effect on cultural production.
Sound is an integral part of every user experience but a neglected medium in design disciplines. Design of an artifact’s sonic qualities is often limited to the shaping of functional, representational, and signaling roles of sound. The interdisciplinary field of sonic interaction design (SID) challenges these prevalent approaches by considering sound as an active medium that can enable novel sensory and social experiences through interactive technologies.
In Playing with Sound, Karen Collins examines video game sound from the player’s perspective. She explores the many ways that players interact with a game’s sonic aspects—which include not only music but also sound effects, ambient sound, dialogue, and interface sounds—both within and outside of the game. She investigates the ways that meaning is found, embodied, created, evoked, hacked, remixed, negotiated, and renegotiated by players in the space of interactive sound in games.
Volume 2 of Musimathics continues the story of music engineering begun in Volume 1, focusing on the digital and computational domain. Loy goes deeper into the mathematics of music and sound, beginning with digital audio, sampling, and binary numbers, as well as complex numbers and how they simplify representation of musical signals. Chapters cover the Fourier transform, convolution, filtering, resonance, the wave equation, acoustical systems, sound synthesis, the short-time Fourier transform, and the wavelet transform.
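To make the opening topics concrete, here is a minimal illustration (my sketch, not code from the book) of two ideas Loy treats at length: sampling a sinusoid digitally, and using complex exponentials in the discrete Fourier transform to recover its frequency.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform: X[k] = sum_n x[n] * e^(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Sample one second of a 5 Hz sinusoid at 64 Hz (well above the Nyquist rate).
N, fs, f0 = 64, 64, 5
signal = [math.sin(2 * math.pi * f0 * n / fs) for n in range(N)]

# The spectrum's magnitude peaks at bin k = 5, the sinusoid's frequency in Hz
# (bins below N/2 correspond to 0 .. fs/2 here, since the window is one second).
spectrum = dft(signal)
peak = max(range(N // 2), key=lambda k: abs(spectrum[k]))
print(peak)  # → 5
```

The complex exponential packs a cosine and sine correlation into one term, which is exactly the representational simplification the blurb alludes to.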
“Mathematics can be as effortless as humming a tune, if you know the tune,” writes Gareth Loy. In Musimathics, Loy teaches us the tune, providing a friendly and spirited tour of the mathematics of music—a commonsense, self-contained introduction for the nonspecialist reader. It is designed for musicians who find their art increasingly mediated by technology, and for anyone who is interested in the intersection of art and science.
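One small taste of the kind of mathematics the book covers is equal-tempered tuning (this sketch is my illustration, not Loy's code): in twelve-tone equal temperament every semitone multiplies frequency by 2^(1/12), so twelve semitones exactly double it.

```python
# Twelve-tone equal temperament: each semitone scales frequency by 2**(1/12).
SEMITONE = 2 ** (1 / 12)

def pitch(midi_note, a4=440.0):
    """Frequency in Hz of a MIDI note number, with A4 (note 69) tuned to a4 Hz."""
    return a4 * SEMITONE ** (midi_note - 69)

print(round(pitch(69)))      # A4  → 440
print(round(pitch(81)))      # A5, one octave up → 880
print(round(pitch(60), 2))   # middle C → 261.63
```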
SuperCollider is one of the most important domain-specific audio programming languages, with potential applications that include real-time interaction, installations, electroacoustic pieces, generative music, and audiovisuals. The SuperCollider Book is the essential reference to this powerful and flexible language, offering students and professionals a collection of tutorials, essays, and projects.
This comprehensive handbook of mathematical and programming techniques for audio signal processing will be an essential reference for all computer musicians, computer scientists, engineers, and anyone interested in audio. Designed to be used by readers with varying levels of programming expertise, it not only provides the foundations for music and audio development but also tackles issues that sometimes remain mysterious even to experienced software designers.
Designing Sound teaches students and professional sound designers to understand and create sound effects starting from nothing. Its thesis is that any sound can be generated from first principles, guided by analysis and synthesis. The text takes a practitioner’s perspective, exploring the basic principles of making ordinary, everyday sounds using freely available software. Readers use the Pure Data (Pd) language to construct sound objects, which are more flexible and useful than recordings.
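Designing Sound builds its sound objects as Pure Data patches; as a rough textual analogue of the same analysis-then-synthesis idea (a hedged sketch, not the book's Pd patches), the Python snippet below synthesizes a bell-like tone from first principles—a few inharmonic partials, each with its own exponential decay, summed and written to a WAV file.

```python
import math
import struct
import wave

SR = 44100  # sample rate in Hz

def bell(freq=440.0, dur=1.0):
    """Sum a few inharmonic partials with exponential decays: a crude bell model."""
    # (frequency ratio, amplitude, decay rate) — illustrative values, not measured data
    partials = [(1.00, 1.00, 3.0), (2.76, 0.60, 5.0), (5.40, 0.25, 8.0)]
    samples = []
    for i in range(int(SR * dur)):
        t = i / SR
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * freq * r * t)
                for r, a, d in partials)
        samples.append(s)
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]  # normalize to [-1, 1]

# Write 16-bit mono PCM so the result plays in any audio player.
with wave.open("bell.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in bell()))
```

Because every parameter is explicit—partial ratios, amplitudes, decay rates—the sound can be reshaped at will, which is precisely the flexibility the blurb claims for synthesized sound objects over recordings.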