The digital era promises, as did many other technological developments before it, the transformation of society: with the computer, we can transcend time, space, and politics-as-usual. In The Digital Sublime, Vincent Mosco goes beyond the usual stories of technological breakthrough and economic meltdown to explore the myths constructed around the new digital technology and why we feel compelled to believe in them. He tells us that what kept enthusiastic investors in the dotcom era bidding up stocks even after the crash had begun was not willful ignorance of the laws of economics but belief in the myth that cyberspace was opening up a new world.

Myths are not just falsehoods that can be disproved, Mosco points out, but stories that lift us out of the banality of everyday life into the possibility of the sublime. He argues that if we take what we know about cyberspace and situate it within what we know about culture -- specifically the central post-Cold War myths of the end of history, geography, and politics -- we will add to our knowledge about the digital world; we need to see it "with both eyes" -- that is, to understand it both culturally and materially.

After examining the myths of cyberspace and going back in history to look at the similar mythic pronouncements prompted by past technological advances -- the telephone, the radio, and television, among others -- Mosco takes us to Ground Zero. In the final chapter he considers the twin towers of the World Trade Center -- our icons of communication, information, and trade -- and their part in the politics, economics, and myths of cyberspace.
We live in the era of Big Data, with storage and transmission capacity measured not just in terabytes but in petabytes (where peta- denotes a quadrillion, or a thousand trillion). Data collection is constant and even insidious, with every click and every “like” stored somewhere for something. This book reminds us that data is anything but “raw,” that we shouldn’t think of data as a natural resource but as a cultural one that needs to be generated, protected, and interpreted. The book’s essays describe eight episodes in the history of data from the predigital to the digital. Together they address such issues as the ways that different kinds of data and different domains of inquiry are mutually defining; how data are variously “cooked” in the processes of their collection and use; and conflicts over what can—or can’t—be “reduced” to data. Contributors discuss the intellectual history of data as a concept; describe early financial modeling and some unusual sources for astronomical data; discover the prehistory of the database in newspaper clippings and index cards; and consider contemporary “dataveillance” of our online habits as well as the complexity of scientific data curation. Essay authors: Geoffrey C. Bowker, Kevin R. Brine, Ellen Gruber Garvey, Lisa Gitelman, Steven J. Jackson, Virginia Jackson, Markus Krajewski, Mary Poovey, Rita Raley, David Ribes, Daniel Rosenberg, Matthew Stanley, Travis D. Williams
The movement against restrictive digital copyright protection arose largely in response to the excesses of the Digital Millennium Copyright Act (DMCA) of 1998. In The Digital Rights Movement, Hector Postigo shows that what began as an assertion of consumer rights to digital content has become something broader: a movement concerned not just with consumers and gadgets but with cultural ownership. Increasingly stringent laws and technological measures are more than inconveniences; they lock up access to our “cultural commons.”
Postigo describes the legislative history of the DMCA and how policy “blind spots” produced a law at odds with existing and emerging consumer practices. Yet the DMCA established a political and legal rationale that has since been brought to bear on digital media, the Internet, and other new technologies. Drawing on social movement theory and science and technology studies, Postigo presents case studies of resistance to increased control over digital media, describing a host of tactics that range from hacking to lobbying.
Postigo discusses the movement’s new, user-centered conception of “fair use” that seeks to legitimize noncommercial personal and creative uses such as copying legitimately purchased content and remixing music and video tracks. He introduces the concept of technological resistance--when hackers and users design and deploy technologies that allow access to digital content despite technological protection mechanisms--as the flip side to the technological enforcement represented by digital copy protection and a crucial tactic for the movement.
Journalism has embraced digital media in its struggle to survive. But most online journalism just translates existing practices to the Web: stories are written and edited as they are for print; video and audio features are produced as they would be for television and radio. The authors of Newsgames propose a new way of doing good journalism: videogames.
Videogames are native to computers rather than a digitized form of prior media. Games simulate how things work by constructing interactive models; journalism as game involves more than just revisiting old forms of news production. Wired magazine’s game Cutthroat Capitalism, for example, explains the economics of Somali piracy by putting the player in command of a pirate ship, offering choices for hostage negotiation strategies.
Videogames do not offer a panacea for the ills of contemporary news organizations. But if the industry embraces them as a viable method of doing journalism--not just an occasional treat for online readers--newsgames can make a valuable contribution.
Wikipedia, the online encyclopedia, is built by a community--a community of Wikipedians who are expected to “assume good faith” when interacting with one another. In Good Faith Collaboration, Joseph Reagle examines this unique collaborative culture.
Wikipedia, says Reagle, is not the first effort to create a freely shared, universal encyclopedia; its early twentieth-century ancestors include Paul Otlet’s Universal Repository and H. G. Wells’s proposal for a World Brain. Both of these projects, like Wikipedia, were fueled by new technology--which at the time included index cards and microfilm. What distinguishes Wikipedia from these and other more recent ventures is its good-faith collaborative culture, as seen not only in the writing and editing of articles but also in their discussion pages and edit histories. Keeping an open perspective on both knowledge claims and other contributors, Reagle argues, creates an extraordinary collaborative potential.
Wikipedia’s style of collaborative production has been imitated, analyzed, and satirized. Despite the social unease over its implications for individual autonomy, institutional authority, and the character (and quality) of cultural products, Wikipedia’s good-faith collaborative culture has brought us closer than ever to a realization of the century-old pursuit of a universal encyclopedia.
Today--following housing bubbles, bank collapses, and high unemployment--the Internet remains the most reliable mechanism for fostering innovation and creating new wealth. The Internet’s remarkable growth has been fueled by innovation. In this pathbreaking book, Barbara van Schewick argues that this explosion of innovation is not an accident, but a consequence of the Internet’s architecture--a consequence of technical choices regarding the Internet’s inner structure that were made early in its history.
The Internet’s original architecture was based on four design principles: modularity, layering, and two versions of the celebrated but often misunderstood end-to-end arguments. But today, the Internet’s architecture is changing in ways that deviate from the Internet’s original design principles, removing the features that have fostered innovation and threatening the Internet’s ability to spur economic growth, to improve democratic discourse, and to provide a decentralized environment for social and cultural interaction in which anyone can participate. If no one intervenes, network providers’ interests will drive networks further away from the original design principles. If the Internet’s value for society is to be preserved, van Schewick argues, policymakers will have to intervene and protect the features that were at the core of the Internet’s success.
Digital media and network technologies are now part of everyday life. The Internet has become the backbone of communication, commerce, and media; the ubiquitous mobile phone connects us with others as it removes us from any stable sense of location. Networked Publics examines the ways that the social and cultural shifts created by these technologies have transformed our relationships to (and definitions of) place, culture, politics, and infrastructure.
Four chapters—each by an interdisciplinary team of scholars using collaborative software—provide a synoptic overview along with illustrative case studies. The chapter on place describes how digital networks enable us to be present in physical and networked places simultaneously (on the phone while on the road; on the Web while at a café)—often at the expense of non-digital commitments. The chapter on culture explores the growth of amateur-produced and -remixed content online and the impact of these practices on the music, anime, advertising, and news industries. The chapter on politics examines the new networked modes of bottom-up political expression and mobilization, and the difficulty in channeling online political discourse into productive political deliberation. And finally, the chapter on infrastructure notes the tension between openness and control in the flow of information, as seen in the current controversy over net neutrality. An introduction by anthropologist Mizuko Ito and a conclusion by architecture theorist Kazys Varnelis frame the chapters, giving overviews of the radical nature of these transformations.
The Internet lets us share perfect copies of our work with a worldwide audience at virtually no cost. We take advantage of this revolutionary opportunity when we make our work “open access”: digital, online, free of charge, and free of most copyright and licensing restrictions. Open access is made possible by the Internet and copyright-holder consent, and many authors, musicians, filmmakers, and other creators who depend on royalties are understandably unwilling to give their consent. But for 350 years, scholars have written peer-reviewed journal articles for impact, not for money, and so are free to consent to open access without losing revenue.
In this concise introduction, Peter Suber tells us what open access is and isn’t, how it benefits authors and readers of research, how we pay for it, how it avoids copyright problems, how it has moved from the periphery to the mainstream, and what its future may hold. Distilling a decade of Suber’s influential writing and thinking about open access, this is the indispensable book on the subject for researchers, librarians, administrators, funders, publishers, and policy makers.
The use of open-source software (OSS)--readable software source code that can be copied, modified, and distributed freely--has expanded dramatically in recent years. The number of OSS projects hosted on SourceForge.net (the largest hosting Web site for OSS), for example, grew from just over 100,000 in 2006 to more than 250,000 at the beginning of 2011. But why are some projects successful--that is, able to produce usable software and sustain ongoing development over time--while others are abandoned? In this book, the product of the first large-scale empirical study to look at social, technical, and institutional aspects of OSS, Charles Schweik and Robert English examine factors that lead to success in OSS projects and work toward a better understanding of Internet-based collaboration.
Drawing on literature from many disciplines and using a theoretical framework developed for the study of environmental commons, Schweik and English examine stages of OSS development, presenting multivariate statistical models of success and abandonment. Schweik and English argue that analyzing the conditions of OSS successes may also inform Internet collaborations in fields beyond software engineering, particularly those that aim to solve complex technical, social, and political problems.
The urban youth frequenting the Internet cafés of Accra, Ghana, who are decidedly not members of their country’s elite, use the Internet largely as a way to orchestrate encounters across distance and amass foreign ties--activities once limited to the wealthy, university-educated classes. The Internet, accessed on second-hand computers (castoffs from the United States and Europe), has become for these youths a means of enacting a more cosmopolitan self. In Invisible Users, Jenna Burrell offers a richly observed account of how these Internet enthusiasts have adopted, and adapted to their own priorities, a technological system that was not designed with them in mind.
Burrell describes the material space of the urban Internet café and the virtual space of push and pull between young Ghanaians and the foreigners they encounter online; the region’s famous 419 scam strategies and the rumors of “big gains” that fuel them; the influential role of churches and theories about how the supernatural operates through the network; and development rhetoric about digital technologies and the future viability of African Internet cafés in the region.
Burrell, integrating concepts from science and technology studies and African studies with empirical findings from her own fieldwork in Ghana, captures the interpretive flexibility of technology by users in the margins but also highlights how their invisibility limits their full inclusion in a global network society.