
Programming and Programming Languages


Broadcast media, such as satellite, ground radio, and multipoint cable channels, can easily provide full connectivity for communication among geographically distributed users. One of the most important problems in the design of networks (referred to as packet broadcast networks) that can take practical advantage of broadcast channels is how to achieve efficient sharing of a single common channel.

Many multiple access protocols, or algorithms, for packet broadcast networks have been proposed, and much work has been done on the performance evaluation of the protocols. A variety of techniques have been used to analyze the performance; however, this is the first book to provide a unified approach to the performance evaluation problem by means of an approximate analytical technique called equilibrium point analysis.

Two types of packet broadcast networks—satellite networks and local area networks—are considered, and eight multiple access protocols are studied and their performance analyzed in terms of throughput and average message delay.
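As a point of reference for the kind of throughput analysis the book performs, the classical slotted-ALOHA result under Poisson offered load G is S = G·e^(-G), peaking at 1/e (about 0.368) when G = 1. The short sketch below computes this textbook baseline; it is background material, not the book's equilibrium point analysis itself.

```python
import math

def slotted_aloha_throughput(G):
    """Expected successful packets per slot for slotted ALOHA
    at Poisson offered load G (packets per slot): S = G * exp(-G)."""
    return G * math.exp(-G)

for G in (0.5, 1.0, 2.0):
    print(f"G = {G:.1f}  ->  S = {slotted_aloha_throughput(G):.3f}")
# throughput peaks at G = 1.0, where S = 1/e, roughly 0.368
```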

Contents: Part I: Fundamentals. Multiple Access Protocols and Performance. Equilibrium Point Analysis. Part II: Satellite Networks. S-ALOHA. R-ALOHA. ALOHA-Reservation. TDMA-Reservation. SRUC. TDMA. Performance Comparisons of the Protocols for Satellite Networks. Part III: Local Area Networks. Buffered CSMA/CD. BRAM.

Performance Analysis of Multiple Access Protocols is included in the Computer Systems Series, Research Reports and Notes, edited by Herb Schwetman.

This book provides students with a deep, working understanding of the essential concepts of programming languages. Most of these essentials relate to the semantics, or meaning, of program elements, and the text uses interpreters (short programs that directly analyze an abstract representation of the program text) to express the semantics of many essential language elements in a way that is both clear and executable. The approach is both analytical and hands-on. The book provides views of programming languages at widely varying levels of abstraction, maintaining a clear connection between the high-level and low-level views. Exercises are a vital part of the text and are scattered throughout; the text explains the key concepts, and the exercises explore alternative designs and other issues. The complete Scheme code for all the interpreters and analyzers in the book can be found online through The MIT Press Web site. For this new edition, each chapter has been revised and many new exercises have been added. Significant additions have been made to the text, including completely new chapters on modules and continuation-passing style. Essentials of Programming Languages can be used for both graduate and undergraduate courses, and for continuing education courses for programmers.

Daniel P. Friedman is Professor of Computer Science at Indiana University and is the author of many books published by The MIT Press, including The Little Schemer (fourth edition, 1995), The Seasoned Schemer (1995), and A Little Java, A Few Patterns (1997), all coauthored with Matthias Felleisen, as well as The Reasoned Schemer (2005), coauthored with William E. Byrd and Oleg Kiselyov. Mitchell Wand is Professor of Computer Science at Northeastern University.
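To make the blurb's central idea concrete (expressing the meaning of language elements with a short interpreter over an abstract representation of program text), here is a minimal sketch. It is written in Python rather than the book's Scheme, and its tiny expression language (literals, variables, difference, and let) is an illustrative assumption, not the book's own language.

```python
def value_of(expr, env):
    """Evaluate an expression represented as a nested tuple (abstract syntax)."""
    op = expr[0]
    if op == "lit":    # ("lit", n): a number evaluates to itself
        return expr[1]
    if op == "var":    # ("var", name): look the name up in the environment
        return env[expr[1]]
    if op == "diff":   # ("diff", e1, e2): subtraction
        return value_of(expr[1], env) - value_of(expr[2], env)
    if op == "let":    # ("let", name, e1, body): extend the environment, evaluate body
        return value_of(expr[3], {**env, expr[1]: value_of(expr[2], env)})
    raise ValueError(f"unknown expression: {op!r}")

# let x = 5 in x - 2  ==>  3
program = ("let", "x", ("lit", 5), ("diff", ("var", "x"), ("lit", 2)))
print(value_of(program, {}))  # 3
```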

Building a Modern Computer from First Principles

In the early days of computer science, the interactions of hardware, software, compilers, and operating system were simple enough to allow students to see an overall picture of how computers worked. With the increasing complexity of computer technology and the resulting specialization of knowledge, such clarity is often lost. Unlike other texts that cover only one aspect of the field, The Elements of Computing Systems gives students an integrated and rigorous picture of applied computer science as it comes into play in the construction of a simple yet powerful computer system.

Indeed, the best way to understand how computers work is to build one from scratch, and this textbook leads students through twelve chapters and projects that gradually build a basic hardware platform and a modern software hierarchy from the ground up. In the process, students gain hands-on knowledge of hardware architecture, operating systems, programming languages, compilers, data structures, algorithms, and software engineering. Using this constructive approach, the book exposes a significant body of computer science knowledge and demonstrates how theoretical and applied techniques taught in other courses fit into the overall picture.

Designed to support one- or two-semester courses, the book is based on an abstraction-implementation paradigm; each chapter presents a key hardware or software abstraction, a proposed implementation that makes it concrete, and an actual project. The emerging computer system can be built by following the chapters, although this is only one option, since the projects are self-contained and can be done or skipped in any order. All the computer science knowledge necessary for completing the projects is embedded in the book, the only prerequisite being programming experience.

The book's web site provides all the tools and materials necessary to build the hardware and software systems described in the text, including two hundred test programs for the twelve projects. The projects and systems can be modified to meet various teaching needs, and all the supplied software is open-source.
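As a taste of the book's constructive approach (though in plain Python rather than the hardware description language the projects actually use), the sketch below starts from a single NAND primitive and composes further logic from it; the particular gate set chosen here is an illustrative assumption.

```python
def nand(a, b):
    """The one primitive: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# everything else is composed from nand
def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))
def xor_(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits: return (sum, carry), built entirely from nand."""
    return xor_(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = 10 in binary
```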

A Programming Handbook for Visual Designers and Artists

It has been more than twenty years since desktop publishing reinvented design, and it's clear that there is a growing need for designers and artists to learn programming skills to fill the widening gap between their ideas and the capability of their purchased software. This book is an introduction to the concepts of computer programming within the context of the visual arts. It offers a comprehensive reference and text for Processing (www.processing.org), an open-source programming language that can be used by students, artists, designers, architects, researchers, and anyone who wants to program images, animation, and interactivity.

The ideas in Processing have been tested in classrooms, workshops, and arts institutions, including UCLA, Carnegie Mellon, New York University, and Harvard University. Tutorial units make up the bulk of the book and introduce the syntax and concepts of software (including variables, functions, and object-oriented programming), cover such topics as photography and drawing in relation to software, and feature many short, prototypical example programs with related images and explanations. More advanced professional projects from such domains as animation, performance, and typography are discussed in interviews with their creators. "Extensions" present concise introductions to further areas of investigation, including computer vision, sound, and electronics. Appendixes, references to other material, and a glossary contain additional technical details. Processing can be used by reading each unit in order, or by following each category from the beginning of the book to the end. The Processing software and all of the code presented can be downloaded and run for future exploration.
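For a sense of what the book's short, prototypical example programs look like, here is a minimal sketch written for Processing's Python Mode (a choice made for consistency with the other examples on this page; the book's own examples use Processing's standard Java-like syntax). It draws a circle that follows the mouse.

```python
# runs in Processing's Python Mode, not in a plain Python interpreter
def setup():
    size(400, 400)       # open a 400 x 400 pixel canvas

def draw():              # called repeatedly, once per frame
    background(255)      # clear to white
    fill(0)              # draw in black
    ellipse(mouseX, mouseY, 40, 40)  # circle at the current mouse position
```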

Essays by: Alexander R. Galloway, Golan Levin, R. Luke DuBois, Simon Greenwold, Francis Li, Hernando Barragán

Interviews with: Jared Tarbell, Martin Wattenberg, James Paterson, Erik van Blokland, Ed Burton, Josh On, Jürg Lehni, Auriea Harvey and Michaël Samyn, Mathew Cullen and Grady Hall, Bob Sabiston, Jennifer Steinkamp, Ruth Jarman and Joseph Gerhardt, Sue Costabile, Chris Csikszentmihályi, Golan Levin and Zachary Lieberman, Mark Hansen


What is the status of the Free and Open Source Software (F/OSS) revolution? Has the creation of software that can be freely used, modified, and redistributed transformed industry and society, as some predicted, or is this transformation still a work in progress? Perspectives on Free and Open Source Software brings together leading analysts and researchers to address this question, examining specific aspects of F/OSS in a way that is both scientifically rigorous and highly relevant to real-life managerial and technical concerns.

The book analyzes a number of key topics: the motivation behind F/OSS—why highly skilled software developers devote large amounts of time to the creation of "free" products and services; the objective, empirically grounded evaluation of software—necessary to counter what one chapter author calls the "steamroller" of F/OSS hype; the software engineering processes and tools used in specific projects, including Apache, GNOME, and Mozilla; the economic and business models that reflect the changing relationships between users and firms, technical communities and firms, and between competitors; and legal, cultural, and social issues, including one contribution that suggests parallels between "open code" and "open society" and another that points to the need for understanding the movement's social causes and consequences.

Collaborative Ownership and the Digital Economy
Edited by Rishab Ghosh

Open source software is considered by many to be a novelty and the open source movement a revolution. Yet the collaborative creation of knowledge has gone on for as long as humans have been able to communicate. CODE looks at the collaborative model of creativity -- with examples ranging from collective ownership in indigenous societies to free software, academic science, and the human genome project -- and finds it an alternative to proprietary frameworks for creativity based on strong intellectual property rights.

Intellectual property rights, argues Rishab Ghosh in his introduction, were ostensibly developed to increase creativity; but today, policy decisions that treat knowledge and art as if they were physical forms of property actually threaten to decrease creativity, limit public access to creativity, and discourage collaborative creativity. "Newton should have had to pay a license fee before being allowed even to see how tall the 'shoulders of giants' were, let alone to stand upon them," he writes.

The contributors to CODE, from such diverse fields as economics, anthropology, law, and software development, examine collaborative creativity from a variety of perspectives, looking at new and old forms of creative collaboration and the mechanisms emerging to study them. Discussing the philosophically resonant issues of ownership, property, and the commons, they ask whether the increasing application of the language of property rights to knowledge and creativity constitutes a second enclosure movement -- or whether the worldwide acclaim for free software signifies a renaissance of the commons. Two concluding chapters offer concrete possibilities for both alternatives, with one proposing the establishment of "positive intellectual rights" to information and another issuing a warning against the threats to networked knowledge posed by globalization.

After completing this self-contained course on server-based Internet applications software, students who start with only the knowledge of how to write and debug a computer program will have learned how to build web-based applications on the scale of Amazon.com. Unlike the desktop applications that most students have already learned to build, server-based applications have multiple simultaneous users. This fact, coupled with the unreliability of networks, gives rise to the problems of concurrency and transactions, which students learn to manage by using the relational database system.
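Here is a minimal sketch of the transactional discipline described above, using Python's built-in sqlite3 module (the book targets server-grade relational databases; the accounts table and amounts here are illustrative assumptions). Either both updates take effect or neither does, which is what protects concurrent users from seeing half-finished work.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])

try:
    with conn:  # one transaction: commits on success, rolls back on any exception
        conn.execute("UPDATE accounts SET balance = balance - 60 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 60 WHERE id = 2")
except sqlite3.Error:
    pass  # the rollback has already undone any partial transfer

print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# [(1, 40), (2, 60)]
```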

After working their way to the end of the book, students will have the skills to take vague and ambitious specifications and turn them into a system design that can be built and launched in a few months. They will be able to test prototypes with end-users and refine the application design. They will understand how to meet the challenge of extreme business requirements with automatic code generation and the use of open-source toolkits where appropriate. Students will understand HTTP, HTML, SQL, mobile browsers, VoiceXML, data modeling, page flow and interaction design, server-side scripting, and usability analysis.

The book, which originated as the text for an MIT course, is suitable for classroom use and will be a useful reference for software professionals developing multi-user Internet applications. It will also help managers evaluate such commercial software as Microsoft SharePoint or Microsoft Content Management Server.

The goal of The Reasoned Schemer is to help the functional programmer think logically and the logic programmer think functionally. The authors of The Reasoned Schemer believe that logic programming is a natural extension of functional programming, and they demonstrate this by extending the functional language Scheme with logical constructs—thereby combining the benefits of both styles. The extension encapsulates most of the ideas in the logic programming language Prolog. The pedagogical method of The Reasoned Schemer is a series of questions and answers, which proceed with the characteristic humor that marked The Little Schemer and The Seasoned Schemer. Familiarity with a functional language or with the first eight chapters of The Little Schemer is assumed. Adding logic capabilities required the introduction of new forms. The authors' goal is to show to what extent writing logic programs is the same as writing functional programs using these forms. In this way, the reader of The Reasoned Schemer will come to understand how simple logic programming is and how easy it is to define functions that behave like relations.
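To illustrate what "functions that behave like relations" means, here is a rough Python analogue (not the book's Scheme extension): an addition relation that, given any subset of its arguments, enumerates the triples consistent with them. The argument convention and search bound are illustrative assumptions.

```python
def addo(x=None, y=None, z=None, bound=20):
    """Relational addition over small non-negative integers: yield every
    (x, y, z) with x + y == z, filling in arguments left as None."""
    xs = [x] if x is not None else range(bound)
    ys = [y] if y is not None else range(bound)
    for a in xs:
        for b in ys:
            if z is None or a + b == z:
                yield (a, b, a + b)

print(list(addo(x=2, y=3)))   # runs 'forward':  [(2, 3, 5)]
print(list(addo(x=2, z=5)))   # runs 'backward': [(2, 3, 5)]
print(list(addo(z=2)))        # enumerates:      [(0, 2, 2), (1, 1, 2), (2, 0, 2)]
```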

Understanding an Indispensable Technology and Industry

Software has gone from obscurity to indispensability in less than fifty years. Although other industries have followed a similar trajectory, software and its supporting industry are different. In this book the authors explain, from a variety of perspectives, how software and the software industry are different—technologically, organizationally, and socially.

The growing importance of software requires professionals in all fields to deal with both its technical and social aspects; therefore, users and producers of software need a common vocabulary to discuss software issues. In Software Ecosystem, Messerschmitt and Szyperski address the overlapping and related perspectives of technologists and nontechnologists. After an introductory chapter on technology, the book is organized around six points of view: users, and what they need software to accomplish for them; software engineers and developers, who translate the user's needs into program code; managers, who must orchestrate the resources, material and human, to operate the software; industrialists, who organize companies to produce and distribute software; policy experts and lawyers, who must resolve conflicts inside and outside the industry without discouraging growth and innovation; and economists, who offer insights into how the software market works. Each chapter not only considers the issues most relevant to its own perspective but also relates those issues to the others. Nontechnologists will appreciate the context in which technology is discussed; technical professionals will gain more understanding of the social issues that should be considered in order to make software more useful and successful.

Uncertainty is a fundamental and unavoidable feature of daily life; in order to deal with uncertainty intelligently, we need to be able to represent it and reason about it. In this book, Joseph Halpern examines formal ways of representing uncertainty and considers various logics for reasoning about it. While the ideas presented are formalized in terms of definitions and theorems, the emphasis is on the philosophy of representing and reasoning about uncertainty; the material is accessible and relevant to researchers and students in many fields, including computer science, artificial intelligence, economics (particularly game theory), mathematics, philosophy, and statistics.

Halpern begins by surveying possible formal systems for representing uncertainty, including probability measures, possibility measures, and plausibility measures. He considers the updating of beliefs based on changing information and the relation to Bayes' theorem; this leads to a discussion of qualitative, quantitative, and plausibilistic Bayesian networks. He considers not only the uncertainty of a single agent but also uncertainty in a multi-agent framework. Halpern then considers formal logical systems for reasoning about uncertainty. He discusses knowledge and belief; default reasoning and the semantics of defaults; reasoning about counterfactuals and combining probability and counterfactuals; belief revision; first-order modal logic; and statistics and beliefs. He includes a series of exercises at the end of each chapter.
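A small worked instance of the belief updating the summary mentions, via Bayes' theorem (the disease and test numbers are illustrative assumptions, not Halpern's):

```python
prior = 0.01           # P(disease)
sensitivity = 0.95     # P(positive test | disease)
false_positive = 0.05  # P(positive test | no disease)

# total probability of a positive test
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # 0.161: even after a positive test, disease stays unlikely
```

The jump from a 1 percent prior to only about a 16 percent posterior is exactly the kind of calculation that motivates representing uncertainty carefully rather than relying on intuition.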
