Herbert Schwetman

  • Research Directions in Object-Oriented Programming

    Gul Agha, David Beech, Daniel G. Bobrow, Ole-Johan Dahl, Joseph A. Goguen, Brent Hailpern, Kenneth M. Kahn, Ole Lehrmann Madsen, David Maier, Andrea Skarra, Harold L. Ossher, Steven Reiss, Herbert Schwetman, Reid Smith, Alan Snyder, Peter Wegner, and Stanley Zdonik

    Once a radical notion, object-oriented programming is one of today's most active research areas.

    Once a radical notion, object-oriented programming is one of today's most active research areas. It is especially well suited to the design of very large software systems involving many programmers working on the same project. The original contributions in this book will provide researchers and students in programming languages, databases, and programming semantics with the most complete survey of the field available. Broad in scope and deep in its examination of substantive issues, the book focuses on the major topics of object-oriented languages, models of computation, mathematical models, object-oriented databases, and object-oriented environments.

    The object-oriented languages include Beta, the Scandinavian successor to Simula (a chapter by Bent Kristensen, whose group has had the longest experience with object-oriented programming, reveals how that experience has shaped the group's vision today); CommonObjects, a Lisp-based language with abstraction; Actors, a low-level language for concurrent modularity; and Vulcan, a Prolog-based concurrent object-oriented language. New computational models of inheritance, composite objects, block-structure layered systems, and classification are covered, and theoretical papers on functional object-oriented languages and object-oriented specification are included in the section on mathematical models.

    The three chapters on object-oriented databases (including David Maier's "Development and Implementation of an Object-Oriented Database Management System," which spans the programming and database worlds by integrating procedural and representational capability and the requirements of multi-user persistent storage) and the two chapters on object-oriented environments provide a representative sample of good research in these two important areas.

    Research Directions in Object-Oriented Programming is included in the Computer Systems series, edited by Herb Schwetman.

    • Hardcover $75.00

Contributor

  • Research Directions in Concurrent Object-Oriented Programming

    Gul Agha, Peter Wegner, and Akinori Yonezawa

    This collection of original research provides a comprehensive survey of developments at the leading edge of concurrent object-oriented programming. It documents progress—from general concepts to specific descriptions—in programming language design, semantic tools, systems, architectures, and applications. Chapters are written at a tutorial level and are accessible to a wide audience, including researchers, programmers, and technical managers.

    The problem of designing systems for concurrent programming has become an increasingly important area of research in computer science with a concomitant increase in the popularity of object-based programming. Because parallelism is a natural consequence of the use of objects, the development of systems for concurrent object-oriented programming is providing important software support for a new generation of concurrent computers.

    • Hardcover $69.95
    • Paperback $62.00 £48.00
  • The Organization of Reduction, Data Flow, and Control Flow Systems

    Werner Kluge

    This book provides a timely reexamination of computer organization and computer architecture.

    In light of research over the last decade on new ways of representing and performing computations, this book provides a timely reexamination of computer organization and computer architecture. It systematically investigates the basic organizational concepts of reduction, data flow, and control flow (or state transition) and their relationship to the underlying programming paradigms. For each of these concepts, Kluge looks at how principles of language organization translate into architectures and how architectural features translate into concrete system implementations, comparing them in order to identify their similarities and differences. The focus is primarily on a functional programming paradigm based on a full-fledged operational λ-calculus and on its realization by various reduction systems.

    Kluge first presents a brief outline of the overall configuration of a computing system and of an operating system kernel, introduces elements of the theory of Petri nets as modeling tools for nonsequential systems and processes, and uses a simple form of higher-order Petri nets to identify, by means of examples, the operational and control disciplines that govern the organization of reduction, data flow, and control flow computations. He then introduces the notions of abstract algorithms and of reductions and includes an overview of the theory of the λ-calculus. The next five chapters describe the various computing engines that realize the reduction semantics of a full-fledged λ-calculus. The remaining chapters provide self-contained investigations of the G-machine, SKI combinator reduction, and the data flow approach for implementing the functional programming paradigm. This is followed by a detailed description of a typical control flow (or von Neumann) machine architecture (a VAX11 system). Properties of these machines are summarized in the concluding chapter, which classifies them according to the semantic models they support.

    • Paperback $12.75 £9.50
  • Introduction to Object-Oriented Databases

    Won Kim

    Introduction to Object-Oriented Databases provides the first unified and coherent presentation of the essential concepts and techniques of object-oriented databases. It consolidates the results of research and development in the semantics and implementation of a full spectrum of database facilities for object-oriented systems, including data model, query, authorization, schema evolution, storage structures, query optimization, transaction management, versions, composite objects, and integration of a programming language and a database system. The book draws on the author's Orion project at MCC, currently the most advanced object-oriented database system, and places this work in a larger context by using relational database systems and other object-oriented systems for comparison.

    Contents Introduction • Data Model • Basic Interface • Relationships with Non-Object-Oriented Databases • Schema Modification • Model of Queries • Query Language • Authorization • Storage Structures • Query Processing • Transaction Management • Semantic Extensions • Integrating Object-Oriented Programming and Databases • Architecture • Survey of Object-Oriented Database Systems • Directions for Future Research and Development

    • Hardcover $55.00 £37.95
    • Paperback $30.00 £24.00
  • ABCL

    An Object-Oriented Concurrent System

    Akinori Yonezawa

    This book provides an overview of the new paradigm through the programming language ABCL.

    Object-oriented concurrent programming is a major new programming paradigm that exploits the benefits of object orientation, concurrency, and distributed systems. This book provides an overview of the new paradigm through the programming language ABCL. It presents a complete description of the theory, programming, implementation, and application of the ABCL object-oriented concurrent system and expands on Yonezawa and Tokoro's work published in Object-Oriented Concurrent Programming.

    The extensively revised tutorials and papers cover parallel computation models, programming languages, programming techniques, language implementations in multi-processor architectures, programming environments, applications in distributed event simulation and construction of an operating system, parallel algorithms for natural language on-line parsing, and such new theoretical issues as reflective computation. The book also includes a user's guide to ABCL.

    ABCL: An Object-Oriented Concurrent System is included in the Computer Systems Series, edited by Herb Schwetman.

    • Hardcover $52.00
  • Interpretation and Instruction Path Coprocessing

    Eddy H. Debaere and Jan M. Van Campenhout

    Interpretation and Instruction Path Coprocessing presents an analysis of interpretive systems, and cost-effective software and hardware optimizations of the interpretive process on CISC and RISC architectures.

    Many languages and interactive software packages on personal computers use interpretive implementation techniques. Interpretation and Instruction Path Coprocessing presents an analysis of interpretive systems, and cost-effective software and hardware optimizations of the interpretive process on CISC and RISC architectures. It groups and presents concepts that are seldom found together and elaborates on ideas that are not part of the mainstream of recent hardware developments. The authors explore the techniques used in interpretive systems on general purpose microprocessor architectures. A key contribution is their investigation of how, and to what extent, interpretive execution can be accelerated using dedicated coprocessors. They analyze the advantages and drawbacks of interpretation, present both software and hardware techniques to improve interpretation, and introduce the concept of coprocessing in the instruction path as a cost-effective way of boosting the execution speed of interpreters running on microprocessors. The RISC versus CISC issue is analyzed from the viewpoint of interpretation and its hardware support.

    Interpretation and Instruction Path Coprocessing is included in the Computer Systems Series, edited by Herb Schwetman.

    • Hardcover $31.50
  • Fault Tolerance Through Reconfiguration in VLSI and WSI Arrays

    Roberto Negrini, Mariagiovanna Sami, and R. Stefanelli

    This book brings together and discusses the most significant results scattered across the vast field of research in fault tolerance.

    Fault tolerance is one of the principal mechanisms for achieving high reliability and high availability in digital systems. It is the survival attribute of digital systems. This book brings together and discusses the most significant results scattered across the vast field of research in fault tolerance. It focuses in particular on reconfiguration techniques and presents the authors' own results in the reconfiguration of processing arrays. By means of dedicated arrays, they note, it is possible to build systems that are orders of magnitude more powerful than programmed computers. Their treatment of networks and arrays is extensive and has wide applicability.

    Contents Introduction • Typical Processing Arrays • Failure Mechanisms and Fault Models • Basic Problems of Fault-Tolerance Through Array Reconfiguration • Technologies Supporting Reconfiguration • Testing • Reconfiguration: An Introduction • The Diogenes Approach • Reconfiguration for Linear Arrays • Graph-Theoretical Approaches to Reconfiguration • Local Reconfiguration • Global Reconfiguration Techniques: Row/Column Elimination • Global Mapping: Index Mapping Reconfiguration Techniques • Reconfiguration Based on Request-Acknowledge Local Protocols • Reconfiguration of Multiple-Pipeline Structures • Some Extensions Toward Time Redundancy • Appendix: Reliability Prediction of Arrays

    Fault Tolerance Through Reconfiguration in VLSI and WSI Arrays is included in the Computer Systems series, edited by Herb Schwetman.

    • Hardcover $47.50
  • Networks and Distributed Computation

    Concepts, Tools, and Algorithms

    Michel Raynal

    Networks and Distributed Computation covers the recent rapid developments in distributed systems.

    Networks and Distributed Computation covers the recent rapid developments in distributed systems. It introduces the basic tools for the design and analysis of systems involving large-scale concurrency, with examples based on network systems; considers problems of network and global state learning; discusses protocols allowing synchronization constraints to be distributed; and analyzes the fundamental elements of distribution in detail, using a large number of algorithms. Interprocess communication and synchronization are central issues in the design of distributed systems, taking on a different character from their counterparts in centralized systems. Raynal addresses these issues in detail and develops a coherent framework for presenting and analyzing a wide variety of algorithms relevant to distributed computation.

    Contents First Example: A Data Transfer Protocol • Second Example: Independent Control of Logic Clocks • Simple Algorithms and Protocols • Determination of the Global State • Distributing a Global Synchronization Constraint • Elements and Algorithms for a Toolbox
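
    The logical clocks mentioned in the contents are a standard tool of the field. As a minimal illustrative sketch (Lamport-style clocks, not an algorithm taken from Raynal's text):

    ```python
    # Minimal sketch of Lamport logical clocks, one of the interprocess
    # synchronization tools surveyed in this area. Illustrative only.

    class Process:
        def __init__(self):
            self.clock = 0  # local logical clock

        def local_event(self):
            self.clock += 1
            return self.clock

        def send(self):
            self.clock += 1
            return self.clock  # the timestamp travels with the message

        def receive(self, msg_timestamp):
            # On receipt, advance past both the local clock and the timestamp,
            # so causally later events always carry larger timestamps.
            self.clock = max(self.clock, msg_timestamp) + 1
            return self.clock

    p, q = Process(), Process()
    p.local_event()   # p.clock becomes 1
    t = p.send()      # p.clock becomes 2; message carries timestamp 2
    q.receive(t)      # q.clock becomes max(0, 2) + 1 == 3
    ```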

    Networks and Distributed Computation is included in the Computer Systems series edited by Herb Schwetman.

    • Hardcover $32.50
  • A Commonsense Approach to the Theory of Error-Correcting Codes

    Benjamin Arazi

    Teaching the theory of error-correcting codes on an introductory level is a difficult task. The theory, which has immediate hardware applications, also concerns highly abstract mathematical concepts. This text explains the basic circuits in a refreshingly practical way that will appeal to undergraduate electrical engineering students as well as to engineers and technicians working in industry. Arazi's truly commonsense approach provides a solid grounding in the subject, explaining principles intuitively from a hardware perspective. He fully covers error correction techniques, from basic parity check and single error correction cyclic codes to burst-error-correcting codes and convolutional codes. All this he presents before introducing Galois field theory - the basic algebraic treatment and theoretical basis of the subject, which usually appears in the opening chapters of standard textbooks. One entire chapter is devoted to specific practical issues, such as Reed-Solomon codes (used in compact disc equipment), and maximum length sequences (used in various fields of communications). The basic circuits explained throughout the book are redrawn and analyzed from a theoretical point of view for readers who are interested in tackling the mathematics at a more advanced level.
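
    The simplest of the techniques the book starts from, a single parity bit, can be sketched in a few lines (an illustrative sketch, not code from the book):

    ```python
    # Even-parity check: append one bit so the total number of 1s is even.
    # This detects any single-bit error, but cannot locate or correct it.

    def add_parity(bits):
        parity = sum(bits) % 2
        return bits + [parity]

    def check_parity(codeword):
        # True when the codeword passes the check (even number of 1s)
        return sum(codeword) % 2 == 0

    word = [1, 0, 1, 1]
    code = add_parity(word)        # [1, 0, 1, 1, 1]
    assert check_parity(code)      # no error yet

    code[2] ^= 1                   # flip one bit "in transit"
    assert not check_parity(code)  # the single-bit error is detected
    ```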

    • Hardcover $9.75 £7.99
  • Simulating Computer Systems

    Techniques and Tools

    Myron H. MacDougall

    Simulating Computer Systems provides an introduction to simulation for computer and communication-system designers who want to analyze the performance of their designs. In it MacDougall describes a discrete-event simulation language called smpl, discusses simulation modeling with smpl (using a variety of models as examples), describes the design of smpl, and presents a C language implementation. The book's first part introduces smpl simulation operations using a queueing network simulation model; addresses the development, verification, and validation of simulation models (including hybrid modeling and the use of analytic models in verification); and describes how to estimate the accuracy of simulation results. A multiprocessor system model and a CSMA/CD LAN model are studied in detail to emphasize the joint use of simulation and analytic models and to further illustrate the use of smpl. Projects for the reader include a CPU pipeline model and a token ring LAN model. The implementation of smpl is the focus of the book's second part, which describes the design of smpl function and data structures and outlines a variety of extensions. This description, together with the C source listing provided, will allow the reader to implement smpl on any system.

    Simulating Computer Systems is included in the Computer Systems series, edited by Herb Schwetman.

    • Hardcover $46.00
  • Microprogrammable Parallel Computer

    MUNAP and Its Applications

    Takanobu Baba

    This book takes up the challenge offered by recent advances in theoretical computer science and artificial intelligence that have created a demand for a radically different type of computer architecture. It demonstrates the possibility of register transfer level parallel computing with a flexible, microprogrammable architecture that can fulfill a wide variety of user requirements, and provides all the necessary technical information to understand the process of design, development, and evaluation of this innovative MUNAP computer. After introducing the basic concepts in the computer architecture and microprogramming area, the book describes how the architect goes about selecting microoperations, considering software/firmware/hardware tradeoffs and what schemes might be used for interleaved memory access and interconnection networks. Microprogrammed computer models are defined for the evaluation of computers with similar architectures. Microprogrammable Parallel Computer presents the results of exhaustive experimentation with this architecture, showing how it can be exploited in current research on emulation of a machine language, tagged architectures, language processing for Smalltalk-80 and Prolog, software testing, database systems, 3D graphics, and numerical computation.

    Contents Introduction • Design Principle • Basic Organization • Preliminary Evaluation • Hardware Development • Firmware Development • Applications • Architectural Evaluation and Improvement • Future Directions

    Microprogrammable Parallel Computer is included in the Computer Systems Series, edited by Herb Schwetman.

    • Hardcover $40.00
  • Object-Oriented Concurrent Programming

    Mario Tokoro and Akinori Yonezawa

    This book deals with a major theme of the Japanese Fifth Generation Project, which emphasizes logic programming, parallelism, and distributed systems. It presents a collection of tutorials and research papers on a new programming and design methodology in which the system to be constructed is modeled as a collection of abstract entities called "objects" that communicate by concurrent message passing.

    This book deals with a major theme of the Japanese Fifth Generation Project, which emphasizes logic programming, parallelism, and distributed systems. It presents a collection of tutorials and research papers on a new programming and design methodology in which the system to be constructed is modeled as a collection of abstract entities called "objects" that communicate by concurrent message passing. This methodology is particularly powerful in exploiting as well as harnessing the parallelism that is naturally found in problem domains. The book includes several proposals for programming languages that support this methodology, as well as the applications of object-oriented concurrent programming to such diverse areas as artificial intelligence, software engineering, music synthesis, office information systems, and system programming. It is the first compilation of research results in this rapidly emerging area.

    Contents Concurrent Programming Using Actors • Concurrent Object-Oriented Programming in Act-1 • Modelling and Programming in a Concurrent Object-Oriented Language, ABCL/1 • Concurrent Programming in ConcurrentSmalltalk • Orient84/K: An Object-Oriented Concurrent Programming Language for Knowledge Representation • POOL-T: A Parallel Object-Oriented Programming Language • Concurrent Strategy Execution in Omega • The Formes System: A Musical Application of Object-Oriented Concurrent Programming • Distributed Problem Solving in ABCL/1

    Contributors Gul Agha (MIT), Pierre America (Philips Research Laboratory, Eindhoven), Giuseppe Attardi (DELPHI SpA), Jean Pierre Briot (IRCAM, Paris), Pierre Cointe (IRCAM, Paris), Carl Hewitt (MIT), Yutaka Ishikawa (Keio University), Henry Lieberman (MIT), Etsuya Shibayama (Tokyo Institute of Technology), Mario Tokoro (Keio University), Yasuhiko Yokote (Keio University), and Akinori Yonezawa (Tokyo Institute of Technology).

    Object-Oriented Concurrent Programming is included in The MIT Press Series in Artificial Intelligence, edited by Patrick Henry Winston and Michael Brady.

    • Hardcover $37.50
  • Performance Models of Multiprocessor Systems

    Marco Ajmone Marsan, Gianfranco Balbo, and Gianni Conte

    While there are several studies of computer systems modeling and performance evaluation where models of multiprocessor systems can be found as examples of applications of general modeling techniques, this is the first to focus entirely on the problem of modeling and performance evaluation of multiprocessor systems using analytical methods. Increasingly sophisticated and fast-moving technologies require models that can estimate the performance of a computer system without having actually to build and test it, models that can help designers make the correct architectural choices. The area of distributed computer architectures, or multiprocessor systems, has numerous such choices and can greatly benefit from an extensive use of performance evaluation techniques in the system design stage. The multiprocessor features that are studied here focus on contention for physical system resources, such as shared devices and interconnection networks. A brief overview covers the modeling of other important system characteristics, such as failures of components and synchronizations at the software level.

    Contents Stochastic Processes • Queuing Models • Stochastic Petri Nets • Multiprocessor Architectures • Analysis of Crossbar Multiprocessor Architecture • Single Bus Multiprocessors with External Common Memory • Multiple Bus Multiprocessors with External Common Memory • Single Bus Multiprocessors with Distributed Common Memory • Multiple Bus Multiprocessors with Distributed Common Memory

    Performance Models of Multiprocessor Systems is included in The MIT Press Series in Computer Systems, edited by Herb Schwetman.

    • Hardcover $42.00
  • Performance Analysis of Multiple Access Protocols

    Shuji Tasaka

    Broadcast media, such as satellite, ground radio, and multipoint cable channels, can easily provide full connectivity for communication among geographically distributed users. One of the most important problems in the design of networks (referred to as packet broadcast networks) that can take practical advantage of broadcast channels is how to achieve efficient sharing of a single common channel. Many multiple access protocols, or algorithms, for packet broadcast networks have been proposed, and much work has been done on the performance evaluation of the protocols. A variety of techniques have been used to analyze the performance; however, this is the first book to provide a unified approach to the performance evaluation problem by means of an approximate analytical technique called equilibrium point analysis. Two types of packet broadcast networks - satellite networks and local area networks - are considered, and eight multiple access protocols are studied and their performance analyzed in terms of throughput and average message delay.
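
    For a feel of the throughput measures involved, the classic slotted-ALOHA result can be computed directly (a standard textbook formula, not Tasaka's equilibrium point analysis):

    ```python
    import math

    # Slotted-ALOHA throughput under a Poisson offered load G (transmission
    # attempts per slot): S = G * exp(-G). A slot succeeds only when exactly
    # one station transmits, which happens with probability G * exp(-G).

    def slotted_aloha_throughput(G):
        return G * math.exp(-G)

    # Throughput peaks at G = 1, giving the well-known maximum S = 1/e
    peak = slotted_aloha_throughput(1.0)
    ```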

    Contents Part I: Fundamentals • Multiple Access Protocols and Performance • Equilibrium Point Analysis • Part II: Satellite Networks • S-ALOHA • R-ALOHA • ALOHA-Reservation • TDMA-Reservation • SRUC • TDMA • Performance Comparisons of the Protocols for Satellite Networks • Part III: Local Area Networks • Buffered CSMA/CD • BRAM

    Performance Analysis of Multiple Access Protocols is included in the Computer Systems Series, Research Reports and Notes, edited by Herb Schwetman.

    • Paperback $38.00 £24.95
  • Analysis of Polling Systems

    Hideaki Takagi

    This monograph analyzes polling systems to evaluate such basic performance measures as the average queue length and waiting time.

    A polling system is one that contains a number of queues served in cyclic order. It is employed in computer-terminal communication systems and implemented in such standard data link protocols as BSC, SDLC, and HDLC, and its analysis is now finding a new application in local-area computer networks. This monograph analyzes polling systems to evaluate such basic performance measures as the average queue length and waiting time. Following a taxonomy of models with reference to previous work, it considers one-message buffer systems and infinite buffer systems with exhaustive, gated, and limited service disciplines. Examples to which the analysis of polling systems is applied are drawn from the field of computer communication networks.
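
    The cyclic-service idea is easy to see in miniature. The following is an illustrative sketch of the exhaustive discipline only, not Takagi's analytical treatment:

    ```python
    from collections import deque

    # Toy polling system: one server visits N queues in cyclic order and,
    # under the exhaustive discipline, serves each queue until it is empty
    # before walking to the next one.

    def exhaustive_polling(queues, switchover=1, service=1):
        """Return the total time to drain all queues, counting one unit of
        service per message and one switchover per queue visited."""
        queues = [deque(q) for q in queues]
        t = 0
        while any(queues):
            for q in queues:
                while q:               # exhaustive: empty this queue first
                    q.popleft()
                    t += service
                t += switchover        # walk to the next queue
        return t

    # Three queues holding 2, 0, and 3 messages: one cycle suffices,
    # so the total is 5 service times + 3 switchover times = 8.
    total = exhaustive_polling([[1, 2], [], [3, 4, 5]])
    ```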

    Contents Introduction • One-Message Buffer Systems • Exhaustive Service, Discrete-Time Systems • Exhaustive Service, Continuous-Time Systems • Gated Service Systems • Limited Service Systems • Systems with Zero Reply Intervals • Sample Applications • Future Research Topics • Summary of Important Results

    Analysis of Polling Systems is included in the Computer Systems Series, Research Reports and Notes, edited by Herb Schwetman.

    • Paperback $25.00
  • The LOCUS Distributed System Architecture

    Gerald J. Popek

    LOCUS, a distributed version of the popular operating system Unix, provides an excellent solution. It makes a collection of computers, whether they are workstations or mainframes, as easy to use as a single computer by providing a set of supports for the underlying network that is virtually invisible to users and applications programs.

    Computer systems consisting of many machines will be the norm within a few years. However, making a collection of machines appear as a single, coherent system - in which the location of files, servers, programs, or users is invisible to users who do not wish to know - is a very difficult problem. LOCUS, a distributed version of the popular operating system Unix, provides an excellent solution. It makes a collection of computers, whether they are workstations or mainframes, as easy to use as a single computer by providing a set of supports for the underlying network that is virtually invisible to users and applications programs. This "network transparency" dramatically reduces the cost of developing and maintaining software, and considerably improves the user model of the system. It also permits a variety of system configurations, including diskless workstations, full duplex I/O to large mainframes, transparently shared peripherals, and incremental growth from one workstation to a large network including mainframes, with no effect on the applications software required to take advantage of the altered configurations. In addition to transparent, distributed operation, LOCUS features include high performance and reliability; full Unix compatibility; support for heterogeneous machines and systems; automatic management of replicated file storage; and architectural extensions to support extensive interprocess communication and internetworking.

    Contents The LOCUS Architecture • Distributed Operation and Transparency • The LOCUS Distributed Filesystem • Remote Tasking • Filesystem Recovery • Dynamic Reconfiguration of LOCUS • Heterogeneity • System Management • Appendixes: LOCUS Version Vector Mechanism • LOCUS Internal Network Messages

    The LOCUS Distributed System Architecture is included in the Computer Systems series, edited by Herb Schwetman.

    • Hardcover $35.00
    • Paperback $18.00 £13.99
  • Performance and Evaluation of LISP Systems

    Richard P. Gabriel

    This final report of the Stanford Lisp Performance Study, conducted over a three-year period by the author, describes implementation techniques, performance tradeoffs, benchmarking techniques, and performance results for all of the major Lisp dialects in use today. A popular high-level programming language used predominantly in artificial intelligence, Lisp was the first language to concentrate on working with symbols instead of numbers. Lisp was introduced by John McCarthy in the late 1950s (McCarthy's LISP 1.5 Programmer's Manual, published in 1962, is available in paperback from The MIT Press), and its continuous development has enabled it to remain dominant in artificial intelligence. Performance and Evaluation of LISP Systems is the first book to present descriptions of the Lisp implementation techniques actually in use and can serve as a handbook to the implementation details of all of the various current Lisp systems. It provides detailed performance information using the tools of benchmarking (the process of utilizing standardized computer programs to test the processing power of different computer systems) to measure the various Lisp systems, and provides an understanding of the technical tradeoffs made during the implementation of a Lisp system. The study is divided into three major parts. The first provides the theoretical background, outlining the factors that go into evaluating the performance of a Lisp system. The second part presents the Lisp implementations: MacLisp, MIT CADR, LMI Lambda, S-1 Lisp, Franz Lisp, NIL, Spice Lisp, VAX Common Lisp, Portable Standard Lisp, and Xerox D-Machine. A final part describes the benchmark suite that was used during the major portion of the study and the results themselves.

    Performance and Evaluation of Lisp Systems is included in the Computer Systems series, Research Reports and Notes, edited by Herb Schwetman.

    • Hardcover $32.50 £27.95
    • Paperback $38.00 £30.00
  • Logic Testing and Design for Testability

    Hideo Fujiwara

    Design for testability techniques add enough extra circuitry to a circuit or chip to reduce the complexity of testing.

    Today's computers must perform with increasing reliability, which in turn depends on the problem of determining whether a circuit has been manufactured properly or behaves correctly. However, the greater circuit density of VLSI circuits and systems has made testing more difficult and costly. This book notes that one solution is to develop faster and more efficient algorithms to generate test patterns or use design techniques to enhance testability - that is, "design for testability." Design for testability techniques offer one approach toward alleviating this situation by adding enough extra circuitry to a circuit or chip to reduce the complexity of testing. Because the cost of hardware is decreasing as the cost of testing rises, there is now a growing interest in these techniques for VLSI circuits.

    The first half of the book focuses on the problem of testing: test generation, fault simulation, and complexity of testing. The second half takes up the problem of design for testability: design techniques to minimize test application and/or test generation cost, scan design for sequential logic circuits, compact testing, built-in testing, and various design techniques for testable systems.

    Logic Testing and Design for Testability is included in the Computer Systems Series, edited by Herb Schwetman.

    • Hardcover $47.50 £39.95
    • Paperback $35.00 £27.00