William Gropp

William Gropp is Director of the Parallel Computing Institute and Thomas M. Siebel Chair in Computer Science at the University of Illinois Urbana-Champaign.

  • Using MPI, Third Edition

    Using MPI, Third Edition

    Portable Parallel Programming with the Message-Passing Interface

    William Gropp, Ewing Lusk, and Anthony Skjellum

    The thoroughly updated edition of a guide to parallel programming with MPI, reflecting the latest specifications, with many detailed examples.

    This book offers a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. Since the publication of the previous edition of Using MPI, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common. With MPI-3, the MPI Forum recently brought the standard up to date with respect to developments in hardware capabilities, core language evolution, the needs of applications, and experience gained over the years by vendors, implementers, and users. This third edition of Using MPI reflects these changes in both text and example code.

    The book takes an informal, tutorial approach, introducing each concept through easy-to-understand examples, including actual code in C and Fortran. Topics include using MPI in simple programs, virtual topologies, MPI datatypes, parallel libraries, and a comparison of MPI with sockets. For the third edition, example code has been brought up to date; applications have been updated; and references reflect the recent attention MPI has received in the literature. A companion volume, Using Advanced MPI, covers more advanced topics, including hybrid programming and coping with large data.
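
    In the spirit of the book's opening examples, here is a minimal sketch (ours, not the book's code) of a complete MPI program in C; built with mpicc and launched with, say, mpiexec -n 4, each process reports its rank:

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int rank, size;
            MPI_Init(&argc, &argv);               /* start up MPI */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
            MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count */
            printf("Hello from process %d of %d\n", rank, size);
            MPI_Finalize();                       /* shut down MPI */
            return 0;
        }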

    • Paperback $65.00 £55.00
  • Using Advanced MPI

    Using Advanced MPI

    Modern Features of the Message-Passing Interface

    William Gropp, Torsten Hoefler, Rajeev Thakur, and Ewing Lusk

    A guide to advanced features of MPI, reflecting the latest version of the MPI standard, that takes an example-driven, tutorial approach.

    This book offers a practical guide to the advanced features of the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. It covers new features added in MPI-3, the latest version of the MPI standard, and updates from MPI-2. Like its companion volume, Using MPI, the book takes an informal, example-driven, tutorial approach. The material in each chapter is organized according to the complexity of the programs used as examples, starting with the simplest example and moving to more complex ones.

    Using Advanced MPI covers major changes in MPI-3, including changes to remote memory access and one-sided communication that simplify semantics and enable better performance on modern hardware; new features such as nonblocking and neighborhood collectives for greater scalability on large systems; and minor updates to parallel I/O and dynamic processes. It also covers support for hybrid shared-memory/message-passing programming; MPI_Message, which aids in certain types of multithreaded programming; features that handle very large data; an interface that allows programmers and tool developers to access performance data; and a new binding of MPI to Fortran.
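
    As one concrete taste of those scalability features, here is a short sketch of our own (standard MPI-3 calls only, not an example from the book) that starts a nonblocking all-reduce and overlaps it with other work:

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            MPI_Init(&argc, &argv);

            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* MPI-3 nonblocking collective: start a global sum */
            double local = rank + 1.0, global = 0.0;
            MPI_Request req;
            MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                           MPI_COMM_WORLD, &req);

            /* ... do communication-independent work here ... */

            MPI_Wait(&req, MPI_STATUS_IGNORE); /* complete the reduction */
            if (rank == 0) printf("global sum = %g\n", global);

            MPI_Finalize();
            return 0;
        }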

    • Paperback $65.00 £55.00
  • Beowulf Cluster Computing with Linux, Second Edition

    Beowulf Cluster Computing with Linux, Second Edition

    William Gropp, Ewing Lusk, and Thomas Sterling

    The completely updated second edition of a guide to Beowulf cluster computing.

    Use of Beowulf clusters (collections of off-the-shelf commodity computers programmed to act in concert, resulting in supercomputer performance at a fraction of the cost) has spread far and wide in the computational science community. Many application groups are assembling and operating their own "private supercomputers" rather than relying on centralized computing centers. Such clusters are used in climate modeling, computational biology, astrophysics, and materials science, as well as non-traditional areas such as financial modeling and entertainment. Much of this new popularity can be attributed to the growth of the open-source movement.

    The second edition of Beowulf Cluster Computing with Linux has been completely updated; all three stand-alone sections have important new material. The introductory material in the first part now includes a new chapter giving an overview of the book and background on cluster-specific issues, including why and how to choose a cluster, as well as new chapters on cluster initialization systems (including ROCKS and OSCAR) and on network setup and tuning. The information on parallel programming in the second part now includes chapters on basic parallel programming and available libraries and programs for clusters. The third and largest part of the book, which describes software infrastructure and tools for managing cluster resources, has new material on cluster management and on the Scyld system.

    • Paperback $10.75 £8.99
  • Using MPI and Using MPI-2, 2-vol. set

    Using MPI and Using MPI-2, 2-vol. set

    William Gropp, Ewing Lusk, Anthony Skjellum, and Rajeev Thakur

    The Message Passing Interface (MPI) specification is widely used for solving significant scientific and engineering problems on parallel computers. There exist more than a dozen implementations on computer platforms ranging from IBM SP-2 supercomputers to clusters of PCs running Windows NT or Linux ("Beowulf" machines). The initial MPI Standard document, MPI-1, was recently updated by the MPI Forum. The new version, MPI-2, contains both significant enhancements to the existing MPI core and new features.

    Using MPI is a completely up-to-date version of the authors' 1994 introduction to the core functions of MPI. It adds material on the new C++ and Fortran 90 bindings for MPI throughout the book. It contains greater discussion of datatype extents, the most frequently misunderstood feature of MPI-1, as well as material on the new extensions to basic MPI functionality added by the MPI-2 Forum in the area of MPI datatypes and collective operations. Using MPI-2 covers the new extensions to basic MPI. These include parallel I/O, remote memory access operations, and dynamic process management. The volume also includes material on tuning MPI applications for high performance on modern MPI implementations. This two-volume set contains Using MPI and Using MPI-2.

    • Paperback $18.75 £14.99
  • Using MPI, Second Edition

    Using MPI, Second Edition

    Portable Parallel Programming with the Message Passing Interface

    William Gropp, Ewing Lusk, and Anthony Skjellum

    The Message Passing Interface (MPI) specification is widely used for solving significant scientific and engineering problems on parallel computers. There exist more than a dozen implementations on computer platforms ranging from IBM SP-2 supercomputers to clusters of PCs running Windows NT or Linux ("Beowulf" machines). The initial MPI Standard document, MPI-1, was recently updated by the MPI Forum. The new version, MPI-2, contains both significant enhancements to the existing MPI core and new features.

    Using MPI is a completely up-to-date version of the authors' 1994 introduction to the core functions of MPI. It adds material on the new C++ and Fortran 90 bindings for MPI throughout the book. It contains greater discussion of datatype extents, the most frequently misunderstood feature of MPI-1, as well as material on the new extensions to basic MPI functionality added by the MPI-2 Forum in the area of MPI datatypes and collective operations. Using MPI-2 covers the new extensions to basic MPI. These include parallel I/O, remote memory access operations, and dynamic process management. The volume also includes material on tuning MPI applications for high performance on modern MPI implementations.
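
    To suggest why datatype extents repay careful study, here is a small sketch of our own (not an example from the book) that uses a derived datatype to send one column of a row-major matrix; run it with at least two processes:

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            MPI_Init(&argc, &argv);

            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* One column of a 4x4 row-major int matrix:
               4 blocks of 1 int, strided 4 ints apart. */
            MPI_Datatype column;
            MPI_Type_vector(4, 1, 4, MPI_INT, &column);
            MPI_Type_commit(&column);

            if (rank == 0) {
                int m[4][4];
                for (int i = 0; i < 4; i++)
                    for (int j = 0; j < 4; j++)
                        m[i][j] = 10 * i + j;
                /* send column 1: m[0][1], m[1][1], m[2][1], m[3][1] */
                MPI_Send(&m[0][1], 1, column, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                int col[4];
                MPI_Recv(col, 4, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("column: %d %d %d %d\n",
                       col[0], col[1], col[2], col[3]);
            }

            MPI_Type_free(&column);
            MPI_Finalize();
            return 0;
        }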

    • Paperback $60.00 £50.00
  • Using MPI-2

    Using MPI-2

    Advanced Features of the Message-Passing Interface

    William Gropp, Ewing Lusk, and Rajeev Thakur

    Using MPI-2 covers the new extensions to basic MPI, including parallel I/O, remote memory access operations, and dynamic process management.

    The Message Passing Interface (MPI) specification is widely used for solving significant scientific and engineering problems on parallel computers. There exist more than a dozen implementations on computer platforms ranging from IBM SP-2 supercomputers to clusters of PCs running Windows NT or Linux ("Beowulf" machines). The initial MPI Standard document, MPI-1, was recently updated by the MPI Forum. The new version, MPI-2, contains both significant enhancements to the existing MPI core and new features.

    Using MPI is a completely up-to-date version of the authors' 1994 introduction to the core functions of MPI. It adds material on the new C++ and Fortran 90 bindings for MPI throughout the book. It contains greater discussion of datatype extents, the most frequently misunderstood feature of MPI-1, as well as material on the new extensions to basic MPI functionality added by the MPI-2 Forum in the area of MPI datatypes and collective operations.

    Using MPI-2 covers the new extensions to basic MPI. These include parallel I/O, remote memory access operations, and dynamic process management. The volume also includes material on tuning MPI applications for high performance on modern MPI implementations.
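
    To give a flavor of the parallel I/O extensions, here is a minimal sketch (ours, not taken from the book) in which every process writes its rank into its own slot of one shared file:

        #include <mpi.h>

        int main(int argc, char *argv[])
        {
            MPI_Init(&argc, &argv);

            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* open (or create) a file shared by all processes */
            MPI_File fh;
            MPI_File_open(MPI_COMM_WORLD, "ranks.dat",
                          MPI_MODE_CREATE | MPI_MODE_WRONLY,
                          MPI_INFO_NULL, &fh);

            /* each process writes at a disjoint, rank-determined offset */
            MPI_Offset offset = (MPI_Offset)rank * (MPI_Offset)sizeof(int);
            MPI_File_write_at(fh, offset, &rank, 1, MPI_INT,
                              MPI_STATUS_IGNORE);

            MPI_File_close(&fh);
            MPI_Finalize();
            return 0;
        }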

    • Paperback $19.75 £15.99
  • MPI - The Complete Reference, Volume 2

    MPI - The Complete Reference, Volume 2

    Volume 2, The MPI Extensions

    William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, and Marc Snir

    This volume presents a complete specification of the MPI-2 Standard. It is annotated with comments that clarify complicated issues, including why certain design choices were made, how users are intended to use the interface, and how implementers should construct their own versions of MPI.

    Since its release in summer 1994, the Message Passing Interface (MPI) specification has become a standard for message-passing libraries for parallel computations. There exist more than a dozen implementations on a variety of computing platforms, from the IBM SP-2 supercomputer to PCs running Windows NT. The MPI Forum, which has continued to work on MPI, has recently released MPI-2, a new definition that includes significant extensions, improvements, and clarifications. This volume presents a complete specification of the MPI-2 Standard. It is annotated with comments that clarify complicated issues, including why certain design choices were made, how users are intended to use the interface, and how implementers should construct their own versions of MPI. The volume also provides many detailed, illustrative programming examples.

    • Paperback $55.00 £45.00

Contributor

  • The OpenMP Common Core

    The OpenMP Common Core

    Making OpenMP Simple Again

    Timothy G. Mattson, Yun (Helen) He, and Alice E. Koniges

    How to become a parallel programmer by learning the twenty-one essential components of OpenMP.

    This book guides readers through the most essential elements of OpenMP—the twenty-one components that most OpenMP programmers use most of the time, known collectively as the “OpenMP Common Core.” Once they have mastered these components, readers with no prior experience writing parallel code will be effective parallel programmers, ready to take on more complex aspects of OpenMP. The authors, drawing on twenty years of experience in teaching OpenMP, introduce material in discrete chunks ordered to support effective learning. OpenMP was created in 1997 to make it as simple as possible for applications programmers to write parallel code; since then, it has grown into a huge and complex system. The OpenMP Common Core goes back to basics, capturing the inherent simplicity of OpenMP. After introducing the fundamental concepts of parallel computing and the history of OpenMP's development, the book covers topics including the core design pattern of parallel computing, the parallel and worksharing-loop constructs, the OpenMP data environment, and tasks. Two chapters on the OpenMP memory model are uniquely valuable for their pedagogic approach. The key for readers is to work through the material, use an OpenMP-enabled compiler, and write programs to experiment with each OpenMP directive or API routine as it is introduced. The book's website, updated continuously, offers a wide assortment of programs and exercises.
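
    As a taste of the common core, here is a minimal sketch of our own (not the book's code) combining three of the twenty-one components, the parallel construct, the worksharing loop, and a reduction; compile with an OpenMP flag such as -fopenmp:

        #include <stdio.h>

        int main(void)
        {
            const int n = 1000000;
            double sum = 0.0;

            /* fork a team of threads, split the loop among them,
               and combine the per-thread partial sums */
            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < n; i++)
                sum += 1.0 / (i + 1.0);

            printf("harmonic(%d) = %f\n", n, sum);
            return 0;
        }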

    • Paperback $40.00 £32.00
  • Using OpenMP—The Next Step

    Using OpenMP—The Next Step

    Affinity, Accelerators, Tasking, and SIMD

    Ruud van der Pas, Eric Stotzer, and Christian Terboven

    A guide to the most recent, advanced features of the widely used OpenMP parallel programming model, with coverage of major features in OpenMP 4.5.

    This book offers an up-to-date, practical tutorial on advanced features in the widely used OpenMP parallel programming model. Building on the previous volume, Using OpenMP: Portable Shared Memory Parallel Programming (MIT Press), this book goes beyond the fundamentals to focus on what has been changed and added to OpenMP since the 2.5 specifications. It emphasizes four major and advanced areas: thread affinity (keeping threads close to their data), accelerators (special hardware to speed up certain operations), tasking (to parallelize algorithms with a less regular execution flow), and SIMD (hardware-assisted operations on vectors).

    As in the earlier volume, the focus is on practical usage, with major new features primarily introduced by example. Examples are restricted to C and C++, but are straightforward enough to be understood by Fortran programmers. After a brief recap of OpenMP 2.5, the book reviews enhancements introduced since 2.5. It then discusses in detail tasking, a major functionality enhancement; Non-Uniform Memory Access (NUMA) architectures, supported by OpenMP; SIMD, or Single Instruction Multiple Data; heterogeneous systems, a new parallel programming model to offload computation to accelerators; and the expected further development of OpenMP.
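
    To suggest the flavor of tasking, here is a small sketch of our own, the classic recursive Fibonacci exercise rather than an example from the book; a production code would add a serial cutoff to avoid swamping the runtime with tiny tasks:

        #include <stdio.h>

        /* each recursive call becomes a task; taskwait joins the children */
        long fib(int n)
        {
            if (n < 2) return n;
            long a, b;
            #pragma omp task shared(a)
            a = fib(n - 1);
            #pragma omp task shared(b)
            b = fib(n - 2);
            #pragma omp taskwait
            return a + b;
        }

        int main(void)
        {
            long result;
            #pragma omp parallel  /* create the thread team */
            #pragma omp single    /* one thread seeds the task tree */
            result = fib(30);
            printf("fib(30) = %ld\n", result);
            return 0;
        }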

    • Paperback $50.00 £40.00
  • Cloud Computing for Science and Engineering

    Cloud Computing for Science and Engineering

    Ian Foster and Dennis B. Gannon

    A guide to cloud computing for students, scientists, and engineers, with advice and many hands-on examples.

    The emergence of powerful, always-on cloud utilities has transformed how consumers interact with information technology, enabling video streaming, intelligent personal assistants, and the sharing of content. Businesses, too, have benefited from the cloud, outsourcing much of their information technology to cloud services. Science, however, has not fully exploited the advantages of the cloud. Could scientific discovery be accelerated if mundane chores were automated and outsourced to the cloud? Leading computer scientists Ian Foster and Dennis Gannon argue that it can, and in this book offer a guide to cloud computing for students, scientists, and engineers, with advice and many hands-on examples.

    The book surveys the technology that underpins the cloud, new approaches to technical problems enabled by the cloud, and the concepts required to integrate cloud services into scientific work. It covers managing data in the cloud, and how to program these services; computing in the cloud, from deploying single virtual machines or containers to supporting basic interactive science experiments to gathering clusters of machines to do data analytics; using the cloud as a platform for automating analysis procedures, machine learning, and analyzing streaming data; building your own cloud with open source software; and cloud security.

    The book is accompanied by a website, Cloud4SciEng.org, that provides a variety of supplementary material, including exercises, lecture slides, and other resources helpful to readers and instructors.

    • Hardcover $55.00 £45.00
  • Scientific Programming and Computer Architecture

    Scientific Programming and Computer Architecture

    Divakar Viswanath

    A variety of programming models relevant to scientists explained, with an emphasis on how programming constructs map to parts of the computer.

    What makes computer programs fast or slow? To answer this question, we have to get behind the abstractions of programming languages and look at how a computer really works. This book examines and explains a variety of scientific programming models (programming models relevant to scientists) with an emphasis on how programming constructs map to different parts of the computer's architecture. Two themes emerge: program speed and program modularity. Throughout this book, the premise is to "get under the hood," and the discussion is tied to specific programs. The book digs into linkers, compilers, operating systems, and computer architecture to understand how the different parts of the computer interact with programs. It begins with a review of C/C++ and explanations of how libraries, linkers, and Makefiles work. Programming models covered include Pthreads, OpenMP, MPI, TCP/IP, and CUDA.

    The emphasis on how computers work leads the reader into computer architecture and occasionally into the operating system kernel. The operating system studied is Linux, the preferred platform for scientific computing. Linux is also open source, which allows users to peer into its inner workings. A brief appendix provides a useful table of machines used to time programs. The book's website (https://github.com/divakarvi/bk-spca) has all the programs described in the book as well as a link to the HTML text.

    • Hardcover $65.00 £55.00
  • Programming Models for Parallel Computing

    Programming Models for Parallel Computing

    Pavan Balaji

    An overview of the most prominent contemporary parallel processing programming models, written in a unique tutorial style.

    With the coming of the parallel computing era, computer scientists have turned their attention to designing programming models that are suited for high-performance parallel computing and supercomputing systems. Programming parallel systems is complicated by the fact that multiple processing units are simultaneously computing and moving data. This book offers an overview of some of the most prominent parallel programming models used in high-performance computing and supercomputing systems today.

    The chapters describe the programming models in a unique tutorial style rather than using the formal approach taken in the research literature. The aim is to cover a wide range of parallel programming models, enabling the reader to understand what each has to offer. The book begins with a description of the Message Passing Interface (MPI), the most common parallel programming model for distributed memory computing. It goes on to cover one-sided communication models, ranging from low-level runtime libraries (GASNet, OpenSHMEM) to high-level programming models (UPC, GA, Chapel); task-oriented programming models (Charm++, ADLB, Scioto, Swift, CnC) that allow users to describe their computation and data units as tasks so that the runtime system can manage computation and data movement as necessary; and parallel programming models intended for on-node parallelism in the context of multicore architecture or attached accelerators (OpenMP, Cilk Plus, TBB, CUDA, OpenCL). The book will be a valuable resource for graduate students, researchers, and any scientist who works with data sets and large computations.

    Contributors Timothy Armstrong, Michael G. Burke, Ralph Butler, Bradford L. Chamberlain, Sunita Chandrasekaran, Barbara Chapman, Jeff Daily, James Dinan, Deepak Eachempati, Ian T. Foster, William D. Gropp, Paul Hargrove, Wen-mei Hwu, Nikhil Jain, Laxmikant Kale, David Kirk, Kath Knobe, Sriram Krishnamoorthy, Jeffery A. Kuehn, Alexey Kukanov, Charles E. Leiserson, Jonathan Lifflander, Ewing Lusk, Tim Mattson, Bruce Palmer, Steven C. Pieper, Stephen W. Poole, Arch D. Robison, Frank Schlimbach, Rajeev Thakur, Abhinav Vishnu, Justin M. Wozniak, Michael Wilde, Kathy Yelick, Yili Zheng

    • Paperback $60.00 £50.00
  • Quantum Computing

    Quantum Computing

    A Gentle Introduction

    Eleanor G. Rieffel and Wolfgang H. Polak

    A thorough exposition of quantum computing and the underlying concepts of quantum physics, with explanations of the relevant mathematics and numerous examples.

    The combination of two of the twentieth century's most influential and revolutionary scientific theories, information theory and quantum mechanics, gave rise to a radically new view of computing and information. Quantum information processing explores the implications of using quantum mechanics instead of classical mechanics to model information and its processing. Quantum computing is not about changing the physical substrate on which computation is done from classical to quantum but about changing the notion of computation itself, at the most basic level. The fundamental unit of computation is no longer the bit but the quantum bit or qubit.

    This comprehensive introduction to the field offers a thorough exposition of quantum computing and the underlying concepts of quantum physics, explaining all the relevant mathematics and offering numerous examples. With its careful development of concepts and thorough explanations, the book makes quantum computing accessible to students and professionals in mathematics, computer science, and engineering. A reader with no prior knowledge of quantum physics (but with sufficient knowledge of linear algebra) will be able to gain a fluent understanding by working through the book.
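
    For readers new to the notation, the qubit at the heart of the book is mathematically just a unit vector in a two-dimensional complex vector space; in standard Dirac notation (a textbook definition, not a quotation from this book):

        \[
          |\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
          \qquad \alpha, \beta \in \mathbb{C},
          \qquad |\alpha|^2 + |\beta|^2 = 1,
        \]

    and a measurement in this basis yields 0 with probability |\alpha|^2 and 1 with probability |\beta|^2.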

    • Hardcover $47.00 £38.00
    • Paperback $40.00 £32.00
  • Quantum Computing Without Magic

    Quantum Computing Without Magic

    Devices

    Zdzislaw Meglicki

    How quantum computing is really done: a primer for future quantum device engineers.

    This text offers an introduction to quantum computing, with a special emphasis on basic quantum physics, experiment, and quantum devices. Unlike many other texts, which tend to emphasize algorithms, Quantum Computing Without Magic explains the requisite quantum physics in some depth, and then explains the devices themselves. It is a book for readers who, having already encountered quantum algorithms, may ask, “Yes, I can see how the algebra does the trick, but how can we actually do it?” By explaining the details in the context of the topics covered, this book strips the subject of the “magic” with which it is so often cloaked. Quantum Computing Without Magic covers the essential probability calculus; the qubit, its physics, manipulation and measurement, and how it can be implemented using superconducting electronics; quaternions and density operator formalism; unitary formalism and its application to Berry phase manipulation; the biqubit, the mysteries of entanglement, nonlocality, separability, biqubit classification, and the Schroedinger's Cat paradox; the controlled-NOT gate, its applications and implementations; and classical analogs of quantum devices and quantum processes. Quantum Computing Without Magic can be used as a complementary text for physics and electronic engineering undergraduates studying quantum computing and basic quantum mechanics, or as an introduction and guide for electronic engineers, mathematicians, computer scientists, or scholars in these fields who are interested in quantum computing and how it might fit into their research programs.

    • Paperback $40.00 £32.00
  • Using OpenMP

    Using OpenMP

    Portable Shared Memory Parallel Programming

    Barbara Chapman, Gabriele Jost, and Ruud van der Pas

    A comprehensive overview of OpenMP, the standard application programming interface for shared memory parallel computing—a reference for students and professionals.

    "I hope that readers will learn to use the full expressibility and power of OpenMP. This book should provide an excellent introduction to beginners, and the performance section should help those with some experience who want to push OpenMP to its limits."—from the foreword by David J. Kuck, Intel Fellow, Software and Solutions Group, and Director, Parallel and Distributed Solutions, Intel Corporation

    OpenMP, a portable programming interface for shared memory parallel computers, was adopted as an informal standard in 1997 by computer scientists who wanted a unified model on which to base programs for shared memory systems. OpenMP is now used by many software developers; it offers significant advantages over both hand-threading and MPI. Using OpenMP offers a comprehensive introduction to parallel programming concepts and a detailed overview of OpenMP.

    Using OpenMP discusses hardware developments, describes where OpenMP is applicable, and compares OpenMP to other programming interfaces for shared and distributed memory parallel architectures. It introduces the individual features of OpenMP, provides many source code examples that demonstrate the use and functionality of the language constructs, and offers tips on writing an efficient OpenMP program. It describes how to use OpenMP in full-scale applications to achieve high performance on large-scale architectures, discussing several case studies in detail, and offers in-depth troubleshooting advice. It explains how OpenMP is translated into explicitly multithreaded code, providing a valuable behind-the-scenes account of OpenMP program performance. Finally, Using OpenMP considers trends likely to influence OpenMP development, offering a glimpse of the possibilities of a future OpenMP 3.0 from the vantage point of the current OpenMP 2.5. With multicore computer use increasing, the need for a comprehensive introduction and overview of the standard interface is clear.

    Using OpenMP provides an essential reference not only for students at both undergraduate and graduate levels but also for professionals who intend to parallelize existing codes or develop new parallel programs for shared memory computer architectures.

    • Paperback $50.00 £40.00
  • Scalable Input/Output

    Scalable Input/Output

    Achieving System Balance

    Daniel A. Reed

    The major research results from the Scalable Input/Output Initiative, exploring software and algorithmic solutions to the I/O imbalance.

    As we enter the "decade of data," the disparity between the vast amount of data storage capacity (measurable in terabytes and petabytes) and the bandwidth available for accessing it has created an input/output bottleneck that is proving to be a major constraint on the effective use of scientific data for research. Scalable Input/Output is a summary of the major research results of the Scalable I/O Initiative, launched by Paul Messina, then Director of the Center for Advanced Computing Research at the California Institute of Technology, to explore software and algorithmic solutions to the I/O imbalance. The contributors explore techniques for I/O optimization, including: I/O characterization to understand application and system I/O patterns; system checkpointing strategies; collective I/O and parallel database support for scientific applications; parallel I/O libraries and strategies for file striping, prefetching, and write behind; compilation strategies for out-of-core data access; scheduling and shared virtual memory alternatives; network support for low-latency data transfer; and parallel I/O application programming interfaces.

    • Paperback $19.75 £15.99
  • Beowulf Cluster Computing with Linux

    Beowulf Cluster Computing with Linux

    Thomas Sterling

    Comprehensive guides to the latest Beowulf tools and methodologies.

    Beowulf clusters, which exploit mass-market PC hardware and software in conjunction with cost-effective commercial network technology, are becoming the platform for many scientific, engineering, and commercial applications. With growing popularity has come growing complexity. Addressing that complexity, Beowulf Cluster Computing with Linux and Beowulf Cluster Computing with Windows provide system users and administrators with the tools they need to run the most advanced Beowulf clusters. The book is appearing in both Linux and Windows versions in order to reach the entire PC cluster community, which is divided into two distinct camps according to the node operating system. Each book consists of three stand-alone parts. The first provides an introduction to the underlying hardware technology, assembly, and configuration. The second part offers a detailed presentation of the major parallel programming libraries. The third, and largest, part describes software infrastructures and tools for managing cluster resources. This includes some of the most popular of the software packages available for distributed task scheduling, as well as tools for monitoring and administering system resources and user accounts. Approximately 75% of the material in the two books is shared, with the other 25% pertaining to the specific operating system. Most of the chapters include text specific to the operating system. The Linux volume includes a discussion of parallel file systems.

    • Paperback $50.00 £40.00
  • Beowulf Cluster Computing with Windows

    Beowulf Cluster Computing with Windows

    Thomas Sterling

    Comprehensive guides to the latest Beowulf tools and methodologies.

    Beowulf clusters, which exploit mass-market PC hardware and software in conjunction with cost-effective commercial network technology, are becoming the platform for many scientific, engineering, and commercial applications. With growing popularity has come growing complexity. Addressing that complexity, Beowulf Cluster Computing with Linux and Beowulf Cluster Computing with Windows provide system users and administrators with the tools they need to run the most advanced Beowulf clusters. The book is appearing in both Linux and Windows versions in order to reach the entire PC cluster community, which is divided into two distinct camps according to the node operating system. Each book consists of three stand-alone parts. The first provides an introduction to the underlying hardware technology, assembly, and configuration. The second part offers a detailed presentation of the major parallel programming libraries. The third, and largest, part describes software infrastructures and tools for managing cluster resources. This includes some of the most popular of the software packages available for distributed task scheduling, as well as tools for monitoring and administering system resources and user accounts. Approximately 75% of the material in the two books is shared, with the other 25% pertaining to the specific operating system. Most of the chapters include text specific to the operating system. The Linux volume includes a discussion of parallel file systems.

    • Paperback $60.00 £50.00
  • How to Build a Beowulf

    How to Build a Beowulf

    A Guide to the Implementation and Application of PC Clusters

    Donald J. Becker, John Salmon, Daniel F. Savarese, and Thomas Sterling

    This how-to guide provides step-by-step instructions for building a Beowulf-type computer, including the physical elements that make up a clustered PC computing system, the software required (most of which is freely available), and insights on how to organize the code to exploit parallelism.

    Supercomputing research—the goal of which is to make computers that are ever faster and more powerful—has been at the cutting edge of computer technology since the early 1960s. Until recently, research cost in the millions of dollars, and many of the companies that originally made supercomputers are now out of business.

    The early supercomputers used distributed computing and parallel processing to link processors together in a single machine, often called a mainframe. Exploiting the same technology, researchers are now using off-the-shelf PCs to produce computers with supercomputer performance. It is now possible to make a supercomputer for less than $40,000. Given this new affordability, a number of universities and research laboratories are experimenting with installing such Beowulf-type systems in their facilities.

    This how-to guide provides step-by-step instructions for building a Beowulf-type computer, including the physical elements that make up a clustered PC computing system, the software required (most of which is freely available), and insights on how to organize the code to exploit parallelism. The book also includes a list of potential pitfalls.

    • Paperback $50.00 £40.00
  • A Programmer's Guide to ZPL

    A Programmer's Guide to ZPL

    Lawrence Snyder

    This guide illustrates typical ZPL usage and explains in an intuitive manner how the constructs work. The emphasis is on teaching the reader to be a ZPL programmer. Scientific computations are used as examples throughout.

    ZPL is a new array programming language for science and engineering computation. Designed for fast execution on both sequential and parallel computers, it is intended to replace languages such as Fortran and C. Because ZPL benefits from recent research in parallel compilation, it provides a convenient high-level programming medium for supercomputers with efficiency comparable to hand-coded message-passing programs. Users with scientific computing experience can usually learn ZPL in a few hours, and those who have used MATLAB or Fortran 90 may already be acquainted with the array programming style.

    This guide provides a complete introduction to ZPL. It assumes that the reader is experienced with an imperative language such as C, Fortran, or Pascal. Though thorough and precise, it does not attempt to be a ZPL reference manual. Rather, it illustrates typical ZPL usage and explains in an intuitive manner how the constructs work. The emphasis is on teaching the reader to be a ZPL programmer. Scientific computations are used as examples throughout.

    • Paperback $8.75 £6.99
  • MPI - The Complete Reference, Second Edition, Volume 1

    MPI - The Complete Reference, Second Edition, Volume 1

    Volume 1, The MPI Core

    Marc Snir, Steve W. Otto, Steven Huss-Lederman, David W. Walker, and Jack Dongarra

    This volume, the definitive reference manual for the latest version of MPI-1, contains a complete specification of the MPI Standard.

    Since its release in summer 1994, the Message Passing Interface (MPI) specification has become a standard for message-passing libraries for parallel computations. There exist more than a dozen implementations on a variety of computing platforms, from the IBM SP-2 supercomputer to PCs running Windows NT. The initial MPI Standard, known as MPI-1, has been modified over the last two years. This volume, the definitive reference manual for the latest version of MPI-1, contains a complete specification of the MPI Standard. It is annotated with comments that clarify complicated issues, including why certain design choices were made, how users are intended to use the interface, and how implementers should construct their own versions of MPI. The volume also provides many detailed, illustrative programming examples.

    • Paperback $10.75 £8.99
  • Fortran 95 Handbook

    Fortran 95 Handbook

    Complete ISO/ANSI Reference

    Jeanne C. Adams, Walter S. Brainerd, Jeanne T. Martin, Brian T. Smith, and Jerrold L. Wagener

    The Fortran 95 Handbook, a comprehensive reference work for the Fortran programmer and implementor, contains a complete description of the Fortran 95 programming language. The chapters follow the same sequence of topics as the Fortran 95 standard, but contain a more thorough and informal explanation of the language's features and many more examples. Appendices describe all the intrinsic features, the deprecated features, and the complete syntax of the language. The Handbook also includes a feature not found in the standard: a cross reference of all the syntax terms, giving the rule that defines each term and all the rules that reference it. Major new features added in Fortran 95 are the 'FORALL' statement and construct, pure and elemental procedures, and structure and pointer default initialization.

    • Paperback $75.00 £50.00
  • Using PLAPACK: Parallel Linear Algebra Package

    Using PLAPACK: Parallel Linear Algebra Package

    Robert van de Geijn

    This book is a comprehensive introduction to all the components of a high-performance parallel linear algebra library, as well as a guide to the PLAPACK infrastructure.

    PLAPACK is a library infrastructure for the parallel implementation of linear algebra algorithms and applications on distributed memory supercomputers such as the Intel Paragon, IBM SP2, Cray T3D/T3E, SGI PowerChallenge, and Convex Exemplar. This infrastructure allows library developers, scientists, and engineers to exploit a natural approach to encoding so-called blocked algorithms, which achieve high performance by operating on submatrices and subvectors. This feature, as well as the use of an alternative, more application-centric approach to data distribution, sets PLAPACK apart from other parallel linear algebra libraries, allowing for strong performance and significantly less programming by the user. This book is a comprehensive introduction to all the components of a high-performance parallel linear algebra library, as well as a guide to the PLAPACK infrastructure. Scientific and Engineering Computation series

    • Paperback $45.00 £38.00
  • Parallel Programming Using C++

    Parallel Programming Using C++

    Gregory V. Wilson and Paul Lu

    Foreword by Bjarne Stroustrup

    Software is generally acknowledged to be the single greatest obstacle preventing mainstream adoption of massively-parallel computing. While sequential applications are routinely ported to platforms ranging from PCs to mainframes, most parallel programs only ever run on one type of machine. One reason for this is that most parallel programming systems have failed to insulate their users from the architectures of the machines on which they have run. Those that have been platform-independent have usually also had poor performance. Many researchers now believe that object-oriented languages may offer a solution. By hiding the architecture-specific constructs required for high performance inside platform-independent abstractions, parallel object-oriented programming systems may be able to combine the speed of massively-parallel computing with the comfort of sequential programming.

    Parallel Programming Using C++ describes fifteen parallel programming systems based on C++, the most popular object-oriented language of today. These systems cover the whole spectrum of parallel programming paradigms, from data parallelism through dataflow and distributed shared memory to message-passing control parallelism. For the parallel programming community, a common parallel application is discussed in each chapter, as part of the description of the system itself. By comparing the implementations of the polygon overlay problem in each system, the reader can get a better sense of their expressiveness and functionality for a common problem. For the systems community, the chapters contain a discussion of the implementation of the various compilers and runtime systems. In addition to discussing the performance of polygon overlay, several of the contributors also discuss the performance of other, more substantial, applications. For the research community, the contributors discuss the motivations for and philosophy of their systems. As well, many of the chapters include critiques that complete the research arc by pointing out possible future research directions. Finally, for the object-oriented community, there are many examples of how encapsulation, inheritance, and polymorphism can be used to control the complexity of developing, debugging, and tuning parallel software.

    • Paperback $65.00 £55.00
  • Introduction to High-Performance Scientific Computing

    Introduction to High-Performance Scientific Computing

    Lloyd D. Fosdick, Elizabeth R. Jessup, Carolyn J. C. Schauble, and Gitta Domik

    Designed for undergraduates, An Introduction to High-Performance Scientific Computing assumes a basic knowledge of numerical computation and proficiency in Fortran or C programming and can be used in any science, computer science, applied mathematics, or engineering department or by practicing scientists and engineers, especially those associated with one of the national laboratories or supercomputer centers.

    This text evolved from a new curriculum in scientific computing that was developed to teach undergraduate science and engineering majors how to use high-performance computing systems (supercomputers) in scientific and engineering applications. Designed for undergraduates, An Introduction to High-Performance Scientific Computing assumes a basic knowledge of numerical computation and proficiency in Fortran or C programming and can be used in any science, computer science, applied mathematics, or engineering department or by practicing scientists and engineers, especially those associated with one of the national laboratories or supercomputer centers. The authors begin with a survey of scientific computing and then provide a review of background (numerical analysis, IEEE arithmetic, Unix, Fortran) and tools (elements of MATLAB, IDL, AVS). Next, full coverage is given to scientific visualization and to the architectures (scientific workstations and vector and parallel supercomputers) and performance evaluation needed to solve large-scale problems. The concluding section on applications includes three problems (molecular dynamics, advection, and computerized tomography) that illustrate the challenge of solving problems on a variety of computer architectures as well as the suitability of a particular architecture to solving a particular problem. Finally, since this can only be a hands-on course with extensive programming and experimentation with a variety of architectures and programming paradigms, the authors have provided a laboratory manual and supporting software via anonymous ftp. Scientific and Engineering Computation series

    • Hardcover $19.75 £15.99
  • Practical Parallel Programming

    Practical Parallel Programming

    Gregory V. Wilson

    Practical Parallel Programming provides scientists and engineers with a detailed, informative, and often critical introduction to parallel programming techniques.

    Parallel computers have become widely available in recent years. Many scientists are now using them to investigate the grand challenges of science, such as modeling global climate change, determining the masses of elementary particles from first principles, or sequencing the human genome. However, software for parallel computers has developed far more slowly than the hardware. Many incompatible programming systems exist, and many useful programming techniques are not widely known.

    Practical Parallel Programming provides scientists and engineers with a detailed, informative, and often critical introduction to parallel programming techniques. Following a review of the fundamentals of parallel computer theory and architecture, it describes four of the most popular parallel programming models in use today—data parallelism, shared variables, message passing, and Linda—and shows how each can be used to solve various scientific and numerical problems. Examples, coded in various dialects of Fortran, are drawn from such domains as the solution of partial differential equations, solution of linear equations, the simulation of cellular automata, studies of rock fracturing, and image processing. Practical Parallel Programming will be particularly helpful for scientists and engineers who use high-performance computers to solve numerical problems and do physical simulations but who have little experience of networking or concurrency. The book can also be used by advanced undergraduate and graduate students in computer science in conjunction with material covering parallel architectures and algorithms in more detail. Computer science students will gain a critical appraisal of the current state of the art in parallel programming. Scientific and Engineering Computation series

    • Hardcover $70.00 £58.00
    • Paperback $135.00 £110.00
  • Enabling Technologies for Petaflops Computing

    Enabling Technologies for Petaflops Computing

    Thomas Sterling, Paul Messina, and Paul H. Smith

    Chapters focus on four interrelated areas: applications and algorithms, device technology, architecture and systems, and software technology.

    Building a computer ten times more powerful than all the networked computing capability in the United States is the subject of this book by leading figures in the high performance computing community. It summarizes the near-term initiatives, including the technical and policy agendas for what could be a twenty-year effort to build a petaFLOPS-scale computer. (A FLOP, or floating-point operation, is the basic unit for measuring computer performance, and a petaFLOPS computer would perform a million billion of these operations per second.) Chapters focus on four interrelated areas: applications and algorithms, device technology, architecture and systems, and software technology. While a petaFLOPS machine is beyond anything within contemporary experience, early research into petaFLOPS system design and methodologies is essential to U.S. leadership in all facets of computing into the next century. The findings reported here explore new and fertile ground. Among them: construction of an effective petaFLOPS computing system will be feasible in two decades, although effectiveness and applicability will depend on dramatic cost reductions as well as innovative approaches to system software and programming methodologies; a mix of technologies such as semiconductors, optics, and possibly cryogenics will be required; and while no fundamental paradigm shift in system architecture is expected, active latency management will be essential, requiring a high degree of fine-grain parallelism and the mechanisms to exploit it. Scientific and Engineering Computation series.

    • Paperback $9.75 £7.99
  • PVM

    PVM

    A Users' Guide and Tutorial for Network Parallel Computing

    Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek, and Vaidyalingam S. Sunderam

    Written by the team that developed the software, this tutorial is the definitive resource for scientists, engineers, and other computer users who want to use PVM to increase the flexibility and power of their high-performance computing resources.

    Written by the team that developed the software, this tutorial is the definitive resource for scientists, engineers, and other computer users who want to use PVM to increase the flexibility and power of their high-performance computing resources. PVM introduces distributed computing, discusses where and how to get the PVM software, provides an overview of PVM and a tutorial on setting up and running existing programs, and introduces basic programming techniques including putting PVM in existing code. There are program examples and details on how PVM works on UNIX and multiprocessor systems, along with advanced topics (portability, debugging, improving performance) and troubleshooting.

    PVM (Parallel Virtual Machine) is a software package that enables the computer user to define a networked heterogeneous collection of serial, parallel, and vector computers to function as one large computer. It can be used as stand-alone software or as a foundation for other heterogeneous network software. PVM may be configured to contain various machine architectures, including sequential processors, vector processors, and multicomputers, and it can be ported to new computer architectures that may emerge.
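
    For a sense of the programming style, here is a hedged sketch of a PVM master process in C; the "worker" executable name is our invention, and error handling is omitted:

        #include <pvm3.h>
        #include <stdio.h>

        int main(void)
        {
            int tid, n = 42;

            pvm_mytid();  /* enroll this process in the virtual machine */

            /* spawn one copy of a (hypothetical) worker executable */
            if (pvm_spawn("worker", (char **)0, PvmTaskDefault,
                          "", 1, &tid) == 1) {
                pvm_initsend(PvmDataDefault); /* start a send buffer */
                pvm_pkint(&n, 1, 1);          /* pack one int */
                pvm_send(tid, 1);             /* send it, message tag 1 */

                pvm_recv(tid, 2);             /* block for the reply, tag 2 */
                pvm_upkint(&n, 1, 1);         /* unpack the result */
                printf("worker replied: %d\n", n);
            }

            pvm_exit();  /* leave the virtual machine */
            return 0;
        }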

    • Paperback $40.00 £32.00
  • High Performance Fortran Handbook

    High Performance Fortran Handbook

    Charles H. Koelbel, David Loveman, Robert S. Schreiber, Guy Lewis Steele Jr., and Mary Zosel

    High Performance Fortran (HPF) is a set of extensions to Fortran expressing parallel execution at a relatively high level. For the thousands of scientists, engineers, and others who wish to take advantage of the power of both vector and parallel supercomputers, five of the principal authors of HPF have teamed up here to write a tutorial for the language. There is an increasing need for a common parallel Fortran that can serve as a programming interface with the new parallel machines that are appearing on the market. While HPF does not solve all the problems of parallel programming, it does provide a portable, high-level expression for data-parallel algorithms that brings the convenience of sequential Fortran a step closer to today's complex parallel machines.

    • Hardcover $45.00
    • Paperback $9.75 £7.99
  • Enterprise Integration Modeling

    Enterprise Integration Modeling

    Proceedings of the First International Conference

    Charles J. Petrie, Jr.

    The goal of enterprise integration is the development of computer-based tools that facilitate coordination of work and information flow across organizational boundaries. These proceedings, the first on EI modeling technologies, provide a synthesis of the technical issues involved; describe the various approaches and where they overlap, complement, or conflict with each other; and identify problems and gaps in the current technologies that point to new research. The leading edge of a movement that began with computer-aided design/computer-aided manufacturing (CAD/CAM), EI now seeks to engage the development of computer-based tools to control not only manufacturing but the allied areas of materials supply, accounting, and inventory control. EI technology is pushing forward research in areas such as distributed AI, concurrent engineering, task coordination, human-computer interaction, and distributed planning and scheduling. These proceedings provide the first common technical ground for comparing, evaluating, or coordinating these efforts.

    Topics include: Computer Integrated Manufacturing • Open System Architecture Standards • The results of five workshops on EI modeling topics: Model Integration, Model/Application Namespace, Heterogeneous Execution Environments, Metrics and Methodologies, and Coordination Process Models

    • Paperback $60.00 £50.00
  • Parallel Computational Fluid Dynamics

    Parallel Computational Fluid Dynamics

    Implementations and Results

    Horst D. Simon

    Computational Fluid Dynamics (CFD) is one of the most important applications areas for high-performance computing, setting the pace for developments in scientific computing. Anyone who wants to design a new parallel computer or develop a new software tool must understand the issues involved in CFD in order to be successful. The demands of CFD, particularly in the aerospace and automotive industries, coupled with the emergence of more powerful generations of parallel supercomputers, have led naturally to work on parallel computational fluid dynamics; and initial results from using parallel machines to study the properties of liquids and gases in motion are promising. Parallel Computational Fluid Dynamics provides the first survey of this rapidly developing field. Drawn from such different disciplines as mechanical and aeronautical engineering, computer science, and numerical methods, contributions cover implementations of CFD codes on commercially available large-scale parallel systems, studies of parallel numerical algorithms for CFD applications, and discussions of computer science topics with direct application to parallel CFD. Researchers will find that Parallel Computational Fluid Dynamics serves a number of needs. It presents the expertise of multidisciplinary research groups that will make it possible to succeed in solving the "grand challenge" problems stimulated by the new national High Performance Computing and Communication Program. It offers aeronautical or mechanical engineers an excellent introduction to what has been accomplished in the last few years in CFD on parallel machines. And it provides researchers in the areas of hardware, software, and algorithms with a useful survey of CFD's computational requirements as well as a source of applications to test out new ideas.

    Horst D. Simon is Department Manager for Computer Sciences Corporation and Research Scientist at NASA Ames Research Center.

    • Hardcover $60.00
  • Unstructured Scientific Computation on Scalable Multiprocessors

    Unstructured Scientific Computation on Scalable Multiprocessors

    Piyush Mehrotra, Joel Saltz, and Robert Voigt

    This book focuses on the implementation of such algorithms on parallel computers, such as hypercubes and the Connection Machine®, that can be scaled up to incredible performances.

    Unstructured and dynamically varying algorithms are playing an increasingly important role in the solution of large-scale scientific problems on large-scale computers. This book focuses on the implementation of such algorithms on parallel computers, such as hypercubes and the Connection Machine®, that can be scaled up to incredible performances. The algorithms covered include those for partial differential equations and sparse linear algebra. The nineteen contributions describe methods to effectively map fluids and structural mechanics codes that employ unstructured and/or adaptive meshes, scalable algorithms for problems in sparse linear algebra, scalable tools and compilers designed to handle irregular scientific computations, mapping methods for adaptive fast multipole methods, and parallelized grid generation and problem partitioning.

    • Hardcover $39.95 £32.00
  • Data-Parallel Programming on MIMD Computers

    Data-Parallel Programming on MIMD Computers

    Philip J. Hatcher and Michael J. Quinn

    Data-Parallel Programming demonstrates that architecture-independent parallel programming is possible by describing in detail how programs written in a high-level SIMD programming language may be compiled and efficiently executed on both shared-memory multiprocessors and distributed-memory multicomputers.

    MIMD computers are notoriously difficult to program. Data-Parallel Programming demonstrates that architecture-independent parallel programming is possible by describing in detail how programs written in a high-level SIMD programming language may be compiled and efficiently executed on both shared-memory multiprocessors and distributed-memory multicomputers. The authors provide enough data so that the reader can decide the feasibility of architecture-independent programming in a data-parallel language. For each benchmark program they give the source code listing, absolute execution time on both a multiprocessor and a multicomputer, and a speedup relative to a sequential program. And they often present multiple solutions to the same problem, to better illustrate the strengths and weaknesses of these compilers. The language presented is Dataparallel C, a variant of the original C* language developed by Thinking Machines Corporation for its Connection Machine processor array. Separate chapters describe the compilation of Dataparallel C programs for execution on the Sequent multiprocessor and the Intel and nCUBE hypercubes, respectively. The authors document the performance of these compilers on a variety of benchmark programs and present several case studies.

    Contents: Introduction • Dataparallel C Programming Language Description • Design of a Multicomputer Dataparallel C Compiler • Design of a Multiprocessor Dataparallel C Compiler • Writing Efficient Programs • Benchmarking the Compilers • Case Studies • Conclusions

    • Hardcover $9.75 £7.99
  • Multicomputer Networks

    Multicomputer Networks

    Message-Based Parallel Processing

    Richard M. Fujimoto and Daniel A. Reed

    High-performance message-based supercomputers have only recently emerged from the research laboratory. The commercial manufacturing of such products as the Intel iPSC, the Ametek s/14, the NCUBE/ten, and the FPS T Series - all based on multicomputer network technology - has sparked lively interest in high-performance computation, and particularly in the message-passing paradigm for parallel computation. This book makes readily available information on many aspects of the design and use of multicomputer networks, including machine organization, system software, and application programs. It provides an introduction to the field for students and researchers and a survey of important recent results for practicing engineers. The emphasis throughout is on design principles and techniques; however, there are also descriptions and comparison of research and commercial machines and case studies of specific applications.

    Multicomputer Networks covers such major design areas as communication hardware, operating systems, fault tolerance, algorithms, and the selection of network topologies. The authors present results in each of these areas, emphasizing analytic models of interconnection networks, VLSI constraints and communication, communication paradigms and hardware support, multicomputer operating systems, and applications for distributed simulation and for partial differential equations. They survey the hardware designs and the available software and present a comparative performance study of existing machines.

    • Hardcover $59.95
    • Paperback $50.00 £40.00
  • The Characteristics of Parallel Algorithms

    The Characteristics of Parallel Algorithms

    Robert J. Douglass, Dennis B. Gannon, and Leah H. Jamieson

    Although there has been a tremendous growth of interest in parallel architecture and parallel processing in recent years, comparatively little work has been done on the problem of characterizing parallelism in programs and algorithms. This book, a collection of original papers, specifically addresses that topic. The editors and two dozen other contributors have produced a work that cuts across numerical analysis, artificial intelligence, and database management, speaking to questions that lie at the heart of current research in these and many other fields of knowledge: How much commonality in algorithm structure is there across problem domains? What attributes of algorithms are the most important in dictating the structure of a parallel algorithm? How can algorithms be matched with languages and architectures? Their book provides an important starting place for a comprehensive taxonomy of parallel algorithms.

    The Characteristics of Parallel Algorithms is included in the Scientific Computation Series, edited by Dennis Gannon.

    • Hardcover $49.50
  • Cellular Automata Machines

    Cellular Automata Machines

    A New Environment for Modeling

    Tommaso Toffoli and Norman Margolus

    Cellular automata machines with the size, speed, and flexibility needed for general experimentation have recently become available to the scientific community at moderate cost. These machines provide a laboratory in which the ideas presented in this book can be tested and applied to the synthesis of a great variety of systems. Computer scientists and researchers interested in modeling and simulation, as well as other scientists who do mathematical modeling, will find this introduction to cellular automata and cellular automata machines (CAM) both useful and timely. Cellular automata are the computer scientist's counterpart to the physicist's concept of 'field'. They provide natural models for many investigations in physics, combinatorial mathematics, and computer science that deal with systems extended in space and evolving in time according to local laws. A cellular automata machine is a computer optimized for the simulation of cellular automata. Its dedicated architecture allows it to run thousands of times faster than a general-purpose computer of comparable cost programmed to do the same task. In practical terms this permits intensive interactive experimentation and opens up new fields of research in distributed dynamics, including practical applications involving parallel computation and image processing.
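
    The local-law idea is easy to see in code. Below is a minimal sketch, not taken from the book: one step of an elementary one-dimensional cellular automaton in plain C, where each cell's next state depends only on its immediate neighbors. A CAM's dedicated hardware evaluates such a rule for every cell in parallel.

    ```c
    #include <stdio.h>

    #define N 16

    /* One step of a one-dimensional cellular automaton: each cell's
     * next state is the XOR of its two neighbors (elementary rule 90),
     * with a periodic boundary. Purely local, uniform update rule. */
    static void step(const int cur[N], int next[N])
    {
        for (int i = 0; i < N; i++)
            next[i] = cur[(i + N - 1) % N] ^ cur[(i + 1) % N];
    }

    int main(void)
    {
        int a[N] = {0}, b[N];
        a[N / 2] = 1;                      /* single seed cell */
        for (int t = 0; t < 8; t++) {
            for (int i = 0; i < N; i++) putchar(a[i] ? '#' : '.');
            putchar('\n');
            step(a, b);
            for (int i = 0; i < N; i++) a[i] = b[i];
        }
        return 0;
    }
    ```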

    Contents Introduction • Cellular Automata • The CAM Environment • A Live Demo • The Rules of the Game • Our First Rules • Second-order Dynamics • The Laboratory • Neighbors and Neighborhood • Running • Particle Motion • The Margolus Neighborhood • Noisy Neighbors • Display and Analysis • Physical Modeling • Reversibility • Computing Machinery • Hydrodynamics • Statistical Mechanics • Other Applications • Image Processing • Rotations • Pattern Recognition • Multiple CAMs • Perspectives and Conclusions

    Cellular Automata Machines is included in the Scientific Computation Series, edited by Dennis Gannon.

    • Hardcover $70.00 £58.00
    • Paperback $30.00 £25.00
  • The Massively Parallel Processor

    The Massively Parallel Processor

    Jerry L. Potter and Dennis B. Gannon

    The development of parallel processing, with the attendant technology of advanced software engineering, VLSI circuits, and artificial intelligence, now allows high-performance computer systems to reach the speeds necessary to meet the challenge of future complex scientific and commercial applications. This collection of articles documents the design of one such computer, a single-instruction, multiple-data stream (SIMD) class supercomputer with 16,384 processing units capable of over 6 billion 8-bit operations per second. It provides a complete description of the Massively Parallel Processor (MPP), including discussions of hardware and software, with special emphasis on applications, algorithms, and programming. This system, with its massively parallel hardware and advanced software, is on the cutting edge of parallel processing research, making possible AI, database, and image-processing applications that were once considered infeasible. The MPP represents a first step toward the large-scale parallelism needed in the computers of tomorrow. Originally built for a variety of image-processing tasks, it is fully programmable and applicable to any problem with sizeable data demands.

    Contents "History of the MPP," D. Schaefer • "Data Structures for Implementing the Classy Algorithm on the MPP," R. White • "Inversion of Positive Definite Matrices on the MPP," R. White • "LANDSAT-4 Thematic Mapper Data Processing with the MPP," R. O. Faiss • "Fluid Dynamics Modeling," E. J. Gallopoulas • "Database Management," E. Davis • "List Based Processing on the MPP," J. L. Potter • "The Massively Parallel Processor System Overvew," K. E. Batcher • "Array Unit," K. E. Batcher • "Array Control Unit," K. E. Batcher • "Staging Memory," K. E. Batcher • "PE Design," J. Burkley • "Programming the MPP," J. L. Potter • "Parallel Pascal and the MPP," A. P Reeves • "MPP System Software," K. E. Batcher • "MPP Program Development and Simulation," E. J. Gallopoulas

    • Hardcover $42.95
    • Paperback $40.00 £32.00
  • Parallel MIMD Computation

    Parallel MIMD Computation

    HEP Supercomputer and Its Applications

    Janusz S. Kowalik

    Fifteen original contributions from experts in high-speed computation, covering multi-processor architectures, concurrent programming, and parallel algorithms.

    Experts in high-speed computation agree that the rapidly growing demand for more powerful computers can only be met by a radical change in computer architecture, a change from a single serial processor to an aggregation of many processors working in parallel. At present, our knowledge about multi-processor architectures, concurrent programming, and parallel algorithms is very limited. This book discusses all three subjects in relation to the HEP supercomputer, which can handle multiple instruction streams and multiple data streams (MIMD). The HEP multiprocessor is an innovative general-purpose computer that is easy to use for anyone familiar with FORTRAN. Following a preface by the editor, the book's fifteen original contributions are divided into four sections: The HEP Architecture and Systems Software; The HEP Performance; Programming and Languages; and Applications of the HEP Computer. An appendix describes the use of monitors in FORTRAN, providing a tutorial on the barrier, self-scheduling DO loop, and Askfor monitors.
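
    The self-scheduling DO loop is easy to picture: each worker repeatedly claims the next undone iteration from a shared counter. The sketch below is an analogy in C with POSIX threads, not the book's code; the appendix builds the equivalent from FORTRAN-callable monitor primitives, with the mutex playing the role of the monitor.

    ```c
    #include <pthread.h>
    #include <stdio.h>

    #define N 100
    #define NTHREADS 4

    /* Self-scheduling loop: threads claim iterations one at a time
     * from a shared counter protected by a lock, so faster threads
     * automatically take on more work. */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int next_iter = 0;
    static double result[N];

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            int i = next_iter++;        /* claim one iteration */
            pthread_mutex_unlock(&lock);
            if (i >= N)
                break;
            result[i] = (double)i * i;  /* the loop body */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];
        for (int k = 0; k < NTHREADS; k++)
            pthread_create(&t[k], NULL, worker, NULL);
        for (int k = 0; k < NTHREADS; k++)
            pthread_join(t[k], NULL);
        printf("result[%d] = %g\n", N - 1, result[N - 1]);
        return 0;
    }
    ```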

    J. S. Kowalik, who has contributed a chapter with S. P. Kumar on "Parallel Algorithms for Recurrence and Tridiagonal Linear Equations," is a manager in Boeing Computer Services' Artificial Intelligence Center in Seattle. Parallel MIMD Computation is included in the Scientific Computation Series, edited by Dennis Gannon.

    • Hardcover
    • Paperback $55.00 £45.00