William Gropp

William Gropp is Director of the Parallel Computing Institute and Thomas M. Siebel Chair in Computer Science at the University of Illinois Urbana-Champaign.

  • Using MPI, Third Edition

    Portable Parallel Programming with the Message-Passing Interface

    William Gropp, Ewing Lusk, and Anthony Skjellum

    The thoroughly updated edition of a guide to parallel programming with MPI, reflecting the latest specifications, with many detailed examples.

    This book offers a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. Since the publication of the previous edition of Using MPI, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common. The MPI-3 Forum recently brought the MPI standard up to date with respect to developments in hardware capabilities, core language evolution, the needs of applications, and experience gained over the years by vendors, implementers, and users. This third edition of Using MPI reflects these changes in both text and example code.

    The book takes an informal, tutorial approach, introducing each concept through easy-to-understand examples, including actual code in C and Fortran. Topics include using MPI in simple programs, virtual topologies, MPI datatypes, parallel libraries, and a comparison of MPI with sockets. For the third edition, example code has been brought up to date; applications have been updated; and references reflect the recent attention MPI has received in the literature. A companion volume, Using Advanced MPI, covers more advanced topics, including hybrid programming and coping with large data. A brief illustrative example of such a simple MPI program appears after this listing.

    • Paperback $65.00
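
    To give a concrete flavor of the simple programs mentioned in the listing above, here is a minimal C sketch (an illustration, not an excerpt from the book) in which process 0 sends one integer to process 1; the payload value 42 and the printed messages are arbitrary choices.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int rank, size, value = 0;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's number */
            MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

            if (rank == 0) {
                value = 42;   /* arbitrary payload for the illustration */
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("process 1 received %d from process 0\n", value);
            }

            printf("hello from process %d of %d\n", rank, size);
            MPI_Finalize();
            return 0;
        }

    Such a program is typically launched with something like mpiexec -n 2 ./hello, though launcher details vary by MPI implementation.
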
  • Using Advanced MPI

    Modern Features of the Message-Passing Interface

    William Gropp, Torsten Hoefler, Rajeev Thakur, and Ewing Lusk

    A guide to advanced features of MPI, reflecting the latest version of the MPI standard, that takes an example-driven, tutorial approach.

    This book offers a practical guide to the advanced features of the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. It covers new features added in MPI-3, the latest version of the MPI standard, and updates from MPI-2. Like its companion volume, Using MPI, the book takes an informal, example-driven, tutorial approach. The material in each chapter is organized according to the complexity of the programs used as examples, starting with the simplest example and moving to more complex ones.

    Using Advanced MPI covers major changes in MPI-3, including changes to remote memory access and one-sided communication that simplify semantics and enable better performance on modern hardware; new features such as nonblocking and neighborhood collectives for greater scalability on large systems; and minor updates to parallel I/O and dynamic processes. It also covers support for hybrid shared-memory/message-passing programming; MPI_Message, which aids in certain types of multithreaded programming; features that handle very large data; an interface that allows the programmer and the developer to access performance data; and a new binding of MPI to Fortran. A brief illustrative sketch of one-sided communication appears after this listing.

    • Paperback $65.00
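
    As an illustration of the one-sided communication described in the listing above (a sketch under MPI-3 assumptions, not an excerpt from the book), each process below exposes one integer in an RMA window and puts its own rank into the window of the next process in a ring, with fence calls bracketing the transfer.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int rank, size, target;
            int *win_buf;   /* window memory exposed to other processes */
            MPI_Win win;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Allocate one integer per process and expose it in an RMA window */
            MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                             MPI_COMM_WORLD, &win_buf, &win);
            *win_buf = -1;

            /* Fence epoch: each process puts its rank into the next
               process's window (a simple ring pattern) */
            MPI_Win_fence(0, win);
            target = (rank + 1) % size;
            MPI_Put(&rank, 1, MPI_INT, target, 0, 1, MPI_INT, win);
            MPI_Win_fence(0, win);

            printf("process %d now holds %d\n", rank, *win_buf);

            MPI_Win_free(&win);
            MPI_Finalize();
            return 0;
        }
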
  • Beowulf Cluster Computing with Linux, Second Edition

    William Gropp, Ewing Lusk, and Thomas Sterling

    The completely updated second edition of a guide to Beowulf cluster computing.

    Use of Beowulf clusters (collections of off-the-shelf commodity computers programmed to act in concert, resulting in supercomputer performance at a fraction of the cost) has spread far and wide in the computational science community. Many application groups are assembling and operating their own "private supercomputers" rather than relying on centralized computing centers. Such clusters are used in climate modeling, computational biology, astrophysics, and materials science, as well as non-traditional areas such as financial modeling and entertainment. Much of this new popularity can be attributed to the growth of the open-source movement.

    The second edition of Beowulf Cluster Computing with Linux has been completely updated; all three stand-alone sections have important new material. The introductory material in the first part now includes a new chapter giving an overview of the book and background on cluster-specific issues, including why and how to choose a cluster, as well as new chapters on cluster initialization systems (including ROCKS and OSCAR) and on network setup and tuning. The information on parallel programming in the second part now includes chapters on basic parallel programming and available libraries and programs for clusters. The third and largest part of the book, which describes software infrastructure and tools for managing cluster resources, has new material on cluster management and on the Scyld system.

    • Paperback $10.75
  • Using MPI and Using MPI-2, 2-vol. set

    William Gropp, Ewing Lusk, Anthony Skjellum, and Rajeev Thakur

    The Message Passing Interface (MPI) specification is widely used for solving significant scientific and engineering problems on parallel computers. There exist more than a dozen implementations on computer platforms ranging from IBM SP-2 supercomputers to clusters of PCs running Windows NT or Linux ("Beowulf" machines). The initial MPI Standard document, MPI-1, was recently updated by the MPI Forum. The new version, MPI-2, contains both significant enhancements to the existing MPI core and new features.

    Using MPI is a completely up-to-date version of the authors' 1994 introduction to the core functions of MPI. It adds material on the new C++ and Fortran 90 bindings for MPI throughout the book. It contains greater discussion of datatype extents, the most frequently misunderstood feature of MPI-1, as well as material on the new extensions to basic MPI functionality added by the MPI-2 Forum in the area of MPI datatypes and collective operations.

    Using MPI-2 covers the new extensions to basic MPI. These include parallel I/O, remote memory access operations, and dynamic process management. The volume also includes material on tuning MPI applications for high performance on modern MPI implementations. This two-volume set contains Using MPI and Using MPI-2.

    • Paperback $18.75
  • Using MPI, Second Edition

    Portable Parallel Programming with the Message Passing Interface

    William Gropp, Ewing Lusk, and Anthony Skjellum

    The Message Passing Interface (MPI) specification is widely used for solving significant scientific and engineering problems on parallel computers. There exist more than a dozen implementations on computer platforms ranging from IBM SP-2 supercomputers to clusters of PCs running Windows NT or Linux ("Beowulf" machines). The initial MPI Standard document, MPI-1, was recently updated by the MPI Forum. The new version, MPI-2, contains both significant enhancements to the existing MPI core and new features.

    Using MPI is a completely up-to-date version of the authors' 1994 introduction to the core functions of MPI. It adds material on the new C++ and Fortran 90 bindings for MPI throughout the book. It contains greater discussion of datatype extents, the most frequently misunderstood feature of MPI-1, as well as material on the new extensions to basic MPI functionality added by the MPI-2 Forum in the area of MPI datatypes and collective operations.

    Using MPI-2 covers the new extensions to basic MPI. These include parallel I/O, remote memory access operations, and dynamic process management. The volume also includes material on tuning MPI applications for high performance on modern MPI implementations. A brief illustrative sketch of datatype extents appears after this listing.

    • Paperback $60.00
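
    To illustrate the datatype extents mentioned in the listing above (a sketch, not an excerpt from the book), the following C fragment builds a strided "column" datatype and queries its extent, which spans from the first to the last byte the type touches rather than the amount of data it moves; the 4x4 matrix of doubles is an arbitrary example.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            MPI_Datatype column;
            MPI_Aint lb, extent;

            MPI_Init(&argc, &argv);

            /* One column of a 4x4 matrix of doubles: 4 blocks of 1 element,
               with a stride of 4 elements between blocks */
            MPI_Type_vector(4, 1, 4, MPI_DOUBLE, &column);
            MPI_Type_commit(&column);

            /* With 8-byte doubles the extent is ((4-1)*4 + 1) * 8 = 104 bytes,
               although the type describes only 32 bytes of actual data */
            MPI_Type_get_extent(column, &lb, &extent);
            printf("lower bound = %ld, extent = %ld bytes\n",
                   (long)lb, (long)extent);

            MPI_Type_free(&column);
            MPI_Finalize();
            return 0;
        }
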
  • Using MPI-2

    Advanced Features of the Message-Passing Interface

    William Gropp, Ewing Lusk, and Rajeev Thakur

    Using MPI is a completely up-to-date version of the authors' 1994 introduction to the core functions of MPI. It adds material on the new C++ and Fortran 90 bindings for MPI throughout the book.

    The Message Passing Interface (MPI) specification is widely used for solving significant scientific and engineering problems on parallel computers. There exist more than a dozen implementations on computer platforms ranging from IBM SP-2 supercomputers to clusters of PCs running Windows NT or Linux ("Beowulf" machines). The initial MPI Standard document, MPI-1, was recently updated by the MPI Forum. The new version, MPI-2, contains both significant enhancements to the existing MPI core and new features.

    Using MPI is a completely up-to-date version of the authors' 1994 introduction to the core functions of MPI. It adds material on the new C++ and Fortran 90 bindings for MPI throughout the book. It contains greater discussion of datatype extents, the most frequently misunderstood feature of MPI-1, as well as material on the new extensions to basic MPI functionality added by the MPI-2 Forum in the area of MPI datatypes and collective operations.

    Using MPI-2 covers the new extensions to basic MPI. These include parallel I/O, remote memory access operations, and dynamic process management. The volume also includes material on tuning MPI applications for high performance on modern MPI implementations. A brief illustrative sketch of the parallel I/O interface appears after this listing.

    • Paperback $58.00
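
    As a small illustration of the parallel I/O interface mentioned in the listing above (a sketch, not an excerpt from the book), each process below writes its rank into a shared file at a byte offset determined by that rank; the file name out.dat is an arbitrary choice.

        #include <mpi.h>

        int main(int argc, char *argv[])
        {
            int rank;
            MPI_File fh;
            MPI_Offset offset;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* All processes open the same file collectively */
            MPI_File_open(MPI_COMM_WORLD, "out.dat",
                          MPI_MODE_CREATE | MPI_MODE_WRONLY,
                          MPI_INFO_NULL, &fh);

            /* Each process writes one integer at its own offset */
            offset = (MPI_Offset)(rank * sizeof(int));
            MPI_File_write_at(fh, offset, &rank, 1, MPI_INT,
                              MPI_STATUS_IGNORE);

            MPI_File_close(&fh);
            MPI_Finalize();
            return 0;
        }
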
  • MPI - The Complete Reference, Volume 2

    Volume 2, The MPI Extensions

    William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, and Marc Snir

    This volume presents a complete specification of the MPI-2 Standard. It is annotated with comments that clarify complicated issues, including why certain design choices were made, how users are intended to use the interface, and how implementers should construct their versions of MPI.

    Since its release in summer 1994, the Message Passing Interface (MPI) specification has become a standard for message-passing libraries for parallel computations. There exist more than a dozen implementations on a variety of computing platforms, from the IBM SP-2 supercomputer to PCs running Windows NT. The MPI Forum, which has continued to work on MPI, has recently released MPI-2, a new definition that includes significant extensions, improvements, and clarifications. This volume presents a complete specification of the MPI-2 Standard. It is annotated with comments that clarify complicated issues, including why certain design choices were made, how users are intended to use the interface, and how implementers should construct their versions of MPI. The volume also provides many detailed, illustrative programming examples.

    • Paperback $55.00