Scientific and Engineering Computation

Modern Features of the Message-Passing Interface

This book offers a practical guide to the advanced features of the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. It covers new features added in MPI-3, the latest version of the MPI standard, and updates from MPI-2. Like its companion volume, Using MPI, the book takes an informal, example-driven, tutorial approach. The material in each chapter is organized according to the complexity of the programs used as examples, starting with the simplest example and moving to more complex ones.
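
By way of illustration (not an excerpt from the book), the following minimal C sketch uses one representative MPI-3 addition, the nonblocking collective MPI_Iallreduce, assuming an MPI-3 implementation such as MPICH or Open MPI is available:

    /* Minimal MPI-3 sketch: start a nonblocking reduction, overlap it
       with other work, then wait. Compile with: mpicc iallreduce.c */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, local, sum = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        local = rank + 1;

        /* The reduction proceeds in the background until MPI_Wait. */
        MPI_Iallreduce(&local, &sum, 1, MPI_INT, MPI_SUM,
                       MPI_COMM_WORLD, &req);
        /* ... independent computation could be overlapped here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        if (rank == 0)
            printf("sum over all ranks = %d\n", sum);
        MPI_Finalize();
        return 0;
    }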

Portable Parallel Programming with the Message-Passing Interface

This book offers a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. Since the publication of the previous edition of Using MPI, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common.
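
A minimal point-to-point example in C suggests the style of program the book builds from (this sketch is illustrative, not taken from the text):

    /* Each non-root process sends its rank to process 0, which
       receives and prints the messages. Compile with: mpicc hello.c */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank != 0) {
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else {
            int src, msg;
            for (src = 1; src < size; src++) {
                MPI_Recv(&msg, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("received %d from rank %d\n", msg, src);
            }
        }
        MPI_Finalize();
        return 0;
    }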

A Gentle Introduction

The combination of two of the twentieth century’s most influential and revolutionary scientific theories, information theory and quantum mechanics, gave rise to a radically new view of computing and information. Quantum information processing explores the implications of using quantum mechanics instead of classical mechanics to model information and its processing. Quantum computing is not about changing the physical substrate on which computation is done from classical to quantum but about changing the notion of computation itself, at the most basic level.

Devices

This text offers an introduction to quantum computing, with a special emphasis on basic quantum physics, experiment, and quantum devices. Unlike many other texts, which tend to emphasize algorithms, Quantum Computing without Magic explains the requisite quantum physics in some depth, and then explains the devices themselves.

Portable Shared Memory Parallel Programming

"I hope that readers will learn to use the full expressibility and power of OpenMP. This book should provide an excellent introduction to beginners, and the performance section should help those with some experience who want to push OpenMP to its limits."
—from the foreword by David J. Kuck, Intel Fellow, Software and Solutions Group, and Director, Parallel and Distributed Solutions, Intel Corporation
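
To give a flavor of the expressiveness the foreword refers to, here is an illustrative C sketch (not from the book) that parallelizes a numerical-integration loop with a single OpenMP directive:

    /* Approximate pi by midpoint integration of 4/(1+x^2) on [0,1].
       Compile with: gcc -fopenmp pi.c */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        const long n = 10000000;
        double step = 1.0 / (double)n, sum = 0.0;
        long i;

        /* OpenMP divides the iterations among threads and combines
           the per-thread partial sums via the reduction clause. */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < n; i++) {
            double x = (i + 0.5) * step;
            sum += 4.0 / (1.0 + x * x);
        }
        printf("pi approx = %.12f (up to %d threads)\n",
               sum * step, omp_get_max_threads());
        return 0;
    }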

Message-Based Parallel Processing

High-performance message-based supercomputers have only recently emerged from the research laboratory. The commercial manufacturing of such products as the Intel iPSC, the Ametek S/14, the NCUBE/ten, and the FPS T Series, all based on multicomputer network technology, has sparked lively interest in high-performance computation, and particularly in the message-passing paradigm for parallel computation.

Use of Beowulf clusters (collections of off-the-shelf commodity computers programmed to act in concert, resulting in supercomputer performance at a fraction of the cost) has spread far and wide in the computational science community. Many application groups are assembling and operating their own "private supercomputers" rather than relying on centralized computing centers. Such clusters are used in climate modeling, computational biology, astrophysics, and materials science, as well as non-traditional areas such as financial modeling and entertainment.

Achieving System Balance
Edited by Daniel A. Reed

As we enter the "decade of data," the disparity between the vast amount of data storage capacity (measurable in terabytes and petabytes) and the bandwidth available for accessing it has created an input/output bottleneck that is proving to be a major constraint on the effective use of scientific data for research.

Meshes and Pyramids

Parallel Algorithms for Regular Architectures is the first book to concentrate exclusively on algorithms and paradigms for programming parallel computers such as the hypercube, mesh, pyramid, and mesh-of-trees. Algorithms are given to solve fundamental tasks such as sorting and matrix operations, as well as problems from the fields of image processing, graph theory, and computational geometry. The first chapter defines the computer models, problems to be solved, and notation that will be used throughout the book.

Edited by Thomas Sterling

Beowulf clusters, which exploit mass-market PC hardware and software in conjunction with cost-effective commercial network technology, are becoming the platform for many scientific, engineering, and commercial applications. With growing popularity has come growing complexity. Addressing that complexity, Beowulf Cluster Computing with Linux and Beowulf Cluster Computing with Windows provide system users and administrators with the tools they need to run the most advanced Beowulf clusters.

Portable Parallel Programming with the Message Passing Interface

The Message Passing Interface (MPI) specification is widely used for solving significant scientific and engineering problems on parallel computers. There exist more than a dozen implementations on computer platforms ranging from IBM SP-2 supercomputers to clusters of PCs running Windows NT or Linux ("Beowulf" machines). The initial MPI Standard document, MPI-1, was recently updated by the MPI Forum.

Advanced Features of the Message-Passing Interface

The Message Passing Interface (MPI) specification is widely used for solving significant scientific and engineering problems on parallel computers. There exist more than a dozen implementations on computer platforms ranging from IBM SP-2 supercomputers to clusters of PCs running Windows NT or Linux ("Beowulf" machines). The initial MPI Standard document, MPI-1, was recently updated by the MPI Forum. The new version, MPI-2, contains both significant enhancements to the existing MPI core and new features.
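
One of MPI-2's headline additions is one-sided communication. The sketch below is an illustrative C example of that feature, not drawn from the book: each process exposes an integer in a window, and its neighbor updates it with MPI_Put inside a fence epoch.

    /* MPI-2 one-sided sketch: each process writes its rank into the
       window of the next process. Compile with: mpicc put.c */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, value = -1, me;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Expose one integer per process as an RMA window. */
        MPI_Win_create(&value, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        me = rank;
        MPI_Put(&me, 1, MPI_INT, (rank + 1) % size, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);   /* completes all puts in the epoch */

        printf("rank %d received %d\n", rank, value);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }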

A Guide to the Implementation and Application of PC Clusters

Supercomputing research, the goal of which is to make computers that are ever faster and more powerful, has been at the cutting edge of computer technology since the early 1960s. Until recently, research costs ran to millions of dollars, and many of the companies that originally made supercomputers are now out of business. The early supercomputers used distributed computing and parallel processing to link processors together in a single machine, often called a mainframe.

ZPL is a new array programming language for science and engineering computation. Designed for fast execution on both sequential and parallel computers, it is intended to replace languages such as Fortran and C. Because ZPL benefits from recent research in parallel compilation, it provides a convenient high-level programming medium for supercomputers with efficiency comparable to hand-coded message-passing programs.

Volume 2, The MPI Extensions

Since its release in summer 1994, the Message Passing Interface (MPI) specification has become a standard for message-passing libraries for parallel computations. There exist more than a dozen implementations on a variety of computing platforms, from the IBM SP-2 supercomputer to PCs running Windows NT. The MPI Forum, which has continued to work on MPI, has recently released MPI-2, a new definition that includes significant extensions, improvements, and clarifications. This volume presents a complete specification of the MPI-2 Standard.

Volume 1, The MPI Core

Since its release in summer 1994, the Message Passing Interface (MPI) specification has become a standard for message-passing libraries for parallel computations. There exist more than a dozen implementations on a variety of computing platforms, from the IBM SP-2 supercomputer to PCs running Windows NT. The initial MPI Standard, known as MPI-1, has been modified over the last two years. This volume, the definitive reference manual for the latest version of MPI-1, contains a complete specification of the MPI Standard.

Complete ISO/ANSI Reference

The Fortran 95 Handbook, a comprehensive reference work for the Fortran programmer and implementor, contains a complete description of the Fortran 95 programming language. The chapters follow the same sequence of topics as the Fortran 95 standard, but contain a more thorough and informal explanation of the language's features and many more examples. Appendices describe all the intrinsic features, the deprecated features, and the complete syntax of the language.

PLAPACK is a library infrastructure for the parallel implementation of linear algebra algorithms and applications on distributed memory supercomputers such as the Intel Paragon, IBM SP2, Cray T3D/T3E, SGI PowerChallenge, and Convex Exemplar. This infrastructure allows library developers, scientists, and engineers to exploit a natural approach to encoding so-called blocked algorithms, which achieve high performance by operating on submatrices and subvectors.
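
PLAPACK's own API is not shown here; as a generic illustration of the blocking idea the blurb describes, this C sketch multiplies matrices by NB x NB submatrices so that each block is reused while it is cache-resident (N, NB, and matmul_blocked are hypothetical names chosen for the example):

    #include <stdio.h>

    #define N  256   /* matrix dimension (assumed divisible by NB) */
    #define NB 32    /* block size, chosen so blocks fit in cache */

    static double A[N][N], B[N][N], C[N][N];

    /* Blocked matrix multiply: C += A * B, iterating over NB x NB
       submatrices rather than whole rows and columns. */
    static void matmul_blocked(void)
    {
        for (int ii = 0; ii < N; ii += NB)
            for (int kk = 0; kk < N; kk += NB)
                for (int jj = 0; jj < N; jj += NB)
                    for (int i = ii; i < ii + NB; i++)
                        for (int k = kk; k < kk + NB; k++)
                            for (int j = jj; j < jj + NB; j++)
                                C[i][j] += A[i][k] * B[k][j];
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                A[i][j] = 1.0;
                B[i][j] = 1.0;
            }
        matmul_blocked();
        printf("C[0][0] = %g (expected %d)\n", C[0][0], N);
        return 0;
    }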

Foreword by Bjarne Stroustrup

This text evolved from a new curriculum in scientific computing that was developed to teach undergraduate science and engineering majors how to use high-performance computing systems (supercomputers) in scientific and engineering applications.

Parallel computers have become widely available in recent years. Many scientists are now using them to investigate the grand challenges of science, such as modeling global climate change, determining the masses of elementary particles from first principles, or sequencing the human genome. However, software for parallel computers has developed far more slowly than the hardware.

Building a computer ten times more powerful than all the networked computing capability in the United States is the subject of this book by leading figures in the high performance computing community. It summarizes the near-term initiatives, including the technical and policy agendas for what could be a twenty-year effort to build a petaFLOPS-scale computer. (A FLOP, a floating-point operation, is a standard measure of computer performance, and a petaFLOPS computer would perform a million billion of these operations per second.)
