
Mathematics and Physics

  • Page 4 of 7

In writing the first book-length study of ancient Egyptian mathematics, Richard Gillings presents evidence that Egyptian achievements in this area are much more substantial than previously thought. He does so in a way that will interest not only historians of Egypt and of mathematics, but also people who simply like to manipulate numbers in novel ways. He examines all the extant sources, with particular attention to the most extensive of these—the Rhind Mathematical Papyrus, a collection of training exercises for scribes. This papyrus, besides dealing with the practical, commercial computations for which the Egyptians developed their mathematics, also includes a series of abstract numerical problems stated in a more general fashion.

The mathematical operations used were extremely limited in number but were adaptable to a great many applications. The Egyptian number system was decimal, with digits sequentially arranged (much like our own, but reading right to left), allowing them to add and subtract with ease. They could multiply any number by two, and to accomplish more extended multiplications made use of a binary process, successively multiplying results by two and adding those partial products that led to the correct result. Division was done in a similar way. They could fully manipulate fractions, even though all of them (with one exception) were expressed in the unwieldy form of sums of unit fractions—those having "1" as their numerator. (The exception was 2/3. The scribes recognized this as a very special quantity and took 2/3 of integral or fractional numbers whenever the chance presented itself in the course of computation.) In expressing a rational quantity as a series of unit fractions, the scribes were generally able to choose a simple and direct solution from among the many—sometimes thousands—that are possible. Doing this without modern computers would seem quite as remarkable as building pyramids without modern machinery.
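The doubling procedure and the unit-fraction representation described above can be sketched in modern code. This is an illustration only: the multiplication routine mirrors the scribes' doubling-and-adding, while the greedy decomposition shown is a modern (Fibonacci-Sylvester) method, not the scribes' table-based practice.

```python
from fractions import Fraction

def egyptian_multiply(a, b):
    """Multiply a*b by successive doubling: double a repeatedly and
    add the doublings corresponding to the binary expansion of b,
    so only doubling and addition are ever needed."""
    total = 0
    power, doubled = 1, a
    while power <= b:
        if b & power:          # this doubling contributes to b
            total += doubled
        power <<= 1
        doubled += doubled     # doubling is just self-addition
    return total

def greedy_unit_fractions(q):
    """Decompose a rational 0 < q < 1 into distinct unit fractions by
    repeatedly removing the largest unit fraction that fits. A modern
    algorithm; the scribes instead drew decompositions from tables
    such as the Rhind papyrus's 2/n table."""
    parts = []
    while q:
        n = -(-q.denominator // q.numerator)   # ceil(1/q)
        parts.append(Fraction(1, n))
        q -= Fraction(1, n)
    return parts
```

For instance, 13 × 7 becomes 13 + 26 + 52 = 91, and 2/7 decomposes as 1/4 + 1/28—one of the pairs actually recorded in the papyrus's 2/n table.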

The range of mathematical problems that were solved using these limited operational means is far wider than many historians of mathematics acknowledge. Gillings gives examples showing that the Egyptians were able to solve problems in direct and inverse proportion; to evaluate certain square roots; to introduce the concept of a "harmonic mean" between two numbers; to solve linear equations of the first degree, and two simultaneous equations, one of the second degree; to find the sum of terms of arithmetic and geometric progressions; to calculate the area of a circle and of cylindrical (possibly even spherical) surfaces; to calculate the volumes of truncated pyramids and cylindrical granaries; and to make use of rudimentary trigonometric functions in describing the slopes of pyramids. The one Egyptian accomplishment that historians have tended to repeat uncritically, one after another, is a claim for which Gillings can find no evidence: that the Egyptians knew the Pythagorean theorem, at least in the special case of the 3-4-5 right triangle.
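The circle-area rule mentioned above is well documented (Rhind papyrus, problem 50): take away one ninth of the diameter and square the remainder. A quick sketch shows the value of π this rule implies (the function name is mine):

```python
def egyptian_circle_area(diameter):
    """Rhind papyrus rule: subtract 1/9 of the diameter, then
    square the remainder, i.e. A = (8d/9)**2."""
    side = diameter - diameter / 9
    return side * side

# For a diameter of 2 the radius is 1, so the result equals the
# implied value of pi: (16/9)**2 = 256/81, about 3.1605.
```

For the papyrus's own example, a circle of diameter 9 comes out with the same area as a square of side 8, i.e. 64—an error of under one percent against the true value.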

Finite Groups

Richard Brauer (1901-1977) was one of the leading algebraists of this century. Although he contributed to a number of mathematical fields, Brauer devoted the major share of his efforts to the study of finite groups, a subject of considerable abstract interest and one that underlies many of the more recent advances in combinatorics and finite geometries.

As a result of a lunchtime conversation with Professor Wendell Garner concerning the productiveness of the sacrifice bunt, Earnshaw Cook took on the three-year task of presenting a formal analysis of baseball. His analysis, explained in terms perfectly clear to anyone with freshman-level college mathematics, suggests that no one has ever known the true percentages, and that anyone who did could manage almost any team into the top ranks of major league baseball.

Among other theories that Cook attacks with irrefutable mathematical findings are the benefits of the sacrifice bunt, the use of relief pitchers, the traditional batting order, the hit and run play, and the standardization of baseball itself.

As with almost any serious innovation, the first edition of this book met with bitter controversy and criticism from some baseball fans, team managers, and sportswriters. James Gallagher in Sporting News wrote, "I do not understand how the Baltimore mathematicians reached their controversial conclusions, but in my book any generalizations about baseball have to be wrong." Yet in 1964 this "Baltimore mathematician," using his scoring index, K.2 factors, base-scoring equations, etc., predicted that the hometown Baltimore Orioles would finish in fourth place behind, in order, New York, Chicago, and Minnesota—with perfect accuracy!

Modeling and Simulation with Incomplete Knowledge

This book presents, within a conceptually unified theoretical framework, a body of methods that have been developed over the past fifteen years for building and simulating qualitative models of physical systems—bathtubs, tea kettles, automobiles, the physiology of the body, chemical processing plants, control systems, electrical systems—where knowledge of that system is incomplete. The primary tool for this work is the author's QSIM algorithm, which is discussed in detail.

Qualitative models are better able than traditional models to express states of incomplete knowledge about continuous mechanisms. Qualitative simulation is guaranteed to find all possible behaviors consistent with the knowledge in the model. This expressive power and coverage are important in problem solving for diagnosis, design, monitoring, explanation, and other applications of artificial intelligence.

The framework is built around the QSIM algorithm for qualitative simulation and the QSIM representation for qualitative differential equations, both of which are carefully grounded in continuous mathematics. Qualitative simulation draws on a wide range of mathematical methods to keep a complete set of predictions tractable, including the use of partial quantitative information. Compositional modeling and component-connection methods for building qualitative models are also discussed in detail.
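To convey the flavor of reasoning with incomplete knowledge, here is a toy sketch of arithmetic over signs {-, 0, +, ?}—the kind of abstraction qualitative simulation builds on. This is an illustration only, not the QSIM representation itself, which works with qualitative magnitudes and directions of change over landmark values:

```python
NEG, ZERO, POS, UNKNOWN = "-", "0", "+", "?"

def sign(x):
    """Abstract a real number to its sign."""
    return NEG if x < 0 else POS if x > 0 else ZERO

def qadd(a, b):
    """Sign of a sum. Opposite signs give an ambiguous result,
    reflecting incomplete knowledge of the magnitudes involved."""
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    if a == b:
        return a
    return UNKNOWN

def qmul(a, b):
    """Sign of a product; zero absorbs even an unknown factor."""
    if ZERO in (a, b):
        return ZERO
    if UNKNOWN in (a, b):
        return UNKNOWN
    return POS if a == b else NEG
```

The ambiguous case in `qadd` is exactly where a qualitative simulator must branch and track several possible behaviors, which is why keeping the set of predictions tractable is a central concern.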

Qualitative Reasoning is primarily intended for advanced students and researchers in AI or its applications. Scientists and engineers who have had a solid introduction to AI, however, will be able to use this book for self-instruction in qualitative modeling and simulation methods.

Artificial Intelligence series

Creating a Professional Identity in Post-World War II America

Women Becoming Mathematicians looks at the lives and careers of thirty-six of the approximately two hundred women who earned Ph.D.s in mathematics from American institutions from 1940 to 1959. During this period, American mathematical research enjoyed an unprecedented expansion, fueled by the technological successes of World War II and the postwar boom in federal funding for education in the basic sciences. Yet women's share of doctorates earned in mathematics in the United States reached an all-time low. This book explores the complex interplay between the personal and professional lives of those women who embarked on mathematical careers during this period, with a view to understanding how changes in American society during the 1950s, 1960s, and 1970s affected their career development and identities as mathematicians.

The book is based on extensive interviews with thirty-six women mathematicians of the postwar generation, as well as primary and secondary historical and sociological research. Taking a life-course approach, the book examines the development of mathematical identity across the life span, from childhood through adulthood and into retirement. It focuses on the process by which women who are actively involved in the mathematical community come to "know themselves" as mathematicians. The women's stories are instructive precisely because they do not conform to a set pattern; compelled to improvise, the women mathematicians of the 1940s and 1950s followed diverse paths in their struggle to construct a professional identity in postwar America.

A Search for the Hidden Meaning of Science

Nature has secrets, and it is the desire to uncover them that motivates the scientific quest. But what makes these "secrets" secret? Is it that they are beyond human ken? that they concern divine matters? And if they are accessible to human seeking, why do they seem so carefully hidden? Such questions are at the heart of Peter Pesic's enlightening effort to uncover the meaning of modern science.

Pesic portrays the struggle between the scientist and nature as the ultimate game of hide-and-seek, in which a childlike wonder propels the exploration of mysteries. Witness the young Albert Einstein, fascinated by a compass and the sense it gave him of "something deeply hidden behind things." In musical terms, the book is a triple fugue, interweaving three themes: the epic struggle between the scientist and nature; the distilling effects of the struggle on the scientist; and the emergence from this struggle of symbolic mathematics, the purified language necessary to decode nature's secrets.

Pesic's quest for the roots of science begins with three key Renaissance figures: William Gilbert, a physician who began the scientific study of magnetism; François Viète, a French codebreaker who played a crucial role in the foundation of symbolic mathematics; and Francis Bacon, a visionary who anticipated the shape of modern science. Pesic then describes the encounters of three modern masters—Johannes Kepler, Isaac Newton, and Albert Einstein—with the depths of nature. Throughout, Pesic reads scientific works as works of literature, attending to nuance and tone as much as to surface meaning. He seeks the living center of human concern as it emerges in the ongoing search for nature's secrets.

Theory and Practice

A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models.
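The mean field idea can be illustrated with the classic statistical-physics example (a textbook illustration, not taken from the book itself): each Ising spin is assumed to feel only the average field of its z neighbours, which replaces an intractable joint distribution with the self-consistency equation m = tanh(βJzm), solvable by fixed-point iteration.

```python
import math

def mean_field_magnetization(beta_Jz, m0=0.5, tol=1e-10, max_iter=10000):
    """Solve the mean field self-consistency equation
    m = tanh(beta*J*z*m) by fixed-point iteration, where beta_Jz
    is the combined coupling beta*J*z and m0 the starting guess."""
    m = m0
    for _ in range(max_iter):
        m_new = math.tanh(beta_Jz * m)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m
```

Below the critical coupling (βJz < 1) the iteration collapses to m = 0; above it, a nonzero magnetization appears—the simplest example of the qualitative conclusions a mean field approximation can deliver without any exact computation over the full distribution.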

Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models.

Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.

In The Art of Causal Conjecture, Glenn Shafer lays out a new mathematical and philosophical foundation for probability and uses it to explain concepts of causality used in statistics, artificial intelligence, and philosophy.

The various disciplines that use causal reasoning differ in the relative weight they put on security and precision of knowledge as opposed to timeliness of action. The natural and social sciences seek high levels of certainty in the identification of causes and high levels of precision in the measurement of their effects. The practical sciences—medicine, business, engineering, and artificial intelligence—must act on causal conjectures based on more limited knowledge. Shafer's understanding of causality contributes to both of these uses of causal reasoning. His language for causal explanation can guide statistical investigation in the natural and social sciences, and it can also be used to formulate assumptions of causal uniformity needed for decision making in the practical sciences.

Causal ideas permeate the use of probability and statistics in all branches of industry, commerce, government, and science. The Art of Causal Conjecture shows that causal ideas can be equally important in theory. It does not challenge the maxim that causation cannot be proven from statistics alone, but by bringing causal ideas into the foundations of probability, it allows causal conjectures to be more clearly quantified, debated, and confronted by statistical evidence.

Efficient Algorithms

Algorithmic Number Theory provides a thorough introduction to the design and analysis of algorithms for problems from the theory of numbers. Although not an elementary textbook, it includes over 300 exercises with suggested solutions. Every theorem not proved in the text or left as an exercise has a reference in the notes section that appears at the end of each chapter. The bibliography contains over 1,750 citations to the literature. Finally, it successfully blends computational theory with practice by covering some of the practical aspects of algorithm implementations.

The subject of algorithmic number theory represents the marriage of number theory with the theory of computational complexity. It may be briefly defined as finding integer solutions to equations, or proving their non-existence, making efficient use of resources such as time and space. Implicit in this definition is the question of how to efficiently represent the objects in question on a computer. The problems of algorithmic number theory are important both for their intrinsic mathematical interest and their application to random number generation, codes for reliable and secure information transmission, computer algebra, and other areas.
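A minimal example of the kind of problem this definition names—finding integer solutions to an equation, or proving their non-existence, efficiently—is the linear Diophantine equation ax + by = c, solved via the extended Euclidean algorithm (a standard method, not necessarily the book's own presentation):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b),
    using O(log min(a, b)) arithmetic steps."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_linear_diophantine(a, b, c):
    """One integer solution (x, y) of a*x + b*y == c, or None when
    none exists, i.e. when gcd(a, b) does not divide c."""
    g, x, y = extended_gcd(a, b)
    if c % g:
        return None
    k = c // g
    return (x * k, y * k)
```

Note that the algorithm does both jobs the definition asks for: it produces a solution when one exists, and its divisibility check constitutes a proof of non-existence when one does not.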

Publisher's Note: Volume 2 was not written. Volume 1 is, therefore, a stand-alone publication.


Algebraic Semantics of Imperative Programs presents a self-contained and novel "executable" introduction to formal reasoning about imperative programs. The authors' primary goal is to improve programming ability by improving intuition about what programs mean and how they run.

The semantics of imperative programs is specified in a formal, implemented notation, the language OBJ; this makes the semantics highly rigorous yet simple, and provides support for the mechanical verification of program properties.

OBJ was designed for algebraic semantics; its declarations introduce symbols for sorts and functions, its statements are equations, and its computations are equational proofs. Thus, an OBJ "program" is an equational theory, and every OBJ computation proves some theorem about such a theory. This means that an OBJ program used for defining the semantics of a program already has a precise mathematical meaning. Moreover, standard techniques for mechanizing equational reasoning can be used for verifying axioms that describe the effect of imperative programs on abstract machines. These axioms can then be used in mechanical proofs of properties of programs.
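The idea of computation as equational proof can be sketched in miniature (in Python, not OBJ): Peano-style addition is defined by two equations, and evaluation is nothing but applying those equations left-to-right until no equation matches, so the resulting normal form is an equational theorem such as add(s(s(0)), s(0)) = s(s(s(0))).

```python
ZERO = ("0",)

def S(t):
    """Successor constructor: s(t)."""
    return ("s", t)

def ADD(a, b):
    """Addition term constructor: add(a, b)."""
    return ("add", a, b)

def rewrite(term):
    """Normalize a term by the two defining equations
       add(0, n)    = n
       add(s(m), n) = s(add(m, n))
    applied until no equation matches; each step is an
    application of an equation, so the result is provably
    equal to the input."""
    op = term[0]
    args = [rewrite(a) for a in term[1:]]
    if op == "add":
        m, n = args
        if m == ZERO:
            return n
        if m[0] == "s":
            return ("s", rewrite(ADD(m[1], n)))
    return (op, *args)

def to_int(t):
    """Read a numeral s(s(...s(0)...)) back as a Python int."""
    return 0 if t == ZERO else 1 + to_int(t[1])
```

In OBJ itself the declarations would introduce sorts and operators and the equations would be stated directly; the point here is only that "running" such a program and proving a theorem about the equational theory are the same activity.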

Intended for advanced undergraduates or beginning graduate students, Algebraic Semantics of Imperative Programs contains many examples and exercises in program verification, all of which can be done in OBJ.
