We now know that there is much more to classical mechanics than previously suspected. Derivations of the equations of motion, the focus of traditional presentations of mechanics, are just the beginning. This innovative textbook, now in its second edition, concentrates on developing general methods for studying the behavior of classical systems, whether or not they have a symbolic solution. It focuses on the phenomenon of motion and makes extensive use of computer simulation in its explorations of the topic. It weaves recent discoveries in nonlinear dynamics throughout the text, rather than presenting them as an afterthought. Explorations of phenomena such as the transition to chaos, nonlinear resonances, and resonance overlap help the student develop appropriate analytic tools for understanding. The book uses computation to constrain notation, to capture and formalize methods, and to perform simulation and symbolic analysis. The requirement that the computer be able to interpret any expression provides the student with strict and immediate feedback about whether an expression is correctly formulated.
This second edition has been updated throughout, with revisions that reflect insights gained by the authors from using the text every year at MIT. In addition, because of substantial software improvements, this edition provides algebraic proofs of greater generality than those in the previous edition, an improvement that permeates the text.
Why do people who perform largely the same type of work make different technology choices in the workplace? An automotive design engineer working in India, for example, finds advanced information and communication technologies essential, allowing him to work with far-flung colleagues; a structural engineer in California relies more on paper-based technologies for her everyday work; and a software engineer in Silicon Valley operates on multiple digital levels simultaneously all day, continuing after hours on a company-supplied home computer and network connection. In Technology Choices, Diane Bailey and Paul Leonardi argue that occupational factors—rather than personal preference or purely technological concerns—strongly shape workers’ technology choices.
Drawing on extensive fieldwork—a decade’s worth of observations and interviews in seven engineering firms in eight countries—Bailey and Leonardi challenge the traditional views of technology choices: technological determinism and social constructivism. Their innovative occupational perspective allows them to explore how external forces shape ideas, beliefs, and norms in ways that steer individuals to particular technology choices—albeit in somewhat predictable and generalizable ways. They examine three relationships at the heart of technology choices: human to technology, technology to technology, and human to human. An occupational perspective, they argue, helps us not only to understand past technology choices but also to predict future ones.
Computing is usually viewed as a technology field that advances at the breakneck speed of Moore’s Law. If we turn away even for a moment, we might miss a game-changing technological breakthrough or an earthshaking theoretical development. This book takes a different perspective, presenting computing as a science governed by fundamental principles that span all technologies. Computer science is a science of information processes. We need a new language to describe the science, and in this book Peter Denning and Craig Martell offer the great principles framework as just such a language. This is a book about the whole of computing—its algorithms, architectures, and designs.
Denning and Martell divide the great principles of computing into six categories: communication, computation, coordination, recollection, evaluation, and design. They begin with an introduction to computing, its history, its many interactions with other fields, its domains of practice, and the structure of the great principles framework. They go on to examine the great principles in different areas: information, machines, programming, computation, memory, parallelism, queueing, and design. Finally, they apply the great principles to networking, the Internet in particular.
Great Principles of Computing will be essential reading for professionals in science and engineering fields with a “computational” branch, for practitioners in computing who want overviews of less familiar areas of computer science, and for non-computer science majors who want an accessible entryway to the field.
In this book, Sanjoy Mahajan shows us that the way to master complexity is through insight rather than precision. Precision can overwhelm us with information, whereas insight connects seemingly disparate pieces of information into a simple picture. Unlike computers, humans depend on insight. Based on the author’s fifteen years of teaching at MIT, Cambridge University, and Olin College, The Art of Insight in Science and Engineering shows us how to build insight and find understanding, giving readers tools to help them solve any problem in science and engineering.
To master complexity, we can organize it or discard it. The Art of Insight in Science and Engineering first teaches the tools for organizing complexity, then distinguishes the two paths for discarding complexity: with and without loss of information. Questions and problems throughout the text help readers master and apply these groups of tools. Armed with this three-part toolchest, and without complicated mathematics, readers can estimate the flight range of birds and planes and the strength of chemical bonds, understand the physics of pianos and xylophones, and explain why skies are blue and sunsets are red.
The Art of Insight in Science and Engineering will appear in print and online under a Creative Commons Noncommercial ShareAlike license.
Complex communicating computer systems—computers connected by data networks and in constant communication with their environments—do not always behave as expected. This book introduces behavioral modeling, a rigorous approach to behavioral specification and verification of concurrent and distributed systems. It is among the very few techniques capable of modeling system interactions at a level of abstraction sufficient for the interactions to be understood and analyzed. Offering both a mathematically grounded theory and real-world applications, the book is suitable for classroom use and as a reference for system architects.
The book covers the foundation of behavioral modeling using process algebra, transition systems, abstract data types, and modal logics. Exercises and examples augment the theoretical discussion. The book introduces a modeling language, mCRL2, that enables concise descriptions of even the most intricate distributed algorithms and protocols. Using behavioral axioms and such proof methods as confluence, cones, and foci, readers will learn how to prove such algorithms equal to their specifications. Specifications in mCRL2 can be simulated, visualized, or verified against their requirements. An extensive mCRL2 toolset for mechanically verifying the requirements is freely available online; this toolset has been successfully used to design and analyze industrial software that ranges from healthcare applications to particle accelerators at CERN. Appendixes offer material on equations and notation as well as exercise solutions.
This book offers a general overview of the physics, concepts, theories, and models underlying the discipline of aerodynamics. A particular focus is the technique of velocity field representation and modeling via source and vorticity fields and via their sheet, filament, or point-singularity idealizations. These models provide an intuitive feel for aerodynamic flow-field behavior and are the basis of aerodynamic force analysis, drag decomposition, flow interference estimation, and other important applications. The models are applied to both low speed and high speed flows. Viscous flows are also covered, with a focus on understanding boundary layer behavior and its influence on aerodynamic flows.
The book covers some topics in depth while offering introductions and summaries of others. Computational methods are indispensable for the practicing aerodynamicist, and the book covers several computational methods in detail, with a focus on vortex lattice and panel methods. The goal is to improve understanding of the physical models that underlie such methods. The book also covers the aerodynamic models that describe the forces and moments on maneuvering aircraft, and provides a good introduction to the concepts and methods used in flight dynamics. It also offers an introduction to unsteady flows and to the subject of wind tunnel measurements.
The book is based on the MIT graduate-level course “Flight Vehicle Aerodynamics” and has been developed for use not only in conventional classrooms but also in a massive open online course (or MOOC) offered on the pioneering MOOC platform edX. It will also serve as a valuable reference for professionals in the field. The text assumes that the reader is well versed in basic physics and vector calculus, has had some exposure to basic fluid dynamics, and is somewhat familiar with aerodynamics and aeronautics terminology.
Engineering education in the United States was long regarded as masculine territory. For decades, women who studied or worked in engineering were popularly perceived as oddities, outcasts, unfeminine (or inappropriately feminine in a male world). In Girls Coming to Tech!, Amy Bix tells the story of how women gained entrance to the traditionally male field of engineering in American higher education.
As Bix explains, a few women breached the gender-reinforced boundaries of engineering education before World War II. During World War II, government, employers, and colleges actively recruited women to train as engineering aides, channeling them directly into defense work. These wartime training programs set the stage for more engineering schools to open their doors to women. Bix offers three detailed case studies of postwar engineering coeducation. Georgia Tech admitted women in 1952 to avoid a court case, over objections by traditionalists. In 1968, male students at Caltech argued that nerds needed a civilizing female presence. At MIT, which had admitted women since the 1870s but treated them as a minor afterthought, feminist-era activists pushed the school to welcome more women and take their talent seriously.
In the 1950s, women made up less than one percent of students in American engineering programs; in 2010 and 2011, women earned 18.4% of bachelor’s degrees, 22.6% of master’s degrees, and 21.8% of doctorates in engineering. Bix’s account shows why these gains were hard won.
Modern, complex digital systems invariably include hardware-implemented finite state machines. The correct design of such parts is crucial for attaining proper system performance. This book offers detailed, comprehensive coverage of the theory and design for any category of hardware-implemented finite state machines. It describes crucial design problems that lead to incorrect or far-from-optimal implementations and provides examples of finite state machines developed in both the VHDL and SystemVerilog (the successor of Verilog) hardware description languages.
Important features include: extensive review of design practices for sequential digital circuits; a new division of all state machines into three hardware-based categories, encompassing all possible situations, with numerous practical examples provided in all three categories; the presentation of complete designs, with detailed VHDL and SystemVerilog codes, comments, and simulation results, all tested in FPGA devices; and exercise examples, all of which can be synthesized, simulated, and physically implemented in FPGA boards. Additional material is available on the book’s Website.
Designing a state machine in hardware is more complex than designing it in software. Although interest in hardware for finite state machines has grown dramatically in recent years, there is no comprehensive treatment of the subject. This book offers the most detailed coverage of finite state machines available. It will be essential for industrial designers of digital systems and for students of electrical engineering and computer science.
The technology of mechanized program verification can play a supporting role in many kinds of research projects in computer science, and related tools for formal proof-checking are seeing increasing adoption in mathematics and engineering. This book provides an introduction to the Coq software for writing and checking mathematical proofs. It takes a practical engineering focus throughout, emphasizing techniques that will help users to build, understand, and maintain large Coq developments and minimize the cost of code change over time.
Two topics, rarely discussed elsewhere, are covered in detail: effective dependently typed programming (making productive use of a feature at the heart of the Coq system) and construction of domain-specific proof tactics. Almost every subject covered is also relevant to interactive computer theorem proving in general, not just program verification, as demonstrated through examples of verified programs applied in many different sorts of formalizations. The book develops a unique automated proof style and applies it throughout; even experienced Coq users may benefit from reading about basic Coq concepts from this novel perspective. The book also offers a library of tactics, or programs that find proofs, designed for use with examples in the book. By the end of the book, readers will have acquired the skills needed to reimplement these tactics in other settings. All of the code appearing in the book is freely available online.
Engineering has been an essential collaborator in biological research, and breakthroughs in biology are often enabled by technological advances. Decoding the double helix structure of DNA, for example, only became possible after significant advances in such technologies as X-ray diffraction and gel electrophoresis. Diagnosis and treatment of tuberculosis improved as new technologies—including the stethoscope, the microscope, and the X-ray—developed. These engineering breakthroughs take place away from the biology lab, and many years may elapse before the technology becomes available to biologists. In this book, David Lee argues for concurrent engineering—the convergence of engineering and biological research—as a means to accelerate the pace of biological discovery and its application to diagnosis and treatment. He presents extensive case studies and introduces a metric to measure the time between technological development and biological discovery.
Investigating a series of major biological discoveries that range from pasteurization to electron microscopy, Lee finds that it took an average of forty years for the necessary technology to become available for laboratory use. Lee calls for new approaches to research and funding to encourage a tighter, more collaborative coupling of engineering and biology. Only then, he argues, will we see the rapid advances in the life sciences that are critically needed for life-saving diagnosis and treatment.