The Year of Open Science: Computer science

To highlight the Year of Open Science, we spoke to acquisitions editor Elizabeth Swayze about what open access means for the field of computer science.

The Biden-Harris administration declared 2023 the Year of Open Science in the United States, offering an opportunity to advance national open science policy and provide greater and more equitable access to research in key areas of scientific study.

Elizabeth Swayze.

The MIT Press centers open access in much of the work we do; we take pride in making high-quality, well-researched scholarship freely available to the public. In honor of the Year of Open Science, we spoke to Elizabeth Swayze, senior acquisitions editor of computer science, about the impact OA scholarship has had in her field.

“Open access has always been part of the DNA of computer science—going back to open source software and courseware—well ahead of the OA movement that emerged in the 1990s and deepened as the internet became ubiquitous,” Swayze said. “The MIT Press has a rich history of publishing landmark books from thought leaders in the field of computer science. Legacy titles include Structure and Interpretation of Computer Programs (first edition published in 1984, second edition in 1996, JavaScript edition in 2022), Reinforcement Learning (first edition published in 1998, second edition in 2018), and How to Design Programs (first edition published in 2001, second edition in 2017). More recently the computer science list has published Deep Learning (2016), Probabilistic Machine Learning: An Introduction (2022), and Essentials of Compilation (2023). All these books rely on open science to ensure excellence, since the community is part of the process of building these landmark books, from stress-testing code, to catching technical errors, to test-driving drafts in courses.”

Read on to explore several books from Elizabeth’s list, and discover even more computer science titles on our website.


Structure and Interpretation of Computer Programs: JavaScript Edition by Harold Abelson, Gerald Jay Sussman, Martin Henz and Tobias Wrigstad

Since the publication of its first edition in 1984 and its second edition in 1996, Structure and Interpretation of Computer Programs (SICP) has influenced computer science curricula around the world. Widely adopted as a textbook, the book has its origins in a popular entry-level computer science course taught by Harold Abelson and Gerald Jay Sussman at MIT. SICP introduces the reader to the central ideas of computation through a series of mental models. Earlier editions used the programming language Scheme in their program examples; this edition has been adapted to JavaScript.


Reinforcement Learning, Second Edition: An Introduction by Richard S. Sutton and Andrew G. Barto

Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field’s key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and updating coverage of other topics.
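To give a flavor of the reward-maximization idea described above, here is a minimal, illustrative sketch (not code from the book) of an epsilon-greedy agent on a simple multi-armed bandit, a setting Sutton and Barto use to introduce the field. All names and parameter values here are invented for illustration.

```python
import random

def epsilon_greedy_bandit(true_means, steps=1000, epsilon=0.1, seed=0):
    """Illustrative sketch: estimate action values on a k-armed bandit.

    The agent mostly exploits its current best estimate, but with
    probability `epsilon` it explores a random action, updating its
    value estimates with incremental sample averages.
    """
    rng = random.Random(seed)
    k = len(true_means)
    q = [0.0] * k   # estimated value of each action
    n = [0] * k     # times each action has been chosen
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(k)                    # explore
        else:
            a = max(range(k), key=lambda i: q[i])   # exploit
        reward = rng.gauss(true_means[a], 1.0)      # noisy reward from environment
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]              # incremental sample-average update
        total_reward += reward
    return q, total_reward

q, total = epsilon_greedy_bandit([0.2, 0.5, 1.0])
```

After enough steps, the agent's estimates `q` should rank the third arm (true mean 1.0) highest, capturing the exploration-exploitation trade-off at the heart of reinforcement learning.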


Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville

Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. 
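The "hierarchy of concepts" idea can be sketched with a tiny hand-wired network (an illustration, not code from the book): a hidden layer computes simple features of the input, and the output layer combines them into a concept—here, XOR—that no single layer could express alone. The weights below are chosen by hand for illustration.

```python
def relu(v):
    """Elementwise rectified linear activation."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sum per unit, plus bias."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def tiny_net(x1, x2):
    # Hidden layer: two simple features of the raw inputs.
    hidden = relu(dense([x1, x2], [[1.0, 1.0], [1.0, 1.0]], [0.0, -1.0]))
    # Output layer: combines the simpler features into a more complex concept (XOR).
    (out,) = dense(hidden, [[1.0, -2.0]], [0.0])
    return out

outputs = [tiny_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs == [0.0, 1.0, 1.0, 0.0], i.e. XOR of the two inputs
```

Deep learning stacks many such layers and learns the weights from data rather than setting them by hand.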

This book is available in an open access edition on the authors’ website.


Probabilistic Machine Learning: An Introduction by Kevin P. Murphy

This book offers a detailed and up-to-date introduction to machine learning (including deep learning) through the unifying lens of probabilistic modeling and Bayesian decision theory. The book covers mathematical background (including linear algebra and optimization), basic supervised learning (including linear and logistic regression and deep neural networks), as well as more advanced topics (including transfer learning and unsupervised learning). End-of-chapter exercises allow students to apply what they have learned, and an appendix covers notation.
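As a small illustration of the probabilistic lens the book takes (a sketch, not code from the book), logistic regression models p(y=1|x) directly, fits its parameters by maximum likelihood, and then classifies by picking the more probable class. The data and hyperparameters below are invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=500):
    """Illustrative sketch: 1-D logistic regression by gradient ascent
    on the log-likelihood (stochastic updates, one point at a time)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)   # model's probability that y = 1 given x
            w += lr * (y - p) * x    # gradient of the log-likelihood w.r.t. w
            b += lr * (y - p)        # gradient of the log-likelihood w.r.t. b
    return w, b

xs = [-2.0, -1.0, 1.0, 2.0]
ys = [0, 0, 1, 1]
w, b = fit_logistic(xs, ys)
# Decision rule: predict the class with the higher probability under the model.
preds = [1 if sigmoid(w * x + b) > 0.5 else 0 for x in xs]
```

On this toy separable data the fitted model recovers the labels; the same probabilistic recipe—model, likelihood, decision rule—scales up to the deep networks the book covers.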


Sign up for our newsletter to hear more updates from the Press.