  • Jennifer Lieberman, author of Power Lines, shares some thoughts on our dependence on electricity and what it means in the wake of Hurricane Irma.

    Posted at 09:00 am on Sat, 16 Sep 2017 in current affairs, environment, science, technology
  • The MIT Press is proud to announce its partnership with the March for Science, a series of events across the country planned to coincide with Earth Day on April 22, 2017. The Press’s commitment to publishing rigorous, cutting-edge scientific research through its books and journals aligns with the March’s mission to promote “publicly communicated science as a pillar of human freedom and prosperity.” We are pleased to lend our voice to this important cause.

    For more information, please visit the March for Science page.

    Posted at 10:15 am on Wed, 19 Apr 2017 in MIT, science
  • What do computers, cells, and brains have in common? Computers are electronic devices designed by humans; cells are biological entities crafted by evolution; brains are the containers and creators of our minds. But all are, in one way or another, information-processing devices. The power of the human brain is, so far, unequaled by any existing machine or known living being. In our final post celebrating Brain Awareness Week, Arlindo Oliveira discusses how advances in science and technology could enable us to create digital minds.

    The field of Artificial Intelligence began more than six decades ago, with Alan Turing's 1950 work on computers and intelligence and a famous 1956 conference at Dartmouth College, where many well-known researchers met, including John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel, and Herbert Simon. Before these events, the idea that computers could display intelligent behavior had been addressed only in very vague, abstract, and philosophical terms. In the ensuing decades, Artificial Intelligence has seen several springs of hope and winters of discontent, as positive results alternated with negative ones. At times, artificially intelligent systems seemed just around the corner; at other times, the whole enterprise seemed doomed by its sheer complexity.

    Posted at 09:00 am on Fri, 17 Mar 2017 in artificial intelligence, neuroscience, science
  • Clifford Siskin discusses his new book, System: The Shaping of Modern Knowledge, which explains the long history of "blaming the system," from Galileo to the political economy of the early nineteenth century to today.

    The book opens with Galileo’s “message from the stars,” which is depicted on the jacket. How did Galileo and Enlightenment thinkers contribute to the knowledge of our own computational universe? 

    In the histories I tell about system, I pair Galileo and Francis Bacon as helping to launch system on its upward trajectory at the turn into the seventeenth century. System was first used in English in the same year—1610—that Galileo first trained his improved spyglass on Jupiter and discovered that the world (read "universe") was a world full of systems: Jupiter, like earth, was the center of its own lunar system—and both of those systems were part of a larger one: a solar system. System only became, to use Galileo’s own word, “really” interesting when it became plural—when systems started showing up inside of each other. Seventy-five years later, that interest led Isaac Newton to choose system as the genre to convey the philosophical impact of his discoveries. Empowered by that decision—a tale I tell in detail—system rapidly became the primary form for explanation during the eighteenth-century Enlightenment in the West. The linear growth of systems in print was paced by the publication of what I call Master Systems during the mid and late decades—Systems that attempted to include all previous systems—followed by a takeoff in specialized systems (of education, of the income tax) at the century's end. By that point, system and the world were bound together in a powerful explanatory framework centered at the core of modern knowledge. When they changed, they changed in relationship to each other. Thus our new notion of a computational universe—as I outline in the Coda of the book—combines a new kind of system, algorithmic information processing, with new ways of comprehending the world, as something that systematically computes itself, perhaps into an infinite number of selves.

    Posted at 12:30 pm on Mon, 19 Dec 2016 in humanities, science
  • We spoke with Rutsuko “Ruth” S. Nagayama, Professor of Psychology at Shizuoka Eiwa Gakuin University, for the latest Spotlight on Science Q&A. Here, she reflects on an article she co-authored in 2007 for PRESENCE: Teleoperators and Virtual Environments. “The Uncanny Valley: Effect of Realism on the Impression of Artificial Human Faces” has been one of the journal’s Top 5 Most Downloaded Articles in the past year. Read the article for free on our SOS page.

    Can you talk a little bit about "the uncanny valley?"

    The uncanny valley is a hypothesis about the psychological reaction we have when we see a robot; it was proposed by the roboticist Masahiro Mori in 1970. Mori argued that although it would be a good thing to make a robot's appearance more humanlike, people could feel uncomfortable with robots that were almost (but not perfectly) humanlike.

    A graph used by Mori to explain his hypothesis is well known. In his graph, the horizontal axis represents how much artificial objects (robots, dolls, prosthetics, etc.) resemble real human beings. The vertical axis represents a kind of impression score of artificial objects as rated by human observers.

    Mori predicted that the more closely artificial objects resembled real humans, the more comfortable our impression of them would be. But when their resemblance came very close to that of real humans, we would have negative impressions of them. Mori depicted the occurrence of these negative impressions as a valley in the graph. This portion of the graph is called the uncanny valley. Mori warned that an obviously robotic appearance is acceptable, but that making robots highly humanlike runs the risk of falling into the valley.
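    The shape of Mori's graph can be sketched numerically. The function and constants below are illustrative assumptions, not taken from Mori's paper or the article: affinity rises with human likeness, dips sharply near (but short of) full likeness, then recovers.

    ```python
    import math

    def affinity(likeness: float) -> float:
        """Toy impression score for an artificial object; likeness in [0, 1].

        The linear baseline and Gaussian dip are made-up parameters chosen
        only to reproduce the qualitative valley shape Mori described.
        """
        baseline = likeness  # comfort grows with resemblance...
        valley = 0.9 * math.exp(-((likeness - 0.85) ** 2) / 0.002)  # ...until the dip
        return baseline - valley

    # The valley: an almost-human object scores below a plainly robotic one,
    # while a (near-)perfect human likeness climbs back out.
    print(affinity(0.5) > affinity(0.85))   # obviously robotic beats almost-human
    print(affinity(1.0) > affinity(0.85))   # full human likeness recovers
    ```

    Both comparisons print True under these assumed parameters, mirroring Mori's warning that a design sitting near, but not at, full human likeness lands in the valley.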

    Posted at 01:30 pm on Mon, 05 Dec 2016 in science, spotlight on science
  • In honor of Ada Lovelace Day, we look back at last year's essay ("Changing the Face of Computing—One Stitch at a Time") by Yasmin Kafai and Jane Margolis about the legacy of the pioneering British mathematician who became the first computer programmer.

    As we celebrate Ada Lovelace Day, we should be reminded that one of the first computers in the nineteenth century, the “Analytical Engine,” was based on the design of the Jacquard loom, for weaving fashionable complex textiles of the times. It was fashion that inspired British mathematician Ada Lovelace to write the code for the loom that wove the complex patterns that were in vogue. She also wrote a most beautiful sentence linking computing and fashion: “We may say most aptly that the Analytical Engine weaves algebraic patterns just as the Jacquard loom weaves flowers and leaves.” And yet, the historical and intimate relationship between fashion and computer science has largely been forgotten and ignored, even as Lovelace’s pioneering spirit lives on in today’s runways.

    Posted at 12:00 pm on Tue, 11 Oct 2016 in computer science, engineering, math, science, technology
  • We spoke with Jeremy Trevelyan Burman, PhD, for this month’s Spotlight on Science Q&A. Dr. Burman was recently named to a tenure-track position supporting the new Reflecting on Psychology graduate program at the University of Groningen in the Netherlands.

    Posted at 11:00 am on Thu, 22 Sep 2016 in journals, science
  • The daguerreotype, invented in France, came to America in 1839. It was, as Sarah Kate Gillespie's book The Early American Daguerreotype shows, something wholly and remarkably new: a product of science and innovative technology that resulted in a visual object. We're celebrating World Photo Day with an excerpt from The Early American Daguerreotype.

    Originally a French invention, daguerreotyping—a photographic process that produces extremely detailed images—reached American shores in the fall of 1839. A daguerreotype is a direct-positive image on a silvered copper plate. Historically, the plate was polished until it had a mirror-like surface, then was treated with light-sensitive chemicals. The plate was then fitted into a camera and exposed to the subject. Once exposed, the plate was developed above a box of mercury fumes, and the image was fixed in a bath of hyposulfite of soda. The finished product was then washed and dried. Because the surface remained sensitive, it was placed under a plate of glass and usually put in a case.

    Posted at 10:00 am on Fri, 19 Aug 2016 in art, history, photography, science, technology
  • This month’s Spotlight on Science looks at the intersection of synesthesia and art. Carol Steen discusses her own synesthesia and her journey to understand it, how synesthesia has impacted her art, and the increase in synesthesia awareness and research. Her article, “Visions Shared: A Firsthand Look into Synesthesia and Art” (Leonardo, June 2001) was one of the earliest first-hand accounts of synesthesia and its role in art, and her story helped inspire Wendy Mass's award-winning novel, A Mango-Shaped Space. Steen has since co-written a chapter for the Oxford Handbook of Synesthesia, and continues to create art from her synesthetic visions. Read the article for free on our SOS page.

    You write that you first learned about synesthesia in 1993 when Richard E. Cytowic was in the process of bringing it back into mainstream science. Your article was published seven years later, in 2001. In 2003, author Wendy Mass wrote a young adult novel about an artistic and synesthetic girl named Mia, called A Mango-Shaped Space. Ten years later, Oxford University Press published the Oxford Handbook of Synesthesia, and just last year, your article was cited in an extensive paper titled "Color Synesthesia: Insight into perception, emotion, and consciousness," published in the journal Current Opinion in Neurology. How has the rise in awareness of synesthesia, and the accompanying increase in research about it, impacted you? Has it affected your art, or your artistic process, at all?

    In 1993, we didn't have computers. Well, a few people did, but for most of us computers didn't exist. More importantly, even if you had a computer, you were still isolated. Early in 1995 I would make long trips by subway to the one branch of my college where they had a computer lab. In a very small dark room on the top floor of an old NYC building were about 20 small-screened computers. I could use them if a class was not being held, or if, with permission and providing I was very quiet, there was an available seat. I remember one day I sat in this room and learned I could ask a search engine for information about synesthesia. I did and waited for the answer. It gave me 35 “hits”—17 of those were duplicates.

    Posted at 09:00 am on Tue, 16 Aug 2016 in art, journals, science
  • On July 20, 1969, workers called in sick and children stayed home from school. Crowds gathered around televisions in department store windows to watch the Apollo 11 moon landing. Digital Apollo: Human and Machine in Spaceflight by David Mindell examines the design and execution of each of the six Apollo moon landings, drawing on transcripts and data telemetry from the flights, astronaut interviews, and NASA’s extensive archives. In honor of the anniversary of the first moon landing, the following is an excerpt from Digital Apollo that describes the high tension of that fateful day.

    On a July day in 1969, after a silent trip around the far side of the moon, the two Apollo spacecraft reappeared out of the shadows and reestablished contact with earth. The command and service module (CSM) (sometimes simply ‘‘command module’’) was now the mother ship, the capsule and its supplies that would carry the astronauts home. The CSM continued to orbit the moon, with astronaut Michael Collins alone in the capsule. ‘‘Listen, babe,’’ Collins reported to ground controllers at NASA in Houston, ‘‘everything’s going just swimmingly. Beautiful.’’ His two colleagues Neil Armstrong and Edwin ‘‘Buzz’’ Aldrin had just separated the other spacecraft, the fragile, spidery lunar module (LM, pronounced ‘‘lem’’), nicknamed Eagle, from the command module. This odd, aluminum balloon, packed with instruments and a few engines, would carry the two men down to the lunar surface.

    Posted at 09:00 am on Wed, 20 Jul 2016 in history, science, technology
  • We’re back with another installment of Spotlight on Science. Dr. Beatrice A. Golomb (University of California, San Diego) talks about her research into how the chemical compound Coenzyme Q10 could benefit Gulf War veterans suffering from Gulf War Illness (GWI). Her article is among the most popular from the journal Neural Computation over the last year, according to Altmetric Explorer. Read the article for free on our SOS page.

    How did you first become familiar with Gulf War Illness (GWI)? Has there been a significant increase recently in awareness of this condition?

    I first heard about the condition in the mid-1990s, around the time reports came out on the condition from the Presidential Advisory Committee (PAC) and Institute of Medicine (IOM). I was immediately concerned that the demands for evidence differed radically for postulated physiological vs. psychological causes. Where postulated “organic” (physiological) causes were considered, the bar was high: absence of evidence for a causal role was construed as evidence of absence of a role. Moreover, they hadn’t looked hard for evidence—e.g. omitting consideration of animal studies, the primary setting in which controlled exposure to toxins is allowed. In contrast, for hypothesized stress and psychological causes, mere suggestion of a role was deemed sufficient proof. No evidence was required that those with more stress were more likely to become ill. No demands were made for evidence that people in other historical settings with similar psychological stress, without the chemical stress, had become similarly ill, etc.*

    Posted at 10:45 am on Thu, 30 Jun 2016 in journals, science
  • Can “moral bioenhancement”—using technological or pharmaceutical means to boost the morally desirable and remove the morally problematic—bring about a morally improved humanity? In The Myth of the Moral Brain, Harris Wiseman draws on insights from philosophy, biology, theology, and clinical psychology to make the case that moral functioning is immeasurably complex, mediated by biology but not determined by it. Harris Wiseman discusses his book, which considers an integrated approach to moral enhancement.

    Does humanity need moral enhancement?

    Regardless of how optimistic one’s view of human nature is, it is pretty clear that humans still do terrible things to one another. On first sight, looking at the subject matter on the most superficial level, the idea that one should create some magical technology able to prevent the horrors that humans perpetrate sounds great. Why would one not apply such a technology? The problem with this view is that no one has posed any kind of even vaguely plausible idea of how such a fantastical technology could be devised, even in principle. And there is one very good reason why no realistic prospects have been proposed. Given what we know about the vagueness of the relationship between human biology and the way in which sophisticated moral functioning is enacted, the prospect of some such globally-affective technology is pure fantasy. There are simply no “biological levers” that are clear or reliable enough to improve humanity’s moral powers in some grand and salvatory sense. In contrast, the sorts of moral enhancement that are entirely possible are cruder interventions for individual persons with some morally-related difficulties (e.g. addictions, pathological violence, some affective disorders, disturbing sexual aberrations), though in a largely medical context, and not without important side-effects for the person being treated. Within this scope moral enhancement might have some viable uses.

    Posted at 03:15 pm on Tue, 14 Jun 2016 in bioethics, science
  • A new, open-access alternative for academic publishing

    The MIT Media Lab and the MIT Press today announced the launch of the Journal of Design and Science (JoDS). This online, open-access journal provides a new model for academic publishing by encouraging broad-ranging discourse that challenges traditional academic silos and publishing practices. It will be curated by a team led by MIT Media Lab Director Joi Ito.

    Posted at 04:16 pm on Wed, 24 Feb 2016 in design, science
  • In honor of Ada Lovelace Day, an international celebration of the achievements of women in science, technology, engineering, and math (STEM), Yasmin Kafai and Jane Margolis reflect on the legacy of the British mathematician, who is famously regarded as the first female computer programmer.

    Posted at 09:00 am on Tue, 13 Oct 2015 in computer science, engineering, math, science, technology




Books, news, and ideas from MIT Press

The MIT PressLog is the official blog of the MIT Press. Founded in 2005, the Log chronicles news about MIT Press authors and books. The MIT PressLog also serves as a forum for our authors to discuss issues related to their books and scholarship. Views expressed by guest contributors to the blog do not necessarily represent those of the MIT Press.