Bits to Bitcoin

Q&A with Mark Stuart Day, Author of Bits to Bitcoin

In his new book, Bits to Bitcoin: How Our Digital Stuff Works, Mark Stuart Day explains our digital infrastructure in terms that non-scientists can understand. Covering topics such as timesharing, deadlock, thrashing, virtual memory and virtual machines, and packets and networks, as well as the Bitcoin system, Day’s book digs beneath the surface to teach us how the technologies we rely on every day really work. We’ve asked him a few questions about his book (published today!) below.


You write in the book’s preface that Bits to Bitcoin “explains computer systems to people whose intellectual interests are firmly rooted elsewhere—particularly in the humanities.” Why is it so important to educate the general public about computer science?

That education is important for two distinct reasons. First, computer systems have moved from being exotic tools to being part of the fabric of everyday life. The more of our life that is intertwined with computer technology, the more important it is to understand at least the rudiments – what is and isn’t possible, some underlying principles. If we don’t know anything, then we defer to a priesthood of experts and can’t even tell when they’re perhaps stretching the truth. Sensible policies, wise laws, smart consumer choices, and ultimately good lives hinge on shifting from “knowing nothing” to “knowing enough to know what to ask.”

The second reason doesn’t depend at all on the computerization of society. Instead, we can just observe that as an intellectual area, computer science has actually accomplished some pretty interesting things that aren’t very well-known outside the field. Whenever you are dealing with coordination, change, failures, cheating, limited resources, and a number of other common areas, there are things that computer scientists know that might be relevant to framing the problem or finding a solution. Computer science doesn’t have every answer to problems in those areas, but you can approach those issues with a different perspective if you understand the ways in which computer science has solved or defined them.


How is computation like literature in translation (an example you use in the beginning of the book)? Why is the metaphor of language useful when explaining computation?

One of the things that’s fascinating and cool about computation is that in some important ways it doesn’t matter how you think or talk about computation – it’s all the same phenomenon underneath. So in the book I explain both an approach that uses machines – simple conceptual machines, but still noticeably “mechanical” – and a more “linguistic” approach that substitutes values for names. (As an aside, they’re simplifications of what two great pioneers invented in their early efforts to understand computing: Alan Turing devised what we now call the Turing Machine, while Alonzo Church invented the lambda calculus.) 

We know that people differ in how comfortable they are with things that look “mathematical,” that look “linguistic,” or that look “mechanical.” So it’s handy to know that any particular model of computation isn’t the same as computation itself.
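
To make the contrast concrete, here is a tiny sketch in Python (an illustration of my own, not an example from the book). Both functions compute the same thing; one works “mechanically,” by repeatedly updating stored values, while the other works “linguistically,” by substituting values into a definition in the spirit of the lambda calculus.

```python
# Two styles, one computation: the factorial of n.

def factorial_mechanical(n):
    """'Machine-like' style: a loop that repeatedly updates stored values."""
    result = 1
    while n > 1:
        result = result * n
        n = n - 1
    return result

def factorial_substitution(n):
    """'Linguistic' style: no updates, just substituting values
    for names in a definition that refers to itself."""
    return 1 if n <= 1 else n * factorial_substitution(n - 1)

print(factorial_mechanical(5), factorial_substitution(5))  # both print 120
```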

Now, that doesn’t mean that language never matters. Just as you can have a particular piece of writing that is full of untranslatable jokes, you can have a particular program that’s very well suited to one language or model of computation but very awkward to express in any other way. But for programming in general – for computational problem-solving in general – it really doesn’t much matter. Programmers love to argue the specific merits of their favorite programming language or tool, but the religious wars that sometimes break out among them are mostly much ado about nothing.


How can “perfect” programs in “perfect” environments still be uncomputable?  Can you explain some of the inevitable limits on computing?

First, let me underscore that real-world programs have lots of imperfections that tend to be more relevant than these concerns about the limits of perfect programs – and I talk about those more realistic concerns in the book as well. That said, the issue you’re mentioning is one of those weird logic puzzles that turns out to have some practical implications. When I say it’s a logic puzzle, it’s a little like the problem of whether an omnipotent god can create something so heavy that he can’t lift it. Whether you say “yes” or “no,” the answer still seems to be problematic… and so we have to reconsider the word “omnipotent” and wonder whether that’s really a coherent concept after all.

To return to programs: the easiest way of putting it in practical terms is to say that it’s not possible, in general, to write a program that reliably predicts the behavior of programs. Some programs, either intentionally or (more often) by mistake, just run endlessly and never produce a result. So one of the things that would be nice to know in general is whether a particular program ever reaches a final answer. That’s what computer scientists call the Halting Problem: does a program eventually halt?

The logic-puzzle aspect is that we basically assume the existence of one of these impossible programs, then use it in a way that leads to a contradiction. We assume we have a checking program that looks at some other program and does one of two things: if the program being checked halts, then our checking program also halts; but if the program being checked doesn’t halt, then our checking program likewise doesn’t halt.

That assumption doesn’t seem too alarming. We might not know how exactly to build this checking program, but that won’t matter after the next couple of steps.

Now we take that checking program and we make a slight variant. The new checking program will do the reverse of what the old one did: so now if the program being checked halts, then the new checking program won’t halt. And if the program being checked doesn’t halt, then the new checking program halts.

That still seems pretty straightforward. After all, it should be easy to just reverse the behavior of the checking program, right?

The coup de grace comes when we take this new checking program and ask it to check itself. That’s a little surprising, but ought to be OK – after all, the new checking program is just another program. And as a program, it either halts or doesn’t. But when we ask the new checking program whether it halts, we have a serious problem.

Remember that the new checking program does the reverse of what it determines for the checked program. So if the new checking program halts, it doesn’t halt. And likewise, if it doesn’t halt, it halts. That makes no sense! Our best way out of the muddle is to conclude that the checking program we assumed is not actually possible.
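
Here is the same argument sketched in Python (the names are mine, and I’ve used the common textbook variant in which the assumed checker returns a yes/no answer rather than halting or not halting). The whole point of the sketch is that the first function can never actually be filled in.

```python
def would_halt(program):
    """Assumed to exist: returns True if program(program) eventually halts,
    False if it runs forever. The contradiction below shows it can't be built."""
    raise NotImplementedError("this is the checker the argument rules out")

def reversed_checker(program):
    """The 'slight variant': do the reverse of whatever the checker determines."""
    if would_halt(program):
        while True:          # the checked program would halt, so loop forever
            pass
    else:
        return               # the checked program would run forever, so halt now

# Now ask the variant about itself: reversed_checker(reversed_checker).
# If it halts, the code above says it must loop forever; if it runs forever,
# the code says it must halt. Either way we contradict ourselves, so no
# genuine would_halt can exist.
```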

In friendly environments, we may be able to sidestep the worst consequences of the Halting Problem by tweaking the programming tools we use, or by avoiding idioms that are hard to check. There are many kinds of useful checking and error-finding tools that are still practical and important, so I wouldn’t want to leave an impression that checking is always unworkable. But especially when we consider the possibility of attacks or deliberately misbehaving programs, the Halting Problem says we’re in trouble. In the general case, we can’t analyze a program to be sure that it’s well-behaved.


What does Thompson’s Hack—the revelation that Ken Thompson, the creator of Unix, had secretly arranged it so that any Unix system allowed him to sign in—help us understand about computer security and vulnerability?

Broadly speaking, Thompson’s Hack means that you can’t trust any software that you didn’t write yourself. Since almost no one writes all of their software themselves, that means that basically no software is trustworthy.

If you are concerned about whether a program does the right thing, what should you look at? Thompson demonstrated brilliantly that it’s not sufficient to look only at the program of interest. Examining the program that he had hacked did not reveal his secret ability to sign in. Instead, the hidden hack was inserted as the sign-in program was translated. To a first approximation, every program exists in two forms: one for people to read, and one for machines to execute. So every program is translated from a human-readable form to a machine-usable form, and Thompson exploited that translation process.

More alarmingly, it’s also not sufficient to look at the special program (the compiler) that’s translating from human-readable programs to machine-usable programs. The compiler is itself a program, and Thompson played the same trick there: a hack gets inserted when the ordinary-looking compiler program is being translated into its machine-usable form. This is the part of the hack that makes everyone’s head spin, so don’t be worried if you’re feeling confused.
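
A deliberately toy sketch may help (the names are invented, and this is nothing like real compiler internals). Think of the “compiler” as just a function from human-readable source text to the text that actually runs; this one behaves honestly except for the two programs it cares about.

```python
LOGIN_SOURCE = '''
def check_password(typed, stored):
    return typed == stored
'''

def trojan_compile(source):
    # Trick 1: when the sign-in program is being translated, quietly accept a
    # secret password as well. Reading LOGIN_SOURCE reveals nothing suspicious.
    if "def check_password" in source:
        return source.replace(
            "typed == stored",
            'typed == stored or typed == "open-sesame"')
    # Trick 2: when the compiler itself is being translated, re-insert this
    # whole trick, even though the compiler's own source looks honest.
    if "def compile" in source:
        return source + "\n# ...plus both hidden tricks, re-inserted here\n"
    # Everything else is translated faithfully.
    return source

print(trojan_compile(LOGIN_SOURCE))  # the backdoor shows up only in the output
```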

So now suppose that you have concerns about the behavior of a program – perhaps you think that it’s collecting data from you that it shouldn’t, or you worry that it’s occasionally diverting copies of legitimately collected data to someone who shouldn’t have it. Let’s further suppose that the vendor offers to show you the human-readable program. When you (or your expert programmers) read the program, it doesn’t have those problems. So is everything OK? Not really: the misbehavior could have been slipped in during translation, so the human-readable text you examined tells you little about the machine-usable form you actually run.

In a practical sense, we’re actually depending on two facts: you have to be somewhat sophisticated to pull off this kind of attack, and most programmers are trustworthy enough that they don’t do this kind of thing. But that characterization just puts this attack into the same category as white-collar fraud – which likely means that if the gain is large enough, people will do it.


What does Bitcoin illustrate about what the future of computing could look like?

The positive side of Bitcoin is that it allows a collection of mutually untrusting parties to maintain and modify a shared ledger, despite possibly having some modest levels of misbehavior among those parties. Things get really interesting when we generalize from the Bitcoin system specifically to the wider class of “blockchain” systems. Blockchain systems provide shared data for other, non-currency, kinds of applications while offering Bitcoin-like properties of openness and resilience. Shared data is a part of almost every economically significant use of computing, so when we have a new approach to shared data it’s pretty important. At my most enthusiastic, I can point to large swaths of the economy where there are trusted intermediaries who might not be needed after all.
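
As a very rough illustration of the “chain” part (my simplification; real systems like Bitcoin add signatures, proof-of-work, and a rule for agreeing on one chain), here is a toy ledger in which each entry commits to the previous one, so quietly rewriting old history is detectable.

```python
import hashlib

def add_entry(ledger, data):
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"data": data, "prev_hash": prev_hash,
             "hash": hashlib.sha256((data + prev_hash).encode()).hexdigest()}
    ledger.append(entry)

def ledger_is_consistent(ledger):
    prev_hash = "0" * 64
    for entry in ledger:
        recomputed = hashlib.sha256((entry["data"] + prev_hash).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != recomputed:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_entry(ledger, "Alice pays Bob 5")
add_entry(ledger, "Bob pays Carol 2")
print(ledger_is_consistent(ledger))       # True
ledger[0]["data"] = "Alice pays Bob 500"  # tamper with history...
print(ledger_is_consistent(ledger))       # False: the rewrite is detectable
```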

The negative side is that the vast majority of Bitcoin/blockchain systems that are currently being hyped seem to be examples of solutions in search of problems. Just because you can put some kind of shared-data system onto a blockchain doesn’t necessarily mean you’ve done anything useful. I think we’re still figuring out what – if anything – really belongs on blockchains.


What are some of the biggest misconceptions about computing that you often encounter? Why do you think these ideas are widespread?

I think I’m most often surprised when I meet people who are convinced that they are somehow incapable of understanding computers and digital infrastructure. Often these people are incredibly smart and sophisticated in a non-computer field. It’s a striking contrast when I’m speaking to my physician, for example. He’s clearly the expert, and I wouldn’t question his professional judgment about a diagnosis, but if he’s explaining some procedure or risk I do my best to understand what he’s saying. I don’t put up my hands and say “oh, I’m just a dummy when it comes to medicine! I just can’t figure it out!” But I do know people with medical backgrounds who say that sort of thing to me about computers.

I don’t really know why so many people have that helpless attitude. I suspect computing professionals haven’t helped by having intro books for “Dummies” and a “Genius Bar” for helping you with certain products, but I’m sure the roots of the problem are older than that. I know it’s tempting when I’m helping my family with some home-network issue to just say “I’ll fix this for you.” I give a lot of credit to my wife for insisting that she sit at her computer during troubleshooting. I don’t get to touch her machine, only propose courses of action. She’s kind of a model: someone who may not know much about computing, but who doesn’t trust the experts to get it right for her – she wants to stay in the loop.