The approach almost all Chess-playing programs take is to use some variant of the *minimax* algorithm. This is actually a very simple algorithm. It works with the concepts of board states and moves. A board state is the position of all pieces on the board, and a move is a transition from one state to another (for example, moving your pawn two steps forward will transition your board into a state similar to but distinct from the state the board was in before the move). From any board state, it’s quite simple to list all moves that a player can take; on average you can take thirty-five or so different moves, and on the first turn, you have twenty moves to choose from. Sometimes you can take only one or two different moves—when this happens, you usually have a problem. Minimax presumes that you can store multiple board states in memory or, in other words, that you can keep lots of copies of the game. (This is not a problem on any modern computer, as a Chess board state can be represented in just a few bytes.)

Minimax also assumes that you have a way of evaluating the value (or utility, or goodness) of a board state. For now, you can think of the value of a board state as how many pieces you’ve got left on the board, minus how many pieces your opponent has. If the resulting number is positive, you’re probably ahead in the game.
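The piece-counting evaluation just described can be sketched in a few lines of Python. The board representation here (a flat list of one-letter piece codes, uppercase for white and lowercase for black) is an assumption made for this sketch, not a standard Chess format:

```python
# A sketch of the piece-counting evaluation described above, using the
# conventional material values (pawn 1, knight/bishop 3, rook 5, queen 9).
# The board format -- uppercase letters for white pieces, lowercase for
# black -- is an assumption made for this illustration.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def evaluate(board):
    """Material difference from white's perspective.
    Positive means white is probably ahead."""
    score = 0
    for piece in board:
        value = PIECE_VALUES[piece.lower()]
        score += value if piece.isupper() else -value
    return score

# White has an extra rook: king + rook + pawn vs. king + pawn.
print(evaluate(["K", "R", "P", "k", "p"]))  # -> 5
```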

Here is how minimax works. Imagine you play from the perspective of the white player, and want to know the best move to take from a particular board state. You start by listing all possible moves you could make from that state. You then simulate taking each move and store all of the resulting board states. If you were very shortsighted, you could stop here; simply estimate the value of each resulting board state (for example, by counting pieces) and choose the move that leads to the board state with the highest value. That would be the *max* part of the minimax algorithm.

That would indeed be shortsighted, however, because a move that brings an immediate benefit (for example, by capturing one of the opponent’s pieces) might give the opponent an opening to strike back with one or several captures of her own, making it disastrous in the slightly less short term. Everyone who has played more than one game of Chess knows this. Therefore, for each of the board states resulting from your possible first moves, you list all possible moves by the opponent and evaluate the resulting board states. The crucial difference from the first step is that while there you wanted to find the move that was best for you, in the second step you assume that the opponent will take the move that is most advantageous for her, that is, the worst move for you. If your opponent can capture your piece when it’s her turn, she will. That’s the *min* part of the minimax algorithm. Putting them together, the best move for you is the one that minimizes the maximum score your opponent can reach in her turn. In practice, you assign to each of your possible moves the lowest value of the board that your opponent can reach through any of her replies in the second turn.
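The two-ply reasoning above can be sketched as follows. The functions `moves`, `apply`, and `evaluate` are hypothetical stand-ins for a real game’s move generator, move application, and board evaluation; the min-then-max structure is the point here:

```python
# For each of your moves, assume the opponent replies with the move
# that is worst for you (the min), then pick the move whose worst
# case is best (the max). `moves`, `apply`, and `evaluate` are
# placeholders assumed for this sketch.

def best_move_two_ply(state, moves, apply, evaluate):
    """Pick the move that maximizes your worst-case (min) outcome."""
    best_move, best_value = None, float("-inf")
    for my_move in moves(state):
        after_mine = apply(state, my_move)
        # The opponent picks the reply that minimizes your evaluation.
        worst_case = min(
            evaluate(apply(after_mine, reply))
            for reply in moves(after_mine)
        )
        if worst_case > best_value:
            best_move, best_value = my_move, worst_case
    return best_move
```

On a toy “game” where states are numbers, moves add to the state, and the evaluation is the number itself, this picks the move whose worst-case outcome is highest.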

You are now looking two turns ahead. However, what of those clever strategies where you let your enemy make a capture on her turn just so you yourself can make a bigger capture on your next turn? Well, you could simply let the minimax algorithm run one step further and assume that the opponent assumes that you will make the move that benefits you most. This could go on forever, or until you have reached the end of the game in your simulations. In fact, if you simulate all the way to the end of the game, exploring all moves and all responses to those moves, you will find the provably optimal strategy, one that cannot be improved on. In other words, you will play Chess perfectly.
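To see what simulating all the way to the end looks like, here is minimax run to completion on a game small enough to solve exactly. The game, a pile of stones where each player removes one or two and whoever takes the last stone wins, is a toy chosen for this sketch, not something from the text:

```python
# Minimax run all the way to the end of a game, on a toy game small
# enough to allow it: a pile of stones, each player removes one or two,
# and whoever takes the last stone wins. (The toy game is this sketch's
# assumption; Chess is far too large for exhaustive search.)

def value(stones, my_turn=True):
    """+1 if the player whose perspective we take can force a win."""
    if stones == 0:
        # The last stone was just taken, so whoever is NOT to move won.
        return -1 if my_turn else 1
    outcomes = [value(stones - take, not my_turn)
                for take in (1, 2) if take <= stones]
    # Max on your turn, min on the opponent's: the minimax principle.
    return max(outcomes) if my_turn else min(outcomes)

# Piles that are multiples of three are lost for the player to move.
print(value(3))  # -> -1
print(value(4))  # -> 1
```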

But you are not going to simulate all the way to the end of the game because you don’t have the time to do so. If there are thirty possible moves from one board state (a typical number for midgame Chess), then each turn you simulate will multiply the number of board states you need to investigate by thirty. In other words, you need to evaluate 30^t states, where *t* is the number of turns you look ahead. For looking five steps ahead, that’s about twenty-four million. The number of states you need to investigate quickly approaches the number of atoms in the earth and other such ridiculous numbers. Searching to the end of a Chess game is therefore impossible for any computer we will be able to construct in the foreseeable future. That is why you need a good way of estimating the value of a board state, because you will have to be content with looking only a few turns ahead.
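The arithmetic is easy to check: with a branching factor of thirty, each extra turn of lookahead multiplies the work by thirty.

```python
# Growth of the search with lookahead depth, assuming a branching
# factor of about 30 (typical for midgame Chess, as noted above).
branching = 30
for turns in (1, 2, 5, 10):
    print(turns, branching ** turns)
# At five turns this is 30**5 = 24,300,000 states, the "about
# twenty-four million" mentioned above.
```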

In practice, what a Chess-playing agent does is search to a given depth and then evaluate the board states it reaches even though those are not typically win or loss states. Luckily, in Chess a simple board state estimation such as the piece difference between black and white generally works fairly well, though many more complex methods have been developed.
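Putting the pieces together, a depth-limited minimax that falls back on a board estimate at its search horizon can be sketched like this. The functions `moves`, `apply`, and `evaluate` are hypothetical placeholders for a real engine’s components:

```python
# A sketch of depth-limited minimax: recurse until the depth budget
# runs out (or no moves remain), then fall back on the board-state
# estimate. `moves`, `apply`, and `evaluate` are stand-ins assumed
# for this illustration.

def minimax(state, depth, maximizing, moves, apply, evaluate):
    legal = moves(state)
    if depth == 0 or not legal:
        # The search horizon: estimate the board instead of searching on.
        return evaluate(state)
    values = [minimax(apply(state, m), depth - 1, not maximizing,
                      moves, apply, evaluate)
              for m in legal]
    # Your turns maximize; the opponent's turns minimize.
    return max(values) if maximizing else min(values)
```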

Minimax is called a *tree-search algorithm*, not because it helps you look for the most delicious cherries in a cherry tree, but because what it does can be understood as growing a branching tree as it searches for the best move—a tree that grows upside down. Think of it this way: the root of the tree is the original board state, the one you want to find the best move for. All of the possible moves from that state become branches that grow from the root. At the end of each branch is the board state that move leads to. Of course, from each of these states, a number of moves are possible, and these can in turn be visualized as branches from the end of the previous branch … and so it continues until you reach those board states where you do not try any more moves but instead estimate the value of the state. Those are called “leaves,” in this somewhat imperfect analogy. (Computer scientists are not famous for their analogies.) The number of moves possible at each state is called the “branching factor.”

Of course, there have been a number of improvements to this method since Alan Turing himself first suggested it in the 1940s. There are ways of “pruning” the search so that fewer board states are investigated, there are ways of concentrating on the most promising move sequences, and there are much better ways of estimating the value of a board. But the minimax principle remains. It is the core of almost all successful Chess-playing programs.
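One well-known example of such “pruning” is alpha-beta pruning: keep track of the best outcome each side can already guarantee, and abandon any branch that cannot change the final decision. It returns the same values as plain minimax while visiting fewer states. The sketch below makes the same assumptions as before: `moves`, `apply`, and `evaluate` are hypothetical stand-ins for a real engine’s components.

```python
# A sketch of alpha-beta pruning. `alpha` is the best value the
# maximizer can already guarantee, `beta` the best (lowest) value the
# minimizer can guarantee; a branch falling outside [alpha, beta]
# cannot affect the final choice and is cut off.

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply, evaluate):
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for m in legal:
            value = max(value, alphabeta(apply(state, m), depth - 1,
                                         alpha, beta, False,
                                         moves, apply, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # the opponent will never allow this branch
    else:
        value = float("inf")
        for m in legal:
            value = min(value, alphabeta(apply(state, m), depth - 1,
                                         alpha, beta, True,
                                         moves, apply, evaluate))
            beta = min(beta, value)
            if beta <= alpha:
                break  # you would never choose to enter this branch
    return value
```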

**Excerpted from Playing Smart: On Games, Intelligence, and Artificial Intelligence by Julian Togelius. Copyright 2018, Massachusetts Institute of Technology.**