There are problems that are hard to solve. There are problems that are easy to solve if you don’t care about speed. At university the algorithms course was one of my favourites. Finding the right algorithm can mean being able to solve the problem much faster. Maybe you’ll have to discover it yourself but there are a lot of good algorithms out there so always do your research. This could be a little series looking at individual algorithms.

I follow Sebastian Lague’s YouTube channel and he has made a series of videos about chess:

- Evaluate and alpha-beta pruning
- Chess AI
- Making a Better Chess Bot
- Tiny Chess Bot Challenge
- Chess Bot Challenge Results

He covers a lot more than this but one of the most important algorithms used by chess engines is the “recursive descent minimax algorithm with alpha-beta pruning”. Let’s have a look at that one part at a time.

## Minimax algorithm

The minimax algorithm was originally made for zero-sum game theory. That applies to games where one player’s win is the other player’s loss and vice-versa.

People have long talked about piece values in chess. In computer chess these can be used as the basis of an evaluation function which takes the board state, the pieces and their positions, and turns out a single number. This can be arranged such that, say, a good board for white is a negative number and a good board for black is positive. The minimax algorithm tries to minimise the evaluation for white and maximise the evaluation for black.
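As a sketch of what such an evaluation function might look like (the material-only count, the piece values and the `EvaluateMaterial` name are my own illustration, not from the videos), using the convention above where a good board for white is negative:

```
#include <cctype>
#include <string>

// Hypothetical material-only evaluation (an illustration, not from the
// videos). Each piece contributes its classic value; white pieces
// (upper case) count negative and black pieces (lower case) positive,
// matching the convention that a good board for white is negative.
float EvaluateMaterial(const std::string& pieces) {
    float evaluation = 0.0f;
    for (char piece : pieces) {
        float value = 0.0f;
        switch (std::tolower(static_cast<unsigned char>(piece))) {
            case 'p': value = 1.0f; break;
            case 'n':
            case 'b': value = 3.0f; break;
            case 'r': value = 5.0f; break;
            case 'q': value = 9.0f; break;
        }
        evaluation += std::isupper(static_cast<unsigned char>(piece)) ? -value : value;
    }
    return evaluation;
}
```

Pawns, knights, bishops, rooks and queens get the classic 1/3/3/5/9 values; kings are never captured so contribute nothing here.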

This is a recursive algorithm which searches through legal moves and counter moves that each player can make. The goal is not to find the move that can lead the player to the best board. The goal is to find the best move given that the other player will always oppose any advantage.

It’s normally expressed as something like this:

```
float Evaluate(const Board& board, size_t depth, bool isBlack) {
    if (depth == 0 || board.IsCheckmate()) {
        return board.GetEvaluation();
    }
    float evaluation;
    if (isBlack) {
        // Black maximises. Note lowest(), not min(): for floats,
        // min() is the smallest positive value.
        evaluation = std::numeric_limits<float>::lowest();
        for (const auto& newBoard : board.GetBoardsAfterMove(isBlack)) {
            evaluation = std::max(evaluation, Evaluate(newBoard, depth - 1, !isBlack));
        }
    }
    else {
        // White minimises.
        evaluation = std::numeric_limits<float>::max();
        for (const auto& newBoard : board.GetBoardsAfterMove(isBlack)) {
            evaluation = std::min(evaluation, Evaluate(newBoard, depth - 1, !isBlack));
        }
    }
    return evaluation;
}
```

and is called like this:

```
Evaluate(board, depth, isBlack);
```

This is a fairly simple tree search. At each level all the possible moves are considered and the best evaluation is found. There is the additional complication that each level swaps between considering one player and the other. For white this involves picking the minimum evaluation; for black, the maximum.

By calling this algorithm for each potential move a player can find which boards are favourable and which are not. However it’s not fast: the entire tree is searched each time. Even if white could only ever make 10 moves and black only 10 counter moves, there would be a million positions to consider after just 3 move-counter pairs.
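The growth is simple arithmetic and easy to check; this small helper (the name `TreeSize` is mine) just raises the moves-per-ply to the number of plies:

```
#include <cstddef>

// A full game tree with a fixed number of moves per ply has
// movesPerPly^plies leaf positions; one move-counter pair is two plies.
std::size_t TreeSize(std::size_t movesPerPly, std::size_t plies) {
    std::size_t positions = 1;
    for (std::size_t ply = 0; ply < plies; ply++) {
        positions *= movesPerPly;
    }
    return positions;
}
```

`TreeSize(10, 6)` gives 1,000,000: three move-counter pairs is six plies.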

## Alpha-beta pruning

We can use alpha-beta pruning to skip some of the search. Once we have evaluated one of our opponent’s moves we can skip branches that are better for our opponent and, therefore, worse for us. If we find a move that is worse for our opponent and, therefore, better for us it becomes the new move to beat.

```
float Evaluate(const Board& board, size_t depth, float alpha, float beta, bool isBlack) {
    if (depth == 0 || board.IsCheckmate()) {
        return board.GetEvaluation();
    }
    float evaluation;
    if (isBlack) {
        evaluation = std::numeric_limits<float>::lowest();
        for (const auto& newBoard : board.GetBoardsAfterMove(isBlack)) {
            evaluation = std::max(evaluation,
                Evaluate(newBoard, depth - 1, alpha, beta, !isBlack));
            alpha = std::max(evaluation, alpha);
            if (evaluation >= beta) {
                break;
            }
        }
    }
    else {
        evaluation = std::numeric_limits<float>::max();
        for (const auto& newBoard : board.GetBoardsAfterMove(isBlack)) {
            evaluation = std::min(evaluation,
                Evaluate(newBoard, depth - 1, alpha, beta, !isBlack));
            beta = std::min(evaluation, beta);
            if (evaluation <= alpha) {
                break;
            }
        }
    }
    return evaluation;
}
```

and is called like this:

```
Evaluate(board, depth,
    std::numeric_limits<float>::lowest(), std::numeric_limits<float>::max(), isBlack);
```

I think the logic here is slightly hard to follow. Think about what happens when it is looking at the moves for black. It keeps track of the best evaluation it has found for black, and it will return that evaluation. It also keeps track of `alpha`, the best evaluation here or in earlier branches. If the best evaluation reaches `beta` it knows that white would never let this branch be taken. As the branch won’t be taken it doesn’t have to do any more work.
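A toy example (my own, deliberately not chess) may make the cutoff concrete. The maximising player picks one of three groups and the minimising player then picks a leaf within it; counting visited leaves shows pruning getting the same answer for less work:

```
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// Plain minimax over a two-level toy tree: the maximiser picks a
// group, the minimiser picks a leaf in it. Visits every leaf.
float MinimaxRoot(const std::vector<std::vector<float>>& groups, std::size_t& visited) {
    float best = std::numeric_limits<float>::lowest();
    for (const auto& group : groups) {
        float worst = std::numeric_limits<float>::max();
        for (float leaf : group) {
            visited++;
            worst = std::min(worst, leaf);
        }
        best = std::max(best, worst);
    }
    return best;
}

// The same search with pruning: once the minimiser has driven a group
// at or below alpha, the maximiser will never pick that group, so the
// rest of its leaves are skipped.
float AlphaBetaRoot(const std::vector<std::vector<float>>& groups, std::size_t& visited) {
    float alpha = std::numeric_limits<float>::lowest();
    for (const auto& group : groups) {
        float worst = std::numeric_limits<float>::max();
        for (float leaf : group) {
            visited++;
            worst = std::min(worst, leaf);
            if (worst <= alpha) {
                break;  // cutoff: this group can no longer beat alpha
            }
        }
        alpha = std::max(alpha, worst);
    }
    return alpha;
}
```

With the groups `{{3, 12, 8}, {2, 4, 6}, {14, 5, 2}}` both searches return 3, but the pruned version visits only 7 of the 9 leaves: once the second group’s first leaf is 2, below the 3 already guaranteed, the rest of that group is irrelevant.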

How much can be skipped is very dependent on the order in which we examine the moves. If we’re lucky we only have to look at the first move and can skip all the rest. If we’re unlucky we have to look at them all. Is there some way to choose the order so that we get lucky?

## Recursive descent

Let’s ignore alpha-beta pruning briefly.
We have an easy way to control how long a search will take, how deep we let it go.
If searching to a depth of 10 took 10 seconds then searching to a depth of 11 might take 20 seconds.
The deeper we go the slower it will be.
Conversely the shallower we go the faster it will be.
Going to a depth of 9 might only take 5 seconds.
The evaluations we get won’t be *as good* but they should still be *pretty good*.

Now let’s think about alpha-beta pruning again. We have a technique which could drastically speed up our search if only we could guess the right order to search in. We have a technique to quickly get a pretty good set of evaluations. Let’s combine them.

Recursive descent (more commonly called iterative deepening) makes a series of calls to `Evaluate`, each deeper than the last. The evaluations from the previous calls are used to choose the order in which to consider moves.

```
float Evaluate(BoardEvaluations& boardEvaluations,
        const Board& board, size_t depth, float alpha, float beta, bool isBlack) {
    if (depth == 0 || board.IsCheckmate()) {
        return board.GetEvaluation();
    }
    float evaluation;
    if (isBlack) {
        evaluation = std::numeric_limits<float>::lowest();
        const auto newBoards = board.GetBoardsAfterMove(isBlack);
        for (const auto& newBoard : ReorderBoards(newBoards, boardEvaluations)) {
            evaluation = std::max(evaluation,
                Evaluate(boardEvaluations, newBoard, depth - 1, alpha, beta, !isBlack));
            alpha = std::max(evaluation, alpha);
            boardEvaluations[newBoard] = evaluation;
            if (evaluation >= beta) {
                break;
            }
        }
    }
    else {
        evaluation = std::numeric_limits<float>::max();
        const auto newBoards = board.GetBoardsAfterMove(isBlack);
        for (const auto& newBoard : ReorderBoards(newBoards, boardEvaluations)) {
            evaluation = std::min(evaluation,
                Evaluate(boardEvaluations, newBoard, depth - 1, alpha, beta, !isBlack));
            beta = std::min(evaluation, beta);
            boardEvaluations[newBoard] = evaluation;
            if (evaluation <= alpha) {
                break;
            }
        }
    }
    return evaluation;
}
```

and is called like this:

```
for (size_t depth = 0; depth < maxDepth; depth++) {
    Evaluate(boardEvaluations, board, depth,
        std::numeric_limits<float>::lowest(), std::numeric_limits<float>::max(), isBlack);
}
```
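I’ve not shown `ReorderBoards`; a possible sketch (using an `int` as a stand-in for a board, and assuming the evaluations map holds results from the previous, shallower pass) is a stable sort that tries the best-looking boards first:

```
#include <algorithm>
#include <map>
#include <vector>

// Hypothetical sketch of ReorderBoards, with an int standing in for a
// Board. Boards evaluated in the previous, shallower search are tried
// first, best evaluation (from the maximiser's point of view) leading;
// boards never seen before go to the back.
std::vector<int> ReorderBoards(std::vector<int> boards,
                               const std::map<int, float>& evaluations) {
    std::stable_sort(boards.begin(), boards.end(), [&](int a, int b) {
        const auto evalA = evaluations.find(a);
        const auto evalB = evaluations.find(b);
        if (evalA == evaluations.end()) {
            return false;  // a has no stored evaluation: never promote it
        }
        if (evalB == evaluations.end()) {
            return true;   // a has one, b doesn't: a goes first
        }
        return evalA->second > evalB->second;
    });
    return boards;
}
```

Here “best” is from the maximiser’s point of view; the minimiser would sort ascending instead, and boards with no stored evaluation go last.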

As the previous evaluations were pretty good we will probably be lucky. As we’re probably lucky a great deal of each tree can be skipped. Even though we make *more* calls to `Evaluate`, each call is going to be faster, so we take less time overall.

Assuming you want your chess engine to win, this is a situation where it is worth spending time on optimisation. A faster algorithm lets you search a bigger tree in the same amount of time. A bigger tree means looking more moves ahead.

There’s a **lot** more to chess algorithms than this but it’s an important part.

## Code presentation

While writing this I had to decide how to present the code. The examples I was cribbing from on Wikipedia aren’t something that I’d use in a project. However they might make things easier for anyone who wants to follow those links. In the end I stuck close to the original examples as they emphasise the `min` and `max`.

If I were making my own project then I would:

- Reduce repetition. To make `alpha` and `beta` work they swap back and forth between calls.
- Use explicit types.
- Combine the board and player.
- Store board evaluations as a member variable.

```
Evaluation Evaluator::Get(
        const GameState& state, size_t depth, Evaluation alpha, Evaluation beta) {
    if (depth == 0 || state.IsCheckmate()) {
        return state.GetEvaluation();
    }
    const auto player = state.GetActivePlayer();
    const auto pick = player.GetPickFunction();
    const auto cutoff = player.GetCutoffFunction();
    auto evaluation = player.GetWorstEvaluation();
    const auto newStates = state.GetBoardsAfterMove();
    for (const auto& newState : ReorderBoards(newStates)) {
        // alpha and beta swap between calls: the other player's alpha
        // is this player's beta.
        evaluation = pick(evaluation, Get(newState, depth - 1, beta, alpha));
        alpha = pick(evaluation, alpha);
        m_BoardEvaluations[newState] = evaluation;
        if (cutoff(evaluation, beta)) {
            break;
        }
    }
    return evaluation;
}
```
