If you watched my last video, you know that the game of Go is based on a simple
set of rules, much less complicated than chess. But why couldn't Go computers beat the best
professional Go players until recently, while chess computers have been doing it for nearly two
decades? The short answer is that they take fundamentally different approaches to how
they think when playing. So, how does a chess computer actually think?
For a human being, the game of chess involves a great deal of high-level abstract reasoning,
pattern recognition, conscious thought and sometimes even psychology. By accumulating
experience game after game, people develop and improve their skills and strategies. Some
might argue that we should program computers to imitate human thinking. In the past, many
early researchers of artificial intelligence agreed with that, and made the mistake
we now call the AI fallacy: they assumed that the only way to perform non-routine tasks
of invention, creativity and judgment to the standard of human experts was to replicate
the approach of human specialists. Chess computers do none of that. In 1997, Deep Blue, IBM's
supercomputer, beat Garry Kasparov, the reigning world chess champion, in a six-game series. The
machine didn't have intuition or experience like Kasparov did; it won through sheer
processing power and massive data storage capability. To put it simply, instead of thinking,
Deep Blue calculated its way to victory, an approach also known as brute force.
In this method, a chess engine builds a search tree consisting of every possible future move
by both players. In this example tree, white circles represent the chess engine's moves,
while black squares represent the opponent's moves. A ply is just one move by one player.
The number of children at a position is called the branching factor.
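To make the walkthrough that follows concrete, here is one way such an example tree could be written down in Python. The nested-list layout and the scores on the leaves are purely illustrative; a real engine generates moves from the board position instead of storing a fixed tree.

```python
# A toy game tree for illustration only. Each level of nesting is one ply;
# the leaves are final positions scored from the engine's point of view:
# 1 = win, 0 = draw, -1 = loss.
example_tree = [
    [[1, 0], [0, -1]],    # engine move A, then black's replies, then leaves
    [[0, 0], [-1, 1]],    # engine move B
    [[0, 1], [1, -1]],    # engine move C
]
```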
Once the tree is created, the chess computer uses the Minimax algorithm to decide which move
is best, based on the assumption that the opponent thinks along similar lines and will try
its best to win. This algorithm was first developed by John von Neumann in 1928, and
adapted for chess by Claude E. Shannon in 1950. Back to the example tree: each final
position has been assigned a number based on the result for the computer, which is 1 for a win, 0 for a draw and
-1 for a loss. The chess engine works upward from the bottom ply: for each position
black could leave it in, it chooses its best reply, the maximum. At the next
level up, it assumes that black will choose the position that is worst for white, the
minimum. In the end, the computer takes the maximum of the three top-level positions and makes
that move. After black makes its move, the computer builds a new tree and goes through
this process again, rinse and repeat until the game ends.
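As a rough sketch of what that bottom-up process looks like in code, here is a minimal Minimax over the toy tree above. Real engines work on board positions and move generators rather than a pre-built list, so treat this only as an illustration of the idea.

```python
def minimax(node, maximizing):
    """Return the best score reachable from this node, assuming the opponent
    also plays optimally (the core Minimax assumption)."""
    if not isinstance(node, list):                 # leaf: a scored final position
        return node
    child_scores = [minimax(child, not maximizing) for child in node]
    return max(child_scores) if maximizing else min(child_scores)

# From the root it is the engine's move, so it picks the child whose minimax
# value (with black minimizing at the next ply) is highest.
best_move = max(range(len(example_tree)),
                key=lambda i: minimax(example_tree[i], maximizing=False))
print(best_move)   # index of the engine's best first move in the toy tree
```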
Programmers immediately realized a huge problem with Minimax: it's impossible to search the
entire tree of a chess game, where the average branching factor is approximately 35 and the
average game length is about 80 plies. That means the estimated size of the search tree for chess
would be about 10^123. If this number sounds incomprehensible to you, just remember there are only about
10^75 atoms in the universe.
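For anyone who wants to sanity-check that figure, the estimate just raises the branching factor to the power of the game length; a back-of-the-envelope version:

```python
import math

branching_factor = 35      # average legal moves per position
game_length_plies = 80     # average game length in plies

# Full tree size ~ 35 ** 80; take log10 to get the exponent (about 123.5).
print(game_length_plies * math.log10(branching_factor))
```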
This problem is commonly solved by using a fixed-depth search, that is, limiting how far ahead
the chess engine will look. The engine then analyzes each position in the last ply to decide whether
that position is good or bad using a static evaluation function. This is one of the main
factors distinguishing good chess engines from bad ones, as the good ones usually have
a much more accurate and sophisticated function. For example, a mediocre chess engine might
just compare the number of pieces each side has, while a better one might apply a weight
to each type of piece, because a queen is obviously much more valuable than a pawn. However,
no matter how complicated the function gets, each position is eventually assigned a single number
representing how good that position is.
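A minimal sketch of such a static evaluation, assuming only the classic textbook piece values rather than any particular engine's weights, might look like this:

```python
# Illustrative piece weights (pawn = 1, knight/bishop = 3, rook = 5, queen = 9);
# real engines also score king safety, mobility, pawn structure, and more.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate_material(white_pieces, black_pieces):
    """Score a position from white's point of view: positive favours white,
    negative favours black."""
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

# Example: white has a queen and two pawns, black a rook and three pawns.
print(evaluate_material(["Q", "P", "P"], ["R", "P", "P", "P"]))   # 11 - 8 = 3
```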
Programmers then realized another serious problem, the horizon effect, first described
by Hans Berliner. Imagine the chess engine only searches to a depth of 8 plies. At ply
8, the computer's queen captures an enemy pawn. That sounds great until you realize the
pawn is defended by another pawn, so the opponent will take the computer's queen on the next move,
at ply 9. The chess engine can't see that, because it stopped searching at ply 8. Programmers
can't simply increase the depth limit to avoid this, because there will always be
a horizon, no matter how deep the search goes. An algorithm called "Quiescence Search" was invented to
tackle this problem: after reaching the final ply, the chess engine enters a special search
mode that only expands certain types of moves, and only finishes when it reaches a quiet,
relatively stable position.
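In code, the idea can be sketched roughly as below. `capture_moves`, `make_move` and `evaluate_position` are hypothetical helpers standing in for a real engine's move generator and evaluation function, so this won't run until such helpers are supplied; it only shows the shape of the technique.

```python
def quiescence(position, maximizing):
    """Past the fixed depth limit, keep expanding only 'noisy' moves
    (captures here) until the position is quiet, so a capture on the last
    ply cannot hide an immediate recapture just over the horizon."""
    stand_pat = evaluate_position(position)        # score if we stop searching now
    captures = capture_moves(position)             # hypothetical move generator
    if not captures:                               # quiet position: use the static score
        return stand_pat
    scores = [quiescence(make_move(position, m), not maximizing)
              for m in captures]
    best = max(scores) if maximizing else min(scores)
    # The side to move can always decline the captures, so keep stand_pat too.
    return max(best, stand_pat) if maximizing else min(best, stand_pat)
```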
There is still another problem. The fixed-depth search reduces the strain on processing power,
but it also compromises the quality of decisions due to the lack of depth. To increase the
search depth and look further ahead, programmers use the Alpha-Beta pruning algorithm to lower
the effective branching factor. Basically, it allows the computer to discard many branches of the tree,
because it can see early on that the branch it's about to check is worse than
another branch it has already checked. Good Alpha-Beta pruning can significantly reduce the branching
factor, from around 30 down to only 5.
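Here is a sketch of Minimax with Alpha-Beta pruning, again written against the toy nested-list tree from earlier rather than a real board representation:

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with Alpha-Beta pruning. alpha is the best score the maximizer
    is already guaranteed, beta the best the minimizer is guaranteed; once
    alpha >= beta, the remaining children cannot change the outcome and the
    rest of the branch is discarded."""
    if not isinstance(node, list):        # leaf: a scored final position
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:             # cutoff: prune the remaining siblings
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Same answer as plain Minimax on the toy tree, just with fewer nodes visited.
best_move = max(range(len(example_tree)),
                key=lambda i: alphabeta(example_tree[i], maximizing=False))
```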
Fixed-depth Minimax, together with Alpha-Beta pruning and Quiescence search, has been the cornerstone
of almost all chess engines, from the famous 1997 Deep Blue to modern ones such as Stockfish,
Komodo and Houdini. Nowadays, thanks to new algorithmic developments combined with advanced
hardware, Stockfish, the strongest chess engine, needs only an iPhone instead of a tower full
of CPUs to beat any chess grandmaster. That said, this doesn't mean computers are becoming
intelligent like humans. Their dominance in chess and other board games only proves that
machines have become fast enough to exceed the playing capacity of the best human players.
Go computers might be an exception, but that's a different story, which I hope to cover
in another video. As one of Deep Blue's creators said, "Assume that we were playing Kasparov
at the World Trade Center, and 9/11 hit. Kasparov would've run like hell. We would've run like
hell. Deep Blue would just sit there... computing."