AlphaGo, probably because of its Monte Carlo tree search algorithm, seems to win very "cleanly" against humans once it judges that it is already winning: no complex fighting. They look like teaching games, according to one commentator.
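One way to see why Monte Carlo evaluation could produce this behavior: the search backs up only win/loss outcomes, not the margin of victory, so a half-point win and a thirty-point win look identical, and the program is free to pick whichever winning line is safest. A minimal sketch of that idea, with invented rollout results rather than anything from AlphaGo itself:

```
from statistics import mean

# Toy rollout results for two candidate moves from a winning position.
# Each entry is the final margin (in points) of one simulated game;
# the numbers are made up for illustration.
rollout_margins = {
    "safe_simplifying_move": [0.5, 1.5, 0.5, 2.5, 0.5],       # small but consistent wins
    "greedy_fighting_move":  [30.5, -3.5, 25.5, -1.5, 40.5],  # big wins, some losses
}

def mcts_value(margins):
    """Monte Carlo value: average of win/loss outcomes (1 = win, 0 = loss).
    The size of the margin is deliberately ignored."""
    return mean(1.0 if m > 0 else 0.0 for m in margins)

for move, margins in rollout_margins.items():
    print(move, "win rate:", mcts_value(margins), "average margin:", mean(margins))

# The safe move scores 1.0 (it wins every rollout) even though its margins are tiny,
# so a win-rate-maximizing search prefers it over the higher-margin but riskier line.
```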
We do not see this in chess programs; indeed, a common criticism is that they make "computer"-ish moves that lead to unnecessarily complicated and seemingly risky lines. Two thoughts:
Modify chess programs so that they do find the "cleanest" win when they are ahead. This might be impossible given the nature of chess compared to go (囲碁). In chess, a great many positions are objectively drawn, so a computer perhaps ought to seek to maintain whatever heuristic advantage it thinks it has, never yielding an inch (that inch might end up being the difference between a win and a draw), even if that requires going through a complicated line whose variations it has already calculated exhaustively. We see computer-chess-like behavior when AlphaGo plays itself: extremely complex fighting over the whole board, lasting the whole game. We can make the analogy that the programs in both games do this when they evaluate the position as "close" to a draw.
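A hedged sketch of the first thought above, "find the cleanest win when ahead", as a move-selection rule: once the engine's own evaluation says it is clearly winning, stop maximizing the evaluation and instead pick, among moves that stay above the winning threshold, the one that leaves the opponent the least counterplay. The scores and the complexity measure here are made-up placeholders, not any real engine's interface:

```
# Sketch of a "clean win" move-selection rule, not a real engine API.
# Each candidate carries two hypothetical numbers a search would normally provide:
# an estimated win probability after the move, and a rough "complexity" measure
# (say, how many reasonable replies the opponent retains).
candidates = [
    {"move": "Qxb7", "win_prob": 0.93, "complexity": 18},  # grabs material, messy
    {"move": "Rd1",  "win_prob": 0.90, "complexity": 5},   # quiet, keeps control
    {"move": "h3",   "win_prob": 0.74, "complexity": 3},   # too passive, advantage slips
]

WINNING_THRESHOLD = 0.85  # "we are clearly ahead" cutoff (arbitrary)

def choose_move(candidates):
    winning = [c for c in candidates if c["win_prob"] >= WINNING_THRESHOLD]
    if winning:
        # Already winning: keep the win probability above the threshold and
        # minimize complexity instead of squeezing out the biggest evaluation.
        return min(winning, key=lambda c: c["complexity"])
    # Not clearly winning: fall back to the usual rule, maximize the evaluation.
    return max(candidates, key=lambda c: c["win_prob"])

print(choose_move(candidates)["move"])  # -> Rd1, the "clean" continuation
```

The objection in the paragraph above is exactly that in chess the threshold is hard to trust: if 0.90 really means "probably a draw with best play", trading evaluation for simplicity gives back the inch that decides the game.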
Very strong human chess players (e.g., Capablanca) also seem to achieve these clean wins. This makes sense in that humans think probabilistically (like Monte Carlo) and are lazy, seeking to avoid calculation. One could evaluate and compare human players by their ability to do this; it might be a measure of their strength and of their deep understanding of the game. Playing like Tal, steering towards positions with high variance, can be very successful and win many games (if you can calculate well), but it is not so interesting as a measure of strength.
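The Capablanca/Tal contrast can be phrased as a choice over the whole distribution of outcomes rather than just its mean: a "clean" player wants a high expected result with low variance, while a Tal-style player accepts variance (and trusts his calculation) to create winning chances. A toy illustration with invented outcome samples:

```
from statistics import mean, pstdev

# Hypothetical outcome samples (1 = win, 0.5 = draw, 0 = loss) for two plans,
# as a human or a Monte Carlo search might estimate them; the numbers are invented.
plans = {
    "quiet_technical_plan":  [1, 1, 0.5, 1, 1, 0.5, 1, 1],  # steady, low variance
    "speculative_sacrifice": [1, 0, 1, 1, 0, 1, 0, 1],      # sharp, high variance
}

for name, outcomes in plans.items():
    print(f"{name}: expected score {mean(outcomes):.2f}, spread {pstdev(outcomes):.2f}")

# A Capablanca-style choice maximizes expected score while keeping the spread small;
# a Tal-style choice may accept a similar (or even lower) expected score with a much
# larger spread, betting that superior calculation tilts the sharp lines in his favor.
```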