Consider a human versus computer chess game. The human tries to win; the computer tries to induce as many bad moves as possible. Or perhaps the human tries to avoid as many bad moves as possible, to keep the game zero-sum.
Normally, when a strong computer program plays a human, once the human makes just one bad move it's all over: the computer will curb-stomp its way to victory from there. In this variant, even after gaining a winning advantage, the computer may deliberately give up that advantage in hopes of inducing more bad moves from the human.
This requires knowing the kinds of positions in which humans tend to make bad moves.
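Here is a minimal sketch of how such move selection might look, assuming two oracles that a real implementation would have to supply: an objective engine evaluation (expected score for the computer, 0 to 1) and a model of how likely the human is to err in the resulting position. All names and the weighting scheme are hypothetical placeholders, not any existing engine's method.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    move: str            # move in some notation, e.g. "Nf3"
    engine_score: float  # objective expected score for the computer, 0..1
    blunder_prob: float  # estimated probability the human errs in reply (assumed model)

def pick_move(candidates, min_score=0.5, trap_weight=0.3):
    """Prefer moves that keep at least a draw in hand (engine_score >= min_score),
    but among those, favor positions where the human is likely to go wrong."""
    safe = [c for c in candidates if c.engine_score >= min_score]
    pool = safe if safe else candidates  # never pick a losing move just to set a trap
    return max(pool, key=lambda c: c.engine_score + trap_weight * c.blunder_prob)

if __name__ == "__main__":
    options = [
        Candidate("Qxd7", 0.95, 0.05),  # clean win, but the human's replies are forced
        Candidate("Rd1",  0.60, 0.70),  # smaller edge, but a position full of traps
    ]
    print(pick_move(options).move)  # -> "Rd1": gives up some advantage to induce errors
```

The interesting design question is the trap_weight trade-off: how much objective advantage the computer is willing to sacrifice per unit of expected human error.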
Objectively, a "bad move" could be defined as one that gives up half a game point or more, e.g., turning a won position into a drawn one, or a drawn one into a loss. Should the definition be expanded to include more subjective bad moves? There is also the practical difficulty of determining whether half a point has actually been lost.
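One way to approximate the half-point test is to convert engine evaluations before and after the move into expected game scores and check whether the drop is at least 0.5. The sketch below uses an Elo-style logistic curve to map centipawns to expected score; the exact curve and the 400-centipawn scale are assumptions, not a standard definition.

```python
def expected_score(centipawns: float) -> float:
    """Convert a centipawn evaluation into an expected game score in [0, 1],
    using an Elo-style logistic approximation (the exact curve is an assumption)."""
    return 1.0 / (1.0 + 10 ** (-centipawns / 400))

def is_bad_move(eval_before_cp: float, eval_after_cp: float, threshold: float = 0.5) -> bool:
    """Flag a move as 'bad' if the expected score drops by at least `threshold`,
    i.e. roughly half a game point or more is given up."""
    return expected_score(eval_before_cp) - expected_score(eval_after_cp) >= threshold

if __name__ == "__main__":
    # A clearly winning position (+300 cp) thrown away into a clearly losing one (-300 cp):
    print(is_bad_move(300, -300))  # True: well over half a point lost
    # A small slip from an equal position (0 cp) to a worse but defensible one (-200 cp):
    print(is_bad_move(0, -200))    # False: roughly a quarter point lost
```

Even this mechanical version runs into the practical difficulty above: the "evaluations" are themselves engine estimates, so whether half a point was really lost depends on how much you trust the engine's curve.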