Kerby Anderson
Yesterday, I talked about robots and wanted to follow up with some perspective on how artificial intelligence can involve independent thinking and autonomous action. There are reasons to believe that AI and robots will learn and think in ways we might not predict. Let me illustrate this with the game Go.
Go is an ancient East Asian game played on a nineteen-by-nineteen grid with black and white stones. The goal is to surround your opponent’s stones with yours; once you do, you take them off the board. It is far more complex than chess. After only a few moves, there are roughly 200 quadrillion (2 x 10^17) possible configurations.
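To see why that number grows so fast, here is a rough back-of-the-envelope sketch (not from the original article), assuming average branching factors of about 35 legal moves per turn in chess and about 250 in Go; the exact figures vary by position.

```python
# Rough comparison of how the move tree explodes in chess versus Go.
# The branching factors below are commonly cited averages, assumed here
# for illustration only; they are not taken from the article.

CHESS_BRANCHING = 35   # typical legal moves available in a chess position
GO_BRANCHING = 250     # typical legal moves available in a Go position

def positions_after(plies: int, branching: int) -> int:
    """Approximate number of move sequences after a given number of plies."""
    return branching ** plies

for plies in (2, 4, 6, 8):
    chess = positions_after(plies, CHESS_BRANCHING)
    go = positions_after(plies, GO_BRANCHING)
    print(f"After {plies} moves: chess ~{chess:.1e}, Go ~{go:.1e}")
```

By eight moves the Go tree is already past the 10^19 mark, which is why simply enumerating continuations stops being an option.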
When computers beat chess masters, they used a brute-force method, crunching through enormous numbers of possible moves. That is not possible with Go. So the engineers at DeepMind built AlphaGo to learn by studying 150,000 games played by human experts. Then it played against copies of itself.
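What follows is a minimal sketch of that two-phase idea, imitation followed by self-play, using a toy move-counting model; the names and the simplified learning rule are my own illustration, not DeepMind’s code or training procedure.

```python
# Toy sketch of the two training phases described above: (1) imitate moves
# from recorded expert games, then (2) keep improving through self-play.
# All names and the learning rule are hypothetical placeholders.

class ToyPolicy:
    """Stand-in for a move-prediction model: it just counts observed moves."""
    def __init__(self):
        self.move_counts = {}

    def learn(self, position, move):
        key = (position, move)
        self.move_counts[key] = self.move_counts.get(key, 0) + 1

    def choose(self, position, legal_moves):
        # Prefer the move seen most often in this position.
        return max(legal_moves, key=lambda m: self.move_counts.get((position, m), 0))

def supervised_phase(policy, expert_games):
    # Phase 1: watch recorded games and imitate the experts move by move.
    for game in expert_games:
        for position, move in game:
            policy.learn(position, move)

def self_play_phase(policy, rounds, legal_moves):
    # Phase 2: play against yourself and reinforce the moves that get chosen.
    # (The real system weights this by which side eventually won the game.)
    for _ in range(rounds):
        position = "start"
        for _ in range(5):
            move = policy.choose(position, legal_moves)
            policy.learn(position, move)
            position = position + "/" + move

# Tiny demonstration with made-up positions and moves:
policy = ToyPolicy()
supervised_phase(policy, [[("start", "corner"), ("start/corner", "approach")]])
self_play_phase(policy, rounds=3, legal_moves=["corner", "approach", "center"])
print(policy.choose("start", ["corner", "approach", "center"]))  # -> "corner"
```

In the real AlphaGo, the model is a deep neural network and self-play feeds a reinforcement-learning update rather than a simple counter, but the overall loop is the same: imitate, then improve against yourself.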
In 2016, the engineers organized a five-game match in South Korea against Lee Sedol, a longtime world champion of Go. AlphaGo won the first game, but it was the second game that had everyone talking. The machine made a series of moves that made no sense to the human experts watching. Commentators told viewers it was “a strange move” and that AlphaGo had made “a mistake.”
But the world champion knew something wasn’t right. He took a very long time before making his next move. Before long, it was obvious that AlphaGo had won again, and Go strategy had been rewritten right before everyone’s eyes.
Later versions, such as AlphaGo Zero, dispensed with human game records entirely, learning only by playing against themselves, and developed their own strategies. This illustrates both the power and the peril of artificial intelligence.
This post originally appeared at https://pointofview.net/viewpoints/ai-thinking/