The Creativity Code. Marcus du Sautoy
and early 1980s. I certainly remember wasting a huge amount of time playing the likes of Pong, Space Invaders and Asteroids on a friend’s Atari 2600 console. The console was one of the first whose hardware could play multiple games that were loaded via a cartridge. It allowed a whole range of different games to be developed over time. Previous consoles could only play games that had been physically programmed into the units.
One of my favourite Atari games was called Breakout. A wall of coloured bricks was at the top of the screen and you controlled a paddle at the bottom that could be moved left or right using a joystick. A ball would bounce off the paddle and head towards the bricks. Each time it hit a brick, the brick would disappear. The aim was to clear the bricks. The yellow bricks at the bottom of the wall scored one point. The red bricks on top got you seven points. As you cleared blocks, the paddle would shrink and the ball would speed up to make the game play harder.
We were particularly pleased one afternoon when we found a clever way to hack the game. If you dug a tunnel up through the bricks on the edge of the screen, once the ball made it through to the top it bounced back and forward off the top of the screen and the upper high-scoring bricks, gradually clearing the wall. You could sit back and watch until the ball eventually came back down through the wall. You just had to be ready with the paddle to bat the ball back up again. It was a very satisfying strategy!
Hassabis and the team he was assembling also spent a lot of time playing computer games in their youth. Their parents may be happy to know that the time and effort they put into those games did not go to waste. It turned out that Breakout was a perfect test case to see if the team at DeepMind could program a computer to learn how to play games. It would have been a relatively straightforward job to write a program for each individual game. Hassabis and his team were going to set themselves a much greater challenge.
They wanted to write a program that would receive as an input the state of the pixels on the screen and the current score and set it to play with the goal of maximising the score. The program was not told the rules of the game: it had to experiment randomly with different ways of moving the paddle in Breakout or firing the laser cannon at the descending aliens of Space Invaders. Each time it made a move it could assess whether the move had helped increase the score or had had no effect.
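The loop described above can be sketched in a few lines. This is an illustrative setup, not DeepMind's actual interface: the names `env`, `agent`, `reset` and `step` are assumptions, but they capture the constraint that the program sees nothing except the raw pixels and the current score.

```python
# Sketch of the setup described above: the program is handed nothing but the
# raw pixels and the current score, and must return an action. All names are
# illustrative; this is not DeepMind's actual code.

def play_episode(env, agent, max_steps=1000):
    """Run one game; the agent sees only (pixels, score) at each step."""
    pixels, score = env.reset()
    for _ in range(max_steps):
        action = agent(pixels, score)        # the agent's entire world view
        pixels, score, done = env.step(action)
        if done:
            break
    return score                             # the quantity the agent must maximise
```

An `agent` here is just a function from observations to an action, which is what makes the same code reusable across Breakout, Space Invaders and the other Atari games.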
The code implements an idea dating from the 1990s called reinforcement learning, which aims to update the probability of actions based on their effect on a reward function, or score. For example, in Breakout the only decision is whether to move the paddle at the bottom of the screen left or right. Initially the choice will be 50:50. But if moving the paddle randomly results in it hitting the ball, then a short time later the score goes up. The code then recalibrates the probability of going left or right in the light of this new information, increasing the chance of moving in the direction the ball is travelling. The new feature was to combine this learning with neural networks that would assess the state of the pixels to decide which features correlated with an increase in score.
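The probability update at the heart of this can be made concrete with a toy sketch. This is not DeepMind's method (they used deep Q-networks over the pixel input); it is a minimal illustration, under assumed names, of reinforcing whichever action preceded a score increase, starting from a 50:50 split.

```python
import random

# Toy illustration of the reinforcement idea in the text: keep a preference
# for each action and nudge it whenever the action is followed by a score
# increase. Names and learning rate are illustrative assumptions.

class PaddleAgent:
    def __init__(self, learning_rate=0.1):
        self.prefs = {"left": 0.5, "right": 0.5}  # initially a 50:50 choice
        self.lr = learning_rate

    def choose(self):
        # Sample an action in proportion to the current preferences.
        return random.choices(list(self.prefs), weights=self.prefs.values())[0]

    def update(self, action, reward):
        # Reinforce an action that was followed by the score going up.
        if reward > 0:
            self.prefs[action] += self.lr
        # Renormalise so the preferences remain a probability distribution.
        total = sum(self.prefs.values())
        for a in self.prefs:
            self.prefs[a] /= total
```

After a run of rewarded `left` moves, `prefs["left"]` dominates, so the sampled actions drift towards the direction that has been paying off.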
At the outset, because the computer was just trying random moves, it was terrible, hardly scoring anything. But each time a random move bumped up the score, it would remember that move and reinforce its use in future. Gradually the random moves disappeared and a more informed set of moves began to emerge: moves that the program had learned, through experiment, would boost its score.
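The way the random moves 'gradually disappear' is usually handled by an epsilon-greedy policy with an annealed exploration rate: act randomly with probability epsilon, and shrink epsilon as play proceeds. The exact schedule below is an assumption for illustration; the published DeepMind work anneals exploration downward over roughly the first million frames.

```python
import random

def epsilon_at(step, start=1.0, end=0.1, decay_steps=1_000_000):
    """Linearly anneal the exploration rate from start down to end."""
    frac = min(step / decay_steps, 1.0)
    return start + frac * (end - start)

def epsilon_greedy(q_values, step):
    """Mostly random moves early on; mostly the best-known move later."""
    if random.random() < epsilon_at(step):
        return random.choice(list(q_values))      # explore: a random move
    return max(q_values, key=q_values.get)        # exploit: what has worked
```

Early in training almost every move is exploratory; by the end only a small residual fraction is, which matches the shift from flailing to informed play described above.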
It’s worth watching the supplementary video the DeepMind team included in the paper they eventually wrote. It shows the program learning to play Breakout. At first you see it randomly moving the paddle back and forward to see what will happen. Then, when the ball finally hits the paddle and bounces back and hits a brick and the score goes up, the program starts to rewrite itself. If the pixels of the ball and the pixels of the paddle connect, that seems to be a good thing. After 400 game plays it’s doing really well, getting the paddle to continually bat the ball back and forward.
The shock for me came when I saw what it discovered after 600 games. It found our hack! I’m not sure how many games it took us as kids to find this trick, but judging by the amount of time I wasted with my friend it could well have been more. But there it is. The program manipulated the paddle to tunnel its way up the sides, such that the ball would be stuck in the gap between the top of the wall and the top of the screen. At this point the score goes up very fast without the computer’s having to do very much. I remember my friend and I high-fiving when we’d discovered this trick. The machine felt nothing.
By 2014, four years after the creation of DeepMind, the program had learned how to outperform humans on twenty-nine of the forty-nine Atari games it had been exposed to. The paper the team submitted to Nature detailing their achievement was published in early 2015. To be published in Nature is one of the highlights of a scientist’s career. But their paper achieved the even greater accolade of being featured as the cover story of the whole issue. The journal recognised that this was a huge moment for artificial intelligence.
It has to be reiterated what an amazing feat of programming this was. From just the raw data of the state of the pixels and the changing score, the program had changed itself from randomly moving the paddle of Breakout back and forth to learning that tunnelling the sides of the wall would win you the top score. But Atari games are hardly on a par with the ancient game of Go. Hassabis and his team at DeepMind decided they were ready to create a new program that could take it on.
It was at this moment that Hassabis decided to sell the company to Google. ‘We weren’t planning to, but three years in, focused on fundraising, I had only ten per cent of my time for research,’ he explained in an interview in Wired at the time. ‘I realised that there’s maybe not enough time in one lifetime to both build a Google-sized company and solve AI. Would I be happier looking back on building a multi-billion business or helping solve intelligence? It was an easy choice.’ The sale put Google’s firepower at his fingertips and provided the space for him to create code to realise his goal of solving Go … and then intelligence.
First blood
Previous computer programs built to play Go had not come close to playing competitively against even a pretty good amateur, so most pundits were highly sceptical of DeepMind’s dream to create code that could get anywhere near an international champion of the game. Most people still agreed with the view expressed in The New York Times by the astrophysicist Piet Hut after Deep Blue’s success at chess in 1997: ‘It may be a hundred years before a computer beats humans at Go – maybe even longer. If a reasonably intelligent person learned to play Go, in a few months he could beat all existing computer programs. You don’t have to be a Kasparov.’
Just two decades into that hundred years, the DeepMind team believed they might have cracked the code. Their strategy of getting algorithms to learn and adapt appeared to be working, but they were unsure quite how powerful the emerging algorithm really was. So in October 2015 they decided to test-run their program in a secret competition against the current European champion, the Chinese-born Fan Hui.
AlphaGo destroyed Fan Hui five games to nil. But the gulf between European players of the game and those in the Far East is huge. The top European players, when put in a global league, rank in the 600s. So, although it was still an impressive achievement, it was like building a driverless car that could beat a human driving a Ford Fiesta round Silverstone, and then challenging Lewis Hamilton in a Grand Prix.
Certainly when the press in the Far East heard about Fan Hui’s defeat they were merciless in their dismissal of how meaningless the win was for AlphaGo. Indeed, when Fan Hui’s wife contacted him in London after the news got out, she begged her husband not to go online. Needless to say he couldn’t resist. It was not a pleasant experience to read how dismissive the commentators in his home country were of his credentials to challenge AlphaGo.
Fan Hui credits his matches with AlphaGo with teaching him new insights into how to play the game. In the following months his ranking went from 633 to the 300s. But it wasn’t only Fan Hui who was learning. Every game AlphaGo plays affects its code and changes it to improve its play next time around.
It was at this point that the DeepMind team felt confident enough to offer their challenge to Lee Sedol, South Korea’s eighteen-time