Ouch. Facebook is probably hating on Google so much right now.
Okay, here's what's up: Google published a blog post today, along with the fancy video below, narrated by Demis Hassabis, the British researcher who co-founded Google's DeepMind artificial intelligence team. The point of the post and video was to announce a major breakthrough built on 20 years of hard work. What's the breakthrough, you ask? Well, Google DeepMind taught a computer program to master the ancient game of Go.
But back to Facebook just for a moment... One day before Google DeepMind made its announcement, Mark Zuckerberg, Facebook's CEO, publicly wrote on Facebook that his AI team is getting close to achieving the exact same breakthrough. He even said the researcher who has been working on the project "sits about 20 feet" from his desk: "I love having our AI team right near me so I can learn from what they're working on."
Yeah. The awkwardness must've been unreal for that researcher this afternoon at Facebook's headquarters. Anyway, Google relished boasting about its highly intelligent computer program, called AlphaGo, which is capable of winning the ancient game even against top professional human players. Google's video shows three-time European Go champion Fan Hui losing to the software in all five games. Crazy, right?
Facebook's program is called Darkforest, and Mark Zuckerberg posted the video below to explain his company's research. The main thing to realise about Go is that it originated in China around 2,500 years ago and there are about 10 to the power of 700 possible variations of play. Two players take turns placing black or white stones on a 19x19 grid, and when you surround your opponent's stones, they're captured.
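That capture-by-surrounding rule is simple enough to sketch in a few lines of code. Below is a minimal illustration of how a Go engine might check whether a group of stones is surrounded, by counting its "liberties" (empty adjacent points); the board layout, 5x5 size, and function name are illustrative assumptions, not any engine's actual code.

```python
def liberties(board, row, col):
    """Count empty points adjacent to the connected group at (row, col)."""
    color = board[row][col]
    size = len(board)
    seen, libs, stack = set(), set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in seen:
            continue
        seen.add((r, c))
        # Inspect the four neighbouring points of each stone in the group.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == ".":
                    libs.add((nr, nc))        # empty point: a liberty
                elif board[nr][nc] == color:
                    stack.append((nr, nc))    # same colour: same group
    return len(libs)

# A white stone hemmed in on three sides by black: one liberty left,
# so a black stone on that last point would capture it.
board = [list(r) for r in [".....",
                           "..B..",
                           ".BWB.",
                           ".....",
                           "....."]]
print(liberties(board, 2, 2))  # -> 1
```

A group whose liberty count reaches zero is removed from the board, which is exactly what "surrounding your opponent's stones" means.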
The ancient Chinese game of Go is one of the last games where the best human players can still beat the best artificial intelligence players. Last year, the Facebook AI Research team started creating an AI that can learn to play Go. Scientists have been trying to teach computers to win at Go for 20 years. We're getting close, and in the past six months we've built an AI that can make moves in as fast as 0.1 seconds and still be as good as previous systems that took years to build. Our AI combines a search-based approach that models every possible move as the game progresses along with a pattern matching system built by our computer vision team. The researcher who works on this, Yuandong Tian, sits about 20 feet from my desk. I love having our AI team right near me so I can learn from what they're working on. You can learn more about this research here: http://arxiv.org/abs/1511.06410
Posted by Mark Zuckerberg on Tuesday, January 26, 2016
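The combination Zuckerberg describes - look-ahead search guided by a pattern-matching scorer - can be sketched in miniature. Everything below is an illustrative assumption, not Darkforest's implementation: the toy pile-taking game stands in for Go so the example stays self-contained, and the trivial heuristic stands in for a learned pattern matcher.

```python
def pattern_score(move):
    """Stand-in for a learned pattern matcher: here, prefer bigger takes."""
    return move

def negamax(pile, take_options=(1, 2, 3)):
    """Best achievable outcome (+1 win, -1 loss) for the player to move
    in a game where players take 1-3 stones and taking the last one wins."""
    if pile == 0:
        return -1  # the previous player took the last stone: we lost
    best = -1
    # Search every legal move, trying pattern-preferred moves first,
    # and score each by the opponent's best reply (hence the negation).
    for move in sorted(take_options, key=pattern_score, reverse=True):
        if move <= pile:
            best = max(best, -negamax(pile - move))
    return best

print(negamax(4))  # -> -1: a pile of 4 is a losing position to move from
print(negamax(5))  # -> +1: take 1, leaving the opponent the losing pile of 4
```

The pattern scorer doesn't change the result in a full search like this one; its value in a real engine is ordering moves so that a time-limited search spends its budget on promising lines first.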
The point of the game is to control at least 50 per cent of the board. Needless to say, it's difficult to do. Now, in order for computers to play, they must be programmed to recognise all the variations. Chess, in comparison, has 10 to the power of 60 possible plays. In fact, chess was mastered by a computer program in 1997, but the first classic game to be mastered by a computer was Noughts and Crosses (also called tic-tac-toe) in 1952.
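Those enormous numbers come from the game tree growing exponentially with each move. As a rough sketch - using the commonly cited estimates of about 35 legal moves per turn in chess versus about 250 in Go, which are averages rather than exact figures - you can see how quickly Go's search space pulls away:

```python
def positions(branching_factor, depth):
    """Positions in a full game tree searched to the given depth,
    assuming a fixed number of legal moves per turn."""
    return branching_factor ** depth

# Compare chess (~35 moves per turn) with Go (~250 moves per turn).
for depth in (2, 4, 6):
    print(f"depth {depth}: chess ~{positions(35, depth):.1e}, "
          f"go ~{positions(250, depth):.1e}")
```

Even six moves deep, Go's tree is tens of thousands of times larger than chess's, which is why brute-force enumeration alone was never going to crack it.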
So, Google made a lot of classic board gamers - as well as Go players - extremely happy today. But, more importantly, the research behind the project has advanced how computers search for a sequence of actions - and, as Google said, that's just one more rung on the ladder toward solving artificial intelligence. This type of research could, for instance, advance facial-recognition processing and predictive search.
What's next? Well, Google DeepMind has challenged the best Go player in the world, Lee Sedol of South Korea, to a match. It's been scheduled for March 2016. May the best man - or bot - win. Heh.