AlphaGo: using machine learning to master the ancient game of Go
The game of Go originated in China more than 2,500 years ago. Confucius wrote about the game, and it is considered one of the four essential arts required of any true Chinese scholar. Played by more than 40 million people worldwide, the rules of the game are simple: Players take turns to place black or white stones on a board, trying to capture the opponent's stones or surround empty space to make points of territory. The game is played primarily through intuition and feel, and because of its beauty, subtlety and intellectual depth it has captured the human imagination for centuries.
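The capture rule above can be made precise with a small amount of code. Here is a minimal Python sketch (the board representation and function names are illustrative, not from AlphaGo): a connected group of same-coloured stones is found by flood fill, its liberties are the empty points adjacent to the group, and a group with zero liberties is captured:

```python
def group_and_liberties(board, start, size=19):
    """Return the group of stones connected to `start` and its liberties.
    board: dict mapping (row, col) -> 'B' or 'W'; absent keys are empty points."""
    color = board[start]
    group, libs, frontier = {start}, set(), [start]
    while frontier:
        r, c = frontier.pop()
        for p in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= p[0] < size and 0 <= p[1] < size):
                continue  # off the board
            if p not in board:
                libs.add(p)                    # empty neighbour = a liberty
            elif board[p] == color and p not in group:
                group.add(p)                   # same colour: part of the group
                frontier.append(p)
    return group, libs
```

For example, a white stone in the corner at (0, 0) with black stones on both adjacent points has no liberties and is captured, while a lone stone in the middle of an empty board has four.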
But as simple as the rules are, Go is a game of profound complexity. There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions. That's more than the number of atoms in the universe, and more than a googol times larger than chess.
This complexity is what makes Go hard for computers to play, and therefore an irresistible challenge to artificial intelligence (AI) researchers, who use games as a testing ground to invent smart, flexible algorithms that can tackle problems, sometimes in ways similar to humans. The first game mastered by a computer was noughts and crosses (also known as tic-tac-toe) in 1952. Then fell checkers in 1994. In 1997 Deep Blue famously beat Garry Kasparov at chess. It's not limited to board games either: IBM's Watson bested two champions at Jeopardy in 2011, and in 2014 our own algorithms learned to play dozens of Atari games just from the raw pixel inputs. But to date, Go has thwarted AI researchers; computers still only play Go as well as amateurs.
Traditional AI methods, which construct a search tree over all possible positions, don't have a chance in Go. So when we set out to crack Go, we took a different approach. We built a system, AlphaGo, that combines an advanced tree search with deep neural networks. These neural networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections. One neural network, the "policy network," selects the next move to play. The other neural network, the "value network," predicts the winner of the game.
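To make the division of labour between the two networks concrete, here is a heavily simplified sketch. The real networks are 12-layer convolutional networks with millions of connections; the layer counts, channel sizes, and weight shapes below are made up for illustration, and the naive convolution is written for clarity rather than speed:

```python
import numpy as np

BOARD = 19  # standard Go board size
rng = np.random.default_rng(0)

def conv_layer(x, w):
    """Naive 3x3 'same' convolution followed by ReLU (illustrative, not fast).
    x: (c_in, H, W) feature planes; w: (c_out, c_in, 3, 3) filters."""
    c_out, h, wd = w.shape[0], x.shape[1], x.shape[2]
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(x.shape[0]):
            for dy in range(3):
                for dx in range(3):
                    out[o] += w[o, i, dy, dx] * pad[i, dy:dy + h, dx:dx + wd]
    return np.maximum(out, 0.0)

def init_weights(planes=3, channels=4, layers=2):
    """Random weights for a toy stack of conv layers plus two heads."""
    convs, c_in = [], planes
    for _ in range(layers):
        convs.append(rng.normal(0, 0.1, (channels, c_in, 3, 3)))
        c_in = channels
    return {"conv": convs,
            "policy_head": rng.normal(0, 0.1, (channels, BOARD, BOARD)),
            "value_head": rng.normal(0, 0.1, (channels, BOARD, BOARD))}

def policy_network(board_planes, weights):
    """Map board features to a probability distribution over the 361 points."""
    x = board_planes
    for w in weights["conv"]:
        x = conv_layer(x, w)
    logits = (x * weights["policy_head"]).sum(axis=0).ravel()  # one logit per point
    e = np.exp(logits - logits.max())                           # stable softmax
    return e / e.sum()

def value_network(board_planes, weights):
    """Map board features to a scalar win estimate in (-1, 1)."""
    x = board_planes
    for w in weights["conv"]:
        x = conv_layer(x, w)
    return float(np.tanh((x * weights["value_head"]).sum() / x.size))
```

The key structural point survives the simplification: both networks share the same kind of convolutional trunk over board features, but the policy head emits one number per board point (a move distribution) while the value head collapses everything to a single win-probability-like scalar.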
We trained the neural networks on 30 million moves from games played by human experts, until they could predict the human move 57 percent of the time (the previous record before AlphaGo was 44 percent). But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning. Of course, all of this requires a huge amount of computing power, so we made extensive use of Google Cloud Platform.
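The trial-and-error idea is easiest to see on a game far smaller than Go. The sketch below is not AlphaGo's actual method; it is a toy take-away game (take 1 to 3 stones, taking the last stone wins) with a tabular preference update: the program plays against itself, then nudges the preferences of the winner's moves up and the loser's down:

```python
import math
import random

random.seed(0)

MOVES = (1, 2, 3)   # stones a player may remove per turn
START_PILE = 10     # whoever takes the last stone wins

def softmax_pick(prefs, legal):
    """Sample a legal move in proportion to exp(preference)."""
    vals = [prefs.get(m, 0.0) for m in legal]
    mx = max(vals)
    weights = [math.exp(v - mx) for v in vals]  # subtract max for stability
    return random.choices(legal, weights=weights)[0]

def play_game(prefs):
    """One self-play game; returns the winner (0 or 1) and the move history."""
    pile, player, history = START_PILE, 0, []
    while True:
        legal = [m for m in MOVES if m <= pile]
        move = softmax_pick(prefs.setdefault(pile, {}), legal)
        history.append((pile, move, player))  # (state, move, who moved)
        pile -= move
        if pile == 0:
            return player, history            # taking the last stone wins
        player = 1 - player

def train(games=5000, lr=0.1):
    """Trial-and-error loop: reinforce the winner's moves, weaken the loser's."""
    prefs = {}
    for _ in range(games):
        winner, history = play_game(prefs)
        for state, move, player in history:
            reward = 1.0 if player == winner else -1.0
            prefs[state][move] = prefs[state].get(move, 0.0) + lr * reward
    return prefs

def greedy_move(prefs, pile):
    """After training, play the move with the highest learned preference."""
    legal = [m for m in MOVES if m <= pile]
    return max(legal, key=lambda m: prefs.get(pile, {}).get(m, 0.0))
```

No position is ever labelled "good" or "bad" by hand: the only signal is who won, fed back through the moves that were played. AlphaGo applies the same principle, with deep networks in place of the preference table and Go in place of the toy game.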
After all that training it was time to put AlphaGo to the test. First, we held a tournament between AlphaGo and the other top programs at the forefront of computer Go. AlphaGo won all but one of its 500 games against these programs. So the next step was to invite the reigning three-time European Go champion Fan Hui, an elite professional player who has devoted his life to Go since the age of 12, to our London office for a challenge match. In a closed-doors match last October, AlphaGo won by 5 games to 0. It was the first time a computer program had ever beaten a professional Go player. You can find out more in our paper, which was published in Nature today.
What's next? In March, AlphaGo will face its ultimate challenge: a five-game match in Seoul against the legendary Lee Sedol, the top Go player in the world over the past decade.
We are thrilled to have mastered Go and thus achieved one of the grand challenges of AI. However, the most significant aspect of all this for us is that AlphaGo isn't just an "expert" system built with hand-crafted rules; instead it uses general machine learning techniques to figure out for itself how to win at Go. While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems. Because the methods we've used are general-purpose, our hope is that one day they could be extended to help us address some of society's toughest and most pressing problems, from climate modelling to complex disease analysis. We're excited to see what we can use this technology to tackle next!
Posted by Demis Hassabis, Google DeepMind