Welcome to Natural Hazards Forum. We hope you enjoy your visit.
Google machine learns to master video games
Topic Started: 26 Feb 2015, 12:59 AM (14 Views)
skibboy

25 February 2015

Google machine learns to master video games

By Rebecca Morelle
Science Correspondent, BBC News

The machine learned to play video games and sometimes performed better than professional human players

A machine has taught itself how to play and win video games, scientists say.

The computer program, which is inspired by the human brain, learned how to play 49 classic Atari games.

In more than half, it was as good or better than a professional human player.

Researchers from Google DeepMind said this was the first time a system had learned how to master a wide range of complex tasks.

The study is published in the journal Nature.

Dr Demis Hassabis, DeepMind's vice president of engineering, said: "Up until now, self-learning systems have only been used for relatively simple problems.

"For the first time, we have used it in a perceptually rich environment to complete tasks that are very challenging to humans."

Technology companies are investing heavily in machine learning.

In 2014, Google purchased DeepMind Technologies for a reported £400m.

This is not the first time that a machine has mastered complex games.

IBM's Deep Blue - a chess-playing computer - famously beat the world champion Garry Kasparov in a match staged in 1997.

However, this artificial intelligence system was pre-programmed with a sort of instruction manual that gave it the expertise it needed to excel at the board game.

The machine excelled at Space Invaders but Pac-Man was harder work

The difference with DeepMind's computer program, which the company describes as an "agent", is that it is armed only with the most basic information before it is given a video game to play.

Dr Hassabis explained: "The only information we gave the system was the raw pixels on the screen and the idea that it had to get a high score. And everything else it had to figure out by itself."
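The learning setup Dr Hassabis describes — an agent that sees only an observation and a score, and must maximise that score by trial and error — is the reinforcement-learning loop. A minimal sketch of the idea, using tabular Q-learning on a made-up 5-cell corridor game (all names and numbers here are illustrative, not DeepMind's code; DQN additionally uses a neural network so it can cope with raw pixels):

```python
import random

# Toy stand-in for the setup described above: the agent sees only its
# state (standing in for raw pixels) and the score it earns. The game
# is a 5-cell corridor with a point for reaching the right end.
N_STATES = 5          # positions 0..4
ACTIONS = (-1, 1)     # move left / move right

def step(state, action):
    """Advance the toy game one frame; return (new_state, reward)."""
    new_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if new_state == N_STATES - 1 else 0.0
    return new_state, reward

# Tabular Q-learning: estimate the value of each (state, action) pair
# purely from trial and error -- the same kind of signal DQN receives.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

def greedy(state):
    """Best-known action in this state, ties broken at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(500):
    state = 0
    for _ in range(20):                      # frame budget per episode
        # Occasionally press a random key; otherwise exploit what is known.
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        new_state, reward = step(state, action)
        best_next = max(q[(new_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = new_state
        if reward:                           # scored: episode over
            break

# The learned greedy policy heads right (+1) from every non-terminal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Nothing about the corridor's layout or goal is programmed in; the agent discovers the "head right" strategy purely from the score, which is the point Dr Hassabis is making.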

The team presented the machine with 49 different video games, ranging from classics such as Space Invaders and Pong to boxing and tennis games and the 3D racing challenge Enduro.

In 29 of them, it was comparable to or better than a human games tester.

For Video Pinball, Boxing and Breakout, its performance far exceeded the professional's, but it struggled with Pac-Man, Private Eye and Montezuma's Revenge.

"On the face of it, it looks trivial in the sense that these are games from the 80s and you can write solutions to these games quite easily," said Dr Hassabis.

"What is not trivial is to have one single system that can learn from the pixels, as perceptual inputs, what to do.

"The same system can play 49 different games from the box without any pre-programming. You literally give it a new game, a new screen and it figures out after a few hours of game play what to do."

The research is the latest development in the field of "deep learning", which is paving the way for smarter machines.

Scientists are developing computer programs that - like the human brain - can be exposed to large amounts of data, such as images or sounds, and then intuitively extract useful information or patterns.

Examples include machines that can scan millions of images and understand what they are looking at: they can tell a cat is a cat, for example.

This ability is key for self-driving cars, which need an awareness of their surroundings.

Or machines that can understand human speech, which can be used in sophisticated voice recognition software or for systems that translate languages in real-time.

Dr Hassabis said: "One of the things holding back robotics today, in factories, in things like elderly care robots and in household-cleaning robots, is that when these machines are in the real world, they have to deal with the unexpected. You can't pre-program it with every eventuality that might happen.

"In some sense, these machines need intelligence that is adaptable and they have to be able to learn for themselves."

Some fear that creating computers that can outwit humans could be dangerous.

In December, Prof Stephen Hawking said that the development of full artificial intelligence "could spell the end of the human race".

Source: BBC News
skibboy

25 February 2015

Rise of the Machines: video gamers beware

© AFP/File / by Mariette Le Roux

PARIS (AFP) - Researchers unveiled a software system Wednesday which had taught itself to play 49 different video games and proceeded to defeat human professionals -- a major step in the fast-developing Artificial Intelligence realm.

Not only did the system give flesh-and-blood gamers a run for their money, it discovered tricks its own programmers didn't even know existed, a team from Google-owned research company DeepMind reported in the scientific journal Nature.

"This... is the first time that anyone has built a single general learning system that can learn directly from experience to master a wide range of challenging tasks," said co-developer Demis Hassabis.

The feat brings us closer to a future run by smart, general-purpose robots which can teach themselves to perform a task, store a "memory" of trial and error and adapt their actions for a better outcome next time.

Such machines may be able to do anything from driving our cars to planning our holidays and conducting scientific research, said the team.

Inspired by the human learning process, the "artificial agent" dubbed deep Q-network (DQN) was let loose, with only minimal programming, on an Atari game console from the 1980s.

"The only information they (the system) get is the pixels (on the screen) and the game score and the goal they've been told is to maximise the score," Hassabis explains in a Nature video.

"Apart from that, they have no idea about what kind of game they are playing, what their controls do, what they're even controlling in the game."

Unlike humans, the algorithm-based software starts off without the benefit of previous experience.

Presented with an on-screen paddle and ball, for example, a human gamer would already know that the goal must somehow involve striking the one with the other.

The system, by comparison, learns by activating computer keys randomly until it starts scoring through trial and error.
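The "random at first, deliberate later" behaviour described above is usually implemented as an annealed exploration rate: the agent starts by pressing keys entirely at random and gradually shifts to its best-known action as it learns. A small sketch of such a schedule (the specific numbers — 1.0 down to 0.1 over a million frames — are the ones commonly cited for DQN, used here illustratively):

```python
def epsilon(frame, start=1.0, end=0.1, anneal_frames=1_000_000):
    """Probability of pressing a random key at a given frame.

    Linearly anneals from `start` (fully random play) to `end`
    (mostly greedy play) over the first `anneal_frames` frames,
    then stays at `end`.
    """
    if frame >= anneal_frames:
        return end
    return start + (end - start) * frame / anneal_frames

print(epsilon(0))          # all keypresses random at the start
print(epsilon(500_000))    # halfway through annealing
print(epsilon(2_000_000))  # long after: mostly exploiting what it knows
```

Keeping a small residual exploration rate even late in training lets the agent keep stumbling onto strategies it would never choose greedily.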

- Learns, adapts, and gets better -

"The system kind of learns, adapts and gets better and better incrementally until eventually it becomes almost perfect on some of the games," said Hassabis.

Games included the late-1970s classic Breakout, in which the player has to break through several layers of bricks at the top of the screen with a "ball" bounced off a paddle sliding from side to side at the bottom; Ms Pac-Man, which entails gobbling pellets along a maze; plus pinball, boxing, tennis and a car race called Enduro.

The system outperformed professional human players in many of the games, but fared poorly in some, including Ms Pac-Man.

In particular game types, explained DeepMind colleague Vlad Mnih, "it's very difficult to get your first points or first rewards, so if the game involves solving a maze then pressing keys randomly will not actually get you any points and then the system has nothing to learn from."
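Mnih's point about sparse rewards can be made concrete with a quick simulation. In a made-up corridor "maze" where only the far end scores (illustrative, not DeepMind's setup), purely random play almost never earns its first point — so a trial-and-error learner has almost nothing to learn from:

```python
import random

random.seed(0)

def random_play(length=20, moves=50):
    """One attempt at a toy corridor maze with uniformly random moves.

    Scores only by reaching the far end; a wall blocks the left side.
    """
    pos = 0
    for _ in range(moves):
        pos += random.choice([-1, 1])
        pos = max(pos, 0)              # wall on the left
        if pos == length:
            return True                # first reward finally found
    return False

trials = 10_000
wins = sum(random_play() for _ in range(trials))
print(f"random play scored in {wins} of {trials} attempts")
```

In games like Breakout, by contrast, random paddle movement scores within a few seconds, which is why the same learner excels there but stalls on maze games like Ms Pac-Man and Montezuma's Revenge.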

But it did discover aspects of games that its creators hadn't even known about.

It figured out, for example, that in Breakout the optimal strategy is to dig a tunnel through one side of the wall and send the ball in to bounce behind it, breaking the bricks from the back.

- To the future and beyond -

The creators said their system was in many ways more advanced than Watson, an AI question-answering system that outwitted the most successful human players of the quiz game Jeopardy in 2011, and Deep Blue, the computer which beat world chess champion Garry Kasparov in 1997.

Both of these had largely been pre-programmed with their particular abilities.

"Whereas what we've done is build algorithms that learn from the ground up, so literally you give them perceptual experience and they learn how to do things directly from that perceptual experience," Hassabis told journalists.

"The advantage of these types of systems is that they can learn and adapt to unexpected things and also... the programmers or the system designers don't necessarily have to know the solution themselves in order for the machine to master that task."

The long-term goal, he added, was to build smart, general-purpose machines.

"We are many decades off from doing that," said the researcher. "But I do think that this is the first significant rung of the ladder that we're on."

The next developmental step will entail tests with 3D video games from the 90s.


Source: AFP