Tuesday, June 7, 2011

How to Train Your Robot (to Lie)


Those lying toasters. Georgia Tech's Decepticon knows how to mislead pursuers to shake them off.
Credit: Georgia Institute of Technology
A military base has just fallen to enemy fighters. A robot containing top-secret information has to escape detection by the invading army. The robot is facing three corridors: right, center, and left. It could randomly pick a corridor and hope the enemy soldiers pick a different one. Or it could leave a false trail—assuming robots can be trained to lie. A new study using this scenario suggests that they can be.
This is not the first time machines have displayed deceptive tendencies. In 2007, Swiss researchers discovered that robots programmed to learn from experience spontaneously evolved the habit of lying to one another under some conditions. But the idea of deliberately implanting deceit is new.
The first step wasn't teaching the robots how to lie, but when to lie. Computer scientists Alan Wagner and Ronald Arkin of the Georgia Institute of Technology in Atlanta came up with an algorithm that set two conditions: First, a robot had to be in conflict with someone or something else. And second, it had to be able to influence its adversary's actions. If both conditions checked out, the robot was cleared to lie.
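In code, that gate reduces to a simple conjunction. The sketch below is a minimal illustration of the two-condition check as described above; the names (Situation, deception_warranted) are assumptions for illustration, not the authors' actual implementation.

```python
# Minimal sketch of the "cleared to lie" gate described above.
# The names here are illustrative assumptions, not the authors' published code.

from dataclasses import dataclass

@dataclass
class Situation:
    in_conflict: bool     # condition 1: the robot is in conflict with another agent
    can_influence: bool   # condition 2: the robot can influence the adversary's actions

def deception_warranted(s: Situation) -> bool:
    """The robot is cleared to lie only when both conditions hold."""
    return s.in_conflict and s.can_influence

# A hider pursued by a seeker it can mislead meets both conditions:
print(deception_warranted(Situation(in_conflict=True, can_influence=True)))   # True
# Without a conflict, deception is not cleared:
print(deception_warranted(Situation(in_conflict=False, can_influence=True)))  # False
```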
Then they tested the idea on two-wheeled, camera-equipped robots playing a simple game of hide-and-seek.
Innocent fun? Robots flaunt their lying skills in a game of hide-and-seek.
Credit: Georgia Institute of Technology
The "hider" robot had to choose one of three compartments in which to conceal itself. The pathway to each was blocked by a green, red, or blue marker. Whichever way the robot went, it knocked down one marker. After some training, the hider bot and seeker bot figured out that a fallen marker indicated which direction the hider had gone. But there was one catch: The seeker didn't know the hider bot had been programmed to lie.
When the game began, the hider randomly chose one of the compartments. Then its deceptive programming kicked in and told it to go another direction first—knocking over a different marker—and then turn back toward its choice. The seeker, seeing the fallen marker and suspecting no trickery, would follow the fake trail and leave the hider undiscovered.
After 20 trials, the researchers found the deception worked 75% of the time. When it failed, Wagner says, the robots were clumsy: the hider might try to go right but end up heading toward the center, for example, or its slow perception of the marker might make it hit the wrong one. "It thinks it's going to knock over one, but lo and behold, it plows through all of them." Barring such blunders, however, the seeker robots fell for the ruse every time, the researchers reported last week in the International Journal of Social Robotics.
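A toy simulation makes the logic of that result easy to see. The sketch below is a deliberately simplified stand-in for the experiment, not the authors' code: a hider marks a decoy corridor and a naive seeker follows the fallen marker, with an assumed blunder probability (P_BLUNDER, an illustrative parameter rather than a measured error model) mimicking the clumsy executions Wagner describes.

```python
# Toy simulation of the false-trail strategy, under simplified assumptions:
# the hider picks a corridor, knocks over the marker of a different (decoy)
# corridor, and a naive seeker follows the fallen marker. With some assumed
# probability the hider blunders and marks its true corridor instead, a crude
# stand-in for the motor and perception errors described above.

import random

CORRIDORS = ["left", "center", "right"]
P_BLUNDER = 0.25  # assumed blunder rate for illustration, not a measured value

def run_trial(rng: random.Random) -> bool:
    """Return True if the seeker is fooled in this trial."""
    true_choice = rng.choice(CORRIDORS)
    decoy = rng.choice([c for c in CORRIDORS if c != true_choice])
    # Blunder: the hider accidentally knocks over its own corridor's marker.
    fallen_marker = true_choice if rng.random() < P_BLUNDER else decoy
    seeker_guess = fallen_marker  # the seeker trusts the evidence, suspecting no trickery
    return seeker_guess != true_choice

rng = random.Random(0)
trials = 20
fooled = sum(run_trial(rng) for _ in range(trials))
print(f"Seeker fooled in {fooled}/{trials} trials")
```

Set P_BLUNDER to zero and the naive seeker is fooled in every trial, which matches the researchers' observation that only execution errors, never the seeker's reasoning, foiled the ruse.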
One of the hardest parts of deception is knowing how an adversary will interpret a false message, notes game theorist Philippe Jehiel of the Paris School of Economics, who was not involved in the work. It wasn't clear to him how the authors modeled what the seeker would do: "As far as I could see in the present study, this is one of the most challenging issues."
Wagner and Arkin also recognized the importance of knowing the enemy. In this study, they made the deception work by not revealing to the seeker the hider's capacity to lie. How the hider would fare against smarter robots remains an open question.
Wagner says he's aware people might be leery of creating deceitful robots. But he thinks robots that know how to lie could benefit society in the long run. "There are a lot of important situations in which humans deceive for the better of the other person," he says. For example, "If I'm trying to get a person with Alzheimer's to take medicine, we may be in a temporary state of conflict, but overall it's better for them to take that medicine," Wagner says. "Deception is not necessarily nefarious."
