Why Brains Get Creeped Out by Androids


By Mark Brown, Wired UK

We've all found ourselves in the uncanny valley before. It's that uneasy feeling you get when viewing a realistic humanoid or CGI person that's so close to looking human that it seems almost spooky.

The actual "valley" refers to a precipitous drop in "likeability" as onscreen characters and humanoid robots step too far towards being human-like. As in, we enjoy Pixar's Wall-E and Nintendo's Mario, but we get the heebie-jeebies from the ultra-realistic faces of The Polar Express or the upcoming Tintin movie.

So far, the phenomenon has been described entirely anecdotally, but an international team of researchers, led by Ayse Pinar Saygin of the University of California, San Diego, wanted to find out if the sensation was actually caused by something deep within our brains.

The team picked out 20 subjects, aged 20 to 36. They had no experience working with robots and hadn't spent time in Japan, where there's more cultural exposure to androids. Saygin also recruited the help of Repliee Q2, an especially human-like robot from the Intelligent Robotics Laboratory at Osaka University. Q2 has 13 degrees of freedom in her face alone, and uses her posable eyes, brows, cheeks, lids, lips and neck to make facial expressions and mouth shapes.

The team made videos of Repliee Q2 performing actions like waving, nodding, taking a drink of water and picking up a piece of paper from a table. Then, the same actions were performed by the Japanese woman on whom Q2 is modeled. Finally, the researchers stripped the robot of its synthetic skin and hair to reveal a Terminator-style metal robot with dangling wires and visible circuits, and filmed it performing the same actions.

The subjects were shown each of the videos and were told which performers were robots and which were human. Then the subjects' brains were scanned in an fMRI machine as they watched the clips.

When viewing the real human and the metallic robot, the brains showed very typical reactions. But when presented with the uncanny android, the brain "lit up" like a Christmas tree.

When subjects viewed the android, activity spiked in the parietal cortex, specifically in the areas that connect the part of the visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror (or empathy) neurons.

This suggests that the brain couldn't reconcile the incongruity between the android's human-like appearance and its robotic motion. In the other conditions, when the onscreen performer looks human and moves like a human, or looks like a robot and moves like a robot, our brains are fine. But when the two cues are in conflict, trouble arises.

"The brain doesn't seem tuned to care about either biological appearance or biological motion per se," said Saygin, assistant professor of cognitive science at UC San Diego. "What it seems to be doing is looking for its expectations to be met -- for appearance and motion to be congruent."

In the paper, published in the journal Social Cognitive and Affective Neuroscience, the team writes, "as human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners."

"Or perhaps, we will decide it is not a good idea to make them so closely in our image after all."

Images: 1) marilink/Flickr 2) UCSD

Source: Wired.co.uk
