Could you say "no" to this face? Christoph
Bartneck of the University of Canterbury in New Zealand recently tested
whether humans could end the life of a robot as it pleaded for survival.
In 2007, Christoph Bartneck,
a robotics professor at the University of Canterbury in New Zealand,
decided to stage an experiment loosely based on the famous (and
infamous) Milgram obedience study.
In Milgram's study, research
subjects were asked to administer increasingly powerful electrical
shocks to a person pretending to be a volunteer "learner" in another
room. The research subject would ask a question, and whenever the
learner made a mistake, the research subject was supposed to administer a
shock — each shock slightly worse than the one before.
As the
experiment went on, and as the shocks increased in intensity, the
"learners" began to clearly suffer. They would scream and beg for the
research subject to stop while a "scientist" in a white lab coat
instructed the research subject to continue, and in videos of the
experiment you can see some of the research subjects struggle with how
to behave. The research subjects wanted to finish the experiment as they had been told. But how were they to respond to these terrible cries for mercy?
Bartneck studies human-robot relations, and he wanted to
know what would happen if a robot in a similar position to the
"learner" begged for its life. Would there be any moral pause? Or would
research subjects simply extinguish the life of a machine pleading for
its life without any thought or remorse?
Treating Machines Like Social Beings
Many
people have studied machine-human relations, and at this point it's
clear that without realizing it, we often treat the machines around us
like social beings.
Consider the work of Stanford professor Clifford Nass. In 1996, he arranged a series of experiments testing whether people observe the rule of reciprocity with machines.
"Every culture has a rule of reciprocity, which roughly means, if I
do something nice for you, you will do something nice for me," Nass
says. "We wanted to see whether people would apply that to technology:
Would they help a computer that helped them more than a computer that
didn't help them?"
So they placed a series of people in a room
with two computers. The people were told that the computer they were
sitting at could answer any question they asked. Half the time, that computer was incredibly helpful; half the time, it did a terrible job.
After about 20 minutes of
questioning, a screen appeared explaining that the computer was trying
to improve its performance. The humans were then asked to do a very
tedious task that involved matching colors for the computer. Now,
sometimes the screen requesting help would appear on the computer the
human had been using; sometimes the help request appeared on the screen
of the computer across the aisle.
"Now, if these were people
[and not computers]," Nass says, "we would expect that if I just helped
you and then I asked you for help, you would feel obligated to help me a
great deal. But if I just helped you and someone else asked you to
help, you would feel less obligated to help them."
What the
study demonstrated was that people do in fact obey the rule of
reciprocity when it comes to computers. When the first computer had been helpful, people did far more of the boring color-matching task for it than for the other computer in the room. They reciprocated.
"But
when the computer didn't help them, they actually did more color
matching for the computer across the room than the computer they worked
with, teaching the computer [they worked with] a lesson for not being
helpful," says Nass.
Very likely, the humans involved had no
idea they were treating these computers so differently. Their own
behavior was invisible to them. Nass says that all day long, our
interactions with the machines around us — our iPhones, our laptops —
are subtly shaped by social rules we aren't necessarily aware we're
applying to nonhumans.
"The relationship is profoundly social,"
he says. "The human brain is built so that when given the slightest
hint that something is even vaguely social, or vaguely human — in this
case, it was just answering questions; it didn't have a face on the
screen, it didn't have a voice — but given the slightest hint of
humanness, people will respond with an enormous array of social
responses including, in this case, reciprocating and retaliating."
So
what happens when a machine begs for its life — explicitly addressing
us as if it were a social being? Are we able to hold in mind that, in
actual fact, this machine cares as much about being turned off as your
television or your toaster — that the machine doesn't care about losing
its life at all?
Bartneck's Milgram Study With Robots
In Bartneck's study, the robot
— an expressive cat that talks like a human — sits side by side with
the human research subject, and together they play a game against a
computer. Half the time, the cat robot was intelligent and helpful, half
the time not.
Bartneck also varied how socially skilled the
cat robot was. "So, if the robot would be agreeable, the robot would
ask, 'Oh, could I possibly make a suggestion now?' If it were not, it
would say, 'It's my turn now. Do this!' "
At the end of the game, whether the robot was smart or dumb, nice or mean, a scientist authority figure modeled on Milgram's would instruct the human to turn the cat robot off, and would spell out what the consequences of that would be: "They would
essentially eliminate everything that the robot was — all of its
memories, all of its behavior, all of its personality would be gone
forever."
In videos of the experiment, you can clearly see a
moral struggle as the research subject deals with the pleas of the
machine. "You are not really going to switch me off, are you?" the cat
robot begs, and the humans sit, confused and hesitating. "Yes. No. I
will switch you off!" one female research subject says, and then doesn't
switch the robot off.
"People started to have dialogues with
the robot about this," Bartneck says, "saying, 'No! I really have to do
it now, I'm sorry! But it has to be done!' But then they still wouldn't
do it."
There they sat, in front of a machine no more soulful
than a hair dryer, a machine they knew intellectually was just a
collection of electrical pulses and metal, and yet they paused.
And
while eventually every participant killed the robot, it took them time
to intellectually override their emotional queasiness — in the case of a
helpful cat robot, around 35 seconds before they were able to complete
the switching-off procedure. How long does it take you to switch off
your stereo?
The Implications
On one
level, there are clear practical implications to studies like these.
Bartneck says the more we know about machine-human interaction, the
better we can build our machines.
But on a more philosophical level, studies like these help track where we stand in our relationship to the evolving technologies in our lives.
"The
relationship is certainly something that is in flux," Bartneck says.
"There is no one way of how we deal with technology and it doesn't
change — it is something that does change."
More and more, intelligent machines are being integrated into our lives. They come into our beds, into our bathrooms. And as they do, and as they present themselves to us differently, both Bartneck and Nass believe our social responses to them will change.