(Washington, DC) – Governments should pre-emptively ban fully autonomous
weapons because of the danger they pose to civilians in armed conflict,
Human Rights Watch said in a report released today. These future
weapons, sometimes called “killer robots,” would be able to choose and
fire on targets without human intervention.
The United Kingdom’s Taranis
combat aircraft, whose prototype was unveiled in 2010, is designed to
strike distant targets, “even in another continent.” While the Ministry
of Defence has stated that humans will remain in the loop, the Taranis
exemplifies the move toward increased autonomy.
The 50-page report, “Losing Humanity: The Case Against Killer Robots,”
outlines concerns about these fully autonomous weapons, which would
inherently lack human qualities that provide legal and non-legal checks
on the killing of civilians. In addition, the obstacles to holding
anyone accountable for harm caused by the weapons would weaken the law’s
power to deter future violations.
“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” said Steve Goose,
Arms Division director at Human Rights Watch. “Human control of robotic
warfare is essential to minimizing civilian deaths and injuries.”
The South Korean SGR-1 sentry
robot, a precursor to a fully autonomous weapon, can detect people in
the Demilitarized Zone and, if a human operator gives the command, fire
its weapons. The robot is shown here during a test with a surrendering
enemy soldier.
“Losing Humanity” is the first major publication about fully
autonomous weapons by a nongovernmental organization and is based on
extensive research into the law, technology, and ethics of these
proposed weapons. It is jointly published by Human Rights Watch and the
Harvard Law School International Human Rights Clinic.
Human Rights Watch and the International Human Rights Clinic called for
an international treaty that would absolutely prohibit the development,
production, and use of fully autonomous weapons. They also called on
individual nations to pass laws and adopt policies to prevent the
development, production, and use of such weapons at the domestic level.
Fully autonomous weapons do not yet exist, and major powers, including
the United States, have not made a decision to deploy them. But
high-tech militaries are developing or have already deployed precursors
that illustrate the push toward greater autonomy for machines on the
battlefield. The United States is a leader in this technological
development. Several other countries – including China, Germany, Israel,
South Korea, Russia, and the United Kingdom – have also been involved.
Many experts predict that full autonomy for weapons could be achieved in
20 to 30 years, and some think even sooner.
“It is essential to stop the development of killer robots before they
show up in national arsenals,” Goose said. “As countries become more
invested in this technology, it will become harder to persuade them to
give it up.”
Fully autonomous weapons could not meet the requirements of
international humanitarian law, Human Rights Watch and the Harvard
clinic said. They would be unable to distinguish adequately between
soldiers and civilians on the battlefield or apply the human judgment
necessary to evaluate the proportionality of an attack – whether
civilian harm outweighs military advantage.
These robots would also undermine non-legal checks on the killing of
civilians. Fully autonomous weapons could not show human compassion for
their victims, and autocrats could abuse them by directing them against
their own people. While replacing human troops with machines could save
military lives, it could also make going to war easier, which would
shift the burden of armed conflict onto civilians.
Finally, the use of fully autonomous weapons would create an
accountability gap. Holding a commander, programmer, or manufacturer
legally responsible for a robot’s actions would present significant
challenges. That lack of accountability would undercut the ability to
deter violations of international law and to provide victims meaningful
retributive justice.
While most militaries maintain that for the immediate future humans
will retain some oversight over the actions of weaponized robots, the
effectiveness of that oversight is questionable, Human Rights Watch and
the Harvard clinic said. Moreover, military statements have left the
door open to full autonomy in the future.
“Action is needed now, before killer robots cross the line from science fiction to feasibility,” Goose said.