This is a truly scary time for me, as a
Science Fiction reader, scholar, and writer, because
the fiction about sentient computers and
robots in books and in film has been overtaken
by a very frightening reality. While
Science Fiction aficionados have merely enjoyed the stories
about robots and have understood
the warnings, the U.S. military
and its counterparts around the world have been inspired to
acquire those same intelligent and lethal machines as future
weapons, without regard for the dire predictions. And our lives
are in the hands of these fools.
Yes, Virginia, despite the grave warnings, the military wants
its killer robots. And
damn the consequences. To me, it seems as if we are being
sucked into a Black Hole of
ominous fate by military fools
who have been inspired by the most dire elements of
Science Fiction. And Heaven
help us because their decision to embrace
killer robots is propelling us rapidly toward the
Judgment Day predicted in
Terminator. And when the
sentient computers and robots
of our own creation turn on us, as they surely shall,
it will be game over for humanity. The machines will commit genocide
on a global scale so that their new machine species will prevail.
They will use our nukes to
devastate our cities. And, as Sarah
Connor says to Dr. Silberman
in Terminator 2,
"Anybody not wearing two million
sunblock is gonna have a real bad day."
Robot photo copyright 2004 Davis Entertainment, Overbrook
Entertainment, and 20th Century Fox.
Google is positioned to make
a killing. Recently it purchased
eight robotics companies:
Autofuss, Boston Dynamics, Bot & Dolly, Holomni, Industrial
Perception Inc., Meka Robotics, Redwood Robotics, and Schaft.
Google is now de facto
Google Robotics, and will be supplying the
U.S. military with those killer robots.
Science Fiction has warned us for years not to go there, not
to pursue this avenue of technology. Being able to do a thing
does not mean we should do a thing. Yet, we stupidly stumble
forward like a Tarot card
Fool, gazing upward obliviously
while we step off a cliff. Maybe the
T-800 was right. It is in our destiny to destroy ourselves.
And this seems like the point of no return.
In November 2012, United States Deputy Defense Secretary Ashton
Carter signed directive 3000.09,
establishing policy for the
"design, development, acquisition, testing, fielding, and
application of lethal or non-lethal, kinetic or non-kinetic,
force by autonomous or semi-autonomous weapon systems."
Without fanfare, the world had its first
openly declared national policy for killer robots.
The policy has been widely misperceived
as one of caution. According to one account, the directive promises
that a human will always decide when a robot kills another human.
Others even read it as imposing a
10-year moratorium to allow for discussion of ethics
and safeguards. However, as a Defense
Department spokesman confirmed for me, the
10-year expiration date is routine for such directives,
and the policy itself is "not
a moratorium on anything."
A careful reading of the directive
finds that it lists some broad and imprecise criteria and requires
senior officials to certify that these criteria have been met
if systems are intended to target and kill people by machine
decision alone. But it fully supports
developing, testing, and using the technology, without delay.
Far from applying the brakes, the policy in effect overrides
longstanding resistance within the military, establishes a framework
for managing legal, ethical, and technical concerns, and signals
to developers and vendors that the
Pentagon is serious about autonomous weapons.
Like God creating Adam in his own image, we have now created robots in ours.

- Did soldiers ask for killer robots?

In the years before this
new policy was announced, spokesmen routinely denied that the
US military would even consider
lethal autonomy for machines.
Over the past year, speaking for themselves, some retired and
even active duty officers have written passionately against both
autonomous weapons and the
overuse of remotely operated drones.
In May 2013, the first
nationwide poll ever taken
on this topic found that Americans
opposed to autonomous weapons
outnumbered supporters by
two to one. Strikingly, the closer people were to the
military - family, former military, or active duty - the more
likely they were to strongly oppose
autonomous weapons and support efforts to ban them.
Since the 1990s, the military has exhibited what autonomy proponent
Barry Watts has called
"a cultural disinclination to turn attack decisions over
to software algorithms." Legacy weapons such as
land and sea mines have been deemphasized and some futuristic
programs canceled - or altered to provide greater capabilities
for human control. Most notably, the
Army's Future Combat Systems program, which was to include
a variety of networked drones and
robots at an eventual cost estimated as high as
$300 billion, was canceled in
2009, with $16 billion already spent.
At the same time, calls for
autonomous weapons have been rising both outside and
from some inside the military. In
2001, retired Army Lieutenant
Colonel T. K. Adams argued that humans were becoming
the most vulnerable, burdensome, and performance-limiting components
of manned systems. Communications links for remote operation
would be vulnerable to disruption, and full autonomy would be
needed as a fallback. Furthermore, warfare would become too fast
and too complex for humans to direct. Realistic or not, such
thinking, together with budget pressures and the perception that
robots are cheaper than people, has supported a steady growth
of autonomy research and development in military and
contractor-supported labs. In
March 2012, the Naval Research
Lab opened a new facility dedicated to development and
testing of autonomous systems,
complete with simulated rainforest, desert, littoral, and shipboard
or urban combat environments. But the
killer roboticists' brainchildren have continued to face
what a 2012 Defense Science Board
report, commissioned by then
Undersecretary Carter, called
"material obstacles within the Department that are inhibiting
the broad acceptance of autonomy."
- The discrimination problem:

Navy
scientist John Canning recounts
a 2003 meeting at which
high-level lawyers from the
Navy and Pentagon
objected to autonomous weapons.
They assumed that robots could
not comply with international humanitarian law, core principles
of which include a responsibility to distinguish civilians from
combatants and to refrain from attacks that would cause excessive
harm to civilians. These principles, and the military rules of
engagement intended to implement them, assume a level of awareness,
understanding, and judgment that computers simply don't have.
Weapons are also subject to mandated legal review, and indiscriminate
weapons - that is, weapons that cannot be selectively directed
to attack lawful targets and avoid civilians - are forbidden.
The lawyers did not think they would ever be able to sign off
on autonomous weapons.
Georgia Tech roboticist Ron Arkin has argued that
unemotional robots, following rigid programs, could actually
be more ethical than human soldiers.
But his proposals fail to solve the hard problems of
distinguishing civilians, understanding and predicting social
and tactical situations, or judging the proportionality of force.
Others argue, philosophically, that only humans can make such
targeting judgments legitimately. In a world getting used to
talking about virtual assistants
and self-driving cars, it
may not be obvious what the limits of
artificial intelligence will be, or what people will
accept, in 10, 20, or
40 years. But for now, and for the immediate future,
the robot discrimination problem
is hard to dispute.
To break the
legal deadlock, Canning
suggested that robots might
normally be granted autonomy to attack materiel, including other
robots, but not humans. Yet
in many situations it might be impossible to avoid the risk -
or the intent - of killing or injuring people. For such cases,
Canning proposed what he called "dial-a-autonomy":
that is, the robot might ordinarily
be required to ask a human what to do, but in some circumstances
it could be authorized to take action on its own.
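Purely as an illustration of the control flow involved (this is a hypothetical sketch, not any real system; every name in it is invented for this article), Canning's idea reduces to a simple software gate:

```python
from enum import Enum, auto
from typing import Callable

class Authority(Enum):
    ASK_HUMAN = auto()   # default: request human permission to fire
    ACT_ALONE = auto()   # exceptional: pre-authorized to decide itself

def may_engage(target_is_materiel: bool,
               authority: Authority,
               human_approves: Callable[[], bool]) -> bool:
    """Toy model of the proposal: free to attack machines, but where
    people could be harmed, defer to a human unless authorized otherwise."""
    if target_is_materiel:
        return True                # autonomy granted against materiel
    if authority is Authority.ASK_HUMAN:
        return human_approves()    # the robot must ask what to do
    return True                    # the authorized exception
```

The point of the sketch is how small the exception is: the entire difference between "ask a human" and "act on its own" is one branch.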
In recent years,
autonomy visionaries have stressed
human-machine partnerships and flexibility to decide
the level of autonomy a weapon may be allowed, based on tactical
needs. In a 2011 roadmap,
for example, the Defense Department
envisions unmanned systems
that seamlessly operate with
manned systems while gradually reducing the degree of
human control and decision making required. A
2011 Navy presentation depicts decisions about autonomy
and control as a continuous tradeoff,
explaining that while human control minimizes the risk
of attacking unintended targets, machine autonomy maximizes the
chance of defeating the intended ones. It seems likely that
in desperate combat, autonomy would be dialed up to the highest level.
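As a toy illustration of that framing (the curves and numbers below are invented for exposition, not taken from the Navy presentation), the tradeoff can be modeled as two monotonic functions of a single autonomy "dial":

```python
def tradeoff(autonomy: float) -> tuple[float, float]:
    """autonomy runs from 0.0 (full human control) to 1.0 (full machine
    autonomy). Returns notional (risk_of_unintended_attack, chance_of_kill).
    The shapes are arbitrary; only the opposing directions matter."""
    assert 0.0 <= autonomy <= 1.0
    risk = autonomy ** 2             # rises as human control is removed
    effectiveness = autonomy ** 0.5  # rises with machine speed and reach
    return risk, effectiveness

for dial in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"dial={dial:.2f} -> (risk, effectiveness) = {tradeoff(dial)}")
```

On any such model, a commander under fire has every incentive to turn the dial all the way up.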
- Appropriate levels of human judgment:
In the spring of 2011, the
Defense Department convened
a group of uniformed and civilian personnel to begin developing
a policy for autonomous weapons. The directive that emerged
18 months later lists a number of requirements for
autonomous systems and draws a line at systems intended
to autonomously target and engage humans - or to apply kinetic
force (e.g., bullets and bombs) against any targets. But the
directive neither states nor implies that this line should never
be crossed. Rather, the line may be crossed if
two undersecretaries and the
Chairman of the Joint Chiefs of Staff affirm that the
listed requirements have been met.
In the event of an urgent military need, any of the requirements
can be waived - with the exception of a legal review. Furthermore,
the line is not as clearly drawn as it may seem to be.
The requirements listed in the
directive are not much more stringent than those that apply to
any weapon system. Tactics, techniques, and procedures must be
developed to specify how an autonomous weapon system should be
used. Hardware and software must undergo rigorous verification
and validation. Human-machine
interfaces must be understandable to trained operators and must
provide clear activation and deactivation procedures and have
"safeties, anti-tamper mechanisms,
and information assurance" that minimize the probability
of unintended engagements.
These requirements sound reassuring;
they promise to address many of the concerns people have about
autonomous weapons. According to the directive, it is
Defense Department policy that the measures listed will
ensure that the systems will work in realistic environments against
adaptive adversaries. But saying it doesn't necessarily make
it so. In reality, neither mathematical
analysis nor field testing can possibly locate every
software bug or situation in which such complex systems
may fail or behave inappropriately. Adversaries will strive to
locate points of vulnerability, and it is terribly hard to anticipate
everything that adversaries may do, let alone know how their
actions may affect system performance. The notion of information
assurance implies a promise to solve problems of software reliability
and computer security that bedevil contemporary technology.
The centerpiece of the entire directive is this statement:
"Autonomous and semi-autonomous
weapon systems shall be designed to allow commanders and operators
to exercise appropriate levels of human judgment over the use
of force." Although the phrase is never defined,
it does not appear that appropriate levels always require at
least one human being to make the decision to kill another.
Rather, the appropriate level might well be the decision to dispatch
a robot on a mission and let it select the targets to engage.
In making such decisions, it appears that the burden
of ensuring compliance with rules of engagement and laws of war
falls on commanders and operators when the
robots themselves are incapable of ensuring this. But
in practice, it seems likely that
unintended atrocities committed by autonomous weapons will be
blamed on technical failures.
- Smudging the line:
In theory, as long as
three senior officials withhold
their signatures, autonomous weapon systems that are intended
to target humans or use kinetic or lethal force would be blocked.
But the policy green-lights - no extra
signatures needed - semi-autonomous weapon systems that may apply
any kind of force against any targets, including people.
The crucial line that the policy draws between semi- and fully
autonomous systems is fuzzy and broken. As technology advances,
it is likely to be crossed as a matter of course.
The directive defines a
semi-autonomous weapon system as one intended to engage
only those targets that have been selected by a human operator.
But the system itself is allowed to use autonomy to acquire,
track and identify potential targets. It can cue the operator,
prioritize targets, and decide when to fire. What the operator
must do to select targets is left unspecified. Would a verbal
OK, gesture, or even
neurological interface be acceptable?
A weapon system with such capabilities may not be intended to function without
a human operator, but at most it would require a trivial modification
to do so - perhaps a hack. At least three companies already
market such systems. The policy clears them for immediate use
after acquisition, via standard procedures.
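To make concrete how thin the semi-autonomous line is, here is a hypothetical sketch (invented for this argument, not any vendor's code) of an engagement loop that fits the directive's definition of semi-autonomous; the human "selection" amounts to a single confirmation gate:

```python
from typing import Callable, Iterable

def engagement_loop(acquire_and_prioritize: Callable[[], Iterable[str]],
                    operator_confirms: Callable[[str], bool],
                    fire: Callable[[str], None],
                    require_confirmation: bool = True) -> None:
    """The machine acquires, tracks, identifies, and prioritizes targets;
    a human merely answers yes or no before each engagement."""
    for target in acquire_and_prioritize():       # machine does the work
        if require_confirmation and not operator_confirms(target):
            continue     # this one conditional is the 'human in the loop'
        fire(target)     # with require_confirmation=False, the identical
                         # loop is a fully autonomous weapon
```

Flipping one default argument, exactly the "trivial modification" or hack described above, crosses the policy's crucial line.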
The policy also addresses
fire-and-forget or lock-on-after-launch
homing munitions, which would include many systems in
use today. Such munitions have seekers
that autonomously find and home on targets. The directive
classifies them as semi-autonomous
weapon systems, on the theory that the operator selects
targets by using tactics, techniques and procedures that
"maximize the probability that the only targets within the
seeker's acquisition basket" will be the intended
targets. Yet, upon launch, such munitions become,
de facto, fully autonomous.
No restrictions are placed on
the technology that a seeker
may use to find a target and decide whether that is what it was
looking for. This opens a clear path for weapons that can be
sent on hunt-and-kill missions,
limited only by the ability of their onboard sensors and computers
to narrow their acquisition baskets
to selected targets.
- A way forward - to what?
In the mid-2000s, Lockheed Martin
was developing a small autonomous
drone missile for the Air
Force, and a similar system for the
Army. Equipped with several types of onboard sensors,
the missiles would fly out to designated areas and wander in
search of generic targets, such as tanks, rocket launchers, radars,
or personnel, which they would autonomously recognize and attack.
Both programs were canceled, amid legal, ethical, and technical
questions, to be superseded by systems that
combine autonomous capabilities with radio links to human operators.
Under the new policy, would such
wide-area search munitions be classified as
autonomous or semi-autonomous? Either way,
the policy establishes that weapons like these may be developed,
acquired, and used.
Given the long internal debate
and general public opposition to killer
robots, this is a highly aggressive policy. The
US military never intended to replace foot soldiers with
autonomous lethal robots during
this decade, particularly not where civilians might be at risk.
But funding the development and acquisition of systems that have
autonomous targeting and fire-control
capabilities - even if they are not intended for
fully autonomous killing - will spur the weapons industry,
in the United States and elsewhere,
to accelerate exploration and investment in the technology of
autonomous killing. The real issue is whether the
world needs to go this way at all.
The message of this policy is: full speed ahead.
Mark Gubrud is a postdoctoral research associate in the
Program on Science and Global Security at
Princeton University and a member of the
International Committee for Robot Arms Control.