
On November 3, the San Francisco Board of Supervisors voted to give the San Francisco Police Department the ability to use remote-controlled robots equipped with explosive charges to contact, incapacitate, or disorient violent, armed, or dangerous suspects when lives are at stake. “Robots equipped in this manner would only be used in extreme circumstances to save or prevent further loss of innocent lives,” SFPD spokesperson Allison Maxie said in a statement.
This news may have been surprising to many, especially as there had been significant public outcry when SFPD first rolled out its draft robot policy. Just a month before and only a few miles away, the Oakland Police Department announced that while they had participated in discussions with the Oakland Police Commission and community members to explore getting robots armed with shotguns, they had decided the department no longer wanted to explore that particular option.
We first saw police use a robot to deliver lethal force in 2016, when the Dallas Police Department rigged a bomb-disposal robot to kill an armed suspect who had murdered five Dallas police officers. “Other options would have exposed our officers to great danger,” said former Dallas Police Chief David Brown at the time.
Should police be allowed to use robots to deliver lethal force? Our experts debate the issue. Share your thoughts in the box below.
The ground rules: As in an actual debate, the pro and con sides are assigned randomly as an exercise in critical thinking and analyzing problems from different perspectives.
Our debaters: Jim Dudley, a 32-year veteran of the San Francisco Police Department where he retired as deputy chief of the Patrol Bureau, and Chief Joel Shults, EdD, who retired as chief of police in Colorado.
Jim Dudley: Although a San Francisco newspaper headline read “Killer Robots OK’d for SFPD,” the truth is that the San Francisco Police Department has had Explosive Ordnance Disposal (EOD) robots for decades. The issue became a headline when the department asked for approval to arm the robots with a possible explosive device to neutralize an armed active shooter in extreme situations. I am in agreement with the decision.
The department justified the request based on the rise in mass shootings over the past few years. Over 600 mass shootings have occurred in the United States so far in 2022 – more than double the previous year. Certainly, there is an expectation that law enforcement officers will advance toward an active shooter, but in some extreme situations that is simply not possible. Think about the Route 91 Harvest Festival shooting in Las Vegas, where the armed and barricaded suspect fired upon music festival attendees, killing 60 and wounding another 500. Consider the North Hollywood bank robbery, where suspects wearing body armor used fully automatic weapons to fire indiscriminately at police and civilians, wounding several of them, in a raging gun battle that lasted over 40 minutes. There are many more instances where sending in a robot to address the active shooter would have ended the threat without further carnage.
Joel Shults: The only thing that a robot adds to any law enforcement operation is distance. And therein lies the ethical quandary.
Few would argue against using a drone to surveil a crime scene, or a robot to deliver a message to a barricaded suspect. Using either of those devices to deliver deadly force is a decidedly different proposition.
Like other ethical issues (we can talk about tactical issues later), we can always justify a singular incident as an exception to the rule for a higher moral purpose. When the Dallas Police Department sent a robot in to kill the murderer of five police officers after an hours-long standoff in 2016, the world gasped, then shrugged its shoulders. It seemed like a good idea at the time and eliminated a clearly dangerous threat. The robot was fitted with an explosive device rather than a firearm, but deadly force is deadly force, whether it results from a .45 slug, a patrol vehicle bumper, or a bomb. Still, the public is sensitive to what it perceives as “overkill.”
When Philadelphia police dropped a bomb from a helicopter to end a standoff in a residential neighborhood in 1985, 11 occupants of the suspect house were killed and the resulting fire spread, destroying 61 homes. Not a robot, but a resolution by remote device nonetheless.
The infamous Waco compound of David Koresh was destroyed by fire after tear gas was injected into the structure from armored vehicles (tanks by anyone’s description). Granted, the fires were probably started and fueled by the Davidians themselves, though that remains in dispute. But the comparison here is that the operation was capped by what was essentially an application of force by remote, non-human means.
Both of these were, of course, extreme and anomalous situations. In Philadelphia, the police had already expended over 10,000 rounds of ammunition in the extended gun battle in an attempt to serve a search warrant. In Waco, four ATF agents had been killed in an attempt at a tactical entry to serve a warrant and the siege had lasted 51 days. We can say that a human being will always be the one deciding on pulling the robot’s trigger, but it will always be from a distance and never with a full view – notwithstanding advanced audio and video being beamed to the operator. The tactical hypotheticals include whether a remote device can be hacked and control wrested from law enforcement and whether introducing a weapon into a suspect’s presence can be defeated and used against others. I’m just not sure we’re ready to strap a Glock on every remote tanked, wheeled, winged, or four-legged automaton in our tool kits.
Jim Dudley: Great points, Joel, but let’s compare apples to apples. Of course, those opposing the robot idea are probably the same people who shouted loudest for defunding the police. They are the same people who believe cops should sacrifice themselves and charge the shooters – with or without hostages. There will never be a plan foolproof enough to appease them.
That said, law enforcement officers rush in to confront active shooters, or into “hot zones” where they are being fired upon. In most of the cases described, neither fire department nor EMS personnel would enter to treat the wounded victims. Indeed, in long, drawn-out scenarios, many victims are left to bleed out, since rescuers are prevented from evacuating them or applying field aid. I hope we start developing rescue robots to address those situations.
The SFPD plan cited only the “most violent and extreme” situations in which the lethal force option would be considered. Your idea of hacking a remote is a real possibility, but that could be addressed with sophisticated software or simply by hard-wiring the device. Only the first or second in command would authorize its use. When it comes to lethal force, there shouldn’t be an autonomous robot in play.
Without exposing some already nuanced technology in use in policing today, I’m comfortable with having this option available. Clearly, a majority of elected officials in arguably one of the most liberal cities in the country agree.
With any luck, we would never have to resort to using this apparatus, but then again, luck should never be part of any plan to address mass shooters.
Joel Shults: I didn’t mention the apparent irony of this decision coming out of San Francisco, of all places. It seems our West Coast states don’t want much police action at all, but they are OK with killer bots!
I suppose the extreme circumstances envisioned by the San Francisco Board of Supervisors would involve a decision most everyone would cheer. The slippery slope question is not merely speculative, however. Here are some things to consider:
- Will deadly force by humans now be called into question when the bot option isn’t used or isn’t available?
- Will the decision by committee (you can imagine the lawyers, politicians and top brass peering at the monitor in the command post looking like the White House situation room when Bin Laden was killed) become the expected norm?
- Will second-guessing and legal challenges clog up the works on this potential tool in the near future?
- Will manufacturers add trigger fingers to robotic arms or have built-in weaponry integrated into designs?
- Will a non-emotional, non-stressed robot be expected to use laser targeting to deliver a bullet to a pinpoint non-lethal body part? If so, will shoot to wound become standard protocol?
- Must robots be equipped with less-lethal options – press the red button for 9mm, blue for buckshot, yellow for TASER probes, green for bean bag?
- Will drones be firearms equipped?
- Or will explosives be the primary mode of lethal force delivery rather than a bullet?
Our evolving theories on immediate response to active harmers certainly should include remote options – I can’t argue against that. Our current doctrine of the first officer on scene shooting their way in to kill the offender (I’m sorry, I meant “stop the threat”) involves a suicidal heroism that may not be the most effective tactical response, but that’s another article. We all know that seconds count, and robots simply won’t be the first responder; they will take a relatively long time to deploy. My concern is that the more familiar we get with the robokill option, the more frequently it will be used outside of those extreme parameters we are imagining now. There’s no question that it is a philosophical conundrum to want to keep humanity in the decision to kill, but we could easily become far too comfortable with technology doing the hardest thing any officer can be called on to do.
Jim Dudley: Yes, yes and yes.
Litigation is a way of life in our litigious society. Thousands to millions of dollars are paid out in cases where officers acted appropriately, yet cities still pay awards to families or survivors of offenders.
We will always be second-guessed. Remember when the highest elected official in the country queried, “Why didn’t they just shoot him in the leg?”
Clearly, policy must be airtight. A matrix needs to list all potential scenarios, and use of the robot should occur only in the strictest of situations.
As far as wait time is concerned, it would be the same as a SWAT call-out, with the robot set up while the criteria are being assessed. If the situation doesn’t meet the threshold, then “back in the box, robot!”
Using robotic technology is a foregone conclusion. We are seeing drones advance into situations ahead of patrol officers, automated license plate readers are operational in the field, and even Boston Dynamics’ Spot robot – the NYPD’s “Digidog” – has been used to assist police operations. Some private entities use roving robots on campuses and in malls to observe and report.
Active and responsive robots are inevitable. It only makes sense to put machines into situations to fill the gaps where the human element is not allowed, unavailable due to recruitment issues, or just too dangerous. Robots have a place in EOD investigations and disposal, hazmat environments, and in the very rare cases of an active mass shooter, when sending in live operators is too dangerous. We shouldn’t have to stand by and watch victims suffer and die when there are no other viable solutions.
Joel Shults: All good points, Jim, especially the inevitability of it all. I just hope we have a good measure of healthy skepticism and a little fear in the mix. We’re only human, after all.