Thanks to Anthropic for Exposing the Trump/Hegseth Plans for Mass Surveillance and Killer Robots

In my latest Verdict column, I discuss Anthropic's refusal to bow or bend to pressure from the Trump administration to allow its AI products to be used for mass surveillance or in autonomous weapons. Most of my column addresses the following multi-step puzzle: (1) Anthropic was seeking carveouts from a contract that would allow the Department of Defense (now restyled the Department of War) to use its AI products for "any lawful purpose"; (2) but mass surveillance is not lawful, and the use of autonomous weapons under current conditions would do so little to minimize civilian casualties that it would likely violate international law; (3) thus, these activities are already arguably excluded by the general provision; (4) so why did Anthropic believe it needed the carveouts?

In the column, I offer a number of possibilities. The short of it is that Anthropic rightly did not trust the Defense Department to construe the law correctly and steer clear of violations. I also discuss the circumstances in which mass surveillance or autonomous weapons could be legal. I conclude with the obvious inference that the refusal of Donald Trump and Pete Hegseth to grant the carveouts reveals that they want to be able to use AI for mass surveillance and autonomous weapons. What could go wrong?

We are familiar with mass surveillance from the Edward Snowden revelations in 2013, but, so far as AI technology goes, that was a very long time ago. One very important limiting factor in 2013 and earlier was the impossibility of the government's human employees processing all of the private information it was scooping up. Sure, the government could target particular foreigners (or illegally target Americans), but the average person could take some comfort from the fact that even if the government was recording their conversations, no one had the capacity to pay attention to them. AI changes that because AI systems can sift through data many orders of magnitude faster than humans can. To use a speciesist analogy, earlier surveillance made known targets vulnerable to a kind of spearfishing, but it was easy for the rest of us to swim through the large holes in the net; newer surveillance tools can catch everybody.

As for autonomous weapons, it is sometimes said that they have the great advantage of not risking the lives of American members of the armed forces. That is certainly an advantage of killer robots over human warriors, but it's not the right comparison. The right comparison is with remotely controlled weapons. The U.S. and other countries already deploy such weapons. They're called drones.

As for battlefield humanoids or autonomous tanks, we will likely have remotely controlled killer robots before we have fully autonomous ones. Consider that Neo, one of the first household robots that consumers can buy, operates by remote control for a great many tasks. But some day, perhaps in the next few years, there will be autonomous killer robots (and also, presumably, autonomous robots that can load a dishwasher in under an hour). When that happens, would it be reasonable to put them on the battlefield?

Truly autonomous weapons have advantages over remotely controlled ones. They don't depend on communications links that can fail or be jammed. And they can react far more quickly than a remotely operated weapon, because AI processes information much faster than the humans back at the control center.

But that same advantage is also a moral and potentially legal disadvantage. Taking humans out of the loop gives control to computer systems that can make grave mistakes. The result, as I discuss in my Verdict column, is a potential for the AI to target civilians or to inflict disproportionate collateral damage. As I also note in the column, the experience of AI-assisted targeting by Israeli troops in Gaza is very concerning.

Suppose, however, that some day--perhaps not too far in the future--the error rate of the AI is lower than the error rate of humans. Suppose, in other words, that autonomous weapons programmed with the laws of war are, all things considered, less likely to kill or wound civilians than are either humans acting alone, humans remotely piloting battlefield drones, or humans deploying AI to aid their decision-making but still making the ultimate decisions themselves. In such a world, would it still be morally objectionable to deploy autonomous weapons? Might it actually be morally obligatory to do so in such circumstances?

That question is not so different from the questions that arise concerning the transition from human-driven cars to fully self-driving ones. And people in power in many U.S. jurisdictions have already decided that the benefits of autonomous vehicles outweigh their risks. Are autonomous weapons categorically different?

I admit to not having a strong view, at least with respect to the hypothetical future in which autonomous weapons are less likely to attack civilians than humans are. But we live in the reality of today, in which LLMs still hallucinate and are much more likely than humans to escalate to nuclear war. Call me a Cassandra, but I feel like that ought to count for something.

-- Michael C. Dorf