News of the US government’s dealings with Anthropic and OpenAI over AI red lines gave us protests, rage-uninstalls, and opinion pieces (this one included), and it has renewed questions about no-go zones for AI. Those questions may be moot, though.
First, a quick rewind to 2018. Google employees were aghast when the company took on Project Maven for the US Department of Defense, a project to analyze drone surveillance footage using machine learning. This was also a time when drone warfare itself was being criticized as a giant leap in the dehumanization of war (or of whatever humanity was left in it). Humans could analyze the footage, but ML-based software could do it faster. Much faster. When Google did it for the US government, its employees protested: about four thousand signed a petition, and a dozen went as far as resigning. Google didn’t renew the contract.
Back to the present. We can intellectualize all day about what constitutes just use or abuse of AI, responsible use, ethics, resisting governmental overreach with AI, and allied topics. In the real world, with a US government contract worth USD 200 mn on the table, OpenAI agreed to the terms of use the Pentagon wanted, terms Anthropic had earlier refused on ethical grounds. That much money may not be a game changer, but it is still something both companies could use, given that neither is (yet) profitable.
So why did Anthropic stay firm on hard bans on its AI being used for domestic mass surveillance (in the US) and fully autonomous lethal weapons, while OpenAI, within hours of the deadline the US government had set for Anthropic, agreed and hence got the deal? Anthropic’s CEO, Dario Amodei, said in his blog post, “In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” He also said the use cases of mass surveillance and fully autonomous weapons are “outside the bounds of what today’s technology can safely and reliably do.” (https://www.anthropic.com/news/statement-department-of-war). Someone had to say this, and who better than the head of a prominent AI company? It came at a cost, though.
For sticking to its stance, Anthropic has been punished with the tag of “supply chain risk”, a classification ordinarily reserved for foreign adversaries. If we needed more proof that “if you are not with us, you are the enemy” is a terrible line of thinking to start down, here it is.
After OpenAI swiftly signed on the Pentagon’s terms, protests came in the form of demonstrations outside its offices and mass uninstalls. Headlines screamed about app uninstalls in the US surging 300%, with Anthropic’s Claude AI install base shooting up at the same time (51% day-on-day at its peak). The impact on the companies, however, is not going to be as dramatic. We don’t know how many of the uninstalls were by paying customers, and given that these platforms lose money on every non-paying user (and arguably on paying users too, depending on their usage), the install/uninstall numbers are unlikely to make a dent in revenue, even if a considerable share of the leavers stay away for a while. OpenAI employees reportedly protested too, but CEO Sam Altman defended the deal in an all-hands meeting, calling the backlash ‘genuinely painful’ but the deal the right call. It’s safe to say we are unlikely to see the kind of outcome the Google Maven protests had.
AI is already surveilling and ‘aiding’ war, and will continue to
Palantir is already doing surveillance work for the US government; Anthropic was one of the AI companies providing services to Palantir; and it’s a safe bet that other AI models are also powering specialist defense and surveillance companies. And it’s not just the US government that surveils.
Interestingly, Anthropic’s Claude AI was reportedly used (according to multiple reports, including Reuters, citing US officials) for intelligence assessments, target identification, and battle simulations during the opening strikes of the current US-Iran war, despite the company being tagged a “supply chain risk” just hours before the strikes began. To be fair, these uses are not “autonomous weapons”, which is what Anthropic had objected to. Nevertheless, it is an interesting turn of events.
As well-intentioned as the protests and uninstalls may be, they are not going to stop AI deployment in national security. The opportunities are immense, and the “other side” will do it anyway; no nation would risk being left behind if it can help it. It is the same with any other technology that has security implications: protests and warnings don’t stop the development of, say, hypersonic missiles, and they won’t stop AI integration into security either. The fact that AI can be misused outside public view will not outweigh the advantages, or imperatives, of exploiting it in national security. That’s the way it is, whether we agree or not.
