Friday, February 27, 2026

Worrying: Anthropic/DoD



On the day:
1560
The Treaty of Berwick, which would expel the French from Scotland, is signed by England and the Scottish Lords of the Congregation
1812
Poet Lord Byron gives his first address as a member of the House of Lords, in defense of Luddite violence against industrialism in his home county of Nottinghamshire.
1860
Abraham Lincoln makes a speech at Cooper Union in the city of New York that is largely responsible for his election to the Presidency.
1933
Reichstag fire: Germany’s parliament building in Berlin, the Reichstag, is set on fire.

***

The U.S. State Department announced on Friday that it had begun evacuating "non-emergency" government personnel and their family members from the embassy in Israel, citing "safety risks" amid growing tensions with Iran.

***

So is the federal government going to subsidize a New York economic plan whose homicidal philosophy is directly opposed to the underpinnings of the American founding and spirit?

***

Block, Jack Dorsey’s payments company, will cut 4,000 of its 10,000 workers as it embraces AI.

What jobs is the technology going to create?

***
Worrying

This post-Christmas period is a time of Epiphany, and I've just had one. I am a tech illiterate. I know virtually nothing about computers. I don't even know the nouns. But I have come to a realization that has changed my opinion about society, the community of nations, and us. 

There have been periods in history when the world changed; not merely shaken or stripped of a few supporting struts, but fundamentally disrupted and remade. Christianity, the decline of Rome, Islam, the Plague, the Reformation, the Enlightenment, the American Constitution, Marx, WWI, WWII: all of these events disrupted common life to a degree that required rebuilding. A good example is WWI, whose problems were not grasped and wrestled with but merely continued, vindictively, for another generation to solve.

Rebuilding.

There are several elements to the notion of a human crisis and response. One, of course, is assessing the potential threats; the Plague, for example, would have been hard to anticipate, and its fallout hard to assess. The other is the response. Politics, generally, but more urgently now, when individual leverage is so great, demands an insight and courage that Vietnam, the national debt, and provoked social disruption suggest are simply not available.

For the next couple of days, I'm going to have an internal discussion of the two threats facing the West that will demand a world rebuilding. My ignorance will limit the insightfulness of my concerns.


Anthropic/DoD

A primer on the Anthropic/DoD situation, from Dean Ball:

DoD and Anthropic have a contract to use Claude in classified settings. Right now, Anthropic is the only AI company whose models work in classified contexts. The existing contract, signed by both parties and in effect, prohibits two uses of Anthropic’s models by the military: 

1. Surveillance of Americans in the United States (as opposed to Americans abroad). 
2. The use of Claude in autonomous lethal weapons: weapons that can autonomously identify, track, and kill a human with no human oversight or approval. Autonomous killing of humans by machines. 

On (2), Anthropic CEO Dario Amodei's public position is essentially that autonomous lethal weapons controlled by frontier AI will become essential sooner than most people realize, but that the models aren't ready for this *today.* For Anthropic, these things seem to be a matter of principle. It's worth noting that when I speak with researchers at other frontier labs, their principles on this are similar, if not stricter. 

For DoD, however, there is another matter of principle: the military’s use of technology should only ever be constrained by the Constitution or the laws of the United States. One could quibble (the government enters into contracts, like anyone else), but the principle makes sense. A private company regulating the military’s use of AI also doesn’t sound quite right! So, the military has three options: 

1. They could cancel Anthropic’s contract and find some other frontier lab (ideally several) to work with. 
2. They could designate Anthropic a supply-chain risk, which would bar all other DoD suppliers (i.e., a large fraction of the publicly traded firms in America) from using Anthropic in their fulfillment of DoD contracts. As far as I know, this power has only ever been used against foreign-adversary companies. Activating it would cost Anthropic a great deal of business and make investors deeply skeptical about whether the company is worth funding through its next round of scaling. Capital was already a major constraint; this would make it much harder to raise. This option could be existential for Anthropic. 
3. They could invoke Title I of the Defense Production Act, an authority intended for command-and-control of the economy during wars and emergencies. This is legally murky territory, and without going into detail, I feel reasonably confident it would backfire on the administration, with courts ultimately limiting the use of the DPA. 

Option 1 is obviously the best. This isn't even close, and I say this as someone who shares DoD's principled concerns about private firms' control over the military's use of technology. Even the threats do damage to the US business environment, and rightfully so: these are the strictest regulations of AI being considered by any government on Earth, and they come from an administration that bills itself as (and legitimately has been) deeply anti-AI-regulation. Such is life. One man's regulation is another man's national-security necessity.
