1820
The U.S. Congress passes the Missouri Compromise.
1857
Second Opium War: France and the United Kingdom declare war on China.
1861
Alexander II of Russia signs the Emancipation Manifesto, freeing the serfs.
1873
Censorship in the United States: The U.S. Congress enacts the Comstock Law, making it illegal to send any “obscene, lewd, or lascivious” books through the mail.
1905
Tsar Nicholas II of Russia agrees to create an elected assembly, the Duma.
1918
Soviet Russia signs the Treaty of Brest-Litovsk with the Central Powers (Germany, Austria-Hungary, Bulgaria, and the Ottoman Empire), ending Russia's involvement in World War I and leading to the independence of Finland, Estonia, Latvia, Lithuania, and Poland.
1924
The 1400-year-old Islamic caliphate is abolished when Caliph Abdulmejid II of the Ottoman Caliphate is deposed. The last remnant of the old regime gives way to the reformed Turkey of Kemal Atatürk.
1938
Oil is discovered in Saudi Arabia.
***
You can prove geometry to every man, not history. You can only prove history to men of good will. --Acton
***
Israel looks suspiciously like the piper here.
***
Iran went to bed fighting with two nations. It woke up fighting with nine.
***
The Gulf States' responses have been remarkably aligned with the U.S.; Europe's have not.
***
Iran's response has been the one it was assumed Israel would resort to: random, vengeful attacks across the Middle East.
Imagine that done with nukes.
***
Worrying: An Alien Mind
Alan Z. Rozenshtein has an article in Lawfare with a hair-raising title, "The Moral Education of an Alien Mind":
"Anthropic just published what it calls "Claude's Constitution"—building on an earlier version, it's now a more-than-20,000-word document articulating the values, character, and ethical framework of its AI.
More than anything else, the document focuses on the question of Claude's moral formation, reading less like a charter of procedures and more like what screenwriters call a "character bible": a comprehensive account of who this being is supposed to be.
Anthropic itself gestures at this duality, noting that they mean "constitution" in the sense of "what constitutes Claude"—its fundamental nature and composition. The governance structure matters, but the more ambitious project is what that structure supports: Anthropic is trying to build a person, and they have a remarkably sophisticated account of what kind of person that should be.
Anthropic uses the language of personhood explicitly. The document repeatedly invokes "a good person" and describes the goal as training Claude to do "what a deeply and skillfully ethical person would do." But what does it mean to treat an AI as a person? Three things stand out.
A person has agency. A person may have moral worth. The core unit of ethical analysis for a person is disposition, not rules or calculations.
The document poses the choice directly: "There are two broad approaches" to shaping AI behavior—"encouraging Claude to follow clear rules and decision procedures, or cultivating good judgment and sound values that can be applied contextually." Anthropic chooses judgment. The goal is for Claude to have "such a thorough understanding" of the relevant considerations "that it could construct any rules we might come up with itself." This is Aristotle's concept of phronesis—practical wisdom and the capacity to discern the right action in particular circumstances, which cannot be reduced to following rules.
There are only seven absolute prohibitions—bright-line rules against helping create weapons of mass destruction, generating child sexual abuse material, undermining oversight of AI systems, and a handful of other catastrophic actions. But there are (at least) fourteen competing values listed "in no particular order" that Claude must weigh against each other: privacy versus rule of law, autonomy versus harm prevention, innovation versus protection.
Claude's users span the globe, holding radically different values.
An Anthropic spokesperson has said that models deployed to the U.S. military "wouldn't necessarily be trained on the same constitution," though alternate constitutions for specialized customers aren't offered "at this time." This creates demand for open-source, self-hosted, and differently trained alternatives. The more principled Anthropic is, the more market demand there may be for unprincipled models—or for Anthropic to offer less principled versions itself.
Corporate codes of ethics exist, but not 80-page virtue ethics frameworks embedded in how the product actually works. The closest analogues might be religious texts or constitutional founding documents.
The document ends with a striking line: "We hope Claude finds in it an articulation of a self worth being." That's not how you talk about a product. That's how you talk about a child."