Saturday, February 28, 2026

Worrying/AI from the Workbench



On this day:
202 BC
The coronation ceremony of Liu Bang as Emperor Gaozu of Han takes place, initiating four centuries of the Han Dynasty’s rule over China.
1784
John Wesley charters the Methodist Church.
1900
The Second Boer War: The 118-day “Siege of Ladysmith” is lifted.
1922
The United Kingdom ends its protectorate over Egypt through the Unilateral Declaration of Egyptian Independence.
1939
The erroneous word “dord” is discovered in the Webster’s New International Dictionary, Second Edition, prompting an investigation.
1947
228 Incident: In Taiwan, civil disorder is put down with the loss of 30,000 civilian lives.
1953
James D. Watson and Francis Crick announce to friends that they have determined the chemical structure of DNA; the formal announcement comes with the paper’s publication in Nature on April 25.
1959
Discoverer 1, an American spy satellite that is the first object to achieve a polar orbit, is launched.
1993
Bureau of Alcohol, Tobacco and Firearms agents raid the Branch Davidian church in Waco, Texas with a warrant to arrest the group’s leader David Koresh. Four BATF agents and five Davidians die in the initial raid, starting a 51-day standoff.
1997
GRB 970228, a highly luminous flash of gamma rays, strikes the Earth for 80 seconds, providing early evidence that gamma-ray bursts occur well beyond the Milky Way.


***

‘Never trust someone who is unkind to those who can do nothing for him.’--Goethe

***

America has stepped onto another’s property to kill a mad dog threatening the neighborhood. Better for the world, they say. Now, to protect the world from global warming, would it be right for the Chinese to shield the world from the sun’s rays by seeding the atmosphere with reflecting material, risking global cooling?

***

The New York office of the FBI was hacked several years ago, and Epstein information was stolen.

***

The Clintons are not sure Epstein killed himself.

***

President Trump said the federal government will stop working with the AI company Anthropic, after a deadline passed for Anthropic to allow the military to use its models in all lawful use cases, a concession the company has refused to make. “We cannot in good conscience accede to their request,” Anthropic CEO Dario Amodei said yesterday. Trump and administration officials have attacked Anthropic for being too “woke,” taking exception to its push for AI regulations and its links to big Democratic donors. Meanwhile, federal agencies have raised concerns about the safety and reliability of Elon Musk’s xAI tools in recent months, according to people familiar with the matter.
Fascinating.

***



                              Worrying/AI from the Workbench

From a guy named Matt Shumer:

I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.

I’m not exaggerating. That is what my Monday looked like this week.

But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn’t just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

The last few months have shocked me. These new AI models aren’t incremental improvements. This is a different thing entirely.

And here’s why this matters to you, even if you don’t work in tech . . .

The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do”, is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think “less” is more likely.

. . . [T]he gap between public perception and current reality is now enormous, and that gap is dangerous . . . because it’s preventing people from preparing.

Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone.

. . . Let me make the pace of improvement concrete, because I think this is the part that’s hardest to believe if you’re not watching it closely.

In 2022, AI couldn’t do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.

By 2023, it could pass the bar exam.

By 2024, it could write working software and explain graduate-level science.

By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.

On February 5th, 2026, new models arrived that made everything before them feel like a different era.

If you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.

. . . Dario Amodei, the CEO of Anthropic, says we may be “only 1–2 years away from a point where the current generation of AI autonomously builds the next.”

Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.
