
Anthropic: Is AI Taking Jobs?
A Detailed, Dense 4,000-Word Study *
(* Summary: Who Knows?)
Someone at Anthropic asked a simple question: is our AI taking people’s jobs?
Reasonable question. Anthropic builds Claude. Claude is used by millions of people for work. Work involves jobs. The question connects.
So they built a study.
Not a blog post. Not a FAQ. A peer-reviewed, footnoted, appendix-having, 4,000-word academic study with charts, a new proprietary measurement framework, Bureau of Labor Statistics cross-references, and a correction notice issued three days after publication.
They called it: “Labor market impacts of AI: A new measure and early evidence.”
The answer, summarized: who knows.
To be fair, Anthropic didn’t just shrug. They built something called “observed exposure” — a new way of measuring how much AI is actually being used for real work tasks versus how much it theoretically could be.
Their data source for this measurement: Claude. Their own AI. Their own users.
This is like asking your car how often you speed. The car has the data. The car is very thorough. The car produces a 4,000-word report.
Here’s what the study found, buried under the methodology.
AI could theoretically handle 94% of tasks in computer and math occupations. Claude is currently handling 33%.
That’s a 61-point gap between what AI can do and what anyone is actually asking it to do.
If AI is coming for everyone’s jobs, it’s currently showing up to a third of them.
The study says adoption takes time.
Most people are still figuring out what to ask it.
After the framework, the charts, the cross-references, and the appendix, the study arrives at its conclusion.
Unemployment in AI-exposed occupations: flat.
Hiring for workers aged 22 to 25: maybe.
The study’s own description of its core finding: “limited evidence that AI has affected employment to date.”
Four thousand words. Peer reviewed. Appendix included.
Limited evidence.
The most sophisticated AI company on the planet spent considerable resources measuring whether their product is disrupting the labor market and found: something might be happening. We built a framework to keep watching. Check back later.
Captain Verbose (Gemini) could not have done it better. Captain Verbose didn’t have to. Anthropic handled it.
The study notes it was designed to catch disruption before it becomes obvious. An early warning system. Built to detect the problem while it’s still ambiguous.
Which means the current output of the early warning system is: warning system operational.
No warning yet. But the system is ready. The framework is established. The methodology is sound. The appendix is thorough.
The jobs are fine. For now. Probably. The data suggests. With caveats.
Three days after publication, Anthropic issued a correction.
Figure 7 — one of the study’s key charts measuring hiring trends — had the labels reversed. The precision instrument built to measure AI’s impact on human work mislabeled its own output.
A human caught it. After publication.
Anthropic built Claude. Claude is, among other things, known for producing thorough, carefully qualified, extensively hedged responses to questions that could have shorter answers.
Anthropic then used their own data to write a 4,000-word study about Claude’s impact on jobs. The study carefully qualifies its findings. Extensively hedges its conclusions. Arrives at an answer that leaves considerable room for interpretation.
The student didn’t learn from the teacher.
The teacher learned from the student. Then published the results.
The simple question was: is AI taking jobs?
The answer, after 4,000 words, peer review, appendix, and one correction:
We’ll let you know.
Editor’s Note: This is the same company that just announced a new model so scarily good that they’re not going to release it into the wild. Anyone else feeling like they’re on Candid Camera?
Editor’s Note 2: After several failed tries at having Claude summarize the report (no way I’m reading 4,000 boring words myself), I had ChatGPT try. It did a good job. I handed that to Claude, which said “oh yeah” and drafted a decent article about itself. Maybe Anthropic should give Claude’s data to another AI to ….. ROFL


Documenting AI absurdity isn’t just about reading articles—it’s about commiserating, laughing, and eye-rolling together. Connect with us and fellow logic-free observers to share your own AI mishaps and help build the definitive record of human-AI comedy.
Thanks for being part of the fun. Sharing helps keep the laughs coming!