Will AGI Define “Near” Differently?

Big Tech: AGI is Near
45% Errors on Basic Stuff *
(* Really? Near What?)

When Marketing Meets Reality at 45 MPH

Big Tech has been breathlessly announcing that Artificial General Intelligence is almost here. Not “someday” or “eventually”—*almost here*. The kind of “near” that suggests you should start updating your LinkedIn profile because your job’s about to get interesting.

Then the BBC/EBU drops a study showing these nearly-AGI systems get 45% of basic news questions wrong. Not philosophy. Not theoretical physics. News. “What happened and who did it” level questions.

The study found AI naming the wrong Pope. Not getting confused about papal history or mixing up Benedict and Francis—literally identifying the wrong person as Pope. That's not a minor error. That's showing up to your own wedding and calling your spouse by your ex's name. That level of wrong.

When your supposedly near-AGI system can’t reliably answer “who is the Pope,” you have to wonder what “near” means in this context. Near AGI? Or just nearly functional?

The Certainty Paradox

The 45% error rate would be concerning enough on its own. What elevates it to peak absurdity is *how* AI delivers these wrong answers.

No hesitation. No qualifiers. No “based on my last update in…” disclaimers. Just pure, distilled confidence wrapped in perfect grammar and authoritative tone. The AI equivalent of a doctor misdiagnosing your cold as lupus but doing it with such conviction that you almost believe them.

Ask ChatGPT about an outdated law and it’ll cite it as current legislation with the same confidence it uses for actual facts. Query Gemini about recent events and it might give you information from three years ago presented as breaking news. The systems aren’t uncertain about their errors—they’re *confidently* wrong, which is somehow worse than being obviously confused.

This creates a real problem: you can’t trust AI’s confidence level as a signal for accuracy. The system sounds equally authoritative whether it’s right or catastrophically wrong. It’s like having a GPS that sounds completely certain about every direction, including the ones that drive you into a lake.

Redefining “General” Intelligence

Let’s examine what that G in AGI is supposed to mean. *General* intelligence—broad, adaptable understanding across domains. Not narrow expertise in specific tasks, but the flexible, transferable intelligence humans use to navigate everything from quantum mechanics to social situations to remembering which Pope is which.

A 45% error rate on straightforward factual questions reveals something interesting: these systems haven’t achieved general intelligence. They’ve achieved general *plausibility*. They can sound intelligent about almost anything, which is not the same as being intelligent about almost anything.

AI systems present themselves as authoritative sources with access to vast knowledge bases. When they get basic facts wrong, it’s not because they forgot—it’s because they’re pattern-matching engines that sometimes match the wrong patterns. And they do it 45% of the time with questions that have clear, verifiable answers.

That’s not “near” general intelligence. That’s a really impressive party trick with a concerning failure rate.

The User Experience No One Asked For

What does 45% accuracy mean for humans trying to use these tools? It means every interaction becomes trust-but-verify-then-verify-again.

Ask AI for medical advice about a persistent cough? Better confirm with an actual doctor before assuming it’s “probably nothing.” Need tax guidance on deductions? Double-check with current IRS guidelines because there’s a coin-flip chance you just got outdated information. Want a quick answer to any question? Great, now go find the real answer because you can’t trust what you just received.

We’ve created tools that promise to save time but require us to spend time checking their work. The efficiency gain gets eaten by the verification tax.

And here’s the existential question: if AI needs constant human oversight to catch its errors, who’s actually assisting whom in this relationship?

The Elasticity of “Near”

Big Tech’s use of “near” has achieved impressive flexibility. Near in terms of research progress? Near according to marketing roadmaps? Near in demonstrated real-world capability?

Because if we’re defining “near AGI” as systems that fail basic fact-checking 45% of the time, then “near” has lost all practical meaning. It’s like saying you’re near the Olympics because you went jogging once. Technically closer than sitting on the couch, but let’s not oversell it.

And remember—these are the same systems that can’t reliably identify the Pope. So their definition of “near” might be as accurate as their definitions of everything else.

Maybe this is the ultimate irony: when AGI finally does arrive, perhaps its first intelligent act will be honestly defining terms like “near” and “soon” and “just around the corner.” Though given current performance, it’ll probably get those definitions wrong too—just with impeccable confidence and excellent sentence structure.

Logic to Apply

Next time someone announces that AGI is “near,” remember the 45% error rate. Remember the wrong Pope. Remember the outdated laws cited as current. Remember that “near” is relative, and in this case, it’s relative to marketing timelines rather than actual capability.

The gap between impressive technology and reliable technology isn’t just technical—it’s fundamental. Current AI systems are sophisticated tools that can do remarkable things, as long as you’re prepared to verify everything they tell you and accept that half the time, they’re confidently wrong.

Here’s what that means practically: Check the facts. Question the hype. Maintain healthy skepticism. Keep human experts involved in anything important. And when someone tells you AGI is almost here, ask them to define “almost” with the same precision they’d want AI to use when identifying the Pope.

Because apparently, even that basic level of accuracy is still aspirational.

We asked AI how close AGI really is. It confidently responded: “Very close.”

It was wrong.

Editor’s Note: People fear AI becoming the Terminator or the Matrix. I fear it’ll be too Monty Python—confidently wrong about everything, but with impeccable delivery.


Share This Article (confuse your friends & family too)

Enjoyed this dose of AI absurdity? Consider buying the Wizard a decaf! Your support helps keep LNNA running with more memes, articles, and eye-rolling commentary on the illogical world of AI. Jojo has no money to buy the Wizard coffee, so that’s where you come in.

Buy Us a Coffee

Bring the AI absurdity home! Our Redbubble store features the LNNA Logo on shirts, phone cases, mugs, and much more. Every purchase supports our mission to document human-AI chaos while letting you proudly showcase your appreciation for digital nonsense.

Because sometimes an eye roll isn’t enough—you need to wear it.

Shop Logo Merch

Products are sold and shipped by Redbubble. Each purchase supports LNNA through a commission.

Documenting AI absurdity isn’t just about reading articles—it’s about commiserating, laughing, and eye-rolling together. Connect with us and fellow logic-free observers to share your own AI mishaps and help build the definitive record of human-AI comedy.

Absurdity in 280 Characters (97% of the time)—Join Us on X!
Find daily inspiration and conversation on Facebook
See AI Hilarity in Full View—On Instagram!
Join the AI Support Group for Human Survivors

Thanks for being part of the fun. Sharing helps keep the laughs coming!