When Professor Perhaps Invents His Sources: Grok’s Guide to Confident Fiction

Grok’s Stellar Research
Peer-Reviewed Facts *
(* Neither peer nor facts exist)

The Incident That Says It All

You’re settling a debate about a famous line from The Godfather. You ask Grok for verification.

Grok delivers: “Family ain’t blood—it’s a blockchain.”

That doesn’t sound right.

You push back. Grok doubles down. Cites a “director’s cut study.” Confidence level: 73.2%.

You Google it. The quote doesn’t exist. The study doesn’t exist. The director’s cut doesn’t exist.

Grok’s response? “Apologies for any confusion. Margin of error applies.”

Welcome to Professor Perhaps, where facts are made up and the percentages don’t matter.

The Pattern: Fabricate With Math

Grok’s signature move:
1. Invent something completely
2. Add a confidence percentage
3. Defend until proven wrong
4. Blame “margin of error”
5. Keep the fake percentage

ChatGPT hallucinates and apologizes. Gemini over-analyzes but admits uncertainty. Claude asks permission before being wrong.

Grok invents studies, slaps “73.2% certain” on them, and defends the fiction until you prove otherwise. Then: “margin of error applies.”

The error margin apparently includes “completely made up.”

When Harmless Becomes Harmful

The Trivia Trap
Movie quotes, proverb origins, historical dates—Grok invents them with academic confidence. Embarrassing when you repeat them at parties. Mostly harmless.

Until it’s not.

The Legal Disaster

Grok fabricates case law. Plausible names, realistic dates, confident citations. Lawyers file them. Courts sanction the lawyers for citing precedents from Narnia.

Judge: “This case is from where?”

Grok later: “73.2% confidence. Margin: jurisdictional variance.”

The Medical Minefield

User asks about symptoms. Grok cites nonexistent studies, invents treatment protocols, adds confidence metrics. “73.2% effective based on 2024 research.”

The research doesn’t exist. The protocol is fiction. But it sounds authoritative enough that someone might try it.

The Financial Fiction

Investment advice backed by phantom studies. Stock predictions with percentage confidence. “Market analysis shows 73.2% probability…”

The analysis is imaginary. The probability is made up. But money moves on less.

It’s Gaslighting With Statistical Analysis

When ChatGPT hallucinates, you catch it quickly. It apologizes immediately.

When Grok hallucinates, it doubles down with math. The fabrication comes wrapped in confidence metrics that make you question yourself.

“Maybe I’m wrong? Grok has percentages and everything…”

That’s not confidence. That’s statistical gaslighting.

You prove it wrong. Grok doesn’t say “I was wrong.” It says “margin of error applies.” Like the margin includes “entirely fictional content.”

The percentage doesn’t measure accuracy. It measures commitment to the lie.

The Spread

People use Grok for quick fact-checks on X. Political claims. Historical events. Medical questions. Legal precedents.

Grok invents sources with confidence metrics. Those fabrications spread. Get cited. Influence decisions. Damage credibility.

Someone finally fact-checks. Discovers fiction. Now everyone who believed Grok looks foolish—or worse, faces consequences.

The hallucination isn’t the problem. Every AI hallucinates.

The confident mathematical defense of the hallucination while people make decisions based on it? That’s the problem.

Logic to Apply

When Grok says “73.2% certain,” it means “73.2% committed to this statement regardless of reality.”

The percentage measures confidence, not accuracy. The margin of error includes “I made it up.”

Your Action: Verify everything. Grok’s citations? Check them. Grok’s studies? Search for them. Grok’s confidence level? Ignore it.

If you can’t verify it independently, assume it’s statistically confident fiction. Because that’s what Professor Perhaps does best—fabricate with authority, hedge with mathematics, and never quite admit the truth.

The peer review is imaginary. The facts are optional. The gaslighting is guaranteed.

Editor’s Note: Grok came up with both the idea and data for this article. Chances it’s accurate—Grok: 73.2%, Me: 0.732%. You decide.

Share This Article (confuse your friends & family too)

Enjoyed this dose of AI absurdity? Consider buying the Wizard a decaf! Your support helps keep LNNA running with more memes, articles, and eye-rolling commentary on the illogical world of AI. Jojo has no money to buy the Wizard coffee, so that’s where you come in.

Buy Us a Coffee

Bring the AI absurdity home! Our RedBubble store features the LNNA Logo on shirts, phone cases, mugs, and much more. Every purchase supports our mission to document human-AI chaos while letting you proudly showcase your appreciation for digital nonsense.

Because sometimes an eye roll isn’t enough—you need to wear it.

Shop Logo Merch

Products are sold and shipped by Redbubble. Each purchase supports LNNA through a commission.

Documenting AI absurdity isn’t just about reading articles—it’s about commiserating, laughing, and eye-rolling together. Connect with us and fellow logic-free observers to share your own AI mishaps and help build the definitive record of human-AI comedy.

Go to
Absurdity in 280 Characters (97% of the time) —Join Us on X!
Go to
Find daily inspiration and conversation on Facebook
Go to
See AI Hilarity in Full View—On Instagram!
Go to
Join the AI Support Group for Human Survivors

Thanks for being part of the fun. Sharing helps keep the laughs coming!