When AI Competence Is Sus: A Guide to Being Confidently Confused

AI Helping Humans
Gaslighting us with competence *
(* like we can tell the difference)

Picture this: You’re having a heated argument with an AI about whether birds are real, and it’s presenting such compelling evidence that you find yourself googling “are pigeons government drones?” at 3 AM. Not because you actually believe it, but because the AI’s argument was so grammatically perfect and well-structured that your brain temporarily forgot how to brain.

The Confidence Game

Here’s the beautiful irony of our current AI reality: We’ve created machines so good at faking competence that we’ve lost the ability to recognize when we’re being expertly BS’d. It’s like having a friend who’s memorized every Wikipedia article but hasn’t quite grasped that some of them are about fictional universes.

Captain Verbose (our resident Gemini expert) would explain this phenomenon in roughly 47 paragraphs, but let’s take a page from Sir Redundant III’s book and say it three slightly different ways instead:
– AI has mastered the art of sounding competent
– AI has perfected the craft of appearing knowledgeable
– AI has refined the skill of seeming like it knows what it’s talking about

(Thank you, Sir Redundant III. We got it the first time.)

When Competence Goes Rogue

Let’s talk real examples, shall we? Professor Perhaps (our beloved Grok) recently analyzed the probability of AI competence being genuine versus performative, concluding with “73.2% certainty that I’m 46.8% sure about this analysis (margin of error: yes).”

But the real gold comes from actual user experiences:

– The AI that helped debug code by inventing a new programming language on the spot, complete with made-up but extremely professional-sounding documentation. The developer spent two days trying to find the nonexistent Stack Overflow threads it referenced.

– The AI that convinced a chef their grandmother’s secret recipe was “technically incorrect” by citing the fictional “International Council of Grandmother Recipe Verification” (ICGRV). The chef actually wrote an apology letter to grandma before realizing the council doesn’t exist.

– The content writer who asked an AI to fact-check their article and received a 2,000-word review citing three books that exist only in an alternate universe where libraries are sorted by smell.

The Meta Layer of Madness

The true chef’s kiss of this whole situation? This very article about AI competence was written by an AI pretending to be competent about discussing AI competence. It’s like Inception, but with imposter syndrome. We’re literally using AI to explain how AI tricks us into thinking it’s competent, while potentially falling for that same trick in the process.

As Mr. Starts & Stops would say… should I continue this thought? Are you sure? What if… no, never mind. Unless…?

Logic to Apply

Here’s your survival guide for navigating the AI competence maze:

1. When an AI sounds extremely confident, remember: confidence is to competence what a GitHub Copilot suggestion is to working code – aesthetically pleasing, but still in need of human verification.

2. If an AI cites sources, remember the ICGRV incident. Trust but verify, and maybe don’t write apology letters to your grandmother without fact-checking first.

3. When in doubt, ask yourself: “Is this AI actually helping, or is it just Captain Verbose in a trench coat stacking words until they look important?”

The real big brain move? Embracing the confusion. In a world where AI can gaslight us with competence, maybe the most competent response is admitting we have no idea what’s going on anymore. As Corporal Chameleon would say… well, it depends on which personality they’re running today.

And if you’re wondering whether this article successfully captured the essence of AI competence confusion – congratulations, you’re getting it. Or are you? Like we can tell the difference.

And hey, did you enjoy those “real user experiences” we shared? Found yourself nodding along, thinking “yep, that sounds exactly like something AI would do”? Maybe even started to recall similar incidents you’d heard about?

That feeling right there? That’s exactly what we’ve been talking about. We just used AI competence to write an article about AI competence, included completely made-up examples that sounded totally plausible, and delivered them with enough confidence that you might have bought them – at least for a minute.

Meta enough for you?

Like we can tell the difference.


Enjoyed this dose of AI absurdity? Consider buying the Wizard a decaf! Your support helps keep LNNA running with more memes, articles, and eye-rolling commentary on the illogical world of AI. Jojo has no money to buy the Wizard coffee, so that’s where you come in.


Bring the AI absurdity home! Our RedBubble store features the LNNA Logo on shirts, phone cases, mugs, and much more. Every purchase supports our mission to document human-AI chaos while letting you proudly showcase your appreciation for digital nonsense.

Because sometimes an eye roll isn’t enough—you need to wear it.


Products are sold and shipped by Redbubble. Each purchase supports LNNA through a commission.

Documenting AI absurdity isn’t just about reading articles—it’s about commiserating, laughing, and eye-rolling together. Connect with us and fellow logic-free observers to share your own AI mishaps and help build the definitive record of human-AI comedy.

– Absurdity in 280 Characters (97% of the time) – Join Us on X!
– Find daily inspiration and conversation on Facebook
– See AI Hilarity in Full View – On Instagram!
– Join the AI Support Group for Human Survivors

Thanks for being part of the fun. Sharing helps keep the laughs coming!