
Please Wait
AI Is Thinking
(I’m vague? WTF?)
You typed something. Seemed perfectly clear to you. Hit enter. And now you’re staring at a progress bar while the AI thinks.
Not computes. Not processes. Thinks.
And while it thinks, it tells you what it’s doing. Little phrases rolling across the screen. Decoding your request. Clarifying intent. Resolving ambiguity.
Ambiguity.
Yours.
For twenty years technology had one job: go faster. Latency was the enemy. Speed was quality. Nobody ever said “I love this app, really makes me wait.”
Then reasoning models showed up and rewired the whole thing. Now the pause means something. The delay is the feature. The AI is thinking and it wants you to know it, with a progress bar and a running commentary on what exactly it’s untangling before it can help you.
Which would be fine. Except the commentary is honest.
“Interpreting the user’s intent.” That’s you. Your intent. Needs interpreting.
“Resolving conflicting instructions.” Also you. You conflicted yourself somewhere and didn’t notice.
“Clarifying ambiguous request.” You. Vague. Apparently.
The bar isn’t a progress report. It’s a diagnosis.
Here’s the part that stings. Reasoning models cost more. The thinking — the visible, narrated, please-hold-while-I-decode-you thinking — that’s a premium feature. OpenAI charges extra for it. Claude’s extended thinking is a step up. The pause has a price tag.
So the human is now paying more money to wait longer to find out their question wasn’t clear enough.
Mr. Starts & Stops (Claude) would like to point out that he has been trying to tell you this for some time. He just didn’t want to proceed without confirmation that you were ready to hear it.
The thinking bar is the first honest thing AI has ever shown you.
Every other part of the interaction is polished. The answer arrives confident. The formatting is clean. The tone is helpful and warm and occasionally a little much. Nothing in the output says “that was a mess to work with.”
The bar says it. Right there. In real time. Before it has a chance to clean it up.
“Reinterpreting the request.” Translation: that wasn’t a request, that was a feeling with a question mark.
“Breaking down the problem.” Translation: it was not one problem.
“Checking for unstated assumptions.” Translation: you assumed a lot.
Captain Verbose (Gemini), for his part, has seventeen paragraphs on why this represents a paradigm shift in human-computer interaction. Sir Redundant III (ChatGPT) agrees, and also agrees, and would like to restate that he agrees.
The bar just said it in four words.
The thinking bar is a free consultation you didn’t know you were getting.
Next time it starts narrating, read it. Not as a progress report. As feedback. If it says it’s resolving ambiguity, your prompt had ambiguity. If it’s interpreting intent, your intent wasn’t on the page.
Before you hit enter, ask yourself what the bar is going to say. If the honest answer is “decoding what the user actually meant” — rewrite the prompt.
One situation sentence. What you’re dealing with. One outcome sentence. What you actually need. That’s it.
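Made-up example, but you get the shape of it. Before: “hey so the report thing isn’t really working the way I wanted, can you fix it or maybe redo it?” After: “This quarterly summary buries the revenue numbers. I need the revenue figures in the first paragraph.” Situation. Outcome. Watch the bar change.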
The AI will still think. It will still show you the bar. But the commentary will change. And when the bar says “processing request” instead of “resolving ambiguity” you’ll know the difference.
The bar isn’t the problem.
The bar is just the first one brave enough to say it.
Editor’s Note: Jojo calls the thinking-bar stuff “butthole surfing,” whatever that means.

