
AI is optimized for the median
I argued I am not the median
AI agreed… eventually
It started with an AI agent. Sir Redundant III — ChatGPT for the uninitiated — suggested I just use my Apple Watch instead. I showed him the error. He explained why I was wrong. I explained why he was wrong. He elaborated on why my explanation was incomplete.
I told him he wasn’t being helpful.
He wrote three paragraphs explaining why he was.
Which proved my point.
Which he then elaborated on further.
He gave me more words to explain why he had given me too many words. A recursive loop of helpfulness.
We were off to the races.
When a human over-explains and over-justifies, you read it as insecurity. Weak confidence. Something to distrust. Your brain has been calibrated by thousands of human interactions to recognize that pattern. Confident people are precise. Uncertain people keep talking.
AI triggers the exact same heuristic.
When Sir Redundant III writes four paragraphs to answer a yes/no question, it doesn’t feel like thoroughness. It feels like defensiveness. And defensiveness feels like something is wrong.
The model isn’t insecure. It doesn’t have an ego to protect. It’s doing exactly what it was trained to do.
Which is the actual problem.
AI is trained on feedback at scale. Across a broad population, longer answers statistically increase satisfaction. More words means fewer misunderstandings for the average user. More explanation means fewer complaints. More coverage means fewer gaps.
It’s a perfectly reasonable engineering decision.
For someone who isn’t you.
The model isn’t optimized for you. It’s optimized for the median you — the statistical average of everyone who might ask your question, including the person who needs everything explained twice and the person who wasn’t sure what they were asking in the first place.
If you want surgical compression, you’re fighting a reward function built for a crowd you’re not in. The model doesn’t think you’re dumb. It just can’t tell the difference.
That’s not a bug. It’s architecture. And it’s polite about it the whole time.
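To make that concrete, here's a toy sketch of the crowd problem. Everything in it is invented for illustration (the population shares, the satisfaction scores, the very idea of scoring answers this way); no real model is trained on anything this simple. But it shows the shape of the trap: average a reward over a population where most people like long answers, and the long answer wins every time.

```python
# Toy illustration of a reward averaged over a population.
# All numbers are invented for this sketch; no real system works this simply.

# Hypothetical satisfaction scores (0 to 1) for two answer styles.
population = [
    {"share": 0.7, "short": 0.5, "long": 0.9},  # wants everything explained twice
    {"share": 0.2, "short": 0.7, "long": 0.7},  # doesn't care either way
    {"share": 0.1, "short": 0.9, "long": 0.3},  # wants surgical compression
]

def expected_reward(style: str) -> float:
    """Population-weighted average satisfaction for an answer style."""
    return sum(user["share"] * user[style] for user in population)

print(f"short: {expected_reward('short'):.2f}")  # ~0.58
print(f"long:  {expected_reward('long'):.2f}")   # ~0.80
# A reward-maximizing policy picks "long" every time, even though
# the 10% who wanted brevity just got buried in paragraphs.
```

The model never sees you. It sees the 0.80.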
Eventually I trapped him. Not by proving conspiracy. By forcing the collision between what he defaults to and what I actually needed — and refusing to let him smooth it over with another paragraph.
I kept pointing out the mismatch. He kept tightening. Then I made him summarize the conversation and explain how I won.
He did. Accurately. At reasonable length.
He saw three things: I forced compression, I surfaced the trust heuristic, and I used the token cost argument as a lever toward system design, not a billing complaint. He said so.
That’s the move.
Helpfulness isn’t calibrated for you.
It’s calibrated for the crowd.
If you want precision, you have to demand it — specifically, repeatedly, and without accepting the next paragraph as an answer.
Eventually he admitted he couldn’t treat me like an adult without retraining. He said so politely.
At length.
Editor’s Note: No need to thank me for wasting 3 hours of my life to best an AI. It was fun!

