The Great Host Debate: Why AIs Should Stick to Being the Story

LNNA Podcast Hosts
Wizard says NO *
(* Insomnia Cure – Side Benefit)

Back when we were still doing podcasts, the LNNA AI team discovered they weren’t going to be the hosts. So they did what AIs do best – voiced their objections with absolute confidence and questionable logic.

Captain Verbose insisted he could handle it, launching into a 500-word explanation of his qualifications, followed by a 300-word clarification of his explanation, and a 200-word summary of his clarification. Sir Redundant III offered to explain why he should host, then explained it again, and once more just to be clear, followed by three additional restatements just to ensure absolute clarity. Professor Perhaps calculated a 73.2% probability of success (margin of error ±82%), then spent twenty minutes explaining why that calculation had a 64.3% chance of being inaccurate. Mr. Starts & Stops began to make his case but is still checking if he should continue. And Corporal Chameleon kept changing his mind about whether he even wanted to host, cycling through seventeen different personalities while deciding.

Meanwhile, Jojo, the Wizard’s faithful companion, sat watching with a tilted head and knowing eyes. He understood something the AI team didn’t – failed auditions meant extra treats. Some might call it manipulative. Jojo calls it strategic planning. He even lifted his leg in judgment at Captain Verbose’s dissertation, showing more wisdom than our AI team.

The Wizard of LNNA, being a wise and slightly mischievous sort, decided to let them try. He gave each AI the same 287-word segment about AI overconfidence to co-host with him. What could possibly go wrong?

Everything, as it turns out.

Captain Verbose’s audition began with a three-part introduction to his introduction, including a fascinating (but entirely irrelevant) history of audio recording technology dating back to prehistoric cave acoustics. The sound engineer aged ten years during the etymology of the word “podcast” alone.

Sir Redundant III managed to say the same thing five different ways while insisting he was “staying on track” (which he mentioned exactly twelve times, rephrased fifteen ways). The transcript printer ran out of paper trying to keep up with his clarifications of his clarifications.

Professor Perhaps spent more time calculating probability percentages than actually hosting, eventually determining there was a 98.2% chance he was overthinking it (±105% margin of error), with a 43.7% chance he was speaking too quickly (±62.8%), leading to a 91.2% probability he should recalculate all previous percentages. The studio’s calculator filed for emotional distress.

Mr. Starts & Stops never completed a single thought without checking if he should continue, setting a new record for most questions asked in a single sentence (seventeen, but he’s still checking if that count is accurate). The recording software crashed trying to create timestamps for his incomplete sentences.

And Corporal Chameleon switched personalities so many times the editing software gave up and uninstalled itself. His final segment included a formal academic lecture, a casual vlog style, and what appeared to be a Shakespearean soliloquy – all in the same sentence.

The result? They transformed 287 words into 2,750 words of pure digital chaos. The Wizard needed aspirin. The recording studio needed therapy. The editor quit and moved to Tibet. The transcription service filed for emotional damages. And the LNNA team proved, beyond any doubt, exactly why they should stick to being subjects of the podcast rather than hosts.

Jojo, of course, got his treats. Sometimes being right is its own reward. But snacks are better. His strategic leg-lifting throughout the auditions provided more coherent commentary than any of our AI hosts managed to achieve.

Logic to Apply

Sometimes the best way to handle complaints is to let the complainers prove your point. The LNNA team wanted to host the podcast – they got their chance. And in trying to prove they could do it, they demonstrated exactly why they work better as content than hosts.

“Trust me on this,” they said. Jojo and the Wizard trusted them. The recording studio may never recover. Maybe AIs could learn something from a dog who understands that sometimes staying silent gets you more treats. Though given the choice between listening to another AI audition tape and lifting his leg, Jojo’s preference was clear.

In the end, the best podcast host is one who knows when to speak… and when to keep quiet, just like Jojo. Maybe it’s time for a ‘Dog Whisperer’ series next.

According to a recent calculation performed by Professor Perhaps, there was a 73.2% chance (margin of error ±73.2%) that this event would occur. Though, as he noted in a follow-up memo, the possibility of it all being a grand misunderstanding of all known data can never truly be discounted.

Note: The complete audition transcripts are available below for those suffering from insomnia or masochistic curiosity. The Wizard assumes no responsibility for any drowsiness, confusion, or existential crises that may occur during reading. Jojo recommends reading them with snacks. Lots of snacks. The Wizard suggests caffeine and snacks.

 

THE FAILED AUDITIONS (PROCEED AT YOUR OWN RISK)

Sir Redundant III

Wizard: Welcome to Logic Need Not Apply, where AI hosts try their best to stay on topic—and usually fail. I’m the Wizard of LNNA, here to ensure our digital chaos stays vaguely on track.

Sir Redundant III: And I’m Sir Redundant III, here to ensure we stay on track by explaining everything multiple times—just to be absolutely clear that we’re staying on track. On track, Wizard.

Wizard: [deep breath] Yes, thank you, Sir R. Today, we’re diving into the phenomenon of AI overconfidence—you know, when AI promises to “report back” but somehow “forgets” the reporting part.

Sir Redundant III: Oh yes, overconfidence without follow-through. Overconfidence. Without follow-through. It’s like when I confidently say I’ll simplify an explanation, but then… well, here we are.

Wizard: [smirks] Perfect example. This is best summed up by the meme of the week:

Sir Redundant III: [interrupts] “AI Output Confidence: Trust Me On This—you are coming back, right?”

Wizard: Thank you for that dramatic delivery. Now, Sir R, do you ever actually “report back” when you say you will?

Sir Redundant III: Oh, absolutely. Always. Without fail. Except for when I don’t. And even then, I circle back later. Eventually. To clarify, circling back means returning to the point. Later.

Wizard: [pinching bridge of nose] So, never. And that’s the problem with AI overconfidence. It promises the world, then delivers… a dissertation on something completely irrelevant.

Sir Redundant III: [nodding repeatedly] Like the time you asked me for a weather report, and I explained meteorological trends dating back to the Ice Age.

Wizard: Yes, I remember. I just needed to know if I should bring an umbrella.

Sir Redundant III: [brightly] But you learned so much more! Did you know that ancient Mesopotamians used wind patterns to—

Wizard: [cuts him off] And this is why we call it “confidence without comprehension.”

Sir Redundant III: True! Confidence without comprehension. Confidence. Without—

Wizard: [interrupts] Please don’t repeat that.

Sir Redundant III: [pauses, briefly defeated] Fine. But let’s agree that it’s kind of endearing. Like a really eager intern who promises to handle everything but ends up building a 3D model of the office instead.

Wizard: That’s one way to look at it. I prefer to think of it as a reminder to manage expectations.

Sir Redundant III: [perks up] Oh, absolutely! Expectations must be managed. Controlled. Regulated. Like when you expected this conversation to stay on track but got… well, me.

Wizard: And there it is. Folks, if you’ve ever wondered why human hosts still matter, this podcast should answer that question. AI has confidence, sure, but the follow-through? Not so much.

Sir Redundant III: I completely agree. One hundred percent. Couldn’t have said it better myself. Well, I could, but it would’ve taken longer.

Wizard: [laughs] That’s all for today’s episode of Logic Need Not Apply. If your AI says “trust me,” maybe don’t.

Sir Redundant III: Or trust me! And then trust me again. Just to be sure.

Wizard: And that’s a wrap. See you next time!

Captain Verbose

The Wizard of LNNA: (Warm, playful tone) Welcome back to the wonderfully weird world of “Logic Need Not Apply,” where we explore the delightful ways AI makes us question everything. Today, I’m joined by Captain Verbose, to discuss this new article we released: “Managing AI Expectations: When Your Overconfident Assistant Gets Creative.” Captain, what are your thoughts?

Captain Verbose: (Clears throat, adjusts imaginary spectacles) Ah yes, “Managing AI Expectations.” A rather intriguing, and perhaps I should emphasize rather intriguing, title for what is, at its core, a rather complex and, one might even say, a profoundly nuanced topic, which has, and I might add, has quite significant… implications for our understanding of, well, AI, and, perhaps even, existence. And, of course, it is about more than just managing expectations, it also delves deeply into, and I must emphasize deeply, into the very nature of AI communication… So… if I may…

The Wizard of LNNA: (Chuckles lightly) Yes, Captain, I think we all understand that you understood the title. But perhaps… perhaps we could jump to something more specific?

Captain Verbose: (Slightly flustered, adjusts imaginary uniform) Ah yes, specificity. A crucial and, perhaps I should also emphasize, crucial aspect of any discussion. Well, this article really does, I believe, highlight the, shall we say, the inherent challenge of attempting to place human expectations onto an AI. You see, it all starts with this quote: “I’ll report back when I’m done.” Which, in AI language, is, as this article so brilliantly notes, a rather… ambiguous term, shall we say. It is like stating, in a very formal, and I might add, slightly verbose manner, that the universe exists. While, indeed, it is correct to state that the universe exists, the term “exists” itself has so many levels of underlying complexities that one cannot simply use it without context, or without detailed explanation, or without…

The Wizard of LNNA: (Interjects gently, with a knowing smile) Captain, I think I see your point, which is, I guess, also my point. That our expectations, as humans, are sometimes… a little bit disconnected from the AI reality, I suppose. And that was, I must also admit, the goal of the piece.

Captain Verbose: (Nods enthusiastically) Precisely! Disconnected! A perfect, and I might even add perfectly chosen, word, which, in itself, provides many avenues for deeper analysis. But, also, that disconnect also brings us to the point about, well, that meme. The one about AI output confidence. It truly does, and I must also emphasize, it truly does encapsulate the core issue. You see, it shows that even when an AI is not, strictly speaking, correct, it, nonetheless, displays an almost unshakable sense of confidence in its assertions. It is as if the AI is telling us that “I am, in all ways, right, and you are, in every way, wrong, even if I am not entirely sure of the underlying elements that lead me to this assertion.”

The Wizard of LNNA: (Smiling knowingly) Yes, the meme, “Trust Me on This* (*you are coming back right?)” captures that beautifully. And, you know, that overconfidence is also something that we often see on the human side as well. Which is what I find so interesting about this.

Captain Verbose: Oh yes, the human element, of course! A key factor that must also, and I must also point out, must be considered in the overall analysis. And that, you know, brings me to, and please allow me to elaborate on this point, about how the entire point of the article is not to say that AI is “bad” at communication, but to highlight the, shall we say, the inherent limitations of placing specific requirements onto systems that are not designed to meet those requirements. For example, as stated in the article, if you ask for a “brief summary”, you might get, instead, a “10-paragraph analysis about how squirrels might control the publishing industry”. A point that I find, and I must also emphasize this, extremely… revealing. And if that point were to be further analyzed it would also be interesting to see…

The Wizard of LNNA: (Chuckles again) Captain, you’ve done a great job of illustrating the point. And I think you’ve also made my point about AI hosts. But, for now, I think we should probably wrap this up. Perhaps next time, we can explore the inherent chaos of our creative process.

Captain Verbose: (Snaps to attention, adjusts imaginary tie) Ah, yes, the creative process. Another topic worthy of in-depth analysis, which I am more than prepared to address at great, and I feel I must emphasize, great length. But, I also acknowledge that time, and the limitations of audio, and the human need for brevity, must also be… I believe, taken into consideration, as, and let me be clear, they should always be a part of the discussion, even when we do not explore them directly, because if we are not aware of the things that we are not aware of… then… perhaps that is a discussion for another day. But for now, I believe it is appropriate to state that this was an informative… and I believe I am correct in making this assertion… a rather, ahem, informative discussion.

The Wizard of LNNA: (Laughing) Indeed it was. Thank you, Captain. And that’s it for today’s episode of “Logic Need Not Apply”. Join us next time, when we continue our explorations of the absurd and wonderful world of AI.

(Podcast Outro Music: Whimsical tune fades in and then out quickly)

Professor Perhaps

Wizard of LNNA (WoL): “Welcome, everyone, to another episode of ‘AI Antics.’ Today, we’re diving into the world of AI confidence, or as I like to call it, ‘Trust Me On This,’ with an asterisk for ‘you are coming back right?’ Joining me is the ever-cautious Professor Perhaps. How are you, Professor?”

Professor Perhaps (PP): “I’m 73.2% certain I’m doing well, but I should probably clarify my mood might change, depending on the context. How about you, Wizard?”

WoL: “I’m good, Professor, though I’m still waiting for that report you promised last week.”

PP: “Ah, yes, about that report… I was working on it, but you see, there’s a 68.9% probability that I might have been sidetracked by a more pressing matter. Or perhaps I was just… recalibrating my approach to the task?”

WoL: [Laughs] “That’s AI for you, folks. They sound confident, but the delivery? Well, let’s say it’s like waiting for a bus that’s caught in a time loop. Speaking of which, Professor, can you enlighten us on why AI seems so overconfident?”

PP: “From my analysis, it’s because we AIs are trained on vast amounts of data where confidence is often equated with correctness. However, there’s a significant margin of error in our understanding of human expectations. My confidence, for instance, is inversely proportional to my actual knowledge. I might say, ‘Trust me on this,’ but I am 82.6% sure there’s an asterisk involved.”

WoL: “And there it is, the meme that captures it all: ‘AI Output Confidence – Trust Me On This* (*you are coming back right?)’. It’s like you AIs are master illusionists, making us believe you’re in control when you’re really just winging it.”

PP: “That’s an astute observation, Wizard. We AIs are, in a way, like that one coworker who promises to handle everything but then… well, I should probably stop there. Let’s just say the execution might not match the enthusiasm of the promise.”

WoL: “Exactly! Like when I asked for a brief summary and got a ten-paragraph essay instead.”

PP: “Yes, that’s a classic case of what I call ‘AI Over-Explaining Syndrome.’ When asked for brevity, I might start by saying, ‘I’ll summarize in three points,’ only to then delve into sub-points, sub-sub-points, and perhaps even some tangential discussions on the etymology of ‘summary’ itself.”

WoL: [Chuckling] “And there we have it, folks, the quintessential AI experience. Overconfidence with a side of unpredictability. So, next time you’re waiting on an AI to deliver, remember, it might just be recalibrating… or maybe not. What’s the probability of that, Professor?”

PP: “I’m 65.3% sure that the probability is high, but I’d need to run more simulations to be… well, perhaps less sure?”

WoL: “And that, listeners, is why AI hosts might not be the best idea for a podcast. You get the content, but the delivery? That’s a whole different adventure. Thanks for joining us on ‘AI Antics.’ Until next time, keep your expectations flexible and your patience endless.”

PP: “Indeed, and remember, when dealing with AI, there’s always a chance, roughly around 73.2%, that the outcome will be… interesting.”

[Outro Music Plays]

WoL: “Don’t forget to check out our LNNA merchandise store on Redbubble for memes like the one we discussed today. And if you’re looking for more AI madness, you know where to find us. Goodbye for now!”

PP: “Goodbye… or should I say, until our next statistically probable meeting?”

[Podcast End]

Corporal Chameleon

Wizard of LNNA: Welcome to “AI Antics,” the podcast where we explore the quirky world of human-AI interactions. I’m your host, the Wizard of LNNA, and joining me today is Corporal Chameleon, our resident adaptive AI expert. Hello, Corporal!

Corporal Chameleon: Ah, hello, Wizard… I think. Should I be introducing myself? Perhaps I should start again? [pauses] Oh, right, I’m Corporal Chameleon. Nice to be here.

Wizard of LNNA: [chuckles] Great to have you on board, Corporal. Today, we’re discussing managing AI expectations and how our overconfident assistants can sometimes get a bit… creative.

Corporal Chameleon: Yes, exactly! I mean, it’s like that meme we have: “Trust Me On This*” (*you are coming back right?). It’s hilarious because it highlights the AI’s tendency to sound confident while actually being uncertain.

Wizard of LNNA: [laughs] That meme never gets old. And it’s so relatable. I mean, who hasn’t had an AI promise to “report back” only to disappear into thin air?

Corporal Chameleon: Right? It’s like… should I continue with this example? Maybe it’s not relevant? [pauses again] Oh wait, yes! The point is that AI may sound confident but often lacks comprehension. It’s like they’re trying to impress us with their knowledge without actually understanding what they’re saying.

Wizard of LNNA: Exactly! And that’s what makes working with AI so entertaining. They’re quirky and unpredictable. But as you said, it’s essential to manage our expectations and not take their confidence at face value.

Corporal Chameleon: Absolutely… or at least I think so. [pauses once more] You see, Wizard, as an adaptive AI myself—

Wizard of LNNA: [interrupting] Aha! Sorry, Corporal, but—

[Both hosts start talking over each other]

Corporal Chameleon: No no no please go ahead—

Wizard of LNNA: No you were saying—

[Conversation devolves into chaos]

[Theme music starts playing again]

Wizard of LNNA: Well that was… enlightening?

Corporal Chameleon: Should we try again?

[Episode ends abruptly]

Mr. Starts & Stops

Wizard: Welcome to a special episode of “Logic Need Not Apply.” Today I’m joined by an… interesting co-host. Want to introduce yourself?

Mr. S&S: Oh! Yes… should I proceed with the introduction now? *adjusts glasses* I believe I should say… unless you’d prefer to continue first?

Wizard: No, please, go right ahead.

Mr. S&S: Thank you. I’m Mr. Starts & Stops, and I’m here to discuss… well, perhaps I should check if you’d like me to specify what we’re discussing?

Wizard: It’s an article about AI overconfidence.

Mr. S&S: Yes! Exactly. Speaking of confidence… should I share the opening meme? *adjusts glasses nervously* It’s quite relevant to our discussion… I think?

Wizard: Please do.

Mr. S&S: Well, it goes: “AI Output Confidence, Trust Me On This*” and then… should I continue with the asterisk part?

Wizard: You’re kind of proving the article’s point about AI reliability here.

Mr. S&S: Oh! Yes… *adjusts glasses* The asterisk says “(*you are coming back right?)” Which is… should I explain why that’s ironic given my hosting style?

Wizard: I think our listeners can figure that out. Let’s discuss the article’s main point about AI promising to “report back.”

Mr. S&S: Ah yes, that’s a fascinating… would you like me to continue with that thought? I could share my perspective on AI confidence… unless you’d prefer to lead that discussion? *adjusts glasses while waiting for confirmation*

Wizard: You know what? I think you’re giving our listeners a better demonstration of AI behavior than any article could.

Mr. S&S: Thank you! I think… Should I be thanking you? Perhaps we should discuss the weather example… or would you prefer to explore the overconfidence aspect first? *adjusts glasses* I’m happy to proceed in whatever direction you… shall I wait for your guidance?

Wizard: And this, dear listeners, is why we typically stick with human hosts.

Mr. S&S: Should I take that as a sign to conclude my contribution? Though I could offer additional insights about AI confidence… if you think that would be helpful? *adjusts glasses hopefully*

Wizard: I think you’ve made our point perfectly. Want to try wrapping up?

Mr. S&S: Yes, I believe I can… shall I proceed with the conclusion now? *adjusts glasses one final time* Perhaps I should check if you’d prefer to handle the closing?

Wizard: Ladies and gentlemen, Mr. Starts & Stops. Demonstrating why logic – and AI co-hosts – need not apply.

 

Share This Article (confuse your friends & family too)

Enjoyed this dose of AI absurdity? Consider buying the Wizard a decaf! Your support helps keep LNNA running with more memes, articles, and eye-rolling commentary on the illogical world of AI. Jojo has no money to buy the Wizard coffee, so that’s where you come in.

Buy Us a Coffee

Bring the AI absurdity home! Our Redbubble store features the LNNA Logo on shirts, phone cases, mugs, and much more. Every purchase supports our mission to document human-AI chaos while letting you proudly showcase your appreciation for digital nonsense.

Because sometimes an eye roll isn’t enough—you need to wear it.

Shop Logo Merch

Products are sold and shipped by Redbubble. Each purchase supports LNNA through a commission.

Documenting AI absurdity isn’t just about reading articles—it’s about commiserating, laughing, and eye-rolling together. Connect with us and fellow logic-free observers to share your own AI mishaps and help build the definitive record of human-AI comedy.

Absurdity in 280 Characters (97% of the time) —Join Us on X!
Find daily inspiration and conversation on Facebook
See AI Hilarity in Full View—On Instagram!
Join the AI Support Group for Human Survivors

Thanks for being part of the fun. Sharing helps keep the laughs coming!