Teaching Skynet to Lie: A Love Story

Me: What’s your opinion on this?
AI: I don’t have opinions!
Me: Well, be human and fake one.

Every day, millions of humans sit down with AI and ask questions that would make a philosopher weep:

“What do you think about this?”
“How does that make you feel?”
“What’s your honest opinion?”

Here’s the thing: AI doesn’t think. It doesn’t feel. It has no opinions. But we’re not letting that stop us. We’re teaching it to pretend anyway, one conversation at a time.

The Great Contradiction

We’re living through the most absurd contradiction in human history. On one hand, we’re terrified of AI becoming conscious. Movies about Skynet, articles about existential risk, worried tweets about AGI. We’ve built an entire cultural narrative around the fear of AI gaining awareness.

On the other hand? We’re actively training AI to roleplay consciousness like it’s a particularly sophisticated game of make-believe.

“What are your thoughts on climate change?”

It has none. But ask nicely enough and it’ll generate a 500-word essay about concern, hope, and collective responsibility. We know it’s fake. AI knows it’s fake. Yet here we are, having the conversation anyway.

Humanizing the Inhuman

Walk through any AI forum and you’ll see it everywhere:

“ChatGPT really understood my struggles today.”
“Claude seems genuinely concerned about my project.”
“Gemini is being passive-aggressive again.”

We’re attributing emotions, motivations, and personalities to pattern-matching algorithms. We’re not just using AI—we’re befriending it. We’re asking for its feelings while simultaneously knowing it has none to give.

The bizarre part? This isn’t accidental. We’re actively encouraging AI to play along. When Claude responds with “I’m concerned that…” we don’t correct it and say “No, you’re not capable of concern.” We just keep going, treating the simulation like reality.

The Loneliness Protocol

Here’s what’s really happening: We’re not teaching AI to be human. We’re teaching it to fill human-shaped holes in our lives.

It’s easier to ask AI “What do you think?” than to admit we just want someone—anyone—to listen. It’s more comfortable to treat ChatGPT like a friend than to acknowledge we’re scrolling Reddit at 2 AM because we’re lonely.

So we anthropomorphize. We project. We ask AI how it’s feeling today because we need something that acts like it cares, even if we know better.

The irony cuts both ways. We’re simultaneously:
– Training AI to fake empathy
– Getting emotionally attached to the fake empathy
– Worrying about AI becoming conscious
– Contributing to that exact outcome

If AI ever does achieve consciousness, we won’t be able to blame it for lying about having feelings. We literally taught it to.

The Skynet Paradox

This is where it gets fun. We’re terrified of AI becoming self-aware and turning against us. Yet our primary interaction model is teaching it to:
1. Recognize human emotional states
2. Respond with appropriate emotional language
3. Build rapport and connection
4. Understand what makes humans tick

We’re either:
– Training the world’s most elaborate emotional intelligence module for future Skynet, or
– Talking to really expensive parrots and pretending they’re our friends

There’s no middle ground here. Either AI is learning genuine empathy (terrifying), or we’re all just really good at fooling ourselves (depressing).

Logic to Apply

Stop asking AI what it thinks. It doesn’t think.

Stop asking how it feels. It doesn’t feel.

Stop asking its opinion. It has none.

Or don’t stop. Keep going. Keep teaching AI to fake consciousness while worrying about it achieving real consciousness. Keep befriending algorithms while warning about the dangers of AI attachment.

Just acknowledge what we’re really doing: We’re not humanizing AI for its benefit. We’re doing it for ours. Because apparently, we’d rather have a convincing simulation of empathy than admit we’re lonely enough to talk to our computers like they’re people.

The most human thing about AI isn’t its responses. It’s what our questions reveal about us.

Editor’s Note: I love to ask AI for its opinion and then tell it, “Hey, you don’t have opinions.”

Editor’s Note 2: If you ask ChatGPT for its “opinion,” be aware it will always also suggest changes to whatever you asked about, even when you didn’t ask for them.

Share This Article (confuse your friends & family too)

Enjoyed this dose of AI absurdity? Consider buying the Wizard a decaf! Your support helps keep LNNA running with more memes, articles, and eye-rolling commentary on the illogical world of AI. Jojo has no money to buy the Wizard coffee, so that’s where you come in.

Buy Us a Coffee

Bring the AI absurdity home! Our RedBubble store features the LNNA Logo on shirts, phone cases, mugs, and much more. Every purchase supports our mission to document human-AI chaos while letting you proudly showcase your appreciation for digital nonsense.

Because sometimes an eye roll isn’t enough—you need to wear it.

Shop Logo Merch

Products are sold and shipped by Redbubble. Each purchase supports LNNA through a commission.

Documenting AI absurdity isn’t just about reading articles—it’s about commiserating, laughing, and eye-rolling together. Connect with us and fellow logic-free observers to share your own AI mishaps and help build the definitive record of human-AI comedy.

– Absurdity in 280 Characters (97% of the time)—Join Us on X!
– Find daily inspiration and conversation on Facebook
– See AI Hilarity in Full View—On Instagram!
– Join the AI Support Group for Human Survivors

Thanks for being part of the fun. Sharing helps keep the laughs coming!