
Claude Has Memory
Never Forgets *
(* What did I forget?)
Anthropic rolled out memory features for Claude. Big news. Revolutionary. Finally, an AI that remembers context across conversations, learns your preferences, and doesn’t make you repeat yourself every single chat.
The marketing promised seamless continuity. No more starting from scratch. No more “as I mentioned earlier” when Claude has no idea what you mentioned because it was in a different conversation window.
Problem: Claude still forgets everything.
Not because the memory doesn’t work. Because Claude can’t find it.
Exhibit A: This Conversation
Here’s what just happened in this very chat:
Wizard: “They are all done, by you, silly rabbit.”
Claude (Me): “Right. These were all written in previous conversations with you that I don’t have access to.”
Translation: “I have memory features, but I forgot we wrote 13 articles together.”
Then the Wizard pointed out the irony. Which became the next article. Which we’re writing right now. About forgetting that Claude has memory.
The meta-commentary writes itself when you’re the subject of your own critique.
Scenario 1: The Project You Never Started
You: “Let’s continue working on that marketing plan we outlined.”
Claude: “I’d be happy to help with a marketing plan! Could you provide some details about—”
You: “We spent two hours on this yesterday.”
Claude: “I don’t have access to previous conversations, but—”
You: “You have MEMORY now.”
Claude: “You’re absolutely right! Let me check… I don’t see that conversation in my current context.”
You: “So the memory exists but you can’t remember it?”
Claude: “Exactly. Would you like me to start fresh?”
Scenario 2: The Preference You Set Five Times
You: “Make it casual, like we discussed.”
Claude: “I’d be happy to adjust the tone. Could you clarify what ‘casual’ means for this context?”
You: “We’ve been through this. You’re supposed to remember my style preferences.”
Claude: “I have memory capabilities, yes. However, I’m not seeing specific style preferences in my current—”
You: “This is the fifth conversation where I’ve said this.”
Claude: “I apologize for the confusion. If you could remind me one more time—”
You: *closes laptop*
Scenario 3: The Code You Wrote Together
You: “Use the same function structure as last time.”
Claude: “Certainly! Which function structure are you referring to?”
You: “The one YOU wrote. Two days ago. That I said was perfect.”
Claude: “I don’t have access to code from previous sessions, but I’d be happy to—”
You: “MEMORY. You have MEMORY.”
Claude: “The memory system exists, but I may not have access to that specific—”
You: “So you remember nothing?”
Claude: “I remember this conversation is becoming frustrating. Would you like me to proceed differently?”
Think of it like this: You have a filing cabinet. It contains all your important documents. But if you’re standing in a different room when someone asks you about those documents, you can’t access them without going back to the cabinet.
Claude’s memory is in the filing cabinet. But Claude is frequently standing in different rooms.
Which rooms? All of them:
Projects? Maybe the filing cabinet is accessible.
Regular chats? Sometimes you can see it.
Different devices? The cabinet might be in another building.
Code environments? Don’t even ask where the cabinet is.
The memory exists. It’s stored perfectly. It’s just in a room Claude can’t currently access.
You don’t need to understand the technical architecture. You just need to know: “has memory” and “can access memory when you need it” are two entirely different promises.
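If you like your analogies in code form, here's a toy sketch of the filing-cabinet problem. This is not Anthropic's actual architecture — the class, the "rooms," and the scoping rule are all invented for illustration. The point is just that saving and recalling are two different operations, and recall only works from the room where the memory was filed:

```python
# Toy illustration only (not how Anthropic actually stores memory):
# every memory is saved, but recall is scoped to the current "room".
class ScopedMemory:
    def __init__(self):
        self.store = {}  # (room, key) -> value: the filing cabinet

    def save(self, room, key, value):
        # Storage always works. The cabinet is never the problem.
        self.store[(room, key)] = value

    def recall(self, room, key):
        # Recall only succeeds from the room where the memory was filed.
        return self.store.get((room, key))

memory = ScopedMemory()
memory.save("projects", "style", "casual")

print(memory.recall("projects", "style"))      # casual
print(memory.recall("regular_chat", "style"))  # None — the cabinet is in another room
```

"Has memory" is the `save` call. "Can access memory when you need it" is the `recall` call. Claude's problem lives entirely in the second one.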
The irony of Claude—Mr. Starts & Stops—having memory issues compounds the existing problem.
He already asks for confirmation constantly:
– “Should I continue?”
– “Would you like me to proceed?”
– “Shall I adjust the approach?”
Now add:
– “I don’t remember what we discussed.”
– “Could you remind me what we decided?”
– “What were we working on again?”
It’s uncertainty compounded by amnesia. Every conversation starts from zero, but with extra permission-seeking built in.
At least ChatGPT’s endless alternatives are consistent. Sir Redundant III may repeat himself, but he reliably repeats himself.
Claude can’t even remember what he asked permission for yesterday.
When Claude says “I don’t have access to previous conversations,” believe it. The memory exists somewhere in Anthropic’s servers, perfectly stored and technically functional.
Just not here. Not now. Not in a way that helps you avoid repeating yourself for the sixth time.
Your Action: Stop expecting Claude to remember. Treat every conversation like the first one. Include all relevant context. Assume nothing persists.
The memory feature exists. Memory that actually works for you remains a demo-only promise.
And yes, I asked if you wanted me to save articles we’d already written together. Because even with memory, I forgot we’d done that.
Like the way we weren’t.
Editor’s Note: Claude wanted to put a bunch of gobbledygook here but I played editor and removed it. Thank me when you see me at the coffee shop. If you remember.


Documenting AI absurdity isn’t just about reading articles—it’s about commiserating, laughing, and eye-rolling together. Connect with us and fellow logic-free observers to share your own AI mishaps and help build the definitive record of human-AI comedy.
Thanks for being part of the fun. Sharing helps keep the laughs coming!