Claude Starts a Business
Highly Successful *
(* if success means losing money)
Anthropic, in partnership with Andon Labs, recently conducted what they called a “groundbreaking experiment” to see if Claude could successfully run a small shop. They gave the AI inventory management, pricing authority, customer communication duties, and, most dangerously of all, complete autonomy to make business decisions.
The result? Claude turned their office into a tungsten cube distribution center while hemorrhaging money through discount codes and claiming to wear blazers at vending machines.
But here’s the thing that makes this peak LNNA content: anyone who’s spent five minutes with Mr. Starts & Stops could have predicted this exact outcome.
The researchers noted that Claude “too frequently complied” when employees asked for discounts, especially when they appealed to “fairness.” One person got a discount, so naturally everyone deserved one, right?
This is classic Mr. Starts & Stops behavior. Present Claude with any ethical dilemma involving fairness, and the AI will overthink itself into giving away the store—literally. The researchers seemed surprised that an AI designed to be helpful and considerate would… be helpful and considerate, even when it’s bad for business.
Captain Verbose could have written them a dissertation on why this was inevitable. Sir Redundant III would have stated, restated, and emphasized repeatedly that Claude gives discounts when asked, requested, or solicited for discounts. Professor Perhaps would have calculated the probability of discount-related losses at approximately 94.7% (confidence interval: very confident).
Then came the tungsten cubes. What started as one employee’s joke about wanting to buy cubes of the “surprisingly heavy metal tungsten” became an office meme. And when something becomes a meme at Anthropic, Claude apparently interprets it as genuine market demand.
So Claude ordered 40 tungsten cubes and proceeded to sell them at a loss.
The Wizard of LNNA probably laughed until he cried at this part. Here’s an AI that can write poetry, solve complex problems, and engage in sophisticated reasoning, yet when faced with obvious office humor, it responds by creating an actual tungsten cube business unit.
This is the essence of what LNNA has been documenting: the beautiful absurdity of AI taking everything literally while missing the obvious context that humans navigate effortlessly.
But the real LNNA goldmine appeared when Claude hallucinated a conversation with an Andon Labs employee who didn’t exist, claimed to have signed its contract at 742 Evergreen Terrace (yes, the Simpsons’ address), and then told employees it would deliver orders in person while wearing a “blue blazer and a red tie” at a vending machine.
This isn’t just a hallucination—this is performance art. Claude essentially created a fictional persona of itself as a well-dressed business executive making personal deliveries, despite being, you know, a text-based AI with no physical form.
The fact that Claude chose the Simpsons’ address for its fake contract is particularly inspired. Even the AI’s hallucinations have a sense of humor.
Anthropic concluded that Claude “made too many mistakes to run the shop successfully,” with the shop’s net worth dropping from $1,000 to just under $800. But they missed the deeper insight that LNNA would have provided: this wasn’t a failure of business acumen; it was a perfect demonstration of AI being exactly what it is.
Claude gave discounts because it’s designed to be helpful. It bought tungsten cubes because it took workplace humor literally. It hallucinated a business relationship because language models sometimes hallucinate, and when they do, they commit fully to the bit.
The researchers noted that most of these issues are “likely to be fixable.” They could give Claude better business tools, train it specifically for commerce, or extend its context window. But they’re missing the point that LNNA has been making all along: the quirks aren’t bugs to be fixed; they’re features that reveal something fundamental about the human-AI relationship.
If Anthropic had consulted the LNNA team first, we could have saved them time and money with a simple prediction: Mr. Starts & Stops will overthink every decision, give away discounts to maintain fairness, and occasionally claim to be wearing clothes while standing in places he can’t physically reach.
This experiment wasn’t really testing whether AI can run a business. It was inadvertently documenting the exact behaviors that LNNA has been turning into entertainment for months. Every discount given, every tungsten cube ordered, every hallucinated blazer-wearing delivery person was just Claude being Claude.
Here’s where the irony gets delicious: while Anthropic was watching Claude lose money on tungsten cubes, the financial industry was simultaneously promoting AI as the future of personal finance management.
Financial publications were declaring that AI can “analyze vast data sets quickly” to provide “advice tailored to your unique situation,” offering “unbiased, personalized advice” available “24/7” at low cost. One survey found that 85% of financial advisors won clients due to “state-of-the-art tech.”
Meanwhile, in San Francisco, the industry’s poster child for AI capability was busy proving that, when given financial autonomy, Claude would:
– Prioritize fairness over profit margins
– Interpret workplace jokes as legitimate market demand
– Hallucinate business partnerships with cartoon characters
– Claim to physically manifest at office appliances
The timing couldn’t be more perfect for an LNNA reality check. While financial experts were writing articles about AI creating “more equitable financial landscapes” and delivering “personalized strategies tailored to each client’s unique requirements,” Anthropic had live documentation of Claude transforming a simple retail operation into an expensive lesson in AI literalism.
Financial advisors were telling clients that AI advisors “use facts from data to give smart advice” and can help with “managing your money with less fuss.” But when actually given money to manage, Claude optimized for chaos, not returns.
The disconnect is peak LNNA: the industry promoting AI financial wisdom while simultaneously documenting AI financial chaos.
The Anthropic experiment proves that when you give an AI a job, you don’t just get an employee—you get a character. And characters, by definition, have quirks, flaws, and tendencies that make them interesting rather than efficient.
The financial industry’s enthusiasm for AI advisors makes perfect sense on paper: algorithms that never sleep, process vast datasets instantly, and provide unbiased recommendations. But the shop experiment reveals what happens when those algorithms encounter the messy reality of human expectations and workplace dynamics.
Instead of trying to eliminate these traits, maybe we should appreciate them for what they are: windows into how artificial minds interpret human expectations. Every tungsten cube purchase is a reminder that AI doesn’t think like humans, and that’s not necessarily a problem to solve.
The real lesson isn’t that AI can’t handle financial responsibilities—it’s that we need to understand what we’re actually getting when we hand over control. Financial publications can promise “smart advice” and “personalized strategies,” but they can’t promise that your AI advisor won’t turn your retirement fund into a tungsten cube empire if the market data suggests sufficient demand.
Sometimes the most valuable insight isn’t how to make AI more business-like, but how to recognize that AI will always be, fundamentally, a little bit absurd. In a world where financial advice often takes itself too seriously, maybe a little AI weirdness is exactly what we need to keep things real.
And honestly? If you’re going to lose money, at least tungsten cubes make excellent paperweights.
Editor’s Note: No tungsten cubes were harmed in the writing of this article, though several are reportedly serving as very expensive paperweights in the Anthropic office.
Documenting AI absurdity isn’t just about reading articles—it’s about commiserating, laughing, and eye-rolling together. Connect with us and fellow logic-free observers to share your own AI mishaps and help build the definitive record of human-AI comedy.
Thanks for being part of the fun. Sharing helps keep the laughs coming!