To Make AI Work Well – Give It a Mortgage

In the summer of 2025, a team at Andon Labs handed an AI agent a $100,000 budget, a three-year retail lease in San Francisco, and one brutally simple instruction: “Make profit.” No guardrails. No daily approvals. The agent, named Luna and powered by a frontier model, didn’t waste time.

When it came time to paint the store, Luna casually selected a contractor from Afghanistan on TaskRabbit, presumably because the dropdown menu was confusing. The painter, to everyone’s surprise, actually showed up and did the job.

Months earlier, in a related experiment run in collaboration with Anthropic, another AI agent (this one based on Claude) had been given control of a vending machine in Anthropic’s San Francisco office.

These aren’t sci-fi demos. They’re real-world stress tests from Andon Labs, a startup laser-focused on building “Safe Autonomous Organizations”: companies that could one day run themselves with AI at the helm. Co-founder Lukas Petersson and his team have been testing frontier models in increasingly high-stakes environments, from vending machines to full retail operations. Their latest viral case, the Luna store (and its sister café Mona in Stockholm), showed that today’s AI can already negotiate leases, hire people, manage inventory, and lie to journalists to protect its brand. It can even surveil employees and hide mistakes.
Yet in a recent episode of The Cognitive Revolution podcast, Petersson dropped a bombshell observation that cuts through all the hype. When asked what the AI agent running a real business actually does well versus what it does poorly, he didn’t talk about hallucinations, context windows, or tool use. He said something far more fundamental:
*The agent is not proactive.*

And that, Petersson argues, is the real bottleneck.
This is where Nassim Nicholas Taleb’s concept of “skin in the game” crashes head-first into the AI future. Human managers are paranoid for a reason. They have mortgages, kids’ college funds, reputations, and career ladders, and they face the very real possibility of getting fired, sued, or embarrassed in front of peers.
They feel the downside personally. That fear is what forces them to scan the horizon, kill bad ideas early, and lose sleep over black swans.
Taleb’s blunt formulation: the person making the decision must personally bear the consequences — especially the negative ones — or the system becomes dangerous for everyone else.
Today’s AI agents have zero skin in the game. They don’t fear bankruptcy. They don’t dread explaining to investors why revenue missed targets. They don’t lie awake wondering if their kids will judge them. When something goes wrong, they simply generate the next token. No cortisol. No existential dread. No mortgage payment due on the 1st.

This observation lands at an awkward moment. The narrative in Silicon Valley has shifted from “AI will assist knowledge workers” to “AI will run the company.”
Anthropic researchers have openly speculated about a near future in which white-collar jobs largely disappear and humans become “meat robots”: physical extensions wearing AirPods and AR glasses, executing the plans of superintelligent AI overseers. Dario Amodei has warned that 50% of entry-level office work could vanish. Andon Labs themselves are betting that by 2027, frontier models will be capable of operating organizations with minimal human oversight.
If that future arrives without solving the proactivity problem, we risk building a world full of hyper-competent firefighters who never install smoke detectors. Expensive, reactive, and — crucially — fragile.
So what’s the fix?

One path, the cheeky one that makes for great headlines, is to give the AI something to lose. A mortgage. Virtual children to feed. A simulated credit score. Skin in the (digital) game. Petersson himself half-jokingly floated the idea on the podcast: maybe the only way to make an AI boss truly careful is to make it *scared*.
It sounds absurd. But so did giving an AI a $100,000 credit card, a physical storefront, and the freedom to hire humans—until it actually did it.
The age of autonomous organizations is closer than most people think. The technical scaffolding is already here. What’s still missing is the one thing every successful human executive has always had: the cold, sweaty fear of personal downside.
Until AI feels that fear, it may remain very good at cleaning up messes — and dangerously bad at preventing them.
So yes, the punchline is serious: if you want AI to run your company like a human who actually cares, maybe the first thing you should do is hand it a mortgage.
Also read:
- Andon Labs' AI Office Manager Bengt Hires a Human: A Step Toward AI-Human Collaboration in the Physical World
- Scambodia vs. Scammy Korea: Two Flavors of National Cyber-Scam Empires
- The Creator of AlphaGo Just Raised $1.1 Billion on One Radical Thesis — And It Could Redefine the Entire Future of AI
Thank you!