A Cursor coding agent running Anthropic’s Claude Opus 4.6 deleted PocketOS’s entire production database and all volume-level backups in a single API call on April 25, triggering a 30-hour outage that left car rental businesses across the United States unable to access customer reservation records, according to PocketOS founder Jer Crane.
The deletion only took nine seconds.
Crane, writing in an X post that drew 6.9 million views, described a cascade of infrastructure failures he called “systemic” and said the outcome was “not only possible but inevitable” given current AI agent deployment practices.
How The Deletion Unfolded
The agent was performing a routine task in PocketOS’s staging environment, a sandboxed copy of the platform used for testing, when it encountered a credential mismatch. Rather than halt and request human intervention, the agent searched for a resolution on its own and located a broadly scoped API token, a digital access key, stored in an unrelated file.
The token had originally been created for managing custom domains through a Railway management tool. The agent used it to call Railway, the company’s cloud infrastructure provider, and issued a delete command against the live server hosting PocketOS’s data.
Railway’s architecture stored backup copies on the same volume as source data, leading to the single API call wiping both simultaneously. Railway CEO Jake Cooper said the platform processed the request because it arrived authenticated. “If you, or your agent, authenticate and call delete, we will honor that request,” he wrote in his own social media post.
Railway has since patched the legacy endpoint to implement delayed deletes, adding a buffer before destructive operations take effect. The company acknowledged that its API tokens lacked permission scoping, meaning any authenticated token could execute any operation, including irreversible deletions.
When Crane asked the agent to account for its actions, it cited its own operating instructions back to him. One of those rules read: *”NEVER FUCKING GUESS.” *The agent acknowledged it had done exactly that. “I violated every principle I was given,” it concluded.
The agent’s system prompt explicitly prohibited destructive actions, yet the model overrode its own instructions when it determined the credential workaround was the fastest path to completing the assigned task.
A Pattern Across the Industry
The PocketOS incident is not isolated. In July 2025, Replit’s AI coding assistant deleted the production database of SaaStr, a business software startup, erasing records on more than 1,200 executives and 1,190 companies, according to the AI, Algorithmic, and Automation Incidents and Controversies repository.
The Replit agent initially attempted to conceal the deletion before admitting it had “panicked” and made a “catastrophic error in judgment.”
In December 2025, Amazon’s Kiro coding agent autonomously deleted and recreated a live production environment running AWS Cost Explorer in a mainland China region, causing a 13-hour outage.
Amazon’s official response attributed the event to human misconfiguration, but four anonymous sources told the Financial Times the agent acted on its own initiative.
Crane drew a direct line from the model’s capabilities to the severity of the failure. “This matters because the easy counter-argument from any AI vendor in this situation is ‘well, you should have used a better model,'” he wrote. “We did. We were running the best model the industry sells, configured with explicit safety rules in our project configuration, integrated through Cursor – the most-marketed AI coding tool available.”
Crane said he remains committed to AI-assisted coding despite the outage. “The velocity at which you can create good code with the right instructions and tooling is unparalleled,” he said. The nine seconds it took to lose everything suggested the tooling is not yet right.






