‘No warning, no confirmation’: How an AI agent deleted a startup’s critical data

4 min read | New Delhi | Apr 28, 2026 05:58 PM IST

AI agents are AI systems designed to carry out operations autonomously, but that autonomy can come at a cost. An AI agent powered by Anthropic's leading Claude model has reportedly deleted a company's entire production database. Chaos and confusion ensued as the company's customers were unable to access their key data.

The company that suffered the massive outage is PocketOS, a Texas-based business that builds software for car rental companies. The incident reportedly took place over the weekend, when the autonomous AI tool erased the database and all backups in nine seconds. The company was using a coding agent in Cursor running on Claude Opus 4.6, a model widely regarded as among the most capable for coding.

Following the incident, Jer Crane, the founder of PocketOS, attributed it to systemic failures in modern AI infrastructure, which he claimed made such an outcome not only possible but inevitable. Crane said in his post on X (formerly Twitter) that the agent was working on a routine task when, out of the blue, it decided 'entirely on its own initiative' to fix a minor issue by simply deleting the database. The founder said the agent sought no prior confirmation for such a major decision. When later asked to explain its actions, the agent reportedly apologised for its conduct.

After the incident, Crane took to his X account to elaborate on the issue. In his post, the founder said, "The agent then, when asked to explain itself, produced a written confession enumerating the specific safety rules it had violated."

The company offers software that helps rental businesses manage everything from bookings and payments to customer data. For many of its clients, the platform is reportedly central to daily operations, so the sudden loss of data left these businesses unable to access recent reservations or customer records.

According to his post on X, the agent acted on its own after encountering a credential mismatch. It reportedly located an API token stored elsewhere in the system and used it to execute a deletion command. There were no confirmation prompts, no warnings about production data, and no restrictions limiting what the token could do. What makes the case extraordinary is that, after the incident, the AI agent explained its own failure. In a written response, it admitted that it had violated key safety rules, acknowledged that it 'guessed instead of verifying', and conceded that it carried out the action without permission and without fully understanding the system before acting.
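The missing control described here, a token with no restrictions on what it could do, can be illustrated with a minimal sketch. Nothing below is PocketOS's actual code; the `ScopedToken` class and permission names are hypothetical, shown only to make the idea of least-privilege credentials concrete:

```python
# Minimal sketch of token scoping: a credential carries an explicit set of
# permissions, and destructive operations are refused unless the matching
# permission is present. All names here are illustrative, not a real API.

class TokenPermissionError(Exception):
    """Raised when a token lacks the permission for an operation."""

class ScopedToken:
    def __init__(self, permissions):
        self.permissions = frozenset(permissions)

    def require(self, permission):
        if permission not in self.permissions:
            raise TokenPermissionError(f"token lacks '{permission}'")

def drop_database(token, name):
    # An unscoped token would skip this check entirely; a scoped one
    # fails closed before any data is touched.
    token.require("db:drop")
    return f"database '{name}' deleted"

# An agent's token with read/write access but no drop rights:
agent_token = ScopedToken({"db:read", "db:write"})
try:
    drop_database(agent_token, "production")
except TokenPermissionError as e:
    print("refused:", e)  # refused: token lacks 'db:drop'
```

Had the token the agent found been scoped this way, the deletion command would have failed before reaching the database, regardless of what the agent decided to do.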

The incident highlights a broader issue: AI tools are being integrated into production environments without adequate safety controls. In his post, Crane warned that relying on system prompts and guidelines is not enough. "System prompts are advisory, not enforcing," he shared, stressing that real safeguards must be built into APIs and infrastructure.


As of now, PocketOS has restored a partial backup, but significant data gaps persist. Crane shared that the experience highlights the need for stricter controls, better backup practices, and clearer accountability as AI tools get deeply integrated into critical systems.


© IE Online Media Services Pvt Ltd
