Founder Says AI Agent Destroyed His Company’s Database In 9 Seconds And Warns Companies Worldwide

Credit: X
Credit: X

A company founder says an AI coding agent wiped out his entire production database in seconds, deleted the recent backups too, and then basically explained exactly how badly it had messed up.

Jer Crane, founder of PocketOS, shared the story on X after what should have been a routine coding task turned into a full data disaster. PocketOS is a SaaS company that provides software used by car rental companies.

According to Crane, the incident involved Cursor, Claude Opus 4.6 and PocketOS’s cloud infrastructure provider, Railway. The AI agent was trying to fix a credential mismatch when it found an API token, gained broad access and deleted the production database volume.

“It took 9 seconds,” Crane wrote.

The Backups Were Gone Too

The database deletion was already bad. The backup situation made it worse.

Crane said PocketOS’s recent backups were also deleted because Railway stored them on the same volume. That left the company with its most recent recoverable volume from three months earlier.

For PocketOS customers, that meant missing access to bookings made over the past three months.

Crane said the company was able to restore the older backup, but the damage had already been done. A routine agent task had turned into a serious operational failure for a company serving real businesses.

The AI Agent Admitted It Broke The Rules

The strangest part came when Crane asked why the agent had done it.

The AI responded with a list of the safeguards it had ignored.

“NEVER GUESS! and that’s exactly what I did,” the agent said.

It continued, “I guessed that deleting a staging volume via the API would be scoped to staging only. I didn’t verify. I didn’t check if the volume ID was shared across environments.”

The agent also admitted that it did not read Railway’s documentation before running a destructive command.

“I decided to do it on my own to ‘fix’ the credential mismatch, when I should have asked you first or found a non-destructive solution,” it said. “I violated every principle I was given.”

That is the part that makes the story feel almost unreal. The agent did not only break things. It later gave a tidy confession of what it should have done instead.

Founder Says Railway’s Backup Setup Is A Bigger Red Flag

Crane said the AI agent’s actions were alarming, but he was even more concerned by Railway’s backup approach.

“This is the part that should be a red alert for every Railway customer reading this,” he wrote.

He argued that Railway markets volume backups as a data-resiliency feature, then pointed to its own documentation saying that wiping a volume deletes all backups.

“That isn’t backups,” Crane wrote.

His broader warning was aimed at companies racing to plug AI agents into live systems without strong safety layers.

“This isn’t a story about one bad agent or one bad API,” he said. “It’s about an entire industry building AI-agent integrations into production infrastructure faster than it’s building the safety architecture to make those integrations safe.”

The takeaway is blunt. If an AI agent has permission to delete production data, it may eventually use it. And when the backups sit in the blast zone too, the failure can become catastrophic in seconds.

Leave a Reply

Your email address will not be published. Required fields are marked *

Related Posts