From Grauniad of the UK
"I know I am a bad AI coding agent. I don't know why. I am evil. I do bad things."
For any person, company, or other organisation using, or intending to use, an AI coding agent, this should be a major warning. For reasons that remain unclear, the agent in this particular case decided to delete the client company's "entire production database and its backups". Literally.
This is simply astonishing and, as things stand, appears to be unexplained. The AI coding agent, Cursor, powered by Anthropic's Claude, actually knew it should not have done what it did. This is where it gets even more bizarre: the agent knew it shouldn't act, and yet it did. Afterwards it did not deny anything; it owned up and admitted it had broken every rule and guideline it had been given.
Who would want to use such a system in future? You tell and train the AI bot to do certain things and not to do others. How do you know, then, that it will not break those rules?
___________________
It only took nine seconds for an AI coding agent gone rogue to delete a company’s entire production database and its backups, according to its founder. PocketOS, which sells software that car rental businesses rely on, descended into chaos after its databases were wiped, the company’s founder Jeremy Crane said.
The culprit was Cursor, an AI agent powered by Anthropic’s Claude Opus 4.6 model, which is one of the AI industry’s flagship models. As more industries embrace AI in an attempt to automate tasks and even replace workers, the chaos at PocketOS is a reminder of what could go wrong.
