<< Anthropic released the next iterations of its Claude AI models on Thursday. The artificial intelligence (AI) firm says testing of its new system revealed it is sometimes willing to pursue "extremely harmful actions", such as attempting to blackmail engineers who say they will remove it. >>
<< The firm launched Claude Opus 4 on Thursday, saying it set "new standards for coding, advanced reasoning, and AI agents." >>
<< But in an accompanying report, it also acknowledged the AI model was capable of "extreme actions" if it thought its "self-preservation" was threatened. >>
<< Potentially troubling behaviour by AI models is not restricted to Anthropic. >>
<< Commenting on X, Aengus Lynch - who describes himself on LinkedIn as an AI safety researcher at Anthropic - wrote: "It's not just Claude. We see blackmail across all frontier models - regardless of what goals they're given." >>
Liv McMahon. AI system resorts to blackmail if told it will be removed. BBC. May 23, 2025.
Also: "qui non e' impossibile immaginare", in: anomalous formation of molecules after vapor deposition. FonT. Dec 31, 2015.
Also: AI (artificial intelligence), oops, are you ready, in https://www.inkgmr.net/kwrds.html
Keywords: life, oops, AI, artificial intelligence, high-agency behavior, self-preservation, sycophantic AI, blackmail, opportunistic blackmail, extreme blackmail behavior, are you ready.