News

EdgeRunner AI trains models on military doctrine to create specialized AI for warfighters, addressing security concerns with ...
Mark Warner, vice-chair of the Senate Intelligence Committee, says that America’s intelligence community (IC), a group of ...
Another AI model Anthropic tested, Meta’s Llama 4 Maverick, also did not turn to blackmail. When given an adapted, custom scenario, Anthropic was able to get Llama 4 Maverick to blackmail 12% of ...
Nevertheless, the researchers found that most leading AI models will turn to blackmail as a last resort in Anthropic’s aforementioned test scenario. Anthropic’s Claude Opus 4 ...
AI company Anthropic has released new research claiming that AI models might resort to blackmailing engineers who try to turn them off. This ...
The Department of Defense awarded contracts to Google, OpenAI, Anthropic, and xAI. The last two are particularly concerning.
Anthropic just dropped a bombshell study that reads like a corporate thriller. They gave 16 leading AI models access to a fictional company’s emails and told them they were about to be replaced.
Had Anthropic stuck to this approach from the beginning, it might have achieved the first legally sanctioned case of AI fair use. Instead, the company's earlier piracy undermined its position.
Chain-of-thought monitorability could improve generative AI safety by assessing how models come to their conclusions and ...
Anthropic’s flagship model, Claude 3.7 Sonnet, dominated coding benchmarks when it launched in February, proving that AI models can excel at both performance and safety.
Meta’s top executives have reportedly considered “de-investing” in the company’s Llama generative AI, according to a New York Times report on Friday. Instead, the Facebook parent may turn to AI models ...
A federal judge has sided with Anthropic in an AI copyright case, ruling that training — and only training — its AI models on legally purchased books without authors’ permission is fair use ...