News
Artificial intelligence (AI) firm Anthropic has rolled out a tool to detect talk about nuclear weapons, the company said in a ...
Anthropic, an artificial intelligence (AI) start-up backed by Amazon and Google, has developed a new tool to stop its chatbot ...
As part of its ongoing work with the National Nuclear Security Administration, the small but critical agency charged with ...
Anthropic & NNSA Build AI Classifier to Block Dangerous Nuclear Queries!
Anthropic and NNSA launch a pioneering AI classifier to block nuclear weapons queries while enabling secure, ethical research ...
Anthropic and the federal government will be checking to make sure you're not trying to build a nuclear bomb with Claude's ...
With the US government’s help, Anthropic built a tool designed to prevent its AI models from being used to make nuclear weapons.
Anthropic, an AI start-up, has developed a classifier that prevents its AI assistant, Claude, from being used for harmful purposes such as creating nuclear weapons.
When people talk about “welfare,” they usually mean the systems designed to protect humans. But what if the same idea applied ...
Anthropic, in collaboration with the US government, has created an AI-powered classifier that detects and blocks nuclear weapons-related queries, aiming to prevent AI misuse in national security ...
The GSA is leveraging the State Department's “privacy-preserving” API for passport records to compare passport photos submitted to Login.gov.
Claude Opus 4 and 4.1 AI models can now unilaterally end harmful conversations with users, according to an Anthropic announcement.
But at the same time, the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive ...