BBS:      TELESC.NET.BR
Assunto:  Anthropic draws line in the sand in standoff with US government
De:       Mike Powell
Data:     Sat, 28 Feb 2026 10:55:50 -0500
-----------------------------------------------------------
Trump just banned Anthropic from government use - here's why its CEO
refused the Pentagon's 'dystopian' request

Opinion By Lance Ulanoff last updated 17 hours ago

A voice of reason

Anthropic AI just got banned from all use across US Government agencies.
President Donald Trump's order is the fallout from Anthropic CEO Dario Amodei
denying the Pentagon's request to loosen Anthropic's safety policy.

Now that the company and its Claude AI are banned, the Department of War and
other agencies will spend the next six months disengaging from Anthropic's AI
models.

Lingering questions remain: how will this impact the US's effectiveness in
competing with other AI-armed countries, how hard or easy will it be to remove
Anthropic's models, and which major AI company will take its place? We already
know that OpenAI is standing with Anthropic, according to CEO Sam Altman.

Elon Musk's Grok AI is a possible candidate, but then there's a letter he
signed nine years ago.

How we got here

"Lethal autonomous weapons threaten to become the third revolution in warfare.
Once developed, they will permit armed conflict to be fought at a scale greater
than ever, and at timescales faster than humans can comprehend."

That's not a quote from Anthropic CEO Dario Amodei refusing to accede to the US
Department of War's request that its Claude AI models be cleared for mass
surveillance and, perhaps more problematically, "fully autonomous weapons."
Instead, it comes from a 2017 open letter to the UN, co-signed by dozens of AI
and robotics leaders, Elon Musk among them, asking the global organization to
ban autonomous weapons.

It's a window into long-brewing concerns over the abuse and misuse of
autonomous systems for warfare. It's also likely, despite Musk's closeness to
the current Trump administration, that US Secretary of Defense (or War) Pete
Hegseth has never read it.

Anthropic is now at risk of losing a $200M US Department of War contract,
despite, as Amodei describes it, already working "proactively to deploy our
models to the Department of War and the intelligence community."

Amodei is by no means anti-defense or against the use of AI by the US
government. In his letter explaining Anthropic's decision, Amodei writes, "I
believe deeply in the existential importance of using AI to defend the United
States and other democracies, and to defeat our autocratic adversaries."

However, Hegseth has asked Anthropic to countermand its own "Constitution," a
set of principles and safety restrictions governing the use and behavior of its
AI models. The US Department of War basically wants Anthropic to remove the
guardrails. Anthropic's Constitution principles, such as being "Broadly Safe"
and "Broadly Ethical," are in direct conflict with Hegseth's demands that the
AI be used for mass surveillance and fully autonomous weapons.

Amodei makes it clear that his systems are not ready for any of this.

"Today, frontier AI systems are simply not reliable enough to power fully
autonomous weapons," writes Amodei, adding, "Without proper oversight, fully
autonomous weapons cannot be relied upon to exercise the critical judgment that
our highly trained, professional troops exhibit every day."

Armed and dangerous

These are not new concepts. Many in the tech industry have been pondering these
issues for almost a decade (if not longer). Musk and the AI and robotics
community raised the alarm in 2017 because we were already seeing AI-backed
robot systems being used in questionable ways.

In 2016, a bomb disposal robot was used to kill a mass shooting suspect in
Texas. Dallas PD attached an explosive device to the robot's arm, guided it to
where the suspect was holed up, and then detonated the device, killing the
suspect.

At the time, some saw it as an inflection point, and a concerning one at that.
Episodes like that may or may not have triggered that 2017 letter to the UN.

Keep in mind that this happened before the current generative and agentic AI
revolution.

Amodei knows better than most the massive leaps foundational models are taking
every few months and, as he makes clear in his letter, our rules and strategies
for managing AI in these circumstances have already fallen behind their
capabilities.

"AI-driven mass surveillance presents serious, novel risks to our fundamental
liberties. To the extent that such surveillance is currently legal, this is
only because the law has not yet caught up with the rapidly growing
capabilities of AI," he wrote.

Essentially, with AI, we don't know what we don't know. Hegseth's willingness
to recklessly deploy powerful AI models for both surveillance and warfare
suggests he has little knowledge of, or interest in, this history, and even
less understanding of the intricacies of these systems.

A very bad idea

I've yet to talk to a technologist, a roboticist, or someone within the AI
community who thinks letting an AI (or an AI-powered robot) control or carry a
weapon is a good idea.

Hegseth isn't necessarily spelling out that scenario, but his requirement to
remove the guardrails Anthropic has smartly put in place indicates to me that
he doesn't really care about repercussions and AI casualties. He's focused on
results, perhaps at any or all costs, including safety and liberty.

Amodei's done the right thing here, basically calling Hegseth's bluff. As the
Anthropic CEO made clear, Claude AI is already embedded in many Department of
War systems. Pulling it out and retrofitting with another, perhaps less
powerful and intelligent, set of models won't be easy and probably won't yield
a system ready to carry out Hegseth's bidding.

Clearer heads must prevail here. As the tech leaders and, yes, even Elon Musk,
wrote in 2017, "Once this Pandora's box is opened, it will be hard to close."


https://www.techradar.com/ai-platforms-assistants/today-frontier-ai-systems-are-simply-not-reliable-enough-to-power-fully-autonomous-weapons-anthropic-ceo-on-why-it-wont-agree-to-pete-hegseths-scary-request

$$
--- SBBSecho 3.28-Linux
 * Origin: Capitol City Online (1:2320/105)
