BBS:      TELESC.NET.BR
Assunto:  AI workers call for surveillance, weapons limits
De:       Mike Powell
Data:     Thu, 5 Mar 2026 11:19:53 -0500
-----------------------------------------------------------
Hundreds of Google and OpenAI employees sign open letter urging limits on
military AI

By Eric Hal Schwartz published 15 hours ago

AI workers call for limits on surveillance and autonomous weapons

    Almost a thousand employees from Google and OpenAI signed an open letter
calling for clear limits on military uses of AI
    The letter urges tech companies to push back against government plans for
AI surveillance and autonomous weapons
    The move reflects growing tension inside the AI industry over government
contracts and defense partnerships

Nearly a thousand employees of Google and OpenAI have signed an open letter
urging their companies to resist pressure from the U.S. military to loosen
restrictions on how AI systems can be used. The letter declares "We Will Not
Be Divided" over the subject, even after the Pentagon designated Anthropic a
"supply chain risk" when the company refused to allow its technology to be
used for domestic mass surveillance or fully autonomous weapons.

That move shocked many observers in Silicon Valley and sparked a wave of
concern among the engineers building today's frontier AI models, especially
as OpenAI and Google are reportedly negotiating to take up the arrangement
Anthropic rejected.

The signatories frame their message in unusually blunt language for an industry
known for cautious corporate communication. The letter alleges that government
officials are attempting to pressure AI companies to abandon certain ethical
boundaries.

"They're trying to divide each company with fear that the other will give in.
That strategy only works if none of us know where the others stand," the letter
states. "This letter serves to create shared understanding and solidarity in
the face of this pressure from the Department of War."

The open letter is notable in that it includes signatories from rival
companies that normally compete fiercely. The argument they put forward is that AI is now
powerful enough that decisions about its use cannot be treated as routine
business agreements.

These concerns are not purely theoretical. Governments around the world are
exploring how AI might be integrated into defense planning and intelligence
analysis. Military agencies have long used software tools for surveillance and
targeting. Advanced generative models could accelerate those capabilities
dramatically. And with studies beginning to show that AI models favor the
nuclear option in war games, letting them control weapons and surveillance
systems seems like an even worse idea.

AI war

It's a bit of a throwback for Google workers, thousands of whom protested in
2018 against the company's involvement in Project Maven, a Pentagon program
to use machine learning to analyze drone footage. After widespread internal
backlash, Google ultimately allowed that contract to expire and published a
set of ethical guidelines known as its AI Principles.

Those principles were meant to define how Google would approach sensitive uses
of artificial intelligence. At the time, the company said it would not develop
technologies designed to cause harm or enable surveillance that violated
international norms. The latest open letter suggests that similar tensions are
resurfacing as governments become more interested in deploying powerful
language models.

The letter may or may not change corporate decisions, but at least the workers
can point to it as a message that can't be misconstrued.

https://www.techradar.com/ai-platforms-assistants/gemini/hundreds-of-google-and-openai-employees-sign-open-letter-urging-limits-on-military-ai

$$
--- SBBSecho 3.28-Linux
 * Origin: Capitol City Online (1:2320/105)
