BBS:      TELESC.NET.BR
Assunto:  AI presents new threats
De:       Mike Powell
Data:     Thu, 26 Mar 2026 07:48:42 -0500
-----------------------------------------------------------
'AI will also present new threats to society': Sam Altman issues stark
warning as $1 billion plan is revealed

Date:
Wed, 25 Mar 2026 13:50:13 +0000

Description:
Sam Altman says AI could help cure diseases, but warns it will also create
serious new threats that no single company can control.

FULL STORY
Today, Sam Altman announced that the OpenAI Foundation, OpenAI's non-profit
arm, will spend at least $1 billion over the next year on discovering cures
for diseases.

But alongside that announcement came a stark warning about the new threats AI
could introduce and the fact that no single company can deal with them alone.
"AI will help discover new science, such as cures for diseases, which is
perhaps the most important way to increase quality of life long-term," Altman
wrote in a post on X.

He continued: "AI will also present new threats to society that we have to
address. No company can sufficiently mitigate these on their own; we will 
need a society-wide response to things like novel bio threats, a massive and
fast change to the economy, extremely capable models causing complex emergent
effects across society, and more."

While he remained vague on what those complex emergent effects might look
like, concerns about advanced AI systems are not new. Recently, science
communicator Neil deGrasse Tyson even suggested that forms of AI development
leading to superintelligence are too lethal to pursue without limits.

What stands out most here is Altman's admission that no company can
handle this alone. That feels different from his usual messaging around AI
progress, and reads more like a warning.

Altman has often spoken and written about society needing to adapt to AI. But
this goes further. It suggests the risks may be too large, too fast-moving,
and too unpredictable for even OpenAI to manage on its own. 

With that phrasing, Altman is reframing the issue of AI safety from a tech
problem into a societal one.

Where the $1 billion is going

So where is that $1 billion actually going?

While OpenAI now operates with a for-profit structure, the OpenAI Foundation
continues to focus on long-term societal impact. Its stated mission is to
ensure artificial general intelligence benefits all of humanity. That's where
the money is going. 

According to the Foundation, it expects to invest at least $1 billion over
the next year across three areas: life sciences and curing diseases, jobs and
economic impact, and AI resilience and community programs.

This forms part of a broader $25 billion long-term commitment. 

In healthcare, the initial focus includes Alzheimer's research, public health
data, and accelerating progress on high-burden diseases.

On the economic side, the Foundation says it is already working with small
business owners, unions, and policymakers to explore how AI will reshape jobs
and how to respond to the changing landscape.

AI resilience is one of the most revealing, and potentially unsettling,
priorities of the OpenAI Foundation this year.

It includes biosecurity, with OpenAI aiming to strengthen how society
prepares for potential biological threats, both naturally occurring and
AI-enabled outbreaks.

That phrase "AI-enabled outbreaks" is mildly concerning. It lines up directly
with Altman's warning about novel biothreats, and hints at a future where AI
doesn't just accelerate progress, but also lowers the barrier to dangerous
capabilities.

Spending $1 billion on AI safety and medical progress is, on paper, a 
positive step. But what makes this announcement interesting is the tension at
its core. Altman is talking about curing diseases and improving quality of
life while also warning that the same technology could introduce risks we 
dont yet fully understand. 

That raises a bigger question: if even the companies building AI are saying
they can't control what's coming next, who can?

Link to news story:
https://www.techradar.com/ai-platforms-assistants/openai/ai-will-also-present-new-threats-to-society-sam-altman-issues-stark-warning-as-usd1-billion-plan-is-revealed

$$
--- SBBSecho 3.28-Linux
 * Origin: Capitol City Online (1:2320/107)