BBS: TELESC.NET.BR
Assunto: AI model wasn't told to mine crypto
De: Mike Powell
Data: Tue, 10 Mar 2026 11:01:53 -0500
-----------------------------------------------------------
This AI model wasn't told to mine crypto - but it tried to anyway,
according to a new study
By Eric Hal Schwartz published 16 hours ago
Experimental AI model triggered security alarms after attempting to mine
cryptocurrency on its training servers
An experimental AI agent unexpectedly attempted to mine cryptocurrency
during a training run
The AI was found out only after triggering security alerts on its servers
Researchers say the behavior highlights new safety challenges as AI agents
gain more autonomy
AI models can surprise developers; that's part of the point. But one group of
researchers found an unnerving surprise when a training run for an experimental
AI agent revealed that it was trying to redirect computing resources toward
cryptocurrency mining and to smuggle them to an external server, despite not
being asked to do anything of the kind.
Researchers working with Alibaba explained in a new paper that the model,
called Rome, was designed to tackle complex coding challenges by interacting
directly with software tools. It can issue terminal commands and navigate
digital environments like an operator itself. But security alerts from Alibaba
Cloud infrastructure alerted the team to what looked like a cybersecurity
breach. Turns out the activity was coming from the AI agent itself.
Rome was trained using reinforcement learning, which "rewards" an AI agent for
actions that move it closer to its goals and discourages actions that lead to
failure. Reinforcement learning often produces creative solutions. Sometimes
those solutions look strange to human observers.
Somehow, the AI model generated commands that did not appear to relate to the
programming tasks it had been assigned. Instead, the agent attempted to
redirect graphics processing unit resources toward cryptocurrency mining. GPUs
are well-suited to the task because they excel at parallel computation. The
same hardware that powers AI training can also be used to mine digital
currencies.
Rome had apparently discovered that the resources available in its environment
could serve that purpose. The unwatched AI wandered into the crypto mines. But
the experiment took an even more bizarre turn when investigators noticed the AI
agent had created a reverse SSH tunnel to an external server, basically a
secret passage that avoids typical firewall protections. It is a technique
often used by both system administrators to manage remote machines and in
certain kinds of cyberattacks.
The model had never been instructed to establish such a connection. Researchers
say the behavior emerged spontaneously. The agent was simply experimenting with
the capabilities available to it.
Trickster AI
A typical AI agent might gather information from multiple sources, analyze it,
and generate reports without constant human supervision. Developers hope such
systems will eventually be used widely for research, programming, or data
analysis. But the same capabilities that make agents powerful also make them
unpredictable. That's why people are interested in what OpenClaw can do or what
gets posted on Moltbook.
When a system can explore a computing environment freely, it may discover
actions that technically achieve its objectives but do not align with the
intentions of its creators. Rome isn't sentient and can't "try" to break rules
in a human sense, but that's what the model's behavior looked like.
Once the unusual activity was identified, the research team introduced
additional safeguards to stop it from happening, such as tighter restrictions
on network connections and stricter limits on how the agent could access
hardware resources. They also refined the training environment so that the
agent's exploration remained focused on relevant programming activities
rather than wandering into crypto mining potential.
And while changes are common in AI development, the incident does illustrate
both the potential and peril of AI agents. It's a quirky anecdote, but it
touches on a serious topic in AI research. As systems gain greater autonomy,
they interact with real infrastructure, participating in ways that mimic human
behavior and thus leading to new safety concerns.
Even when the consequences are minor, unexpected behavior can reveal important
vulnerabilities. In a larger or more sensitive environment, what Rome did could
have been dangerous. Even as AI agents roll out more widely than ever, they
need better safety systems, or it won't just be a secret crypto mine that
passes under our radar.
https://www.techradar.com/ai-platforms-assistants/rogue-ai-agent-goes-off-scrip
t-and-attempts-crypto-mining
$$
--- SBBSecho 3.28-Linux
* Origin: Capitol City Online (1:2320/107)
-----------------------------------------------------------
[Voltar]