BBS:      TELESC.NET.BR
Assunto:  Who wants to live forever?
De:       Mike Powell
Data:     Fri, 27 Feb 2026 15:18:34 -0500
-----------------------------------------------------------
'Our mind isn't a dataset' - as if Black Mirror taught us nothing at all,
33% of millennials and 25% of Gen Zers want to preserve themselves using AI

By Graham Barlow, published yesterday

Who wants to live forever?

You may be familiar with the classic Black Mirror episode Be Right Back, in
which a young woman whose boyfriend is killed in a car accident decides to
communicate with an artificial intelligence imitating him, eventually uploading
his AI 'brain' into an android body that looks just like him.

As you'd expect from Black Mirror, things do not go quite to plan and end up
somewhat horrifying, but that doesn't seem to have dampened public interest in
AI immortality. According to Google Trends, searches for "AI immortality" have
surged 2,426% over the past year and jumped 91% in the past month alone.

Now, a new EduBrain survey of 3,000 people across three generations reveals
that 1 in 3 millennials and 1 in 4 Gen Zers want to preserve themselves using
AI after they're gone, while older generations remain skeptical - just 10%
of Gen Xers are interested.

Gen Xers may be less keen on a digital afterlife, but they might not have a
choice if they make their kids their next of kin. 27% of Gen Zers and 23% of
millennials say they would want to immortalize their relatives. The idea that
we might not have a choice about whether an AI version of ourselves lives on is
particularly galling, especially to me, a Gen Xer.

AI 'deadbots'

"We're already seeing early forms of digital personas - AI
'deadbots', grief simulators, digital twins and future-self models," says
Harry Southworth, Head of AI Development at EduBrain. "They can mimic some
aspects of human behavior based on extensive data, but they are far from fully
encoding a human mind, as it's shaped by lived empirical experience."

While the idea of AI "immortality" is gaining traction, the reality of how
close we are to achieving it is far less certain.

According to Nicky Zhu, an AI Interaction Product Manager at Dymesty, even the
most advanced approaches, such as neural interfaces capable of recording
patterns of brain activity, remain a long way off. Current research suggests
the technology is still at least 15 to 25 years away, and even that only
addresses part of the challenge. The deeper issues, Zhu argues, are far more
fundamental.

The nature of the human mind

Zhu outlines four major obstacles standing in the way of true digital
resurrection - and they go well beyond simply building better AI models.

The first is the nature of the human mind itself.

"Our mind isn't a dataset. It is an enormous challenge to develop
technology that can replicate memory and decision-making. I have developed
conversational AI systems that have analyzed tens of thousands of user
interaction data points. Yet, in more than 67% of cases, they still cannot
predict rational human behavior. 94% of human memory and decision-making is
based on unconscious, implicit, and automatic processes, while modern AI
systems need to be provided with explicit, conscious data. As a result,
'digital immortality' services that build chatbots from text messages and
social media posts don't recreate a person; they generate simplified
caricatures."

Even if that hurdle could be overcome, Zhu points to a second concern: the
growing commercialization of grief.

"AI representations of deceased loved ones create emotional dependence that
companies can profit from; the average family pays $1,840 per year for chatbot
subscriptions, according to recent market research. These predictive algorithms
that provide statistically probable replies can prolong mourning, potentially
harming families rather than providing comfort."

There are also serious security implications.

"When sensitive data about people is stored in files, the risk of data
breaches increases exponentially. Exposing the personality patterns, personal
memories, and decision-making patterns of thousands of people will create
opportunities for identity theft, behavioral manipulation, and fraud on a
massive scale. Current tech and laws aren't equipped to handle such complex
issues."

And finally, there's the sheer scale of what would be required to accurately
model a human being.

"We found that even fully specifying the decision preferences of one
individual required at least 340 hours of interviews and behavioral tracking.
Capturing someone's broader cognitive architecture would likely require years
or even decades of data collection. And storing one person's full sensory
data for just one day would require around 2.3 petabytes - more than most
data centers can handle per individual, not to mention the massive energy
demands."
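Zhu's storage figure invites a quick back-of-the-envelope check. A minimal
sketch in Python, taking the 2.3 PB/day claim from the quote above at face
value (the decimal petabyte convention, 1 PB = 10^15 bytes, is an assumption):

```python
# Back-of-the-envelope check on the "2.3 petabytes per day" claim above.
# Assumption: decimal units (1 PB = 10**15 bytes), not binary pebibytes.
PB = 10**15
daily_bytes = 2.3 * PB          # claimed full sensory data, one person, one day

seconds_per_day = 24 * 60 * 60  # 86,400 seconds
rate_gb_per_s = daily_bytes / seconds_per_day / 10**9

yearly_pb = 2.3 * 365           # the same person, captured for a year

print(f"sustained write rate: {rate_gb_per_s:.1f} GB/s")  # ~26.6 GB/s
print(f"one year of capture:  {yearly_pb:.1f} PB")        # ~839.5 PB
```

At roughly 26.6 GB/s sustained, per person, around the clock, the claim would
indeed dwarf what ordinary storage infrastructure is provisioned for.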

A digital echo

Taken together, these challenges paint a stark picture: while AI can already
simulate fragments of a person, recreating a full human identity remains not
just technically difficult, but conceptually unresolved.

Which leaves us in a strange place. The appetite for AI immortality is clearly
growing, especially among younger generations, but the technology itself is
still a long way from delivering anything close to the real thing.

For now, at least, what we're being offered isn't life after death; it's
something closer to a digital echo. And if Black Mirror taught us anything,
it's that it might not be the comfort people think it is.


https://www.techradar.com/ai-platforms-assistants/our-mind-isnt-a-dataset-as-if-black-mirror-taught-us-nothing-at-all-33-percent-of-millennials-and-25-percent-of-gen-zers-want-to-preserve-themselves-using-ai

$$
--- SBBSecho 3.28-Linux
 * Origin: Capitol City Online (1:2320/105)
