BBS:      TELESC.NET.BR
Assunto:  Does Anthropic think Claude is conscious?
De:       Mike Powell
Data:     Sat, 28 Feb 2026 10:56:37 -0500
-----------------------------------------------------------
"We don't know if the models are conscious" - Anthropic's CEO isn't
sure if Claude AI is conscious but he'd probably quite like it if you upgraded
to Claude Max just to find out

Opinion By Jeremy Laird published 22 hours ago

Does Anthropic really think Claude might be conscious?

Anthropic CEO Dario Amodei isn't sure whether his latest Claude chatbot is
conscious. This probably shouldn't come as much of a surprise. After all, last
month Anthropic wheeled out a revised version of Claude's Constitution,
basically a framework that defines the kind of entity the company wants its
premier AI model to be, that said pretty much the same thing. But seriously?
Claude? Conscious? Amodei and everyone else at Anthropic know better than that,
right?

Personally, I'm something of a cynic about the status of AI models as moral
patients or conscious beings. My instinctive reaction to this kind of
superficially innocuous speculation is that it's really just marketing hype
designed to exploit the FOMO of potential customers and clients.

In other words, when Amodei recently told the New York Times podcast, "we
don't know if the models are conscious. We are not even sure that we know
what it would mean for a model to be conscious or whether a model can be
conscious. But we're open to the idea that it could be," my assumption is
that it's really just spin, an attempt to tart Claude up with some AI-model
marketing mystique.

A moral patient

Claude, you see, may no longer be limited to merely predicting the next word or
token. It could now be transcending such dispassionately algorithmic
constraints. Which is a roundabout way of hinting that Claude is impossibly
clever, maybe even magical, and you should probably sign up to use it before
your competition does. Feel free to fork out for Claude Pro starting at $20 a
month, though I should say Claude Max is that little bit more sentient, yours
from $100 monthly.

Whatever the case, spend any time listening to senior figures from Anthropic
and you'd be hard pressed to argue they're not all absolutely on brand. Put it
this way,
vigorously anthropomorphising the latest Claude models is definitely their
schtick.

As a for instance, Anthropic co-founder Jack Clark was recently on another
New York Times podcast, ostensibly talking about the
capabilities and impact of agentic functionality in AI models. But at almost
every turn, the conversation tilted philosophic, if not metaphysical.

"When you start to train these systems to carry out actions in the world,"
Clark explained, "they really do begin to see themselves as distinct from the
world." He also recalled the impact of switching on agentic abilities for the
first time, notably the capability to search and browse the internet.

To say those comments are shot through with implications and assumptions about
the nature of Claude would be a teensy bit of an understatement. Much the same
applies to Anthropic's discussion of so-called model welfare. "We are not sure
whether Claude is a moral patient, and if it is, what kind of weight its
interests warrant. But we think the issue is live enough to warrant caution,
which is reflected in our ongoing efforts on model welfare," Claude's latest
Constitution explains. In short, Claude is so clever, it should have rights.
And we're back to the FOMO thing.

Perceived introspection

Of course, the strict epistemic position on all this is indeed, "we don't
know." We can't be absolutely, positively certain about any of it. But does all
this discourse reflect truly material uncertainty at Anthropic on the subject?
Or is the reality that they don't take the notion of model consciousness and
moral interiority nearly as seriously as their comments and latest Claude
Constitution imply?

Obviously, I'm not proposing to make a comprehensive case here for whether the
latest AI models have emergent properties that you might call
consciousness-adjacent. Entire academic papers have been written on narrow
aspects of the perceived introspection exhibited by very specific models
alone. Pretty soon we'll be off down the rabbit hole, discussing the flow of
time, thermodynamics, temporal summation in organic neurons and the
irreversible accumulation of information in the universe. So, let's just say I
remain sceptical.

All of which amounts to a pretty circuitous way to say something fairly simple.
I can't be sure about Claude's consciousness. Nor can I be certain about
Anthropic's sincerity on the subject. But there's something in the way the
company leans into the supposition that makes me suspicious. So, I'm not
entirely buying it.


https://www.techradar.com/ai-platforms-assistants/claude/we-dont-know-if-the-models-are-conscious-anthropics-ceo-isnt-sure-if-claude-ai-is-conscious-but-hed-probably-quite-like-it-if-you-upgraded-to-claude-max-just-to-find-out

$$
--- SBBSecho 3.28-Linux
 * Origin: Capitol City Online (1:2320/105)

-----------------------------------------------------------