BBS:      TELESC.NET.BR
Subject:  AI ambition outpacing reliability at Davos
From:     Mike Powell
Date:     Tue, 3 Mar 2026 10:38:38 -0500
-----------------------------------------------------------
AI conversations at Davos have sprinted ahead - we need to go back to basics

Opinion by Elliot Burke Perrin, VP of Engineering at UnlikelyAI

AI ambition is outpacing reliability at Davos

Once again, AI was one of the biggest items on the agenda at the 2026 World
Economic Forum.

Only this year, the tone was noticeably more tense - businesses and
journalists anxiously asked deeptech leaders about AI security, governance,
and infrastructure strain, whether the dreaded 'AI bubble' really is a
bubble, and when investments will start delivering economic returns. In
other words, the stakes have never been higher.

Of the many AI leaders who spoke at Davos, Microsoft CEO Satya Nadella came
closest to hitting the nail on the head. He warned that AI only avoids becoming
a bubble if it produces real, widely distributed outcomes, rather than
concentrating value among a handful of companies and economies.

Unreliable AI (particularly the issue of hallucinations) continues to deepen a
business trust deficit that obstructs positive economic impact.

Why today's AI debate starts in the wrong place

Much of the conversation at Davos reflects the reality that today's dominant
AI systems - large language models (LLMs) - are where capability,
attention, and investment are currently concentrated.

Regulation, infrastructure planning and economic modelling are all being built
around that reality. As a result, hallucinations are treated as an unfortunate
but unavoidable risk to be disclosed or mitigated.

LLMs are probabilistic systems, meaning they generate outputs by predicting
what comes next based on statistical patterns learned from vast datasets. This
is what makes them linguistically fluent and flexible, but it's also why they
hallucinate.
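To see why, here is a minimal sketch in Python of how a probabilistic
generator picks its next word. The probabilities are toy numbers invented for
illustration - no real model works from a lookup table like this - but the
sampling principle is the same. Note that nothing in it checks whether a
continuation is true, only whether it is statistically likely:

  import random

  # Toy next-token distribution. A real LLM computes these probabilities
  # with a neural network; the act of sampling is the same in spirit.
  NEXT_TOKEN_PROBS = {
      "The capital of Australia is": {
          "Canberra":  0.55,  # correct, and statistically likely
          "Sydney":    0.40,  # wrong, but also statistically plausible
          "Melbourne": 0.05,  # wrong, less plausible
      },
  }

  def sample_next_token(prompt: str) -> str:
      """Pick the next token by sampling the learned distribution."""
      dist = NEXT_TOKEN_PROBS[prompt]
      tokens, weights = zip(*dist.items())
      return random.choices(tokens, weights=weights, k=1)[0]

  # Run this a few times and "Sydney" will eventually come out. That
  # fluent-but-false answer is the sampling working exactly as designed.
  for _ in range(5):
      print(sample_next_token("The capital of Australia is"))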

When an LLM produces a convincing but false answer, that isn't a bug -
it's a consequence of how it's engineered.

At Davos, hallucinations were frequently discussed as a governance or safety
problem, but they are inherent to the probabilistic approach itself.

This distinction matters, because it determines whether hallucinations are
treated as something to work around, or as a signal that different system
designs may be needed for certain use cases.

If hallucinations are treated as inevitable, the only available responses are
warnings, disclaimers, human oversight, and increasingly complex guardrails.

That is why so much of the Davos conversation focused on disclosure, risk
transfer, and regulation - all necessary, but none of them capable of turning
unreliable systems into dependable infrastructure.

Combining flexibility with reliability

So what's the alternative? Recognizing that probabilistic models are not the
only way to build AI systems. Long before generative AI captured the public
imagination, symbolic reasoning systems were used to encode knowledge as
explicit rules, facts and constraints.

These systems don't guess. Given the same input, they always produce the same
output.

Most people interact with symbolic systems every day without even thinking
about it - spreadsheets are just one example. When a spreadsheet calculates a
result, users don't worry that it might hallucinate an alternative answer
that "sounds right". Businesses want and need this same determinism from
AI.
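As a concrete contrast with the sampling sketch above, here is the symbolic
approach in code - a hypothetical eligibility rule, invented purely for
illustration. Like the spreadsheet, it is deterministic:

  # A hypothetical business rule encoded as explicit facts and
  # constraints - no statistical guessing anywhere.
  MIN_AGE = 18
  MIN_INCOME = 30_000

  def loan_eligible(age: int, annual_income: int) -> bool:
      """Deterministic: identical inputs always return identical outputs."""
      return age >= MIN_AGE and annual_income >= MIN_INCOME

  assert loan_eligible(30, 45_000) is True
  assert loan_eligible(17, 45_000) is False
  # Run it a million times; like a spreadsheet formula, it never
  # "hallucinates" a different answer for the same inputs.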

The vast majority of software in use today is symbolic - it just can't
handle natural language well, which is where LLMs excel. But the choice
between neural and symbolic is not binary.

Today, a growing class of hybrid systems, known as neurosymbolic AI,
deliberately combines the strengths of both approaches.

Neural networks are used where flexibility is needed, such as when interpreting
language or extracting information from documents, while symbolic reasoning
layers apply explicit rules, constraints and logic to determine outcomes.

Crucially, this means outputs are not driven by statistical plausibility alone.
Neurosymbolic systems can trace how a conclusion was reached, produce the same
result for the same input, and clearly signal when a question cannot be
answered with confidence.

In environments where decisions must be explained, audited and defended, such
properties are essential.
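A rough sketch of that division of labour follows, with illustrative names
throughout: the extractor below is a trivial stub standing in for a neural
model, and the rules are the hypothetical ones from the earlier example.

  from typing import Optional

  def neural_extract(document: str) -> Optional[dict]:
      """Stand-in for the neural layer: interprets free-form text.
      A real system would call a language model here; this stub only
      recognizes one pattern and returns None rather than guess."""
      if "age:" in document and "income:" in document:
          fields = {}
          for part in document.split(","):
              key, value = part.split(":")
              fields[key.strip()] = int(value.strip())
          return fields
      return None  # signal uncertainty instead of inventing an answer

  def symbolic_decide(facts: dict) -> tuple[bool, list[str]]:
      """Symbolic layer: explicit rules, with a trace of every step."""
      trace = []
      ok_age = facts["age"] >= 18
      trace.append(f"age {facts['age']} >= 18 -> {ok_age}")
      ok_income = facts["income"] >= 30_000
      trace.append(f"income {facts['income']} >= 30000 -> {ok_income}")
      return ok_age and ok_income, trace

  facts = neural_extract("age: 42, income: 55000")
  if facts is None:
      print("Cannot answer with confidence.")  # explicit refusal
  else:
      decision, trace = symbolic_decide(facts)
      print(decision, trace)  # same input, same output, fully auditable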

The cost of missing the alternatives

This narrow focus on LLMs has real consequences. Much of the anxiety expressed
at Davos stems from grappling with the genuine limitations of LLMs - systems
that offer extraordinary capability but come with unavoidable reliability
challenges.

When these limitations become apparent, trust erodes, human oversight becomes
mandatory, and any productivity gains become harder to realize.

Many organizations find that pilot projects struggle to scale, particularly
when legal and compliance teams raise concerns about outputs that can't be
reliably defended or audited.

While ROI outcomes are mixed across the industry, a recurring challenge is that
the systems offering the most impressive capabilities were never designed to
justify their own decisions.

Businesses at Davos were right to ask how AI should be governed, regulated and
embedded into the global economy. Where LLM-only systems introduce new
reliability and explainability risks, neurosymbolic systems offer natural
answers to those adoption concerns.

But those questions can't be answered meaningfully without first broadening
the conversation about what AI actually is.

Most practitioners understand the limitations of LLMs and are already debating
mitigation strategies. But there's a difference between mitigating inherent
limitations and choosing architectures that avoid them for specific use cases.

The question isn't whether to abandon LLMs, but whether we're too readily
defaulting to them even when reliability requirements suggest a different
approach.

If AI is to underpin economies rather than simply impress in demos, reliability
can't be an afterthought. It has to be the standard: designs must be
auditable, compliant, and trustworthy from the get-go.

Davos 2026 raised some pressing questions, and the answers already exist in
approaches that combine LLM flexibility with deterministic reasoning.

Too much of the debate still treats hallucinations as unavoidable, rather than
recognizing that they're inherent to probabilistic systems and that
alternatives exist for use cases where reliability is most important.

Reliable AI isn't something we're waiting to invent. It already exists in
the form of neurosymbolic AI. Until that reality is reflected in mainstream
deployment, the gap between Davos ambition and what organizations can safely
rely on will continue to widen.

This article was produced as part of TechRadarPro's Expert Insights channel
where we feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those
of TechRadarPro or Future plc. If you are interested in contributing, find out
more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro


https://www.techradar.com/pro/ai-conversations-at-davos-have-sprinted-ahead-we-need-to-go-back-to-basics

$$
--- SBBSecho 3.28-Linux
 * Origin: Capitol City Online (1:2320/105)

-----------------------------------------------------------