BBS:      TELESC.NET.BR
Subject:  From Turing's ideas to Dartmouth research project
From:     Mike Powell
Date:     Wed, 18 Feb 2026 09:52:10 -0500
-----------------------------------------------------------
The $13,500 that changed the fate of humanity: how the term Artificial
Intelligence was first coined 71 years ago - but sadly without the legendary
visionary soul who imagined it

By Wayne Williams published 21 hours ago

From Turing's ideas to a Dartmouth research project, the origins of AI are
fascinating

Although AI may still feel like something new, the term itself was coined
more than seven decades ago, in a modest proposal for a summer research
project at Dartmouth that carried a budget request of $13,500.

That proposal, submitted to the Rockefeller Foundation in 1955, marked the
first known appearance of the phrase "artificial intelligence."

It was an academic document, not a manifesto, but it quietly laid the
foundation for one of the most consequential technological movements in human
history.

The sad irony is that the field's most famous philosophical ancestor, Alan
Turing, was already gone by this point.

Turing had asked the defining question years earlier - "can machines
think?" - and designed what became known as the Turing Test, a method to
judge whether a machine could convincingly imitate human thought.

His work framed the entire discussion, yet he died in 1954, two years before
the Dartmouth meeting that officially named the field he had helped imagine.

Turing's death followed his prosecution in the UK for homosexuality, then
criminalized, and he died from cyanide poisoning in what was officially
ruled a suicide - a loss that removed one of computing's most original
thinkers just before his ideas began reshaping science.

Long before artificial intelligence had a name, Turing had already come up with
the question that would define it. In his 1950 paper Computing Machinery and
Intelligence, he proposed what became known as the Turing Test, or "imitation
game," replacing abstract debates about whether machines could truly think
with a simpler challenge: could a machine hold a written conversation well
enough that a human judge would be unable to reliably tell it apart from
another human?

By focusing on observable behavior instead of philosophy, Turing turned
intelligence into something researchers could actually test.
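
To make the protocol concrete, here is a minimal sketch in Python of the
setup Turing described: a judge exchanges written questions with two unseen
respondents, one human and one machine, and must guess which is which. The
respondent functions, the judge, and the sample question are invented for
illustration; Turing's paper specifies no implementation.

import random

# Stand-in respondents; in a real test these would be a person and a
# candidate program, both hidden behind the same text-only channel.
def human_respondent(question):
    return "I'd have to think about that for a moment."

def machine_respondent(question):
    return "I'd have to think about that for a moment."

def imitation_game(judge, questions):
    """One round of the imitation game: the judge sees only the labels
    A and B and the written replies, never who is behind each label."""
    respondents = [("human", human_respondent),
                   ("machine", machine_respondent)]
    random.shuffle(respondents)
    labels = dict(zip("AB", respondents))

    transcript = [(label, q, respond(q))
                  for q in questions
                  for label, (_, respond) in labels.items()]

    guess = judge(transcript)  # the label the judge thinks is the machine
    actual = next(l for l, (kind, _) in labels.items() if kind == "machine")
    return guess == actual

def naive_judge(transcript):
    # Cannot tell the replies apart, so it can only guess.
    return random.choice("AB")

# A judge reduced to guessing is right only about half the time over
# many rounds - the failure condition Turing proposed as "passing".
wins = sum(imitation_game(naive_judge, ["Can machines think?"])
           for _ in range(1000))
print(f"judge caught the machine in {wins} of 1000 rounds")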

The idea was strikingly forward-looking given the reality of computers at the
time. Early machines were slow, expensive and limited to mathematical
calculation, yet Turing suspected that intelligence might emerge from
sufficiently complex symbol processing.

Rather than asking whether machines possessed a mind or consciousness, he asked
whether they could convincingly imitate intelligent behavior - something that
inspired later researchers to treat thinking as an engineering problem.

That conceptual leap directly influenced the group that gathered at Dartmouth
just a few years later, even though the man who posed the question would never
see the field formally named.

The Dartmouth Summer Research Project on Artificial Intelligence, organized by
John McCarthy with Marvin Minsky, Claude Shannon, and Nathaniel Rochester, was
small and ambitious.

According to the proposal, researchers hoped to prove that "every aspect of
learning or any other feature of intelligence can in principle be so precisely
described that a machine can be made to simulate it." The goal sounded
ambitious then and still does now: language, abstraction, reasoning, and
self-improvement, all encoded into machines.

McCarthy would later become one of AI's most influential voices. In a 1979
issue of Computerworld, he said bluntly that the computer revolution
"hasn't happened yet," even while predicting that it eventually would.

He argued that computers had not yet impacted life in the way electricity or
automobiles had, but he believed that applications in the coming decade would
initiate a genuine revolution.

McCarthy's realism often contrasted with the hype that surrounded the field,
a tension that has followed AI ever since.

Alan Turing: The Scientist Who Saved The Allies - https://youtu.be/XGqbieVcjPU

AI as a hot topic

By the early 1980s, interest in AI had surged again, but confusion about what
it really meant was widespread.

Writing in a 1984 issue of InfoWorld, reporter Peggy Watt noted that artificial
intelligence had become a "hot topic," with shelves filled with books and
software companies racing to label products as intelligent. Yet she warned that
"the term is being used and abused widely, almost to the point of losing its
usefulness as a description."

The frustration among researchers was obvious. In that same InfoWorld report,
Dr. S. Jerrold Kaplan of Teknowledge said, "Whenever anybody says, 'I'm
selling AI,' I'm suspicious."

Kaplan argued that AI was not a single program. "The science of AI is a set
of techniques for programming," he said, describing systems that represented
"concepts and ideas, explanations and relationships," rather than just
numbers or words.

This tension between promise and reality also defined the work of Marvin
Minsky, one of Dartmouth's original architects. In a 1981 issue of
Computerworld, covering the Data Training '81 conference, Minsky described AI
as fundamentally paradoxical: "Hard things are easy to do and easy things are
hard to do."

Computers excelled at calculations that challenged humans, but struggled with
common sense, language ambiguity, and contextual understanding.

Minsky explained that "common sense is the most difficult thing to inculcate
into a computer."

Humans absorb countless exceptions and nuances over years of living, but
machines require explicit instruction. A logical rule like "birds can fly"
breaks down immediately when confronted with dead birds or flightless species
- a simple example revealing why intelligence is more than pure logic.
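
Minsky's point is easy to reproduce. The sketch below, a made-up example in
Python rather than anything from the original coverage, encodes "birds can
fly" as a default rule and shows how every exception has to be spelled out
by hand.

# A naive logical rule: all birds fly.
def can_fly_naive(bird):
    return bird["is_bird"]

# The common-sense version: every exception must be written in
# explicitly, and the list is never finished - Minsky's point.
FLIGHTLESS_SPECIES = {"penguin", "ostrich", "kiwi", "emu"}

def can_fly(bird):
    if not bird["is_bird"]:
        return False
    if not bird["alive"]:                      # dead birds don't fly
        return False
    if bird["species"] in FLIGHTLESS_SPECIES:  # flightless species
        return False
    if bird.get("injured_wing"):               # ...and so on, forever
        return False
    return True

for case in [
    {"species": "sparrow", "is_bird": True, "alive": True},
    {"species": "sparrow", "is_bird": True, "alive": False},
    {"species": "penguin", "is_bird": True, "alive": True},
]:
    state = "alive" if case["alive"] else "dead"
    verdict = "flies" if can_fly(case) else "does not fly"
    print(f'{state} {case["species"]}: {verdict}')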

Expert systems

The optimistic early years of AI had already produced striking milestones. The
Lawrence Livermore National Laboratory later described how researchers in the
1960s developed programs such as SAINT, an early "expert system" capable of
solving symbolic integration problems at the level of a college freshman.

The program solved nearly all the test problems it faced, hinting that machines
could emulate specialist reasoning long before modern machine learning.
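
What "symbolic integration" means is easy to show with a modern computer
algebra system. The snippet below is not SAINT itself but a rough modern
analogue using the open-source sympy library, solving freshman-calculus
problems of the kind SAINT was tested on.

import sympy

x = sympy.symbols("x")

# Freshman-calculus integrands.
problems = [sympy.sin(x) * sympy.cos(x), x * sympy.exp(x), 1 / (1 + x**2)]

for f in problems:
    F = sympy.integrate(f, x)  # a symbolic antiderivative, not a number
    # Sanity check: differentiating the answer recovers the integrand.
    assert sympy.simplify(sympy.diff(F, x) - f) == 0
    print(f"integral of {f} dx = {F}")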

Yet progress came in waves. Funding boomed in the 1960s as government
agencies backed ambitious research, then cooled sharply in the 1970s.

The dream of building human-like intelligence proved far harder than expected.
Even McCarthy admitted that "human-level" AI was still "several
conceptual revolutions away."

By the time AI returned to the spotlight in the 1980s, companies were marketing
expert systems and natural-language tools as breakthroughs.

Some systems impressed users by tolerating spelling mistakes or translating
plain English commands into database queries.

Others, however, leaned more on clever engineering than genuine reasoning. As
one unnamed researcher quoted in InfoWorld warned, the real test of an expert
system was whether it could explain its conclusions.
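
That "real test" is straightforward to sketch. Below is a toy
forward-chaining rule engine in Python that records which rules fired so it
can justify its answers; the rules and facts are invented for illustration
and stand in for no particular 1980s product.

# A toy forward-chaining expert system that keeps an explanation trace.
# Each rule is (name, premises, conclusion).
RULES = [
    ("R1", {"has_feathers"}, "is_bird"),
    ("R2", {"is_bird", "is_healthy"}, "can_fly"),
]

def infer(facts):
    facts = set(facts)
    trace = []  # (rule, premises, conclusion) for every rule that fired
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((name, premises, conclusion))
                changed = True
    return facts, trace

def explain(conclusion, trace):
    """Justify a conclusion by citing the rule that produced it."""
    for name, premises, concl in reversed(trace):
        if concl == conclusion:
            because = ", ".join(sorted(premises))
            return f"{conclusion}: rule {name}, because {because}"
    return f"{conclusion}: given as an initial fact"

facts, trace = infer({"has_feathers", "is_healthy"})
print(explain("can_fly", trace))  # rule R2, because is_bird, is_healthy
print(explain("is_bird", trace))  # rule R1, because has_feathers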

Still, the vision persisted. Industry observers imagined computers capable of
understanding natural language, translating documents, and even correcting
grammar automatically.

Kaplan predicted AI would change how people programmed because it was "much
more natural to work with symbolic terms than math algorithms." The idea that
software could assist, advise, and collaborate with humans was already taking
shape.

Looking back, what stands out is how many early predictions were both wrong and
right. McCarthy thought the revolution had not yet arrived, but he believed it
would come through practical applications. Minsky warned that common sense
would remain stubbornly difficult.

Today, as AI systems write text, generate images, and assist scientific
discovery, the echoes of those early conversations remain.

The Dartmouth organizers imagined machines that could "use language, form
abstractions and concepts, solve kinds of problems now reserved for humans,
and improve themselves" - all of which machines can now, to a large extent,
do.

The $13,500 proposal did not seem remarkable at the time. It was just one
funding request among many. Yet it gave a name to an idea that continues to
change society, shaped by optimism, frustration, paradox, and unresolved
questions.

And perhaps that is the real legacy of artificial intelligence. It began not as
a single invention, like the transistor or the microprocessor, but as a wager
that intelligence itself could be understood, described, and eventually
reproduced.

Seventy-one years later, humanity is still testing that idea, still arguing
about definitions, and still pursuing the vision imagined by twentieth-century
minds who believed thinking machines might one day become real.


https://www.techradar.com/pro/the-usd13-500-that-changed-the-fate-of-humanity-how-the-term-artificial-intelligence-was-first-coined-71-years-ago-but-sadly-without-the-legendary-visionary-soul-who-imagined-it

--- SBBSecho 3.28-Linux
 * Origin: Capitol City Online (1:2320/105)
