AI and Natural Languages - 20th Century to Today

“The limits of my language mean the limits of my world.” — Ludwig Wittgenstein, Tractatus Logico-Philosophicus

1. The Straw


The year is 1950 — humanity is recovering from one of the most harrowing conflicts it has ever seen. Major world powers have witnessed the capabilities of computers in war and are in a race for technological hegemony.

At this point, all eyes are on Alan Turing — the father of computer science and a leading codebreaker at Bletchley Park — who has just published an article in Mind titled ‘Computing Machinery and Intelligence’.

Little does anyone realize that this article will set the stage for today by asking one simple question: can we tell whether a machine thinks?

Turing’s answer is simple — rather than trying to determine whether a machine can think, one should see if it can mimic someone who does.

Turing proposes what he calls the imitation game: have a human evaluator converse with a person and a machine through a computer terminal and see whether the evaluator can tell the two apart.

This, of course, would be the first major publication in the nascent fields of artificial intelligence and natural language processing.


René Descartes, French philosopher and mathematician of Cartesian coordinate fame.

The earliest researchers in natural language processing — interested in mechanizing language translation — employed logical models in the study of language.

The idea of using a set of rules to approach natural language wasn’t new — in the early 17th century, the philosopher and mathematician René Descartes, in a letter to Marin Mersenne of Mersenne prime fame, proposed a universal language that assigned common codes to equivalent ideas across languages.

Later, Gottfried Leibniz would publish a similar idea in ‘De Arte Combinatoria’, positing the existence of a universal ‘alphabet of human thought’ on the basis of the commonality of human intellect.

In the 1930s, Petr Troyanskii and Georges Artsrouni would independently file patents for mechanical tape machines for multilingual translation.

Noam Chomsky’s 1957 book, Syntactic Structures, laid the foundations for studying language using generative grammars. The advent of digital computers, however, would rapidly accelerate developments in the area.

“There is no use disputing with the translating machine. It will prevail.” P. P. Troyanskii


Chomsky, author of Syntactic Structures, which approached linguistics via generative grammar.

2. Call To Action


In 1947, Warren Weaver first suggested using computers for machine translation in a conversation with Andrew Booth, who would go on to conduct the first set of experiments with punched cards the following year.

At the urging of colleagues, Weaver would publish his now celebrated 1949 memorandum, simply titled ‘Translation’, which outlined four major proposals for machine translation:

  • The meaning of an ambiguous word can be resolved by examining a window of the words surrounding it.
  • Language has an underlying logical structure that can be modeled formally.
  • Languages may lend themselves to statistical and cryptographic analysis.
  • All translation may pass through a hidden ‘universal human language’ common to all peoples.

Responses to Weaver’s memorandum were mixed — many believed mechanizing translation was a pipe dream. Some, however, gave Weaver’s ideas serious consideration, such as Erwin Reifler and Abraham Kaplan.

That said, perhaps the most important consequence of Weaver’s memorandum was its role in the 1951 appointment of logician Yehoshua Bar-Hillel to a research position in machine translation at MIT; Bar-Hillel would go on to organize the first conference on machine translation the following year.


Warren Weaver, mathematician and science administrator, and a central figure in machine translation.

3. Early Contenders


Natural language processing first earned the general public’s attention with the Georgetown-IBM demonstration in 1954.

Using a vocabulary of just 250 words and six syntax rules, the accompanying IBM 701 mainframe translated Russian sentences from fields such as mathematics, organic chemistry, and politics into English.
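
To give a feel for the scale involved, a rule-based translator of this era boiled down to a bilingual dictionary plus a handful of rewrite rules. Below is a minimal Python sketch of that dictionary-plus-rules idea; the lexicon and rules are invented stand-ins rather than the actual Georgetown system, though the sample sentence is one reportedly shown at the demo.

    # Toy dictionary-plus-rules translator in the spirit of the
    # Georgetown-IBM demonstration. The lexicon and rules below are
    # illustrative inventions, not the actual 250-word system.
    LEXICON = {
        "mi": "we",
        "pyeryedayem": "transmit",
        "mislyi": "thoughts",
        "posryedstvom": "by means of",
        "ryechyi": "speech",
    }

    def translate(sentence: str) -> str:
        # Rule 1: word-for-word dictionary lookup; unknown words pass through.
        words = [LEXICON.get(w, w) for w in sentence.lower().split()]
        # Rule 2: capitalize the first letter of the output sentence.
        out = " ".join(words)
        return out[0].upper() + out[1:]

    print(translate("Mi pyeryedayem mislyi posryedstvom ryechyi"))
    # -> We transmit thoughts by means of speech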

The project was headed by Cuthbert Hurd, head of Applied Science at IBM, and Leon Dostert, who had organized the simultaneous interpretation system used at the Nuremberg trials, with IBM’s Thomas Watson as the key facilitator.

How’d this happen? Bar-Hillel’s 1952 conference had persuaded an initially skeptical Dostert that mechanized translation was feasible, and it was Dostert’s idea that a practical, small-scale experiment should be the next step.

The demonstration made it to the front pages of the New York Times and several other newspapers across America and Europe, and it was the first time the average person was exposed to the idea of computers translating languages.

And, up until the 1980s, rule-based systems would dominate natural language processing as a whole.


Weizenbaum interacting with ELIZA on a computer terminal with printable output, circa 1966. Image taken from ‘Desiring Fakes’ by Daniel Becker.

Some major examples of rule-based developments in the ’60s include:

  • Daniel Bobrow’s 1964 PhD thesis presented an AI system called STUDENT. Written in Lisp, STUDENT provided numerical answers to elementary algebra word problems, making it one of the earliest examples of a question answering system.
  • 1966 saw Joseph Weizenbaum’s ELIZA — an early chatbot that simulated a Rogerian psychotherapist by reflecting the user’s input back in a different form (a minimal sketch of the idea follows this list).
  • Beginning in 1968, Terry Winograd developed SHRDLU, a rudimentary conversational AI operating in an internalized ‘block world’ that users could instruct.
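
For a sense of how ELIZA worked under the hood, here is a minimal Python sketch of its keyword-and-reflection idea. The patterns and responses are invented stand-ins; Weizenbaum’s actual DOCTOR script was considerably richer.

    import re

    # Pronoun swaps applied when echoing the user's words back at them.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    # (pattern, response template) pairs; the captured fragment is reflected.
    RULES = [
        (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment: str) -> str:
        # Swap first-person words for second-person ones, word by word.
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(user_input: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(reflect(match.group(1)))
        # Content-free fallback when no keyword matches, much as ELIZA used.
        return "Please go on."

    print(respond("I am worried about my exams"))
    # -> How long have you been worried about your exams?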

The 1970s and 1980s continued the general trend of rule-driven exploration in natural language processing: chatbots such as Jabberwacky tried to tackle humor, and methods such as the Lesk algorithm attempted to address word sense disambiguation (a sketch of the idea follows).
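
For reference, the Lesk method disambiguates a word by picking the dictionary sense whose gloss shares the most words with the surrounding context. Here is a minimal Python sketch using a toy two-sense inventory; a real system would draw glosses from an actual dictionary and filter out stopwords.

    # Simplified Lesk: choose the sense whose dictionary gloss shares
    # the most words with the ambiguous word's surrounding context.
    # Toy glosses for illustration only.
    GLOSSES = {
        "bank/finance": "an institution that accepts deposits and lends money",
        "bank/river": "sloping land alongside a body of water",
    }

    def lesk(context: str) -> str:
        context_words = set(context.lower().split())
        def overlap(sense: str) -> int:
            return len(context_words & set(GLOSSES[sense].split()))
        return max(GLOSSES, key=overlap)

    print(lesk("the bank accepts deposits and pays interest"))  # -> bank/finance
    print(lesk("we sat on the grassy bank of the river"))       # -> bank/river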


Interested in reading the next 4 sections? Please visit the beehiiv version.

If you are building AI agents, we can possibly partner up. Visit our website or schedule a call.
