Languages & Computers Tongue

Preamble

Speaking in tongues (aka glossolalia) is the fluid vocalizing of speech-like syllables without any recognizable association with a known language. Such an experience is perhaps best understood as the actual speaking of a gutted language, its grammatical ghosts inhabited by meaningless signals.

Usually set in a religious context, speaking in tongues looks like souls having their own private conversations. Yet, unlike extraterrestrial languages, the phenomenon is not fictional, and it could therefore point to offbeat clues for natural language technology.

Computers & Language Technology

From its inception, computer technology has been a matter of language, from machine code to domain-specific languages. As a corollary, the need to be on speaking terms with machines (dumb or smart) has shed new light on interpreters (parsers in computer parlance) and opened new perspectives for linguistic studies. In return, computers have greatly improved the means to experiment with and implement new approaches.

In recent years, advances in artificial intelligence (AI) have brought language technologies to a critical juncture between speech recognition and meaningful conversation: the former leaping ahead with deep learning and signal processing, the latter limping along with the semantics of domain-specific languages.

Interestingly, that juncture neatly coincides with the one between the two intrinsic functions of natural languages: communication and representation.

Rules Engines & Neural Networks

As exemplified by language technologies, one of the main developments of deep learning has been to bring rules engines and neural networks under a common functional roof, turning the former into smart conceptual tutors for the latter's otherwise unfathomable schemes.

In contrast to their long and successful track record with computer languages, rule-based approaches have fallen short in human conversations. And while these failings have hindered progress in the semantic dimension of natural language technologies, speech recognition has surged ahead on the back of neural networks fueled by increasing computing power. But the rift between processing and understanding natural languages is now being bridged by deep learning technologies. And with the leverage of rules engines harnessing neural networks, processing and understanding can be carried out within a single feedback loop.
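To make the argument more concrete, here is a minimal sketch of what such a feedback loop could look like; all the names (score_hypotheses, RULES, interpret) are made up for the example, and the "neural" scorer is a mere stand-in. The idea is that a learned model proposes ranked interpretations, a toy rules engine vets them against domain constraints, and rejected hypotheses are collected as feedback for later retraining.

```python
# Illustrative sketch only: a stand-in "neural" scorer and a toy rules engine
# wired into one feedback loop. Names and rules are hypothetical, not a real API.

from typing import List, Tuple

def score_hypotheses(utterance: str) -> List[Tuple[str, float]]:
    """Placeholder for a neural model: returns candidate intents with scores."""
    # A real system would run a trained network here.
    return [("book_flight", 0.62), ("cancel_flight", 0.35), ("weather", 0.03)]

# Rules engine: domain constraints acting as "conceptual tutors".
RULES = [
    ("cancellation requires a booking reference",
     lambda text, intent: intent != "cancel_flight" or "ref" in text.lower()),
]

def interpret(utterance: str, corrections: list) -> str:
    for intent, score in score_hypotheses(utterance):
        violated = [name for name, ok in RULES if not ok(utterance, intent)]
        if not violated:
            return intent
        # Feedback loop: rejected hypotheses become labelled counter-examples
        # that could later be used to retrain the network.
        corrections.append((utterance, intent, violated))
    return "unknown"

corrections: list = []
print(interpret("I want to fly to Oslo", corrections))  # -> book_flight
print(corrections)  # rejected hypotheses, if any, ready for retraining
```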

From Communication to Cognition

From a functional point of view, natural languages can be likened to money: first as a medium of exchange, then as a unit of account, finally as a store of value. On that understanding, natural languages would be used respectively for communication, information processing, and knowledge representation. And as with the economics of money, these capabilities are to be associated with phased cognitive developments (a minimal sketch follows the list below):

  • Communication: languages are used to trade transient signals; their processing depends on the temporal persistence of the perceived context and phenomena; associated behaviors are immediate (here-and-now).
  • Information: languages are also used to map context and phenomena to some mental representations; they can therefore be applied to scripted behaviors and even policies.
  • Knowledge: languages are used to map contexts, phenomena, and policies to categories and concepts stored as symbolic representations fully detached from the original circumstances; these surrogates can then be used, assessed, and improved on their own.
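As a rough illustration of the three layers, the sketch below (all names and structures are invented for the example) moves from a transient signal, to an information record mapping it onto a representation of context, to a knowledge-level rule that can be stored and assessed independently of the original circumstances.

```python
# A minimal, purely illustrative sketch of the three layers described above;
# the names and structures are assumptions made up for this example.

from dataclasses import dataclass

# 1. Communication: transient signals, meaningful only here-and-now.
signal = {"t": 1718000000, "utterance": "temperature 31, window open"}

# 2. Information: signals mapped to a representation of context and phenomena.
@dataclass
class Observation:
    room: str
    temperature_c: float
    window_open: bool

obs = Observation(room="lab", temperature_c=31.0, window_open=True)

# 3. Knowledge: categories and rules detached from the original circumstances,
#    which can be stored, assessed, and improved on their own.
def too_hot(o: Observation, threshold_c: float = 28.0) -> bool:
    return o.temperature_c > threshold_c

print(too_hot(obs))  # True: the rule applies regardless of where the signal came from
```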

As it happens, advances in technology seem to follow these cognitive distinctions, with the internet of things (IoT) for data communication, neural networks for data mining and information processing, and the addition of rules engines for knowledge representation. Yet the paces differ significantly: with regard to language processing (communication and information), deep learning is pushing the achievements of natural language technologies beyond 90% accuracy; but when language understanding has to take knowledge into account, performance still lags about a third below: for computers' knowledge to scale properly, it has to be confined within the semantics of specific domains.

Sound vs Speech

Humans listening to the Universe are confronted with a question that can be unfolded in two ways:

  • Is there someone speaking, and if so, what is the language?
  • Is that speech, and if so, who is speaking?

In both cases intentionality is at the nexus, but whereas the first approach has to tackle some existential questioning upfront, the second can put philosophy on the back burner and focus on technological issues (the two framings are sketched below). Nonetheless, even the language-first approach has proved challenging, as illustrated by the difference in achievements between language processing and language understanding technologies.
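The two orderings can be pictured as toy pipelines; the detectors below are crude placeholders (no real signal processing), meant only to show where the existential question and the technological one sit in each framing.

```python
# Illustrative only: two toy pipelines mirroring the two ways of unfolding
# the question. The detectors are crude placeholders, not real signal processing.

def looks_like_speech(samples: list) -> bool:
    """Placeholder 'is that speech?' check: non-trivial energy variation."""
    return len(samples) > 1 and max(samples) - min(samples) > 0.1

def guess_language(text: str) -> str:
    """Placeholder 'what's the language?' check based on a giveaway word."""
    return "french" if " le " in f" {text.lower()} " else "unknown"

# Language-first: assume intentionality, go straight for the language.
def language_first(transcript: str) -> str:
    return guess_language(transcript)

# Speech-first: settle the technological question before the existential one.
def speech_first(samples: list, transcript: str) -> str:
    if not looks_like_speech(samples):
        return "noise"
    return guess_language(transcript)

print(language_first("ou est le chat"))                  # french
print(speech_first([0.0, 0.4, 0.1], "ou est le chat"))   # french
print(speech_first([0.0, 0.0, 0.0], ""))                 # noise
```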

Recognizing a language has long been the job of parsers looking for the corresponding syntactic structures, the hitch being that a parser has to know beforehand what it is looking for. Parsers of parsers using meta-languages have been effective with programming languages but are quite useless with natural ones, short of some universal grammar rules to sort out Babel's conversations. But the "burden of proof" can now be reversed: compared to rules engines, neural networks with deep learning capabilities don't have to start with any knowledge. As illustrated by Google's Multilingual Neural Machine Translation System, such systems can now build multilingual proficiency from sufficiently large samples of conversations without prior grammatical knowledge.
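A minimal sketch can make the contrast tangible: the character-trigram identifier below builds language profiles from raw samples alone, with no grammar supplied up front. It is only an illustration of the principle, not a description of Google's system, and the sample texts are tiny and made up.

```python
# A minimal sketch: character-trigram profiles learned from raw samples,
# showing recognition with no grammar given beforehand. Illustrative only.

from collections import Counter

def trigrams(text: str) -> Counter:
    text = f"  {text.lower()}  "
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

SAMPLES = {
    "english": "the cat sat on the mat and the dog barked at the cat",
    "french":  "le chat dort sur le tapis et le chien aboie dans la rue",
}

# "Training": profiles are built from the samples, not from grammar rules.
PROFILES = {lang: trigrams(text) for lang, text in SAMPLES.items()}

def identify(text: str) -> str:
    grams = trigrams(text)
    # Score by trigram overlap with each language profile.
    scores = {lang: sum(min(grams[g], prof[g]) for g in grams)
              for lang, prof in PROFILES.items()}
    return max(scores, key=scores.get)

print(identify("the dog sat on the mat"))      # english
print(identify("le chien dort sur le tapis"))  # french
```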

To conclude, the term "Translation System" may even be selling the technology short, as it implies language-to-language mappings when in principle such systems could be fed with raw sounds and still sort the wheat of meaning from the chaff of noise. And, who knows, they may eventually be able to decrypt the language of tongues.

