My 1982 Word Encoding & Embedding patent led to an AI NLP Cognitive Chatbot and a Turing-compatible APS model used in major corporations to this day!
Embedding words in 1982!


NLP Cognitive Chatbots are not new innovations, and neither are AI algorithms. They are extensions of research and applications mostly dating back more than 30 years. I designed, developed, and implemented a chatbot that evolved into the multi-purpose optimizing bots used to this day in the corporate world. Others worked on these topics but somehow faded into the shadows during the AI winter. I was lucky to go through an AI spring and summer!

Chatbots are now disruptive innovations, meaning that they are widely distributed at the low end of the market. In the 1980s, however, chatbots were revolutionary innovations, built for only a segment of users at the high end of the market.

Let's be clear from the start. The main reason I succeeded in AI is that I designed and wrote the source code of the core AI algorithms (C++, Java, Python, PyTorch). I go from mind to keyboard without having to wait for anything or anyone. Only then do I ask teams to package the algorithms in nice user interfaces. Now let's get back to the origins.

If we exclude the military and a few corporations, the Internet did not exist in the 1980s. Emails did not exist. Secretaries were bustling and had a job. Hotlines were people, not bots. In that environment, building a cognitive chatbot did not interest many people. In those days, services were people. However, I invented an NLP Cognitive AI chatbot and sold it successfully to a few corporations, starting with Moët et Chandon (LVMH), as described below. That got my AI career rolling. The auto-learning capacity of the system got me wide media coverage and opened corporate doors. However, I had to move on to solutions for industry if I wanted to pay the 12% interest rate on my 1980s mortgage.


I was fortunate to obtain an Artificial Intelligence aerospace defense contract (Aérospatiale, Division des Engins Tactiques) for the corporation that has since become Airbus. It was a success, and my AI algorithm was deployed on 200 sites. The project earned me an official AI recommendation, which led over time to several aerospace projects.

Since then, my AI engine has been applied to numerous Supply Chain Management (SCM) projects from sales to delivery, optimizing manufacturing, warehouse storage, maintenance, and more. My main AI engine is still running to this very day in leading corporations!

In the 1980s, there was no Amazon Alexa framework, no IBM Watson conversational service, and no Google Dialogflow platform. If you wanted a chatbot, you had to write it from scratch! If you wanted NLP (Natural Language Processing), you had to write your own parser and create the rules, the structures, and the automatic dialog system. It was a 24/7, pizza-fueled effort to get it done. It gave you the exhilarating feeling that you were creating a new world!

In the past few years, I have been designing powerful cognitive multi-function sales and training chatbots connected to deep learning services on IBM Watson, Google Dialogflow, Facebook, Twitter, and other platforms. Today, the IBM Watson and Google Dialogflow conversational tools offer efficient functionality, especially since you can connect to services (translations, weather forecasts, concept insights, and many more). When I started entering a complex dialog to implement my AI service capability based on intent classification, entities, context scripts, and jump-to features, a realization suddenly hit me: I had developed this same cognitive chatbot 30+ years ago! All I had to do was reproduce and improve the model (see below). I noticed the same thing for all of the AI algorithms presented today as "new".
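
For readers who want to see what that dialog logic boils down to, here is a minimal present-day sketch of intent classification with TF-IDF and cosine similarity. The intents, training phrases, and function names are hypothetical illustrations, not the actual rules of any platform mentioned above.

```python
# Minimal sketch of intent classification: match a user utterance to the
# closest training phrase and return that phrase's intent. The intents and
# phrases below are hypothetical toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

INTENTS = {
    "greeting": ["hello", "good morning", "hi there"],
    "order_status": ["where is my order", "track my delivery", "order status"],
    "goodbye": ["bye", "see you later", "thanks, goodbye"],
}

# Flatten the training phrases and remember which intent each one belongs to.
phrases, labels = [], []
for intent, examples in INTENTS.items():
    phrases.extend(examples)
    labels.extend([intent] * len(examples))

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(phrases)

def classify(utterance: str) -> str:
    """Return the intent whose training phrase is closest to the utterance."""
    scores = cosine_similarity(vectorizer.transform([utterance]), matrix)[0]
    return labels[scores.argmax()]

print(classify("hi, can you track my delivery?"))  # -> order_status
```

In a real dialog engine, the matched intent would then drive context scripts and jump-to features; this sketch only shows the matching step.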

A few years back, I started to assemble everything I had done and implement AI using the key present-day algorithms, libraries, and modules: Google TensorFlow, Keras, scikit-learn, Python, PyTorch, and everything I can get my hands on!


I packed all the source code and case-study experience into Artificial Intelligence by Example (2018, with a second edition in 2020).

In this book, I go through the main AI algorithms and also beyond the cutting edge into the worlds of quantum computing and neuromorphic computing.

The source code is available on GitHub through the book.

The book is available on Amazon. Just type Denis Rothman.


Once I had written Artificial Intelligence by Example, my editor found that something was missing: Explainable AI (XAI). I then went back, built the XAI Python code, and explained the algorithms.

XAI is not AI. AI cannot explain itself. XAI is an independent set of methods and equations that explain the output of an AI model.
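
To illustrate the point, here is a minimal sketch of one such independent method, permutation importance, probing a toy model from the outside. The dataset is synthetic, and the code is only an illustration of the principle, not a method taken from the book.

```python
# Minimal sketch: an XAI method (permutation importance) explaining a model
# from the outside. The model never "explains itself"; the explanation is
# computed independently from its outputs. Data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much accuracy drops:
# a large drop means the model's output depends heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```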

That book is also available on Amazon. Just type Denis Rothman.

As you can see, I have been working on Natural Language Processing (NLP) since the early 1980s. I always found RNNs overrated, but transformers surely take NLP to another level with attention.

I was immediately fascinated by transformers when Tushar Gupta, my partner at Packt, suggested we should work on this subject. We published the book in January 2021.

Transformers contain attention heads that reminded me of the combinatory logic approaches we started out with during the early days of NLP. As I dug into the research, I found the innovation of transformers mind-blowing!

You might be thinking, "This guy has an issue! He is inventing the whole story! That cannot be possible. NLP Cognitive chatbots in the 1980s? No way!"

In fact, in those days, we AI innovators were often attacked by people who would say, "Computer science is not a science!" or "Artificial intelligence will never exist!" or "I don't believe what you're saying is happening."

However, this article bears dated proof of the innovation of the Cognitive NLP Chatbot I wrote 30+ years ago. The technology is disruptive today but was a revolution back then. I first describe the Moët et Chandon (LVMH) project and then produce official documents.

Let's first have some AI Champagne.

1. An AI NLP innovation for a chatbot... in the 1980s!

Moët et Chandon was one of the first corporations to purchase my cognitive language-teaching chatbot in the 1980s and publicize it. Many of us exist only thanks to great corporations, such as LVMH, that promote innovation and performance from the very start.


The chatbot was a very powerful cognitive machine for executive language learning, containing an NLP system based on a natural language parser:

  • There were hundreds of intents in the intent base (key goals for the language-learning executives).
  • There were hundreds of entities ranging from politics to business.
  • The dialogs had a basic structure like the one in the Watson conversation platform and contained multiple services like the ones in the IBM service list. The dialogs were also packed with artificial intelligence natural language parsers, which were a new field of AI research at the time.

The services linked to the dialog were cutting-edge technological innovations:

  • A digitalized voice for the dialog on the machine side. No fake synthesized voice like in the old sci-fi movies: only real human voices were recorded and managed by algorithms.
  • A combinatory logic propagation engine to create a surprising conversational service.
  • Keyboard or mouse functionality on the user end, with an embedded algorithm that prevented the user from receiving the same answer twice (see the sketch after this list).
  • A learning library that stored the user's answers in a knowledge base and adapted to the user at each new login, ensuring language progress, along with a pattern parser that stored a user's personality: political views, sports, and other topics. At each lesson, the chatbot would converse more and more intimately with the user.
  • Dictionary functionality that brought new words into the system through synonym analysis, with hypernym and hyponym capability.
  • A telephone service so that the user could connect to the system from home and answer the questions with the dial-tone touchpad.
  • Embedded recurrent syllogism functionality that provided deep conversations.
  • And many other services (images, sounds, music, etc.).
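
To give a present-day flavor of one of these services, here is a minimal sketch of the no-repeat answer mechanism from the list above. The topics, answers, and class names are hypothetical placeholders, not the original 1980s code.

```python
# Minimal sketch of the "never the same answer twice" mechanism: candidate
# answers per topic, with already-served answers tracked per user.
import random

ANSWERS = {
    "wine": ["Champagne is aged in cellars.", "Vintages vary by harvest.",
             "The blend defines the house style."],
}

class DialogMemory:
    def __init__(self):
        self.served = {}  # (user, topic) -> set of answers already given

    def answer(self, user: str, topic: str) -> str:
        key = (user, topic)
        used = self.served.setdefault(key, set())
        remaining = [a for a in ANSWERS[topic] if a not in used]
        if not remaining:            # every answer served once: reset
            used.clear()
            remaining = list(ANSWERS[topic])
        choice = random.choice(remaining)
        used.add(choice)
        return choice

memory = DialogMemory()
print(memory.answer("alice", "wine"))
print(memory.answer("alice", "wine"))  # different from the first answer
```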

It was a success for several years and finished its life cycle in one of the first Apple Catalogs. After that, in the early 1990s, I moved on to innovations for corporations, applying artificial intelligence to optimization algorithms for industries and services.

And now the dated proof.

2. The dated proof

The proof that follows represents just a portion of the official documents proving that this cognitive chatbot originated in the 1980s.

  • Below is the patent I registered on December 28th, 1982, for a mathematical, statistical approach to natural language oral speech, which quickly led to one of the first programs that could store, memorize, and adapt to language learning. Bear in mind that in 1982, writing that artificial intelligence is a branch of applied mathematics was not well perceived. People were wary of AI. I was sure, right there and then, that mathematics would be the key to writing equations and replacing humans. Machine learning and deep learning are now both applied mathematics models.

[Image: the December 28th, 1982 patent]

  • This publication accompanied the patent written in 1982. It contains a statistical approach to human speech, which was the basis of the digitalized human interface of the cognitive chatbot.
  • Using statistics to analyze students in a language course and predict their future lessons was entirely new. Many frowned at using probabilities to build language courses and scorned machine-learning teachers. Innovating is fun, but it is also a battle.
  • This publication contained a word embedding guide, word-piece tokenization examples, a booklet, and an audiobook.

[Image: the 1982 publication accompanying the patent]

  • With this publication, I created one of the first layers (or slabs, or matrices) in the history of NLP artificial intelligence, as shown in the patent design in the following image. It was one of the first word embedding systems, designed and patented in 1982, 30+ years before word embedding became disruptive.

[Image: the 1982 patent design]

At first, it was a physical multi-layer system. Each layer represented a learning sequence for a student. Statistics were recorded on paper charts and then entered into a small computer.

The following reproduction is one of the pages of the patented system. It shows that I had invented a word embedding system that went down to word-piece tokenization many years before others registered similar patents. The system went down to the phoneme level for speech analysis. Hundreds of students used it.

[Image: a page from the patented system]
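
As a present-day illustration of the technique named above, here is a minimal sketch of greedy longest-match word-piece tokenization in Python. The vocabulary is a hypothetical toy example, not the word lists of the 1982 patent.

```python
# Minimal sketch of greedy longest-match word-piece tokenization:
# split a word into the longest vocabulary pieces, left to right.
VOCAB = {"un", "break", "able", "##able", "##break"}

def wordpiece(word: str, vocab: set) -> list:
    """Split a word into the longest matching vocabulary pieces."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in vocab:
                pieces.append(piece)
                break
            end -= 1
        if end == start:           # no piece matched: unknown token
            return ["[UNK]"]
        start = end
    return pieces

print(wordpiece("unbreakable", VOCAB))  # -> ['un', '##break', '##able']
```

The greedy longest-match loop is the same idea later popularized by BERT-style word-piece tokenizers.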

I added a matrix/tensor 2D representation of each student, on which the results of their errors were noted, providing a language-perception portrait of the student. No two students had the same embedded portrait. The following document accompanied each student and was also recorded on a mini-computer with no mouse and a little screen, in the early '80s, when IBM and Apple were designing the first personal computers:

[Image: the student tracking document]
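
Here is a minimal sketch of such a 2D error portrait in present-day terms. The dimensions and error counts are hypothetical, since the original was a paper chart entered into a small computer.

```python
# Minimal sketch of a 2D error portrait per student: rows are grammar
# rules, columns are lessons, cells count errors.
import numpy as np

rng = np.random.default_rng(0)
n_rules, n_lessons = 6, 8
portrait = rng.integers(0, 5, size=(n_rules, n_lessons))

# Row sums show which rules a student struggles with the most;
# column sums show whether errors decline over the lessons.
print("errors per rule:  ", portrait.sum(axis=1))
print("errors per lesson:", portrait.sum(axis=0))
```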

The system also contained a "LogoMetric" system that stored statistics on each student to make predictions on how they would perform when viewing a video, having a conversation, or listening to a song. Using probabilities to estimate how a student would perform proved both innovative and efficient.

The statistics I obtained on each student with my primitive embedding system led to a distributed representation of a type of student. I could recognize a student from her/his distributed features! I could even create clusters of students and recognize the type of student from only a sparse distributed representation!
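
A present-day sketch of that idea, clustering students by their distributed features, could look like this. The data is synthetic and the cluster count is arbitrary.

```python
# Minimal sketch of clustering student portraits by their distributed
# features (here: flattened error matrices). Similar students end up
# in the same cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 30 students, each a flattened 6x8 error portrait (48 features).
students = rng.poisson(lam=2.0, size=(30, 48)).astype(float)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(students)
print("cluster of student 0:", kmeans.labels_[0])
print("students per cluster:", np.bincount(kmeans.labels_))
```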

The system had an R-V-I-O measurement.

R, rules, represented the probable number of grammar rules a student had really understood. The rules became the rule dataset R of the Chatbot.


V, vocabulary, represented the probable number of words a student had really memorized. The vocabulary became the variable dataset V of the Chatbot.


I, input, represented the input: the comprehension abilities of the student. I became the input of the Chatbot in its keyboard version.

O, output, represented the output: the expression speed ability of the student. O became the real-time output of the Chatbot, displayed on a screen.
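
Putting the four measures together, a present-day sketch of an R-V-I-O profile might look like this. The thresholds and the lesson-level heuristic are hypothetical; only the four measures themselves come from the original system.

```python
# Minimal sketch of an R-V-I-O student profile as described above.
from dataclasses import dataclass

@dataclass
class RVIOProfile:
    rules: int           # R: grammar rules the student probably understood
    vocabulary: int      # V: words the student probably memorized
    input_score: float   # I: comprehension ability (0.0 to 1.0)
    output_score: float  # O: real-time expression speed (0.0 to 1.0)

    def next_lesson_level(self) -> str:
        """Pick a lesson level from the measured profile (toy heuristic)."""
        score = (self.rules + self.vocabulary / 10
                 + 5 * (self.input_score + self.output_score))
        if score > 60:
            return "advanced"
        return "intermediate" if score > 30 else "beginner"

student = RVIOProfile(rules=24, vocabulary=310, input_score=0.7,
                      output_score=0.5)
print(student.next_lesson_level())  # -> advanced
```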


It was used by hundreds of students in its pre-computer, paper-mode format. It was then transposed into software in the mid-'80s to become the basis of the chatbot's multi-layer system. The system obtained immediate official public recognition and financing.

Paris-Diderot University granted me a post-graduate degree for AUDIAL (patent, book, audiobook, statistical approach). At this point, I was certain that I was on the right track. Corporations, official government innovation organizations, and one of the toughest scientific universities in Paris had confirmed my approach.

From then on, starting in the late 1980s, in a transfer learning process, using what was called transduction at the time (Simondon), I transposed the software into industrial applications for aerospace maintenance and for fabric-cutting optimization in apparel production lines. With some solid mathematical theory, I converted R (rules) into industrial rules, V (vocabulary, phonemes) into industrial variables, and used I/O as inputs and outputs. I succeeded not only in transfer learning but also in domain learning. The system evolved over time into much more complex versions and is used to this day (2020) in operational research by many key corporations. I had laid the grounds for my entire AI career right then and there, in the 1980s, without realizing it.

  • At that time, I also conducted and published a market survey called "Log 1", laying down the first version of the concepts for one of the first machine teaching systems:

[Image: the "Log 1" market survey]

This product roadmap quickly led to a successful computer-assisted language learning (CALL) teaching lab. It was version 1 and primitive, but the core concepts were there. At that point, public innovation sponsors got me on national TV. After that, I was able to create version 2, with a full-blown Cognitive NLP Chatbot.

  • Moët et Chandon was intent on sponsoring innovations at its main facility in Epernay, in the Champagne region. I installed a complete language-learning lab there, fully equipped with cutting-edge technology. Moët et Chandon published a lengthy article that was sent out to the press and greatly helped to promote my innovations. Many thanks to this great corporation!

Here are three excerpts from the lengthy article written by Moët et Chandon to describe and publicize the system. You can translate them with the online Google translation service.

[Images: three excerpts from the Moët et Chandon article, in French]


The Times published the following article on April 28th, 1989, bringing some international attention to the innovation I had started designing in 1982, with success in all of the versions from 1982 to the early 1990s. Before that, the media had helped from the start. I have many more articles and documents to back up one of the first AI chatbots on the market. This promotion naturally helped me tremendously to enter corporations at a time when the Internet and SEO were not available to the global market. You only had the post office, landline telephones, and face-to-face meetings! So many thanks to the newspapers, radio stations, and TV shows that encouraged me!

[Image: The Times article, April 28th, 1989]

In 1986, I registered a Cognitive Chatbot patent:

Brevet français n°86 03430 du 11 mars 1986 pour « Didacticiel robot ».

The translation: French patent n°86 03430, March 11, 1986, for "Tutorial Robot".

The patent describes a chatbot that can teach or be a therapist through dialogs based on the successful implementation described above.

I went further and began cognitive dialogs in a 3D space using symbols and concepts:

[Image: cognitive dialogs in a 3D space]

I had laid the basis for my work on both neural and cognitive approaches to AI in Euclidean spaces.


In the late '80s and early '90s, I started writing very complex artificial intelligence software for industrial applications.

For example, the leading French apparel and luxury brands needed to optimize the consumption of fabric.

I wrote an optimizer that was widely distributed and saved corporations an average of 3% of their resource consumption.

This led me to be the co-author of a government study on automated processes and decision-making in the textile industry.

I achieved my goal: the freedom to create what I wanted, when I wanted, on my own. I never built an empire but have managed to live very decently off my AI software since then, thanks to the support of innovative corporations and public services. I am grateful for that. Moreover, I am even more grateful to see artificial intelligence become disruptive.

Never let others stop you from innovating. Ignore them, and knock on doors until you get into the market. From then on, you can surf on the dollar vote of your customers or employers!

Further documented references

For those who understand French, I provide links to two interviews on French national radio that timestamp my research and development in the 1980s, along with other documents (one in English).

1. A recording on May 1, 1986, on Radio Monte Carlo

The message is simple: as long as your invention is not used in a corporation, it's just an idea. When corporations use it, it's an innovation. You have had an IMPACT on society and will leave a FOOTPRINT. That was my vision; my innovation impact is in my LinkedIn profile.

2. A recording on Europe 1 on 01/03/1988, with one of the first digital (not synthetic) voices of a chatbot, and an official mention of my Artificial Intelligence software (the term used in the audio). I recorded a digital dictionary of words, each word with different intonations (prosody) depending on its position in a sentence. The chatbot could then rebuild digital sentences with a real-time semantic predictor when a user asked a question.
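
Here is a minimal sketch of that idea in present-day terms: one recording per word and prosody slot, selected by the word's position in the sentence. The file names and the three prosody slots are hypothetical placeholders, not the original system's dictionary.

```python
# Minimal sketch of rebuilding a spoken sentence from recorded words with
# position-dependent intonation, as described above.
RECORDINGS = {
    # word -> prosody slot -> audio file
    "where": {"initial": "where_i.wav", "medial": "where_m.wav",
              "final": "where_f.wav"},
    "are":   {"initial": "are_i.wav", "medial": "are_m.wav",
              "final": "are_f.wav"},
    "you":   {"initial": "you_i.wav", "medial": "you_m.wav",
              "final": "you_f.wav"},
}

def sentence_playlist(words: list) -> list:
    """Choose one recording per word according to its sentence position."""
    playlist = []
    for i, word in enumerate(words):
        slot = ("initial" if i == 0
                else "final" if i == len(words) - 1
                else "medial")
        playlist.append(RECORDINGS[word][slot])
    return playlist

print(sentence_playlist(["where", "are", "you"]))
# -> ['where_i.wav', 'are_m.wav', 'you_f.wav']
```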

3. In 1986, the AFP (Agence France-Presse) published an article about the artificial intelligence program I had designed: a Cognitive NLP Chatbot. It was efficient enough to start selling to corporations such as Aérospatiale (now Airbus), IBM, and others. It was the beginning of a long journey through AI in a favorable corporate environment. During that period, AI was forgotten by the general public in the long "AI winter" until the 2005/2006 rebirth era and the 2015 disruptive era. Now it's a pleasant ride for us pioneers! The AFP photographer and I had a great time laughing like school children taking the geek picture of me below back then. After all, geeks were just getting out of the dusty garages we were working in. I had my garage office as all of us did, and it took us all some time to figure out how to get out into the real world with our inventions, innovate society, and... pay our piled-up bills!

[Image: the AFP photo]

4. Louis Vuitton Moët Hennessy (LVMH) mentioned the innovation described above in the April 1989 #1 issue of "Convergences", their magazine at the time.


The following image is an excerpt of the April 1989 article:

[Image: excerpt from the April 1989 article]

I must thank Yves Bénard, a business visionary who immediately saw the necessity of a modern high-tech approach to language training. Thanks to such corporate visionaries, I managed to survive the AI winter! For more on Yves Bénard, who is not only a businessman but also a family man, read this article.

5. The 1986 Computer Assisted Language Instruction Consortium (CALICO), hosted by the Naval Academy in Annapolis, Maryland

I had to fly over from Europe with my hard disk in those days and set it up on a computer that was onsite. You would always wonder whether it would work again. The presentation went well, and I was confident enough to meet some contacts in New York and Canada. This experience opened my mind to the research going on in the US. It was a great experience.


6. The 1992 Apple Catalog

My Chatbot reached the Apple Catalog in 1992. It would be my last step in this direction before the AI winter hit hard. At that point, public interest plummeted and research was slow.

However, thanks to the support of the corporate visionaries I had met along the way, I had already entered the world of industrial AI optimization which has generated enough income for me to work on AI non-stop through the AI winter and up to this day. I don't have a Ferrari, I drive a family car, but I have enjoyed a unique and exhilarating AI innovation ride!

The following image is the cover of the 1992 Apple Catalog. Apple computers were great multimedia innovations at the time.

[Image: cover of the 1992 Apple Catalog]

And now the product. I named it "New Horizon" because, for me, Artificial Intelligence was and still is a fantastic new horizon. Exploring automated intelligence is an adventure full of excitement and unplanned, challenging events!

[Image: the New Horizon product]

7. American Independence

In 1979, I was already self-employed while still a college student. I had begun to work on statistical, systematic language-learning methods, experimenting with 100+ private students who flocked into my office with curious eyes.

I was also experimenting with lexical fields. American English is full of words of Native American origin, such as pow-wow, and of African origin, such as tote bag. French is full of words describing food, such as moelleux, used to describe both cheese and wine. These words and expressions cannot be translated easily. Automated machine translation was going to face a lot of problems!

I was in the US on July 4th, 1976, during the bicentennial celebrations of the Declaration of Independence. In 1979, I was wondering how to convey the feelings and emotions of an American to modern French learners of English. So I faced another problem: semantic analysis. How was it possible to take American concepts and translate them into French culture?


These thoughts ended up in a project to produce an audiobook on American Independence, in English and in French. I wrote the texts, wrote and recorded the music track, produced it, and sold it. It attracted many people to my language-learning approach and provided me with some funds to buy more equipment to innovate.
