Numerator
Kamel Djebrouni Sainte-Croix
Lead of Product Management at Devoteam Creative Tech
From Proto-Indo-European *nem- (“to count, to put in order”).
In Latin: a counter, one who counts (synonym: dinumerator).
In English: First term of a fraction, written above the fraction line or before its oblique line.
Digital is now an integral part of our daily lives. We don’t even question it anymore; it’s a given. Even the oldest among us have difficulty remembering “non-digital” life.
The recent development and explosion of what we call AI (Artificial Intelligence) has somewhat “disturbed” this newly established order.
Through these few lines, I suggest we take the time to step back in order to better understand recent developments in Digital. That time seems essential to me if we are to avoid being left with overly negative impressions, even emotions of fear, and ultimately a reaction of rejection, a feeling of being completely overwhelmed.
Let’s start by going back to the basics, the very foundations of Digital.
Talking about digits to better understand Digital
By talking about “Digital” we forget the real meaning of the word, and especially the professions it encompasses. I think it is essential to recall that it is mainly about “digits”.
There is no conservative impulse in this, but above all an embarrassment, an intellectual itch, a fundamental problem.
What do our famous “Digital” professions consist of?
We have all faced this situation at social or family dinners: when asked what we do for a living, we find ourselves at a loss to explain it. We end up simply saying, “I’m in Digital”. This proclamation is in the process of supplanting the old one: “I’m in IT”.
However, if we recall that we are mainly dealing with digits, we can, it seems to me, begin to find a way toward a clearer explanation, one more accessible to the ordinary mortals who do not work directly in Digital. Above all, we can better understand the latest developments in “Digital”. My intuition is that by talking about the digits behind Digital, we can refocus discussions on the technological and economic issues at its heart. It seems to me that we can then explain more clearly how this famous “Artificial Intelligence” works, and above all understand its usefulness, its primary vocation.
To explain why semantics seems so essential to me here, I found inspiration in an unexpected place.
A naughty but very interesting experiment
French as I am, I was recently drawn to an article describing an experiment aimed at visualizing our sexual orgasms.
Those of us past the age of puberty know that orgasms are as much physical as cerebral.
The experiment took advantage of two major technological developments: one relatively old (late 19th century), the electrocardiogram (ECG), and the other much more recent, Artificial Intelligence (AI), which is currently unleashing passions in the news.
For the sake of intellectual honesty, it should be noted that this experiment was sponsored by two sex toy brands (Lovehoney and Womanizer). It is therefore more a well-executed communication operation than a real scientific experiment.
The fact remains that an experiment was indeed carried out in the laboratory, with, in the role of guinea pigs, five men and five women who consented to the recording, via ECG, of their cardio-musculo-respiratory data while they indulged, solo, in sexual pleasure with the famous sex toys.
Once this valuable data set was collected, the visualization part could begin. It was enough to hand the graphs associated with each guinea pig’s data to the most famous of current AIs (in fact a pair of AIs, ChatGPT and Dall-E) and ask them to translate them into images. We ultimately obtain ten images, which allow us to visualize the ten orgasmic eruptions.
If we come back to the initial point, the importance of talking about the digits behind “Digital”, we can summarize this experiment in one sentence. But that sentence takes on a quite different meaning depending on whether we focus on the digits or not. Without them: “An artificial intelligence turned orgasms into images.” With them: “Orgasms were translated, via ECG, into sets of numbers, which computer programs then translated into images.”
In the second version, an average person, like you and me, has a better chance of understanding what it is about than in the first.
Of course, the sentence itself is not enough, but it places the subject (in fact both of them: the one being talked about and the person listening) in the right place to start the explanation and then facilitate understanding.
Indeed, to obtain the final image, we had to go through a stage of digitizing the orgasms. In this experiment, the key is the use of an ECG. It is the ECG that makes it possible to digitize the orgasm by translating it into a data set: a series of numbers indicating variations in heart rate, breathing, or muscle contractions. These numbers are the raw material of the experiment, without which nothing is possible. It is these numbers that are first translated into tables and graphs, before being analyzed by artificial intelligence tools that translate them into images. These tools are themselves computer programs, whose raw materials are also numbers, two in particular (the 0 and the 1 of binary). The computer programs associated with ChatGPT and Dall-E were designed on the basis of other data sets, quite gigantic in this case. And how were those data sets obtained? By digitizing texts and images, again in gigantic quantities, to extract series of numbers indicating variations in colors, luminosity, tone, shapes, format, writing style, language, themes, authors…
We are here at the heart of what digital professions are.
By recalling that the raw material of Digital is numbers, and that the English term “digital” comes precisely from “digits”, we free ourselves from the needlessly complicated imagery associated with the term “Digital”. There is no magic in any of this, no higher entity, nor any other supernatural fantasy.
Above all, we can describe more clearly what Digital technology consists of, better understand its different aspects and how they materialize in its most recent developments, and even imagine how it will evolve in the future.
We cannot claim to understand Digital without also talking, at least a little, about the associated technologies. They are numerous and sometimes very elaborate. However, for our purpose and our primarily educational objective, let us limit ourselves to the main technology, the one at the heart of Digital.
The computer is a digital electronic calculator
There is no need to launch into an extensive explanation of how the multiple digital tools work. That’s not the point. On the other hand, in my opinion it is essential to remember how the main digital tool works: the computer. Indeed, if digits are the raw material of Digital, its fuel, the computer is its main tool, its engine.
The full name for the computer, in English, is “digital computer”. It conveys the idea of calculating with numbers, which is indeed one of the main things a computer does. For our explanatory exercise, however, it is interesting to note that in French the term we use is “ordinateur”. It has the advantage of broadening the function of the tool (we are still talking about the computer here) beyond simple calculation: it is used to put things in “order”, to arrange them.
And what exactly do we put in order with a computer?
Today we talk about data, but the simplest term is “information”.
In short, a computer is a tool that works with numbers, in this case two of them: 0 and 1 (the famous binary language). It processes information, translated into numbers, using programs, themselves written in the form of numbers, to give a result, also in numbers.
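To make this concrete, here is a minimal Python sketch (an illustration of the principle, not of how any particular machine actually stores things) showing a short piece of text translated into the 0s and 1s a computer works with, and back:

```python
message = "digit"

# Each character becomes a number (its Unicode code point)...
numbers = [ord(ch) for ch in message]
print(numbers)   # [100, 105, 103, 105, 116]

# ...and each number becomes a pattern of 0s and 1s (8 bits each here).
bits = [format(n, "08b") for n in numbers]
print(bits)      # ['01100100', '01101001', '01100111', '01101001', '01110100']

# The reverse path: from bits back to numbers, and back to text.
decoded = "".join(chr(int(b, 2)) for b in bits)
print(decoded)   # digit
```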
The program it executes can be a simple calculation, but it can also be more elaborate, and we therefore use a more generic term: algorithm.
We can also divide what a computer does into three simple steps: data input, data processing, and output of the result.
If we return to our initial example of “digitizing orgasms”, we can reproduce the same diagram: the ECG provides the input data, the artificial intelligence programs process it, and the images are the output.
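The same three steps, written as a deliberately naive Python sketch (the sample values are invented for illustration; a real ECG produces far richer data):

```python
# 1. Input: the digitized information (fictitious heart-rate samples).
heart_rate_samples = [72, 75, 83, 95, 118, 132, 121, 98, 80, 74]

# 2. Processing: the program, i.e. the algorithm (here, find the peak).
peak = max(heart_rate_samples)
peak_position = heart_rate_samples.index(peak)

# 3. Output: the result, itself made of numbers.
print(f"Peak of {peak} bpm at sample #{peak_position}")
```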
If you look closely at the definition of computer in Wikipedia, you’ll notice the mention of Alan Turing, pioneer of artificial intelligence.
We speak in particular of a Turing Machine, a concept imagined by Turing to describe algorithms, and therefore programs, as having to “represent a virtual person executing a well-defined procedure”.
Computers are therefore robots, but digital robots.
Robots whose purpose and usefulness is that we can delegate to them tasks usually carried out by humans.
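To give a feel for that “virtual person executing a well-defined procedure”, here is a toy, Turing-style machine in Python. The rule table is my own illustrative choice (not Turing’s original example): a head moves along a tape and mechanically flips every bit.

```python
tape = list("101100")   # the "tape" the virtual person reads and writes
head = 0                # position of the read/write head
state = "FLIP"

# Rule table: (state, symbol read) -> (symbol to write, move, next state)
rules = {
    ("FLIP", "1"): ("0", +1, "FLIP"),
    ("FLIP", "0"): ("1", +1, "FLIP"),
}

while state != "HALT":
    if head >= len(tape):      # end of tape: the procedure is finished
        state = "HALT"
        continue
    write, move, state = rules[(state, tape[head])]
    tape[head] = write
    head += move

print("".join(tape))           # 010011
```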
Having recalled the main foundations of Digital and refocused the associated issues around digitization, we can now discuss in more detail its most recent development, the famous Artificial Intelligence and the tools that accompany it.
A prodigious acceleration of dematerialization
Let’s return to my initial inspiration and the experiment that sparked it.
The use of Artificial Intelligence tools there is naughty, and also a bit funny, and the resulting production remains of fairly poor quality. It nonetheless illustrates quite well the major challenge associated with the digital revolution.
As I explained above, it is the step of digitizing the sensations of the guinea pigs which is the most interesting here.
Digitization brings us to the heart of the current issues associated with the emergence of Artificial Intelligence tools.
These are very advanced, very powerful tools, but they nevertheless remain tied to the same major digital issue: dematerialization.
Until now, dematerialization has meant, in common parlance, the digitization of information. To oversimplify, we started by dematerializing documents (the famous “paperless” office), then sounds, music, radio broadcasts, telephone conversations, and finally images, films, photos, graphic creations.
I will not intervene here in the debate on the benefits or harms of digital compared to analog. I will simply note that dematerialization has enabled a notable improvement in human productivity. In the same way that machines largely imposed themselves with the industrial revolution, with the digital revolution computer tools have invaded our lives.
Today, Artificial Intelligence reveals a new major step in the dematerialization of our world. Beyond the uses that make the buzz in the news, certainly fun but not very interesting from an economic point of view, like the experiment I mentioned in the introduction, AI remains above all a formidable tool for improving productivity.
Let’s take the example of the most famous AI tool today: ChatGPT. Microsoft has established an exclusive partnership with OpenAI to commercialize the uses of the AI it develops.
However, Microsoft’s main mission, its “market”, has not changed: it remains to offer the best productivity tools.
It is in this sense that it is presented by Microsoft itself:
“ChatGPT and Microsoft Copilot are both artificial intelligence (AI) technologies that were developed with the intent of helping you accomplish tasks and activities faster and more efficiently.”
How will a tool like ChatGPT/Copilot allow us to “accomplish tasks and activities faster and more efficiently”?
It knows how to produce relevant written content adapted to a multitude of contexts.
If you are a musician, it can write a song adapted to a musical style, a theme, a form of lyric construction, or even to a type of composer (yourself or any other).
In the same way, if you are a writer, it can write texts adapted to a style, genre, theme, narrative construction, or author.
The same goes for articles by a journalist, the “slogans” of an advertising copywriter, the recommendations of a lawyer, etc.
Without listing the infinity of possible professions, we can easily imagine how anyone can use it to write, on their behalf, reports, summaries, administrative or commercial letters, etc.
Thus, we can say more simply that tools like ChatGPT make it possible to dematerialize the task itself: not writing in the mechanical sense of the term, but the production of written content. In the same way, with its Dall-E counterpart, it is possible to dematerialize the production of images, photos and visuals.
To illustrate the subject, and to better understand the associated issue, we can return to the parallel we drew a little earlier between the industrial revolution and the digital revolution.
Schematically we can say that the industrial revolution, in its aspect of mechanization, made it possible to improve productivity by entrusting the carrying out of manual tasks to machines (we often say that these are “robots”). Similarly, the digital revolution, in its most advanced aspects of dematerialization, makes it possible to improve productivity by entrusting the performance of intellectual tasks to machines (also called “robots”).
We return here, once again, to the original definitions of computers, these digital machines aiming to “represent a virtual person executing a well-defined procedure”.
“What we understand well is expressed clearly, and the words to say it come easily”
(Poetic Art (1674) by Nicolas Boileau-Despréaux)
Having put “the church back in the center of the village” (a French expression meaning, roughly, “let’s get back to basics”) by precisely and clearly recalling the main challenge of Digital, namely digitizing tasks carried out by humans, dematerializing them, we can now see how this is implemented. We can explain how the latest developments in Digital, and in particular the emergence of Artificial Intelligence, will allow us to dematerialize new tasks.
Today we speak of Artificial Intelligence (AI) in a different sense from that associated with the definition of computers stated above.
The definition given by the European Parliament is very close to that of a computer:
“AI is the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity.”
Through examples of reproduced human skills, however, it gives a clue to what really changes: reasoning and creativity.
Artificial intelligence tools are therefore capable of creating content corresponding to reasoning.
For companies, always looking to improve their performance, the challenge therefore simply consists of identifying tasks to entrust to an AI tool. Not just any tasks: those which will allow a gain in productivity, quality or economic performance.
Since it is a question of digitizing, of dematerializing, tasks carried out by humans, we can start by identifying those which could be “digitized”. It will then remain to determine which are the most relevant, in the sense that digitizing them can actually generate a performance gain.
Let’s take a concrete example, quite classic in business, with the management of an email marketing campaign. By the way, it should be noted that email remains one of the most used and most efficient customer acquisition channels.
For the sake of our reflection, I asked Copilot (the Microsoft assistant bringing together ChatGPT and Dall-E) to work through this example, this use case.
Below, according to him, or her (I’m not sure the pronoun “it” is relevant here, though it remains the most accurate), is the list of tasks associated with managing an email marketing campaign. For each task, it also indicates whether the task can be handled by an AI tool.
Managing and implementing an email marketing campaign involves several steps. Here is a list of tasks associated with each step, as well as guidance on what can be assigned to an artificial intelligence (AI) tool:
In summary, AI can be used to automate many tasks, but strategic planning, creativity and deep analysis remain areas where human intervention is essential.
I also asked it to produce a graphical representation of the process. The result, which you can see in the image below, is quite disappointing, but it helps illustrate the need to know the skills of the AI tool you are using. I will come back to this point a little later.
I deliberately took a simple case to define the basic principle.
For this example, there are already IT tools that make it possible to digitize a large part of the stages of managing a marketing campaign.
We can then imagine the same exercise replicated across all of a company’s processes. In human resources, we can similarly digitize recruitment campaigns, the management of assessments and evaluations, and payroll management. On the commercial side, we can digitize all or part of the customer relationship, and the same goes for customer support. And so on for each of the company’s departments, whenever digitizing the task is relevant.
Behind this question of relevance, there are several aspects. Firstly, the feasibility of digitization, and then the effectiveness of the result obtained.
As we saw previously, the notion of efficiency answers fairly standard performance improvement questions.
It is at the level of feasibility that there is a major development. In fact, it is thanks to the combination of hyper-developed technological capabilities and the availability of gigantic masses of data that we can imagine today being able to digitize a greater number of tasks, over a much broader spectrum.
No data, no AI
To understand how far this major step in the digital revolution can go, we must understand how new artificial intelligence tools work.
Without going into too much complicated detail, we can explain how the skills of these tools are developed.
This is also an essential notion to answer the question of the feasibility and relevance behind their use.
Let’s start by remembering that they work like any IT tool, with, in a simplified way, as we saw previously, three steps: data input, data processing, and output of the result.
Here, we must focus on the second step, the program itself. This is where the major development lies.
We are talking about a computer program, made up of algorithms, which makes it possible to reproduce the sequence of the famous “tasks usually carried out by humans”.
To illustrate the principle, here is a simple flowchart: a program that manages the rise, and then the maintenance, of an oven’s temperature. You might not have suspected it, but your oven runs on AI!
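Written as code rather than a flowchart, the same logic might look like the following Python sketch. The read_temperature and set_heating functions are hypothetical placeholders standing in for a real sensor and heating element; an actual oven controller is firmware with safety logic on top.

```python
TARGET = 180  # target temperature, in degrees Celsius

def regulate(read_temperature, set_heating):
    """One pass of the thermostat: heat if below target, else switch off."""
    if read_temperature() < TARGET:
        set_heating(True)    # too cold: keep heating
    else:
        set_heating(False)   # target reached: maintain by switching off

# A tiny simulation to exercise the logic:
temperature = 20

def fake_sensor():
    return temperature

def fake_heater(on):
    print(f"{temperature}°C -> heater {'ON' if on else 'OFF'}")

for _ in range(5):
    regulate(fake_sensor, fake_heater)
    if temperature < TARGET:
        temperature += 50    # the oven heats up while the element is on
```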
In the case of artificial intelligence tools, the algorithms are disproportionately more complex.
To be able to deal with this level of complexity, computer processing capabilities are also disproportionate.
It is no coincidence that companies like Nvidia have seen their share price explode on the stock market.
Nvidia is known to the general public for its “graphics cards”, or graphics processors (the computer chips dedicated to processing display data), considered the most efficient by the video game community (the “Gamers”, whose dedicated computers are called “Gaming PCs”).
For IT professionals, Nvidia is also renowned for the performance of its processors. They have become essential in the design of artificial intelligence tools.
In fact, computer components capable of carrying out enormous quantities of calculations are needed to “run” the computer programs associated with artificial intelligence tools, but above all to design them. Because it is in the way they are designed that the revolution lies.
If we take the simple flowchart used to illustrate the management of an oven’s temperature, we can easily imagine that it was designed and transcribed into computer language by an engineer. Trying to imagine an equivalent for even one percent of the program that runs behind ChatGPT seems impossible.
If we talk about artificial intelligence, it is for two main reasons. First, the new associated tools are capable, as we have seen, of reproducing “human reasoning”; but above all, they have “learned” to do so.
You have certainly heard of “machine learning” and its derivative “deep learning”.
We can once again translate these terms into plain language: we are indeed teaching computers how to reproduce human tasks and reasoning.
It would unnecessarily lengthen this article, already well beyond the usual standards, to explain in detail how this learning works. I can, however, describe the main principles.
The key lies in the notion of algorithms that follow one after the other in computer programs.
In the case of artificial intelligence tools, the algorithms used were themselves designed by computers, “machines”.
This design work is based on statistical modeling principles.
The basic principle is quite simple.
The algorithms are designed in two stages: first a learning stage, during which the machine builds a statistical model of the task from large quantities of example data; then a reproduction stage, during which this model is applied to new cases.
We have therefore taught machines (hence the expression “Machine Learning”) to model human reasoning, so that they can then reproduce it.
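A microscopic illustration of these two stages, using the simplest statistical model there is, a straight line fitted by least squares (the example data is invented):

```python
# Invented examples: hours of sunshine -> ice creams sold.
xs = [1, 2, 3, 4, 5]
ys = [12, 19, 33, 41, 48]

# Stage 1 ("learning"): build a statistical model y = a*x + b from the data.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

# Stage 2 ("reproduction"): the model now answers for cases it never saw.
print(f"Model: y = {a:.1f}x + {b:.1f}")          # y = 9.4x + 2.4
print(f"Prediction for x = 6: {a * 6 + b:.0f}")  # 59
```

The models behind tools like ChatGPT are incomparably larger, but the principle is the same: learn a statistical model from examples, then reproduce it.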
Let me open two parentheses here to provide clarifications that seem important to me.
First, on the need to properly understand how AI tools work in order to use them wisely. You definitely don’t want to end up like the American lawyer who asked ChatGPT to write his legal brief. Or, without going to such extremes, to end up with an ill-suited illustration, like the one I obtained when asking Dall-E to illustrate the diagram of a campaign management process.
Better understanding how they are designed will allow you to better understand what their skills are, and therefore to define the most relevant tasks that can be entrusted to them.
This will also potentially help you if you one day need to work on developing an artificial intelligence tool.
Then, I also want to point out that this description of how artificial intelligence tools function allows us to avoid a debate that is, from my point of view, unnecessary. It does not take very long to conclude that these tools are not really intelligent. We can obviously invoke philosophical arguments about the notion of intelligence and assert that these tools demonstrate a certain level of intelligence specific to human beings. The reality is more mundane, as we saw when looking under the “hood” of artificial intelligence engines. They are not endowed with reason, in the sense that they could think for themselves, let alone have emotions. They do not know how to think the way human beings do. They only reproduce a certain type of mental pattern, very particular forms of reasoning, which represent a tiny part of the intellectual capacities of human beings. And above all, they reproduce them exactly as we asked them to, applying to the letter, without any free will, the famous algorithms we designed for this precise and specific objective.
I have not described above all of the steps involved in “teaching” a tool like ChatGPT; they go well beyond language modeling. For example, it knows how to adapt the text it produces to the context described in your request, and in particular it can “play a role” and put itself in the shoes of a doctor, engineer, architect, etc. It has also learned to produce very good summaries, to analyze the content of a text, and to report, for example, its tone (very useful for analyzing the famous “customer feedback”). Among other things…
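As an illustration of this role-playing ability, here is roughly what such a request looks like through OpenAI’s Python client. The model name is an assumption of this sketch, and it presupposes an API key in the OPENAI_API_KEY environment variable:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; adjust to what you have
    messages=[
        # The "system" message sets the role the model will play.
        {"role": "system",
         "content": "You are an architect advising a first-time homeowner."},
        {"role": "user",
         "content": "How should I plan a small kitchen extension?"},
    ],
)
print(response.choices[0].message.content)
```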
If we return to our initial question of the feasibility of digitizing a task, we can conclude that the main answer lies in the availability (or not) of a sufficient amount of data, and also in its quality. Clearly, to answer it we must ask ourselves whether we have sufficient, representative data with which to model the task concerned. Formulated differently: do we have enough data, representing all the essential aspects of the task we want to digitize, to teach a machine to carry out this task for us? This also brings us back to the older expression “Big Data”, which expresses precisely the importance of data in Digital.
In the example of email campaign management, to be able to generate a layout or content, the machine must have learned to do so from sufficiently numerous examples. The quality and quantity (the number of cases and of different contexts) of the layouts and content produced by the machine depend directly on the quality and quantity of the examples used in this learning.
In the cases handled by ChatGPT and Dall-E, you only need to take a step back to see that the basic data is widely available on the internet. The amount of content, whether text, image or video, is almost infinite.
For tasks carried out in business, it is more complex, but not impossible. We must add an element, major in my opinion, to integrate into any business strategy today: the collection, even the creation, of data is a primordial, almost vital, issue for companies in the all-digital era. If we take the example of email campaigns again, a simple but very effective source of data lies in the testing phases. It is necessary to test different formats, content, targeting, segmentation, etc. The analysis of the results of these tests, as sketched below, will then be an extremely valuable source of data.
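In its simplest form, that analysis can be as basic as comparing the rates of two tested variants; here is a small sketch, with all figures invented:

```python
# Results of two tested email variants (invented figures).
variants = {
    "A (short subject line)": {"sent": 5000, "opened": 1100, "clicked": 240},
    "B (long subject line)":  {"sent": 5000, "opened": 950,  "clicked": 310},
}

for name, stats in variants.items():
    open_rate = stats["opened"] / stats["sent"]
    click_rate = stats["clicked"] / stats["opened"]
    print(f"{name}: open rate {open_rate:.1%}, "
          f"clicks per open {click_rate:.1%}")

# Each test run produces exactly the kind of numerical data that can
# later feed a model of "what works" for a given audience.
```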
Before continuing, it should be remembered that the data here is numerical, expressed in digits. To be usable, the information that serves as the basis for the design of any artificial intelligence tool must be available in a digital format. It was possible to design the language models behind ChatGPT and the visual models behind Dall-E only because we had a sufficient base of texts and images in digital format. By extension, just as we established that numbers are the essential “fuel” for running the “engine” of computers, data is the essential fuel behind AI tools.
After recalling that the main challenge of Digital lies in its capacity for digitization, and then discussing in more detail its most recent development, the famous Artificial Intelligence and the tools that accompany it, we can now look to the future. Let’s start with a “major trend”: the amplification of the war over access to data.
The digital loop
The question of data is therefore essential. Data is also the theater of fierce new economic battles.
We see this, for example, in the reaction of content creators who are trying to hold AI players accountable for the use made of their “raw material”.
A little more indirectly, we can also observe, with hindsight, how essential the strategy of developing business productivity tools proves today for a company like Google. The office suite it has managed to build, by supplementing its email service with a spreadsheet, a word processor, presentation software, a video conferencing tool, etc., constitutes an obvious asset in facing Microsoft.
We can make the same observation about what we now call “cloud computing”. This is again a theme that I will simply touch on here to avoid lengthening the essential part of our subject. We speak of the “cloud” to evoke the notion of virtualization. The term “computing”, for its part, represents the entire IT production chain. In the “cloud”, the machines are virtual (computers, servers, data storage, etc.), but so are, above all, the programs themselves (databases and processing in particular).
We find in the “cloud” market the same major players: Microsoft, Google and Amazon.
One last example, less glorious for these same players: their various attempts to position themselves in the social media market, one of the major sources of content production. And if you have followed so far, you now know to what extent this same content is essential for designing AI tools.
The digital model doesn’t change, it just becomes “exponential”: the more data we are able to digitize, the more tasks we can digitize; the more tasks we digitize, the more digitized data we have, and so on.
It turns out that digitization capabilities have exploded a bit.
The data war has therefore already started, a while ago now, and will continue to gain momentum in the years to come.
Tomorrow is not that far away
For the last chapter of our little adventure, let’s try together to imagine what the digital future has in store for us.
Let’s continue first with the theme of war. In the previous paragraph, I mentioned the data war at the heart of the economic battles waged by companies. States are not left out, and international conflicts are also in an advanced phase of digitization.
There are of course the war “machines” themselves, which are increasingly digitized. Take the best-known example, remotely piloted drones. They are still piloted by soldiers, as far as I know, but we can easily imagine self-piloting functions given the definition of a precise target. The next step is to define a broader mission. I am not at all initiated into the art of war and military strategy, but I can still imagine the value of a drone, or rather a fleet of drones, piloted by an artificial intelligence capable of recognizing enemy weapons and soldiers, but also of analyzing their movements, deducing their strategy (remember the famous “predictive” models at the heart of AI), and adapting its own movements and deployment model.
I’ll let you imagine all the possibilities for replacing soldiers with machines combining analog and digital.
However, it is in military strategies themselves that the evolution seems most important to me. Even if we “digitize” the war “machines”, we remain on the classic pattern of a direct, physical confrontation; it remains “material”, “analog”. Yet we are already witnessing purely digital feats of arms: the famous “cyber attacks”.
In relatively recent news, there are examples from the conflict in Ukraine (the hacking of the Ukrainian energy network by Russian hackers) and around the conflict in Palestine (Iranian hackers caused a water outage in Ireland in support of Palestine).
There are also questions of propaganda, and we all have in mind the examples of manipulation of Western public opinion by the famous Russian hackers. One must also point out here the growing importance of this new type of soldier in army contingents…
If we come back to digital in the broader sense, we can look forward a little to imagine a world that is increasingly digital, increasingly digitized.
I could return to the theme of sex used at the very beginning of this article. It must be recognized that the sex and pornography industry is always at the forefront of technological developments. The first erotic films (shot in Paris, obviously) date from the invention of cinema, and we could go back further and find the first such works following quite closely Gutenberg’s invention of printing. Digital is no exception to the rule, and it is always interesting to observe that market’s trends: it is a good indicator of how digital technology is evolving and how it transforms usage. However, I want to spare the emotions of all readers, so I leave you free to interpret and imagine what an immersive and predictive web can produce here…
In the same way, everyone can quite easily imagine how artificial intelligence tools will be able to digitize our daily lives. Just take the example of household chores. There are already robots that vacuum or mow the lawn for us, all while adapting precisely to their playgrounds.
The possibilities, in every aspect of our daily life, are endless.
The most interesting development, from my point of view, concerns the media we will use. In IT, in digital technology, we talk about devices. The more intrepid will keep in mind that we are talking more broadly about HMIs (Human-Machine Interfaces). It seems to me that observing their evolution summarizes quite clearly the extent of the revolution that is coming.
Indeed, further digitization of the world implies greater needs in terms of the media through which we interact. These must expand too.
To understand this concept, we can take a moment to quickly review how the evolution of digital technology is categorized. The best-known classification concerns the Internet in particular, the computer “network” that is the main place where digital tools are implemented. We then speak of the “Web”.
As with software, each major digital development comes in a new version of the Web. We therefore speak of Web 1.0, Web 2.0, Web 3.0, etc.
A few years ago I also produced an analysis of our entry into the Web 3.0 era (only available in French).
Today we are entering the fourth version of the Web: the “immersive” Web. To understand the concept in question, it helps to explain the notions associated with the previous versions.
In version 1, we are in a dimension of one-way exchange, of one-dimensional interaction. Clearly, we are on the classic pattern of consulting information. It mainly flows in the “vertical” direction, from machine to human.
In version 2, the human becomes an actor, and we therefore add a second dimension. If you have a minimum of mathematical knowledge, you will understand the point of the image: it illustrates quite well the explosion in the distribution of content linked to the emergence of social networks.
In version 3, we again add a dimension. We move into three dimensions, because we also include objects, which have become connected: this is the Internet of Things. Another aspect also evolves in this version: the multiplication of screens. Among all the new screens, the smartphone is obviously the one that changed things the most, opening the door to the mobile internet. Of the Internet in this phase of its evolution, it is said that it is everywhere, all the time (you may have come across the acronym ATAWAD: Any Time, Any Where, Any Device).
So here we are at version 4. If you have followed closely, you should already have “4D” in mind, for the fourth dimension (the oldest among you will notice an almost involuntary reference to the television series of that name; “La Quatrième Dimension” is the French title of The Twilight Zone, ancestor of its more dystopian modern equivalent, Black Mirror). And the image that comes to my mind is that of “4D” cinemas.
We also speak of an immersive, ambient, pervasive and ubiquitous web. Mix AI and its army of virtual agents with the Metaverse and the virtual reality equipment that accompanies it (an immersive headset, but also gloves with haptic feedback, or even a 4DX-cinema-style seat reproducing physical and olfactory sensations), and you obtain a Web that comes quite close to the one imagined in The Matrix or Ready Player One.
As with Web 3.0, Web 4.0 is accompanied by the emergence of new terminals.
It is true that our smartphone screens may be getting bigger, but they remain far too limited to let us interact with a “pervasive” web.
The question of HMIs (Human-Machine Interfaces, already mentioned above) becomes crucial. If we take our simplified three-step diagram of how IT tools operate (1. input data, 2. data processing, 3. output of the result), with the fourth version of the Web we can no longer simply enter data via a keyboard and a mouse, and in the same way a simple screen will no longer suffice to output the result. We need 3D printers, virtual reality headsets, voice commands, “terminals” that capture and reproduce odors, and why not implants directly in our brains.
Of course, the evolution will be slower than that, and there will be a whole series of intermediate stages before implants for everyone, or even the generalization of virtual reality suits (a nice oxymoron, that name, by the way).
By way of illustration, I invite you to look into the new AI terminals that are intended to be the iPhones of Web 4.0.
Here are two examples. First, the Rabbit R1, with which we interact mainly by voice. It nevertheless retains a physical screen and an interaction wheel. I will let you judge for yourself; it seems to me that we are still too close to a connected watch.
And then the AI Pin, a little more interesting to my taste, because the promise is to make the screen disappear. But rest assured, the physical screen is replaced by a projector integrated into the artificial intelligence pin. A smart pin, then, and here again a reference for the oldest Windows users among us: if you are among the most nostalgic, you can even resurrect “Clippy” in a 4.0 version.
Abracadabra!
I will end with some general advice, in fact more of a wish.
With each major digital development, additional conceptual or technological building blocks appear: new ultra-powerful, ultra-efficient components, revolutionary programming methods, new screens, new types of terminals.
With each new stage, the level of digitization of our world increases. This may seem scary at first sight; we can quickly feel overwhelmed, or even excluded. However, I hope that this article, by helping you take a step back, will allow you to better understand what is happening, so that you are better equipped to navigate the jungle of digital tools. Ideally, it will break down another barrier, a psychological one: the one that makes us react with fear, rejection and retreat. I am convinced that the digital future, our future, will be all the more positive if as many of us as possible become actors in it.
To illustrate this idea of not being impressed by digital tools, I would like to draw a parallel with the world of magic.
In the world of magic too, things are changing a lot, sometimes with the help of digital tools. There are illusionists who create illusions defying the laws of gravity, and mentalists who seem to actually read our minds. Even when we are destabilized, and understanding the famous “trick” behind these displays seems unimaginable to us, deep down we know that there is one. It is this conviction that keeps us from slipping into the irrational, or even the paranormal.
Behind each digital tool, there is also a “trick” that helps explain how it works.
There is even a very specific example that illustrates quite well why the parallel between Digital and Magic is interesting: the Mechanical Turk.
The Mechanical Turk, or the chess-playing automaton, is a famous hoax constructed at the end of the 18th century: an alleged automaton endowed with the ability to play chess. (…) Outwardly, it had the appearance of a mannequin dressed in a cape and turban, seated behind a maple cabinet with doors revealing internal mechanics and gears that came alive when activated.
This mechanism was only an illusion hiding the real depth of the cabinet, which had another secret compartment into which a human player could slip and manipulate the mannequin, like a puppeteer, without being seen. (…)
Nowadays, a replica has been created: it is controlled by software and actually plays chess on its own.
The overall concept has been taken up in Digital. We speak of “doing a Mechanical Turk”. What is it about?
The idea is very simple. It generally involves testing a concept, a new feature or a new service while reducing development costs to a minimum. As with the Mechanical Turk, the user is led to believe that an entire computer program executes the actions, when in reality they are carried out by one or more human beings. The performance data collected is then used to validate the concept. We can then confidently develop the service, or even use this data to convince investors. Anecdotally, it seems that the online shoe retailer Zappos was launched using this method. The brand was later bought by Amazon, in 2009, for just over a billion dollars.
Amazon, indeed: the digital giant we all know even created a program of this very name (Amazon Mechanical Turk). It connects “manual” workers with companies that need certain tasks carried out. In general, the companies using this service need data analyzed or produced to feed their… artificial intelligence models.
PS: I did not address ethical and environmental issues here; it seemed to me that they were outside the purpose of this article, and you will find numerous publications on these themes. However, it seems to me that with the explanations in this article, you can easily imagine the problems of bias in the language or visual models used by AI tools (when the input data comes from sources mainly in English, and a large part of it from social networks, we inevitably find a certain cultural orientation, misogyny, racism, and other biases). You can also guess that the multiplication of digital terminals, calculating machines and servers does not produce a positive ecological balance.
A big thank you to the reviewers (Bérengère, Arielle, Sylvie, Pascal and Marin), without them this series of articles would have been much worse.