"Reasoned" AI

or: Why can't we build "Reasoned" AI?

or: CSR AI (not AI-based CSR)



Artificial Intelligence (AI) has become the new shiny object, even though it has been around for decades.

But this boom is mainly due to its latest sibling: GenAI.

The hype is driven by two things: "fun" and incredible "revenue figures".

Some previous examples of shiny objects that went sour? NFTs? The Metaverse?

That doesn't mean AI is comparable. AI, as a global topic, has major strengths and uses.

But are the ones we see and praise the ones we should put on a pedestal?


Like every technology, there are good uses and bad uses.

Nuclear fission is a good example: generating electricity with no carbon-dioxide emissions is good for the planet; nuclear weapons are not.

A few more examples: social networks, YouTube and the like, even TV and radio in their heyday.

Because it is a matter of "mastering" all the effects of the object, direct and side effects alike. Not solely technologically.


Technology is rarely the hardest part to manage, even if quantum topics are quite heavy. Remember, a couple of decades back, we used to say "it is rocket science" to emphasize the complexity and difficulty of achieving something. Now it is "quantum" that is used to underline high difficulty.

So here, AI is reasonably complex. Hence reasonably well mastered by those providing the technology or service.


But that is not where the risk lies.

It is not "machines taking over" like in Terminator. They could, but in a different way.

Just as digital currencies do not directly threaten State sovereignty, yet endanger it from a different angle.


The problem lies first in the "dumbization" of people.

And it is a very important and deep subject.


We all read alarming polls about the youngest... I've been a lecturer at several leading schools for years now, so I interact with students and with their professors. It is not an urban legend: levels are dropping.

The way people talk or write is the best indicator.


What are the reasons behind that? Many, actually, but the digitalization of the world, with social networks, videos everywhere and videogames, is locking kids up in a pretend world. They lose track of the real world. Don't get me started on the Metaverse, please! :)


And now, GenAI is the worst of all these evils. Simple reasons:

  • Kids will no longer refer to "certified/validated" information sets like dictionaries or encyclopedias, but to GenAI-digested internet content (mostly user-generated, when not GenAI-generated).
  • Kids won't bother doing their essays anymore (which are already of poor quality), so they will train their skills and capabilities even less.
  • Kids won't even bother writing their own social-network content anymore; it will be generated. They won't bother taking the right picture, doing the right research and writing their own piece.
  • Kids won't even interact with other kids, even on social networks; it will be with GenAI systems.


So kids will be less literate, less trained (if only we had put a tenth of what we spend training machines into training our kids instead...), more prone to being influenced, less able to focus, less able to build an analytic approach or an analysis...


I highly recommend watching Pixar's WALL-E... We'll end up fat, small-limbed, brainless and spineless, sheepishly following the herd...


In order not to get there, we should be prudent and proportionate.


AI does solve many problems that classical programming can't.

To make it simple, if one needs a computer to sort pictures of squares versus triangles, it is easy: a program that counts the number of corners (3 for a triangle, 4 for a square) will provide a 100% success ratio and is easy to program.
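That corner-counting rule can be sketched in a few lines of Python. This is a minimal illustration, not taken from the article: the representation of a shape as a list of (x, y) corner coordinates is my own assumption.

```python
def classify_shape(corners):
    """Classify a polygon by its corner count.

    The rule is exact, so this classical program is right 100%
    of the time, with no training data needed.
    """
    n = len(corners)
    if n == 3:
        return "triangle"
    if n == 4:
        return "square"
    return "unknown"

# Hypothetical inputs: lists of (x, y) corner coordinates.
triangle = [(0, 0), (4, 0), (2, 3)]
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(classify_shape(triangle))  # → triangle
print(classify_shape(square))    # → square
```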

If one needs a computer to sort pictures of pears versus apples, it is not easy. So a computer is trained (fed) with billions of tagged images of pears and apples, and if the corpus is big and accurate enough, the computer will sort them efficiently, far better than classical programming would have done, but probably not with 100% accuracy.

In other words, AI is used when one can't program the task. To put it in a positive way, it's a disaster recovery plan for when programming fails.
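By contrast, the pear-versus-apple case can only be approximated statistically. Here is a deliberately tiny sketch of that idea using a 1-nearest-neighbour rule on two features; the feature names (roundness, elongation), the values and the labels are all invented for illustration, and real systems learn from millions of images rather than six points.

```python
import math

# Invented training data: each fruit is reduced to two hand-picked
# features, (roundness, elongation), with a human-supplied label.
training_set = [
    ((0.92, 0.10), "apple"),
    ((0.88, 0.15), "apple"),
    ((0.95, 0.08), "apple"),
    ((0.70, 0.45), "pear"),
    ((0.65, 0.50), "pear"),
    ((0.72, 0.40), "pear"),
]

def classify(features):
    """1-nearest-neighbour: label a sample like its closest training
    example. Accuracy depends entirely on the corpus, so the answer
    is a statistical estimate, never a guarantee."""
    nearest = min(training_set, key=lambda item: math.dist(features, item[0]))
    return nearest[1]

print(classify((0.90, 0.12)))  # close to the apple cluster → apple
```

A sample that falls between the two clusters will still get a label, and that label may well be wrong: that is the "probably not 100% accuracy" of the paragraph above.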


Some of the obvious added-value usages of AI (I did not say GenAI):

  • Natural Language Processing, to better understand people's input of information. Granted.
  • Health, with complex diagnoses, sure (even if some doctors get influenced by AI and trade their expert judgement for a statistical estimate, which is all AI produces in the end).
  • Research topics, as AI helps validate concepts and ideas prior to finding and confirming the final equations and rules.


But let's banish the bad sides or usages of AI (here I must say, mostly GenAI).


Like "text generation" (based on low-quality internet sources, what do we expect?), complex "programming" (which GenAI is incapable of anyway), "image generation" (aren't there enough existing images to make our point, without generating new ones that are (still) often clearly recognizable as GenAI?), "summarization" (how can something truly summarize without understanding a word of what it reads? Classical AI with linguistic capabilities does it far better while understanding the words)...


These are "promoted" to and by users because it is FUN and it is FREE (for now)... and people forget about the underlying dangers.


In a world where we NEED to be "reasoned" and "sober", how can we justify GenAI?


I mean, OpenAI, Microsoft and the others have burned so much money on it that they need to monetize it any way they can, so they advertise it in big letters.


Last week, Nvidia's CEO commented that people should not learn Computer Science and Programming. This is outrageous and totally absurd.

Unless you take it cynically from Nvidia's revenue standpoint: the more AI is used, the more silicon they sell.

But we are not that silly.


Nothing can justify GenAI's massive and indiscriminate use.


The cumulative cost of research is insane... Why didn't we train our kids instead? Oh, because we can't monetize them, right? How cynical.


The cumulative cost of running it is incredible too: at the very moment we want to emit less carbon dioxide, we build more heavy-duty data centers just to 'fool around'.


The cost to business... because of its shiny-object status, people get fired by gullible managers who truly think GenAI can do better.

There are cases where it can, of course, but they are not the majority. And the cost of this social declassification of employees will be heavy for States.


The cost to education... as it removes so many burdens of daily life, like homework for kids, it will intellectually sedentarize the youngest: just as digital life makes them fat, GenAI will make them sheep.


The cost to society... with the drop in information quality, fake news, deep fakes and all the possible manipulations, some will be driven en masse to the extremes and the rest will desert social and political life... opening the door to more extremism.


The cost to Evolution... That is probably the direst. Life evolves by small steps in all possible directions, by attempts at new things. With a global "dumbization" of the world, the loss of the ability to express oneself with detail and nuance, the loss of creativity, of analysis, of debate, of common sense, we are doomed first to plateau in Evolution (granted we are that evolved) and then to decline.


GenAI can NOT invent. GenAI can at best chew up and spit.


No GenAI can see the world and make us think about our condition, like Kafka or Mark Twain did.

No GenAI can discover E=mc².

No GenAI can paint a first Mona Lisa.

No GenAI can compose like Mozart and soothe, even in operating rooms.


No, GenAI can't do anything but be an eloquent statistical parrot...

... with an astronomical carbon footprint.

So instead, why don't we move away from almighty GenAI towards "reasoned" AI?

Thierry Caminel

AI and Decarbonization CTO

8 months

Hi Nicolas, you say that "life is evolving by small steps in all possible directions, by attempts on new things", but that's actually how algorithms evolve. I wrote on that a few years ago in a paper published by Atos SC: https://medium.com/algorithms-darwinism-and-ai/darwinism-in-the-information-space-4604045c2dad It was before GenAI, but still quite valid IMO, and might partly answer your question.
