Romanticism and speculations on Artificial Intelligence

Artificial Intelligence is nowadays on everyone's lips and on the R&D page of every company and government. Would you like an intelligent agent at your side? Get "Alexa" …

But are you sure you can count on real intelligence?

First of all, we need to pin down the (luckily unregistered) concept of intelligence.

Wikipedia says: Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximise its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".

It is worth noting that this definition gives no definition of intelligence at all, limiting itself to differentiating between an artificial kind (produced by machines following algorithms) and a natural one (produced by humans or animals). The honest part of this definition is the term "mimic": indeed, as I had to learn, AI (Artificial Intelligence) reflects some mechanically (algorithmically) obtained mimicry of some naturally intelligent problem-solving behaviour.

Now that we "know" what we can expect from an "AI agent" (a machine imitating intelligent behaviour), we know that we are always dealing with an imitation. Historically this might have been achieved by ingenious tricks like https://en.wikipedia.org/wiki/The_Turk to simulate chess play against a human being; today we have algorithms like https://www.nature.com/news/google-ai-algorithm-masters-ancient-game-of-go-1.19234 which have outplayed humans at the game itself.

Reflecting the steps of evolution, what today is called "intelligent" will soon be called "basic" … and will be expected as a basic function in the near future.

Do we hence have already AI in action? Do we need to fear it?

"AI in action" is a big claim. Universities and big companies – some of which earn a significant part of their revenue via automatic profiling of free chats or search terms – do have the infrastructure, money and staff to address and partially solve non-trivial problems such as game playing, physical balance, or even protein analysis, as we can already see in the literature.

Yes, we have AI in action right today, but what is actually in action? Some algorithms solving specific, useful supervised tasks (thanks!) such as fault detection, pattern recognition (e.g. voice or vision), simulation, and diagnosis.

All these "AI agents" are sold as AI but fulfil their tasks without "knowing" at all what they do. They are nonetheless very useful gadgets which can de facto improve and support our everyday life.

FEAR: NOPE – you should rather fear humans adopting artificial-intelligence methods than artificial intelligence per se. One piece of good news concerning fear is that (at the time of writing – December 2020) there is no real self-aware artificial intelligence. Self-awareness is connected with the possibility of consciously experiencing a state of emotion (qualia). Self-awareness is what is totally missing in today's artificial-intelligence gadgets – despite the romanticism of some authors, e.g. R.V. Yampolskiy in Detecting Qualia in Natural and Artificial Agents.

Self-awareness and experience are hence defined (in human beings) through qualia, which are stimuli answering the "question": what does it feel like to experience something?

Without running the risk of being esoteric, I do believe that deterministic – and even non-deterministic – algorithms have no way of experiencing qualia and hence are not self-aware. In https://en.wikipedia.org/wiki/Chinese_room you find the clear difference between weak and strong AI. All that we know today – from robotics to language understanding or conversational AI – is "supervised AI" and hence "weak AI". As for strong AI, some writers even describe an "AI++" in correlation with super-consciousness, as in https://www.researchgate.net/publication/263882127_Super-intelligence_and_super-consciousness, but in my eyes all this is just (romantic) literature, or at best (very amazing) "thoughts", not implementations.

What does "supervised AI" mean?

"Supervised AI" means mainly a scenario where humans (mostly mathematicians or computer scientists) instruct a (conventional or machine learning) system to recognise ways how to solve a problem inside an a-priori defined (!) mathematical space (scenario) where metrics (tools) are used to let the machine "understand", it has reached some target (e.g. [partially] solved some problem, or some goal). One of the nowadays hyper-financed research fields is ML (Machine Learning). In ML researchers imitate the behaviour of natural neurons (depolarisation, ignition, combination) which are implemented based on some aspects of neurology through artificial neural networks, basically here resumed as a architectures of connected tensors (matrices) of float numbers organised in several layers, some of which are defined as "hidden" and others not. An ANN (artificial neural network) is basically a linear regression tool which implements a linear (polynomial) function. If it has one or several hidden layers inside it is called "deep". Several architectures of ANN are differently capable of recognising / translating patterns. So the basic work of an ML data scientist (in order to create an ML based problem solver for a specific scenario) is to

1) Build a scenario representation (map ontological knowledge into float numbers)

2) Define / extract a training set (feature vectors) and a validation set

3) Train the ANN over several epochs, optimising its response (build the feature matrix)

4) Use the ANN to answer problems from that scenario which are "similar" to the ones given in 2).
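The four steps above can be sketched end to end with a single artificial neuron. Everything here is a made-up toy example (a tiny logical-OR dataset and invented hyper-parameters), just to make the workflow concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: encode the scenario as float feature vectors (toy logical-OR data)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 1.])

# Step 2: split into a training set and a validation set
X_train, y_train = X[:3], y[:3]
X_val, y_val = X[3:], y[3:]

# Step 3: train a single artificial neuron over several epochs
w = rng.normal(size=2)
b = 0.0
lr = 0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for epoch in range(2000):
    p = sigmoid(X_train @ w + b)          # forward pass: compute activations
    grad_w = X_train.T @ (p - y_train)    # backward pass: gradient w.r.t. weights
    grad_b = np.sum(p - y_train)
    w -= lr * grad_w                      # adjust the weights
    b -= lr * grad_b

# Step 4: "predict" on a new-but-similar input
p_val = sigmoid(X_val @ w + b)
print(p_val)  # activation close to 1 for the OR of (1, 1)
```

The "prediction" in step 4 is exactly the kind of activation discussed later in this article: a number between 0 and 1 read as a class probability.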

Training the ANN means that over several "epochs" the ANN is given the same training set again and again: at each step a so-called forward-propagation function is computed, then a backward-propagation function (the derivative of the former) is computed to adjust the weights inside the tensors of the (hidden layers of the) ANN. And that's all, folks.
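The epoch loop just described can be sketched in a few lines of numpy. The network shape, learning rate and epoch count below are invented toy values; the point is only to show the forward / backward alternation:

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy XOR training set, already encoded as floats
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer of 8 units ("deep" once hidden layers are present)
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

losses = []
for epoch in range(3000):              # same training set, again and again
    # forward propagation: layer-by-layer activations
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))
    # backward propagation: derivatives of the loss w.r.t. each weight tensor
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(losses[0], losses[-1])  # the loss shrinks over the epochs
```

Nothing more mysterious happens inside a "learning" network: matrix products forward, derivatives backward, weights nudged a little each epoch.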

Is this ML? Is this AI?

Yes, this is basically (supervised) Machine Learning – taking some examples from past scenario(s), coding them into so-called feature vectors, then training the ANN. A well-trained ANN – sounds "dangerous" – what can it do after a good training? A (well-)trained ANN (e.g. one used for classification purposes) simply takes a "new" feature vector as input and outputs some so-called activations, which usually express the probability that the input feature vector belongs to this or that class, according to the trained values in the learned feature matrix. In a quite "funny" way, the whole ML community calls this (re)action "prediction".

N.B. What they call "prediction" is de facto an approximation, a kind of interpolation ("computed" via the derivatives summed in the previous training session), which says that, within a certain probability, the input vector could belong / be mapped to some predefined class(es), or lie between two classes or near other ones.
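A minimal sketch of such a "prediction": with hypothetical, made-up "trained" weights, the classifier merely multiplies matrices and normalises the result into class probabilities (softmax activations) – there is no oracle here:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax: turns raw scores into probabilities
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical "trained" weights for a 3-class classifier (invented numbers)
W = np.array([[ 2.0, -1.0, 0.5],
              [-0.5,  1.5, 0.0]])
b = np.array([0.1, -0.2, 0.0])

x = np.array([1.0, 0.2])            # a "new" feature vector
activations = softmax(x @ W + b)    # the so-called "prediction"

print(activations)          # three probabilities summing to 1
print(activations.argmax()) # the class the input is "mapped" to
```

The output is nothing but a set of activations; calling their argmax a "prediction" is exactly the linguistic romanticism the article is about.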

I always dreamed of an oracle which could "predict" useful things – why can't this be a prediction? Is it useful at all?

Yes, very useful. Provided all the previous work of the mathematician / computer scientist / data scientist was well done, an appropriate "gödelisation" of the problem into a feature vector of floats was obtained, and the ANN was well trained, it will be very fast and very good at outputting an "activation" (i.e. giving an answer) in roughly constant time! The latter is a very important fact in theoretical computer science, and a great advantage over conventional, e.g. logic-based, systems, which search, are hence time-consuming, and might not even terminate (give an answer).

An ANN always terminates, and very quickly! So an ANN is indeed a very useful intelligence prosthesis (once you have done all that "minimal" gödelisation pre-work!).
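Why inference always terminates is easy to see: once trained, a network is just a fixed sequence of matrix products, so every input costs the same number of floating-point operations. A tiny sketch with made-up weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "trained" weights: two fixed layers of a small network
W1 = rng.normal(size=(100, 50))
W2 = rng.normal(size=(50, 10))

def infer(x):
    # Exactly two matrix products and one elementwise max, for every
    # possible input: the operation count never depends on the input,
    # so there is no "search" that could fail to terminate
    return np.maximum(x @ W1, 0.0) @ W2

easy = rng.normal(size=100)
out = infer(easy)
print(out.shape)  # one activation per output class
```

Contrast this with a logic-based solver, whose search space (and running time) grows with the problem instance.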

Why can the answer of an ANN not be a prediction in the oracle sense? Simply because:

1) The scenario (and its conditions) are already "old" (e.g. taken from data in the past)

2) The work of the ANN is merely to approximate/interpolate (guess) a value of a polynomial given an input

Naming that "prediction" is a very romantic way to describe an approximation – and indeed "prediction" sounds much nicer if you want to sell and get financing for an AI "prediction system" rather than a mere "interpolation system" – the linguistic impact is huge, isn't it? Following the actual neuronal literature, you can at most speak of an "activation", a kind of "reflex" in answer to a stimulus (applying the feature vector as input to the ANN).

I have heard of "unsupervised ML" – could this kind of ML be a way to get real predictions without training?

There is an interesting, currently highly funded, theoretical research sector studying reinforcement learning – this indeed seems to lead toward unsupervised AI – but at the moment no real practical implementations (just small games) exist.

ML-based AI sounds nature-friendly – what resources does a good AI system need?

The theory of computational complexity cannot be fooled, even by an intelligent system – maybe in the sense of https://www.sciencedirect.com/topics/social-sciences/concept-of-intelligence . Where conventional AI systems (e.g. logic- or rule-based systems) spawn a (more or less large) search space consuming both memory and computational resources (e.g. also time), an ANN, once trained, is quite lean and fast and does not waste that amount of time-space resources. Where is the catch? Training is the catch. In order to train average ANNs, new (electricity- and time-consuming) supporting gadgets like TPUs were created: although access to TPUs is still publicly granted today, we will see how financial processes shape their availability to a public audience in the near future. Using a TPU can help train a big ANN for later (fast) use. Most ANN trainings today are carried out on CPUs or at most on GPUs. On the representation side, big and nice efforts have produced powerful frameworks like keras, which let you quickly compose, train and use ANNs with e.g. python.

My summary:

AI is nowadays a very interesting field where more and more human subsystems are imitated by efficient artificial systems. Implementations show that today we have only weak, supervised AI. ML enjoys high financing motivation – while still being a promising but unreached path toward strong AI. "AI" is nowadays the magical (strong) word to easily get money out of technically unaware investors … and at the same time an exciting research field!

And yours ?

Yours, Fabio Ricci from semweb

PS: Due to LinkedIn's way of linking reshares – in case you got here via a LinkedIn interest group and you liked it, would you mind considering resharing or reacting to the original article https://www.dhirubhai.net/pulse/romanticism-speculations-artificial-intelligence-fabio-ricci – thank you so much.
