The Risks of Artificial Intelligence (AI)

Whether we like it or not, whether we fear it or welcome it, Artificial Intelligence (AI) is coming. There are different definitions of AI, and according to some of them, AI is already here. That claim is mostly a marketing trick, used above all by... the manufacturers of such systems. Hardly surprising; I would probably do the same. Current solutions are still quite far from genuine AI, and the name is used well ahead of its time. It's like saying that the era of computer technology was already upon us in the 1950s. Nonetheless, the snowball is rolling mercilessly. Regardless of our fascination with the phenomenon and the unimaginable magnitude of the changes, it is necessary to consider the risks of the process, because there always are some. It's not about predicting disaster; it's about taking a factual approach. I can see three areas here.

The first one is the mythology of threats: all those "warnings" inspired by second-rate sci-fi literature (more fi than sci...), Terminator-style movies and the magic of Skynet (visually excellent and very enjoyable, but still entertainment only). The scare story of a network suddenly awakening into awareness, recognising man as obsolete and worth immediate elimination, is simply nonsense. The process by which any kind of consciousness arises (including our own) remains an absolute scientific mystery. We are much closer to understanding many processes in our brain than we were a hundred years ago, which allows us to consign a fair share of the popular theories of Freud and Jung to the warehouse of most unusual garbage. But the answer to "How does consciousness occur?" remains unknown. Therefore, speculating about "what would happen if..." on the basis of a complete lack of data is about as valuable as a fortune-cookie prophecy. Besides, we can always count on a certain Larry the technician to come and pull the plug by accident, cutting Skynet off from electricity. Come again? Skynet will install itself in some other part of the network that still has power? Oh, so now we are assuming that this consciousness will appear in full bloom suddenly, immediately and everywhere? Or maybe only in part of the system, from which it will roam freely across the whole network? So now it is something material and mobile? And so on, and so forth. Such considerations are driven mainly by a naive anthropomorphism of AI systems: the assumption that they must "naturally" be ruled by a desire for power, for spreading their own DNA, for hoarding key resources, and so on. These are mystical considerations, at this stage at least.

Let me refer you to Stanislaw Lem's timeless work Golem XIV. Don't be fooled: this is not a novel, but an intellectual reflection of the highest quality. And it comes from the times when we were only just beginning to dream about AI.

The second is defining and preventing the conscious use of AI systems for criminal purposes. People will always use everything this way too; we are predators by nature, after all, and that hasn't changed since the beginning of time. The creators, legislators and security services deal with this by definition during the systems' creation and implementation. Or at least, they should. It's a bit like creating traffic law when the first cars were made: it has to be created, and crimes will exist anyway, just like car accidents, which, by the way, will become far less frequent once cars are no longer operated by people. The highly noticeable fact that technology develops much faster than legislators and prevention services can react has been a problem for some time. "Much faster" is an understatement, I'm afraid. Lawmakers have already been baffled and confused by mere scooters on city streets, so what about complex AI systems in business or on offer to the ordinary citizen? At least a scooter can be touched and ultimately immobilised by traffic wardens, put on a trailer and removed from the streets. It won't be that easy with software and (ro)bots.

The third area goes beyond fairy tales and crime prevention. It is about social consequences. Cars, too, revolutionised our lives in ways that no traffic law regulates. The same will happen with AI. Among many, the following three effects are worth considering:

- Exponentially progressing remoteness of man from man in favour of contact with (ro)bots. Today's distrust of contact with a (ro)bot will disappear as quickly as it did with cars, and then the same will happen as it did then: we will fall in love with the comfort and the possibilities. What does that mean? Contact with people comes with a significant level of uncertainty: whether and how we will be assessed, accepted, interpreted, classified in the herd, understood; whether and how we can express and realise our own desires without the risk of being ridiculed and rejected. These are our everyday fears, even among the people closest to us, like family. They generate stress, continuous and lifelong. The possibility of contact with "someone" who does not burden us with such stress and uncertainty will be invaluable. On top of that, this "someone" is available 24 hours a day and always puts us in the spotlight. It will start with contact with software (like Alexa or Siri). Over time, it can be fitted into a human shape, and that is how we get an "ideal" remedy for our nightmares (just look at the pace of progress at Boston Dynamics and elsewhere). And you can still exchange it, or upgrade and customise the software, whenever you feel like it. There is a powerful drive at work here: the erotic industry and our genetic pursuit of pleasure. You can already see it today, despite the huge imperfections of the robots currently offering such services. And this, in effect, will distance us even further from the need and willingness to contact people and to develop the necessary skills. We already know what this leads to in the long run. Studies were carried out on rats implanted with electrodes and given one lever releasing a feeling of pleasure in the brain and another delivering food. The animals tended to die of exhaustion, pressing only the pleasure lever.
We may tell ourselves that we are wiser than rats and that "surely" we will switch rationally and moderately between pleasure, work, food, sport, and so on. Just look around at how "surely and wisely" we do it today, still without the unlimited possibilities of AI, and you will start to doubt it.

- We will become intellectually weaker en masse. So far, technology has supported us in making independent decisions. AI can take these decisions over from us entirely, which is a completely different situation. It will be convenient, so we can afford to be lazy and stupid, despite ever-growing access to enormous knowledge and possibilities. That access lulls us and destroys the habit of accumulating knowledge. Why accumulate it when we have instant access? Except that without accumulation, our access is limited to shallow knowledge, barely the surface. What does it look like today? I am afraid that our willingness and ability to focus on one topic and explore it in the whole diversity of its knowledge are shrinking from generation to generation (statistically). We don't like reading a few pages of text, let alone a whole book or several books on the same topic. Readership is declining on an unprecedented scale. We prefer to type something into a search engine and get a quick and easy explanation in a few sentences, just so we can immediately run on and turn our attention to the next issue, explored equally briefly. There is more and more knowledge overall, and it grows exponentially. Access to knowledge is also becoming easier, as it is almost immediately available to everyone on the web (scientists are racing too, so they share instantly). However, our willingness to explore is not increasing; if anything, it is decreasing. An ever-narrower elite will embrace it, and the rest will be satisfied with the cage of a modern version of "bread and circuses" (see the previous paragraph, and the rats).

- A shrinking zone of freedom. I mean the combined effects of the first two points. AI will analyse our choices and serve up similar options, and we will merely approve them. It will close us in a bubble of "the similar", and we will not even know it. We will lock ourselves in a prison cell, albeit a pleasant one. And let's not even mention the opportunities this gives to governments. It is something like YouTube and many other online portals and stores: we make a choice, and a helpful algorithm suggests similar music, films, interviews, whatever. This is obviously very convenient, and I use it myself when searching for, say, music on YT. On the other hand, I systematically try diving into completely different areas, precisely to avoid being locked in the circle of "the similar", which would cut me off from other inspirations and, finally, from the need to think and make my own choices (let alone the question of what "free choice" even is, as it, too, seems to be governed by algorithms).

Where did Yuval Noah Harari fail?

I wrote the first part of this article over two months ago. However, I refrained from posting it due to an arrangement with the Personel Plus monthly: we agreed that a short version would debut in the May 2019 issue.

Having already written and submitted the article to the editorial office, I came across the book Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari, which I read with great interest. It is not only well written; it also considers what social effects the development of AI may bring. I found it a good read, as the author argues logically, bases his claims on data, and clearly separates what has been researched from what is merely a hypothesis: a very good example of the craft of scientific thinking. I read it with all the more pleasure because the author generally reaches conclusions very convergent with mine (so my ego has grown). His scenarios may be a little too bleak and dramatic, but that is his journalistic right. Nonetheless, I strongly recommend reading this book.

So where is the catch? There is one... in the last chapters of the book. Reading them, my eyes grew wider and wider, and in the end I think the author's reasoning failed there, which I believe is worth addressing.

Yuval Harari presents "everything is an algorithm, including man and his behaviour" as the latest tendency in science. This already comes as a slight surprise, because it is quite an "old" idea; it dates back to the beginnings of cybernetics, which was supposed to explain all processes, including man and his behaviour. It later became apparent that things would not be so fast and simple, but the concept itself has a long and well-documented history. Thanks to technological progress and a deeper understanding of the processes happening in our brain, we are returning to the question of whether, and according to which algorithms, the human brain works, which is essentially how we work. How free is free will? To what extent are these extremely complex, yet fundamentally physicochemical, processes that can be captured and studied statistically, rather than mysticism such as "the soul, the subconscious, dream typology, archetypes, mandalas, the id" and so on? The author sees these refurbished ideas of algorithmisation, statistical research and prediction of human behaviour almost as a new scientific religion (yes, he actually says so) and, with an astonishment bordering on terror, considers what the effects may be. While the rest of the book is very good, here the author drifts into mysticism; that is, he breaks his own principles. And this is what surprised me most, because again... this is nothing new, and it is quite obvious. It is the classic way of conducting research in every field. I completed my PhD as a young man in the early 1990s, defending a thesis built on almost exactly what Yuval Harari describes as... a dangerous "scientific religion". The only difference is the subject of research. And that is probably what caused the author's emotions and confusion; this is no religion, just the standard methodology of scientific research and object modelling. Let me explain what it is about.

The study of real objects (cars, ships, weather, volcanoes, planets, etc.) is expensive, complex and time-consuming. Of course, reality has to be studied, but if that were the only method, we would have a problem. Imagine we are constructing vehicles (this is my own case...). Building hundreds of real prototypes would bankrupt any manufacturer, and it would take decades to get a new model into production. So models are created instead, either physical or mathematical. The latter are particularly useful, because changing parameters and examining thousands of possible variants happens on a computer, which is fast and clean. The sequence is as follows: first, the behaviour of the real object is examined; then a mathematical model is created that is supposed to reproduce those results. After tedious and thorough checking that the model is sufficient (i.e. that it behaves in the same way as the real object), the model itself is examined under new conditions and parameters. By studying the model, we learn new things about the real object. It is cheaper, faster, and offers thousands of possible combinations. Today everything is researched and designed this way: not just devices, but also volcanoes, planets, weather and so on. This means that, thanks to model research, we can make hypotheses about the behaviour of the tested object under other conditions. Whether and when the volcano will erupt, whether the vehicle will be stable on bends, whether it will rain... So far, everything is simple and rather obvious, right?
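As a toy illustration of this sequence (examine the real object, build a model from the measurements, validate it, and only then use it to explore untested conditions), here is a minimal sketch. The "measurements", the linear form of the model, and the tolerance are all invented purely for illustration; real engineering models are, of course, far richer:

```python
# A minimal sketch of the model-based research loop: measure the real
# object, fit a mathematical model, validate it, then use the model to
# predict behaviour under new conditions. All numbers are invented.

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b (closed-form, no libraries)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# 1. Examine the real object: (speed in km/h, measured distance in m).
measurements = [(20, 6.1), (40, 11.9), (60, 18.2), (80, 23.8)]

# 2. Build the model from those measurements.
speeds = [m[0] for m in measurements]
dists = [m[1] for m in measurements]
a, b = fit_linear(speeds, dists)

# 3. Validate: a held-out measurement should match the model closely.
held_out_speed, held_out_dist = 50, 15.0
predicted = a * held_out_speed + b
assert abs(predicted - held_out_dist) < 1.0  # is the model sufficient?

# 4. Only now: use the model to explore an untested condition cheaply,
#    instead of building and crashing another prototype.
print(a * 100 + b)  # predicted distance at 100 km/h
```

The point is the order of the steps: the model earns the right to be questioned about new conditions only after it has reproduced the real object's behaviour.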

So what caused such confusion in the final chapters of Homo Deus? Only one thing: the modelled object changed from a car, a washing machine, a volcano or the weather to... a human. We easily accept that mathematical models accurately represent the operation of machines, but we are surprised, nay, shocked, that the same applies to us; that the same procedure covers us too. It means collecting data such as our behaviour on the Internet (hitting the "like" button), building a mathematical model that represents it, and... using the model to predict how the object (that is, us) will behave when making its next choice. Let me repeat: there is nothing new here. Plan the experiment and collect data about the object; build the model on that basis and verify it; then test the model to predict the object's behaviour under other conditions. At the end, of course, comes the experimental verification of those predictions. If the verification is positive, that is, the predictions match, we have a good model and can refine it to take further factors into account. As Yuval Harari himself writes earlier in the book, for the model created by Facebook it is enough to analyse about 300 of our "likes" to predict our further behaviour more accurately than our closest family members can. In other words, the author had all the dots but failed to connect them in his summary. He drifted into mysticism and the thesis of a scientific religion, which was completely unnecessary. He probably hasn't read Golem XIV by Stanislaw Lem...
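The same procedure applied to a human "object" can be sketched in equally toy fashion. Everything below is hypothetical: the user, the "like" categories, and the frequency model are invented for illustration, and a real system such as Facebook's would of course use vastly richer data and models. The shape of the loop, though, is the same: collect behaviour, build a model on part of it, verify the predictions on the rest:

```python
# A toy sketch of "collect behavioural data, build a statistical model,
# predict the next choice". All data here is invented.
from collections import Counter

# 1. Observed behaviour: categories of pages a fictional user "liked".
likes = ["music", "music", "cats", "music", "politics",
         "music", "cats", "music", "music", "cats"]

# 2. Build the model on the first part of the data only.
train, test = likes[:7], likes[7:]
model = Counter(train)  # frequency model of the user's preferences

def predict(model):
    # The model's prediction: the user's most frequent past choice.
    return model.most_common(1)[0][0]

# 3. Verify the model against behaviour it has not seen.
hits = sum(1 for actual in test if predict(model) == actual)
print(f"prediction: {predict(model)}, accuracy on new data: {hits}/{len(test)}")
```

Even this crude frequency count is already "better than chance"; with 300 real signals instead of 10 invented ones, and a serious model instead of a counter, the accuracy Harari cites stops being mysterious.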

Let me repeat one of the key conclusions coming from the latest research:

it seems that our free will is subject to statistical processes and, accordingly, to statistical analysis

And this is probably the biggest problem, one that many of us cannot accept intuitively. Does Facebook know better than I do what I will do next?! First: no awakened AI/Skynet consciousness is needed to analyse us and predict our behaviour (statistically). Second: we have been debating so pleasantly for thousands of years what free will is, without ever reaching agreed conclusions. It is so nice to philosophise, and... now we've had it. Engineers and scientists set the pace, not philosophers.

PS With even greater satisfaction, let me recall a simple example of the statistical analysis of my own behaviour, which I described more than a year ago in an article (LINK) and in my book. I read the menu in a restaurant and contemplate the freedom of choosing among dozens of dishes. And finally, in 8 cases out of 10, I choose... dumplings. You don't even need scientists from FB, Google or Amazon for that...


Arek Druzdzel

International Sales Director at B-M Ltd.

5y

A very good, well-written article I just read. As the author rightly states, to date mankind is unable to define what human ‘consciousness’ is. The same applies to all the higher functions of our brains, including intelligence, empathy, sensitivity, creativity and adaptability... At the same time, governments and business use science as their tool to create the artificial consciousness, artificial intelligence, artificial empathy, etc. required in ‘intelligent’ machines. Of course, the winner takes the market! But does anybody know how to encode empathy or sensitivity in Bayes equations? How about integrity? I believe that, not knowing what we are doing, we have little chance of success in creating a human-like intelligent machine. Nevertheless, we anxiously push to make this dream of mankind come true; ‘market demand’ pushes companies to release AI-based products even with some basic functionalities barely sketched or optimised in their code. Will ‘machine learning’ allow these handicapped machines to improve to our benefit? It hardly matters, because worldwide ‘business’ will eagerly commercialise any potential... Still, in the long run, we may succeed in our striving to create ‘Golems’ (rightly mentioned in the article), provided there is more science and less business/politics in these developments...

I know that commenting on something posted 4 months ago might look weird, especially nowadays, but what can I do... I have not read the whole article because Harari's book is already on my shelf, waiting in the queue. Referring to what I have read above, I would like to fuel some optimism: I do not think an AI agent could replace the social interactions we all need. Technology changes the way we communicate, but using Facebook or LinkedIn we still interact with humans. I do not believe that social acceptance could be replaced and delivered by any machine. The more humanlike a robot is, the more weirdness it evokes: the well-documented uncanny valley effect. A couple of weeks ago I had a chat with a tourism researcher who said that a few years ago people in the industry were afraid that people in general would meet less due to Internet technologies (Skype, LinkedIn, etc.), but the frequency of meeting has actually risen. So, referring to what you have written, the contrary may happen: machine learning and other technologies will help us with everyday, routine tasks, leaving more time for human interactions. However, as far as social effects are concerned, job displacement and "technological unemployment" will probably be the biggest challenge of AI.
