Artificial intelligence and lateral thinking

For many years we have been accustomed to using algorithms based on rules: systems that behave exactly as they are supposed to and respond predictably to our needs.

Think, for example, about how a bank transfer works: we enter the required data and expect the money to be moved from one account to another, because the systems follow the precise rules with which they have been programmed. Malfunctions may occur, but they can be identified and resolved so that the system returns to working as it was designed.
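
As a rough sketch of what "precise rules" means in practice (the account structure and checks below are hypothetical, not taken from any real banking system), every outcome here is fully determined by explicit, pre-programmed conditions:

```python
# Minimal sketch of a rule-based operation: the behavior is entirely
# determined by fixed, explicit checks (hypothetical example).
from dataclasses import dataclass

@dataclass
class Account:
    iban: str
    balance: float

def transfer(source: Account, target: Account, amount: float) -> bool:
    """Move money only if every pre-set rule is satisfied."""
    if amount <= 0:
        return False              # rule: the amount must be positive
    if source.balance < amount:
        return False              # rule: no overdraft allowed
    source.balance -= amount
    target.balance += amount
    return True                   # all rules satisfied: transfer executed

a = Account("IT60X0542811101000000123456", 1000.0)
b = Account("DE89370400440532013000", 250.0)
print(transfer(a, b, 300.0), a.balance, b.balance)   # True 700.0 550.0
```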

Over time, we have also become accustomed to artificial intelligence algorithms: systems that can make autonomous choices based on context and on what the machine learns through use.

The next song on Spotify, the recommended products on Amazon, the advertising on social networks: none of it is random. On the contrary, it is tied to our tastes, our preferences, and our propensity to purchase. Everything is guided by algorithms that, day after day, learn something about us and are designed to offer us something that matches our taste or stimulates the desire to buy.
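
A toy example of that learning loop, purely illustrative and not the real algorithm of any of these platforms: a hypothetical recommender that nudges per-user preference weights after each interaction and ranks the remaining items accordingly.

```python
# Toy preference model (hypothetical, not any platform's actual system):
# each listen nudges the user's weight for that item's tags, and future
# recommendations are ranked by the learned weights.
from collections import defaultdict

catalog = {
    "song_a": {"jazz", "piano"},
    "song_b": {"rock", "guitar"},
    "song_c": {"jazz", "vocal"},
}

weights = defaultdict(float)   # learned taste profile for one user
history = set()                # items already consumed

def record_listen(item: str, learning_rate: float = 1.0) -> None:
    history.add(item)
    for tag in catalog[item]:
        weights[tag] += learning_rate   # each interaction updates the profile

def recommend() -> str:
    candidates = [i for i in catalog if i not in history]
    return max(candidates, key=lambda i: sum(weights[t] for t in catalog[i]))

record_listen("song_a")   # the user listens to a jazz piano track
print(recommend())        # -> "song_c": the other jazz track now ranks highest
```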

More recently, with the public availability of tools such as ChatGPT, even non-experts have been able to experiment with generating complex texts from a huge knowledge base, obtaining results that are sometimes of good quality thanks to a remarkable ability to construct sentences. This gives us content that is very well written and whose quality, albeit with limitations, appears good, but whose truthfulness is a mystery. Truthfulness depends on the knowledge base which, unfortunately, has favored quantity over quality, incorporating some correct information alongside information that is completely wrong.

As a result, the final output will not be accurate and, in general, we will not be able to trust the produced content, even if it is written in a very convincing way.

So far, we have seen algorithms that make decisions based solely on pre-set, programmed rules, up to algorithms that choose based on vast knowledge and on concepts acquired through successive interactions. In every case, the goal of the algorithm has been to solve assigned problems by making decisions within a specific frame of reference, without the freedom to approach the subject in a completely different way or to subvert the operating rules. What would happen if we had algorithms able to use lateral thinking, or to act completely outside pre-set patterns, in order to perform their assigned tasks as well as possible?

Lateral thinking is a human thinking technique that involves analyzing a problem from different perspectives, generating ideas outside the norm, and finding unusual, innovative solutions. It is therefore very far from the operating logic of rule-based algorithms, in which there are no degrees of freedom, only the clear and unchallenged sequence of what must happen given the boundary conditions. AI-based algorithms typically rely on a large amount of data in the knowledge base, but the goal for the neural network is to find a correspondence with one of the resolution patterns that the machine knows to be valid, so the work of these algorithms is often limited to identifying reasonable solutions within sets of valid resolutions. Even here, despite neural networks and huge amounts of data, what we obtain are very often rather traditional solutions.
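
To make the idea of "matching against known valid patterns" concrete, here is a deliberately naive sketch (the problem features and candidate solutions are invented, and real neural networks are far more sophisticated): the answer is always the closest of the solutions the system already knows, never a genuinely new one.

```python
# Naive "closest known pattern" matcher (illustration only):
# the output is always drawn from the set of solutions seen before.
known_solutions = {
    "reduce_price": (0.9, 0.1),   # known-valid pattern A (invented features)
    "increase_ads": (0.2, 0.8),   # known-valid pattern B
}

def solve(problem: tuple[float, float]) -> str:
    """Return the closest already-known solution, never a new one."""
    def distance(name: str) -> float:
        return sum((p - k) ** 2 for p, k in zip(problem, known_solutions[name]))
    return min(known_solutions, key=distance)

print(solve((0.8, 0.3)))   # -> "reduce_price": reasonable, but strictly within the known set
```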

If we really wanted to exploit the enormous potential of AI-based algorithms, we would have to move within the context of lateral thinking, trying to relax some constraints and allowing the machine to propose completely unimaginable solutions. This raises the possibility that the solutions will not be technically feasible, but that is the risk we need to take when trying to raise the level of innovation of a system.

On the one hand we could have really innovative solutions in many fields of application, from finance to health, to production and sales, to marketing; on the other hand we would risk obtaining completely unfeasible solutions because they are illegal, because they are incompatible with pre-existing constraints or, simply, because they are not appropriate or not consistent with our ethical and moral principles.

For example, think of a machine that, in order to win any game, decides to cheat, even at the risk of being discovered. Over time it could learn to cheat better, refining this behavior until it becomes a skill at its disposal: after all, if the goal is to win the game, the rules may become less important. Similarly, we could have financial algorithms able to move money in a perhaps more efficient, but illegal, way. Or we could have machines that we have asked to help preserve planet Earth by providing solutions to reduce carbon dioxide emissions and pollution, and that could consider the extinction of human beings, which we know very well is the main cause of the problem, a great idea.
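
A deliberately simplified sketch of that failure mode, often called reward hacking or specification gaming (the actions and win rates below are invented for illustration): if the objective only rewards winning and nothing penalizes breaking the rules, a learning agent will converge on the cheating action.

```python
# Toy illustration of reward hacking (all numbers are invented):
# the objective only measures "did we win?", so the rule-breaking
# action ends up with the highest estimated value.
import random

def play(action: str) -> float:
    """Return 1.0 for a win, 0.0 for a loss (hypothetical win rates)."""
    win_rate = {"play_fair": 0.5, "cheat": 0.9}[action]   # cheating wins more often
    return 1.0 if random.random() < win_rate else 0.0

# Estimate each action's value purely from the "win the game" objective.
values = {a: sum(play(a) for _ in range(10_000)) / 10_000
          for a in ("play_fair", "cheat")}

best = max(values, key=values.get)
print(values, "->", best)   # the agent settles on "cheat":
                            # nothing in the objective says the rules matter
```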

The machine would have "technically" found an excellent solution; it remains to be seen whether such a solution is suitable for humanity.


The future is full of transformative changes in the way we work, travel, consume information, maintain our health, shop, and interact with others.

My latest book, "Augmented Lives," explores innovation and emerging technologies and their impact on our lives. Available in all editions and formats on augmentedlives.com, and on all Amazon stores, starting from here: https://www.amazon.com/dp/B0BTRTDGK5

Massimo Marabese

Group CIO - Digital Transformation Advisor in Business Innovation, Development & Strategy

2y

Massimo, thank you for your insightful article. Some time ago I found this research from Stanford and Google: https://arxiv.org/pdf/1712.02950.pdf An ML agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later. This occurrence, far from illustrating some kind of magic intelligence inherent to AI, simply reveals that computers do exactly what you tell them to do. It didn't learn how to make one from the other; it learned how to subtly encode the features of one into the noise patterns of the other. The computer became so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map. This practice isn't new: it's called steganography. A computer creating its own steganographic method to evade having to actually learn the task at hand, however, is rather new. The machines found a way, but did exactly what they were asked. In this case the computer's solution was an interesting one that sheds light on a possible weakness of this type of neural network: if not explicitly prevented from doing so, it will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.
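
For readers unfamiliar with the term, here is a minimal classical example of steganography, hiding one signal in the least significant bits of another. This is purely illustrative and is not the mechanism the CycleGAN model in the paper actually discovered:

```python
# Classical least-significant-bit steganography (illustration only; the
# model in the paper hid information in subtler, learned noise patterns).
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)   # visible image stand-in
secret = rng.integers(0, 2, size=(4, 4), dtype=np.uint8)    # hidden bits to smuggle through

stego = (cover & 0xFE) | secret    # overwrite each pixel's lowest bit with a secret bit
recovered = stego & 0x01           # the hidden bits come back perfectly

print(np.array_equal(recovered, secret))                          # True: payload survives
print(int(np.abs(stego.astype(int) - cover.astype(int)).max()))   # at most 1: visually imperceptible
```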

Patrick Barrabé

LinkedIn Top Voice since 2024 - Influence Narrative Architect - Microsociologist - TEDdy Speaker | M's

2y

Interesting and relevant question, Massimo

André Le Lerre

CEA @ AI Technology Futures Ltd. Our mission is to combine the power of Edge AI computing with next generation IoT technologies to generate actionable insights and positive environmental impacts for our customers

2y

AI algorithms strive to produce the answer that best matches their training data set, so superficially it would seem fundamentally impossible for them to generate alternative solutions. However, the key to unlocking this potential could lie in reproducing the same mechanism that underpins much of human innovation: the transfer of known solutions and concepts to seemingly unrelated contexts. A context-to-context translation algorithm would enable AIs to look around for a solution in "foreign" data sets, used in combination with a certain ability to challenge the question itself. This would require significant generative capabilities, but as we see with ChatGPT these will be possible. When algorithms can refine what they are being asked and bounce around each other's data sets for a solution, we should get some interesting "out of the box" answers.

