New Age of AI - AI Fundamentals, Impact and Outlook
Adam D. Wisniewski
Partner at ADW Consulting and AI4Leaders | Technology Strategy & Business Alignment | AI | DLT | Rational Visionary
Ever since OpenAI released version 3.5 of ChatGPT to the public, everyone has been talking about AI. The technology raises immense hopes for the economy and for society as a whole, but also deep fears. It promises to further unite the world and allow every single person to share in its benefits, yet it also harbours the risk of creating new divides and deepening existing ones - for example, between technologically advanced countries, social classes and companies and those that are less advanced or less tech-savvy.
Today, we can only be sure of one thing: We are at the beginning of a development that will expand into every area of life and business and that is progressing at a speed that we have never experienced in the entire history of mankind. It is, and will be, a major challenge to steer this development in such a way that we get the best out of it in the long term - and avert problems and dangers that are difficult to get under control again.
This first article - and you can look forward to more - is intended to help strengthen understanding of this development and help prepare you and your companies for this wild ride.
What Is Artificial Intelligence?
AI is nothing really new in itself. The term Artificial Intelligence was introduced as early as 1956 by John McCarthy during the Dartmouth Conference, which is seen as the starting point of AI as a field of research. It describes algorithms and models that allow computers to perform tasks that typically require human intelligence.
Alan Turing and Isaac Asimov in particular were already thinking about this topic years before the Dartmouth Conference. After all, it is an old human fantasy, and just as great a desire, to create entities that do our work for us - and confirm us as great creators.
Evolution of AI
1) The Past
The development of AI has been rocky - repeatedly driven by high hopes and repeatedly marked by disappointments. The first theoretical groundwork was laid by Alan Turing, who devised the Turing Test to decide whether a machine could think for itself - or rather, whether a machine's thinking could be distinguished from that of a human - and by Isaac Asimov, who considered how simple rules could ensure that an automaton or robot would not harm humans. Building on this groundwork, many scientists began working on AI in 1956.
This so-called Golden Age of AI was driven by great expectations and had great goals. The pinnacle was to be a computer that possessed all of humanity's knowledge and, based on this, could solve all of humanity's questions and problems.
However, it turned out that the scientists and developers were unable to find a solution even for simple problems. They could not generate usable automatic translations or recognise spoken language. In 1973, the Lighthill Report predicted that machines would always remain at the level of an experienced amateur. This dashed great hopes: funding for AI research dried up, and the so-called AI winter prevailed until around 1980.
2) The Present
Developments in areas such as computer technology and the ever-increasing amount of easily accessible data, not least due to the success of the internet, breathed new life into AI research from around 1980. By 1997 at the latest, when IBM's AI system Deep Blue defeated the then world chess champion Garry Kasparov in a highly publicised man-versus-machine match, AI had once again arrived on the world stage. Large companies such as IBM, Google and Apple soon began to invest substantially in AI research. This resulted in applications such as Siri, IBM Watson, auto-correct features, automatic song recommendations from Spotify and the Roomba robot hoover from iRobot.
This development has accelerated with the advent of ever faster computers and ever larger amounts of available data, but it has not been without disappointments. IBM, for example, impressively demonstrated Watson's natural language capabilities by winning the American quiz show Jeopardy! in 2011, but the system has since fallen short of expectations in other areas such as medicine.
The next breakthrough came with a completely new learning approach for AI algorithms: instead of deriving their decisions from the broadest possible pool of data, the algorithms receive only the basic rules of a problem and then find good solutions themselves through iterative trials, coming a little closer to the best solution with each iteration.
A major advantage of this new approach, which combines deep learning with reinforcement learning, is that it can be applied to almost any problem. Google proved just how powerful it is with its AlphaZero system in 2017: it gave the system only the rules of chess, let it train against itself, and then had it play 100 games against Stockfish 8, the leading chess engine at the time. Of these 100 games, AlphaZero won 28, drew 72 and did not lose a single one - and it needed only around 4 hours of training to achieve this feat!
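To make the principle tangible: the sketch below (plain Python, no AI libraries) reduces the idea of learning purely from the rules through iterative trials to its simplest form, tabular Q-learning on a tiny toy game. AlphaZero itself combines deep neural networks with tree search and massive self-play, so this is only an illustration of the trial-and-error principle, not of Google's actual system.

```python
# Minimal sketch of "learning from the rules alone" via iterative trials:
# tabular Q-learning on a toy 5-cell corridor. The agent only knows the rules
# (moves and rewards) and improves purely through repeated trial and error.
import random

N_STATES = 5          # cells 0..4, the goal is cell 4
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """The 'rules of the game': transition and reward."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):                    # iterative trials
    state, done = 0, False
    while not done:
        # explore occasionally, otherwise exploit current knowledge
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # each update nudges the estimate a little closer to the best policy
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# after training, the preferred move in every cell points towards the goal
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```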
Since then, deep learning systems have learned to recognise objects and faces in images, to create their own images - and more recently videos - from a simple description, to predict how proteins fold, to detect cancer from X-ray images, to write advertising copy and even entire articles and books, to write computer code and to compose music. And since OpenAI released ChatGPT, based on GPT-3.5 and trained on vast amounts of information, to the public in November 2022, this technology has finally entered every classroom, every office and probably even your home.
3) The Future
With a technology that is developing so quickly, it is of course difficult to predict what the future will bring. In any case, we will see rapid improvements in the areas where it is already in use today. Automatically generated images will soon be almost indistinguishable from photographs, and AIs will create ever better and longer videos - soon probably complete movies, made entirely to their viewers' specifications. They will take on more and more tasks in research and in companies, and we will find ourselves facing them more and more often when we communicate online. Language barriers will fall, everyone will have access to the best teachers in every field, and more and more of our decisions will be supported by an AI.
We will have to learn to live closely with AI, whether privately or professionally. And we already have to prepare ourselves in both areas to have to adapt to an ever faster changing environment.
We are already feeling many of the consequences of this development today. AIs are influencing our behaviour on social media, and with images and videos we struggle to distinguish real news from artificially created deepfakes. During a phone or video call, we can no longer be sure that we are dealing with a real person. Texts are increasingly being created automatically, which requires new approaches from copywriters, teachers and lawyers alike.
With ever-improving systems, we can expect more and more professions to change fundamentally because of AI. And this will increasingly involve complex and creative tasks. A recent study by the consulting firm Cognizant together with Oxford Economics estimates that 90% of jobs in the US will be disrupted by AI (https://www.cognizant.com/us/en/aem-i/generative-ai-economic-model-oxford-economics).
Looking further into the future, however, AI could have an even greater impact. There is currently a debate in scientific circles about whether or when AI could reach the full breadth of human intelligence - so-called Artificial General Intelligence (AGI). The technology could then be used for any intellectual human task. A further development could lead to Artificial Superintelligence (ASI), i.e. AI systems that surpass us in terms of intelligence in all fields. Some experts believe that this could happen once AIs can themselves create even better AIs, and that this process would then accelerate exponentially. According to these experts, we would then have completely lost control of the technology and would be at the mercy of superior AI-"beings".
Artificial General Intelligence
Artificial General Intelligence (AGI) refers to a type of artificial intelligence that has the ability to understand, learn, and apply knowledge across a wide range of tasks at a level of competence comparable to that of a human. Unlike narrow AI, which is designed to perform specific tasks with expertise, AGI can generalize its learning and reasoning capabilities to solve any problem, including those it has not been specifically programmed for. AGI encompasses a broad and flexible range of cognitive abilities, enabling it to perform any intellectual task that a human being can.
Artificial Superintelligence
Artificial Superintelligence (ASI) refers to a hypothetical AI that surpasses human intelligence across all fields, including creativity, general wisdom, and problem-solving. Unlike Artificial General Intelligence (AGI), which aims to match human cognitive abilities, ASI would be capable of exceeding the intellectual performance of the best human minds in virtually every discipline, from scientific research and invention to social interactions and emotional understanding. The concept of ASI raises both opportunities and significant ethical and safety concerns, as it could lead to unprecedented advances in technology, medicine, and science, but also poses existential risks if not properly controlled or aligned with human values and interests.
The Singularity
The Singularity, in the context of AI development, refers to a hypothetical future point at which artificial intelligence (AI) will have advanced to the point of creating machines that are smarter than human beings. This moment is expected to lead to exponential technological growth, resulting in unfathomable changes to human civilization. The concept suggests that post-Singularity, AI could improve itself autonomously at an ever-increasing rate, leading to the creation of machines with superhuman intelligence and abilities. The idea of the Singularity raises both excitement and concern, as it presents opportunities for solving humanity’s most pressing problems but also poses significant ethical, safety, and existential risks. The term is widely associated with futurists like Ray Kurzweil, who predict that this event could occur within the 21st century.
AI Hype - Why Now?
As mentioned earlier in this article, the latest wave of AI development has mainly been driven by increasing computing power, the availability of large amounts of data and the development of new AI concepts.
However, acceptance among the general public and within companies should not be underestimated. We are increasingly living in a digital world, communicating, working and buying online. We are also used to accessing new, even more convenient online services ever more quickly and expect individual and personalised services in every area. These are natural fields of application for AI technology, which is why we are embracing it with open arms.
Main Types of Current AI Systems
AI is an umbrella term that covers many technologies. These include, for example, machine learning, where decisions are made automatically but under human guidance, and - as a subtype of it - deep learning, where the algorithm itself learns whether a prediction is right or wrong. Here is a list of the most important types of AI:
1) Machine Learning Systems
These AI systems learn from data, identifying patterns and making decisions with minimal human intervention. Machine Learning (ML) is the foundation of many modern AI applications, including image and speech recognition, medical diagnosis, and stock market trading. It includes subcategories such as Supervised Learning, Unsupervised Learning, and Reinforcement Learning.
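As a minimal illustration, the sketch below (assuming scikit-learn is installed) shows the supervised-learning case: the model is given labelled examples and learns a decision rule from them, with no hand-written rules involved.

```python
# Minimal supervised-learning sketch with scikit-learn: the model learns a
# decision rule from labelled examples rather than from explicit rules.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                      # measurements + labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                             # learn patterns from data
print("accuracy:", model.score(X_test, y_test))         # evaluate on unseen data
```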
2) Expert Systems
Designed to mimic the decision-making ability of a human expert, expert systems use predefined rules and knowledge to make inferences. They are used in specialized fields like medical diagnosis, engineering, finance, and more to provide advice, interpret data, or diagnose issues based on their vast database of knowledge.
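The following toy sketch in plain Python illustrates the principle: a handful of predefined if-then rules and a small forward-chaining loop that keeps applying them until no new conclusions appear. Real expert systems rely on far larger, expert-curated knowledge bases; the rules here are purely illustrative.

```python
# Toy expert-system sketch: predefined if-then rules plus a tiny
# forward-chaining loop that fires rules until no new facts are derived.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "shortness_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)     # fire the rule
                changed = True
    return facts

print(infer({"fever", "cough", "shortness_of_breath"}))
# -> includes 'flu_suspected' and 'refer_to_doctor'
```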
3) Deep Learning
Deep Learning is a subset of machine learning that utilizes deep neural networks to model and understand complex patterns in large volumes of data. By mimicking the structure and function of the human brain through layers of artificial neurons, deep learning algorithms can automatically extract and learn features relevant to their tasks, such as image recognition or natural language processing. This technology has led to significant advancements in AI, enabling machines to perform a wide range of tasks with increasing accuracy and autonomy, without needing explicit instructions for feature extraction and interpretation.
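The sketch below (assuming PyTorch is installed) shows the core idea on the smallest possible scale: a network with one hidden layer of artificial neurons learns the XOR function, a problem that a single linear model cannot solve.

```python
# Minimal deep-learning sketch: a small multi-layer network learns XOR.
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(              # stacked layers of artificial neurons
    nn.Linear(2, 8), nn.ReLU(),
    nn.Linear(8, 1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(2000):               # the network extracts the features itself
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(model(X).round())             # approximately [[0.], [1.], [1.], [0.]]
```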
4) Generative AI
This type of AI focuses on creating new content, such as images, text, music, and even video, that is similar to human-created content. Generative Adversarial Networks (GANs) are a popular approach, where two neural networks compete with each other to improve the quality and realism of the generated output. Applications include art creation, video game content generation, and more.
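As an illustration of the adversarial idea, the sketch below (assuming PyTorch) trains a tiny GAN in which a generator learns to produce numbers that look like samples from a normal distribution while a discriminator tries to tell real samples from generated ones. Production image or video generators are vastly larger, but the competitive training loop is the same in spirit.

```python
# Compact GAN sketch: generator G learns to imitate samples from N(4, 1),
# discriminator D learns to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
real_label, fake_label = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(3000):
    real = torch.randn(64, 1) + 4.0            # "real" data: mean 4, std 1
    fake = G(torch.randn(64, 8))

    # 1) train the discriminator to separate real from fake
    opt_d.zero_grad()
    loss_d = bce(D(real), real_label) + bce(D(fake.detach()), fake_label)
    loss_d.backward()
    opt_d.step()

    # 2) train the generator to fool the discriminator
    opt_g.zero_grad()
    loss_g = bce(D(fake), real_label)
    loss_g.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())   # should approach 4 and 1
```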
5) Natural Language Processing
NLP systems are designed to understand, interpret, and generate human language in a way that is valuable. They enable computers to perform tasks such as translation, sentiment analysis, and speech recognition. Chatbots and virtual assistants like Siri and Alexa are common applications of NLP.
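A minimal sentiment-analysis sketch (assuming scikit-learn) is shown below: a bag-of-words model learns to label short texts as positive or negative from a handful of made-up examples. Modern NLP systems use large neural language models instead, but the basic task is the same.

```python
# Tiny sentiment-analysis sketch: a bag-of-words model learns to label
# short texts as positive or negative from a few example sentences.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["great product, love it", "terrible service",
         "very happy with the result", "awful experience, never again",
         "excellent support", "poor quality"]
labels = ["pos", "neg", "pos", "neg", "pos", "neg"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)

print(clf.predict(["the support was excellent", "quality is poor"]))
# -> ['pos' 'neg'] (on such tiny data this is only illustrative)
```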
6) Computer Vision
This AI technology enables machines to interpret and make decisions based on visual data. From recognizing faces in social media photos to autonomous vehicles interpreting the driving environment, computer vision systems are used in security, retail, healthcare, and many other industries.
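The sketch below (assuming the opencv-python package) detects faces in a photo with a classic Haar-cascade detector and draws a box around each one; the file name photo.jpg is a placeholder, not an image from this article.

```python
# Minimal computer-vision sketch: face detection with a Haar-cascade classifier.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
image = cv2.imread("photo.jpg")                       # placeholder path, replace with a real image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                            # draw a box around each detected face
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_with_faces.jpg", image)
print(f"found {len(faces)} face(s)")
```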
7) Robotic Process Automation
RPA technologies allow for the automation of repetitive tasks usually performed by humans. By mimicking human interactions with software and applications, RPA can automate processes in various domains such as customer service, data entry, and more, enhancing efficiency and reducing errors.
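As a simplified, illustrative stand-in for an RPA workflow, the plain-Python sketch below reads customer records from a spreadsheet export and submits each one to an API instead of a person typing them into a form. The endpoint URL and CSV columns are hypothetical; commercial RPA tools typically drive the application's user interface directly rather than calling an API.

```python
# Illustrative RPA-style sketch: automate repetitive data entry by reading
# rows from a CSV export and submitting each record to a (hypothetical) API.
import csv
import requests

API_URL = "https://example.com/api/customers"   # hypothetical endpoint

with open("new_customers.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):               # each row = one manual entry saved
        response = requests.post(API_URL, json={
            "name": row["name"],                 # hypothetical CSV columns
            "email": row["email"],
        }, timeout=10)
        response.raise_for_status()
        print("entered:", row["name"])
```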
8) Cognitive Computing
Aimed at simulating human thought processes in a computerized model, cognitive computing systems use self-learning algorithms that use data mining, pattern recognition, and natural language processing to mimic the human brain. The goal is to create automated IT systems capable of solving problems without human assistance.
Current limitations of AI
Despite the rapid advancements in AI, current systems still face several significant limitations - among them limited adaptability, reliability and transparency - that researchers and developers are actively working to overcome. Addressing these limitations is a focus of ongoing research in AI, aiming to create more adaptable, reliable, and transparent AI systems that can work more harmoniously within human contexts and societies.
Applications of AI in Business
Sooner or later, AI will influence most areas of your company. The earlier a company engages with it, the better prepared it will be for the change and the greater its chances of successfully utilising AI in a competitive environment. Most importantly, engaging early lays the foundations for the successful use of this technology. Even if their relative weight varies depending on the industry and the area of application, the following components should be taken into account:
Some well-known strategy consultancies are of the opinion that the learning effects and the development of the necessary infrastructure are reason enough for the introduction of AI, even if it is ultimately not economically viable. I do not agree with this view. AI often triggers great fears among employees and uncertainty among management. If a first AI project fails because its goals are set too high - or too naively - this can strengthen internal resistance for a long time and put the brakes on similar initiatives. If customers are also disappointed, the damage can be even greater. However, if a project is well prepared and successful, nothing stands in the way of AI-friendly further development.
Nevertheless, there are ways to gain initial experience and test success with relatively little effort and risk, for example by working with external companies that offer easy-to-configure AI platforms. In particular, this approach makes it possible to take the first steps without immediately building up a lot of AI expertise in the organisation.
When analysing the potential benefits of AI for a company - this will be the focus of a subsequent article - the possible areas of application should be compared with the characteristics of such systems: Do better predictions add substantial value? Can the technology be well integrated into the existing infrastructure and organisation? Are there processes that can be easily automated? Does the technology fit into the company's innovation strategy?
Risks of AI and Ethical Considerations in Business
Using AI in your company, whether for internal processes or for customer-facing services, is of course not without risks. These depend, among other things, on the type of AI technology that is used. As AI systems are usually dynamic and change over time - for example through repeated training with new data - appropriate risk management structures should be in place.
The main risks of AI systems in companies are often cited as privacy, i.e. the handling of confidential data; bias, i.e. distorted results, usually due to incomplete or unbalanced training data; and the lack of transparency in AI-generated decisions, especially with self-learning algorithms such as deep learning. However, there are many other risks to consider, and as these depend on the company, the area of application and the technology, the following list is not exhaustive:
In addition, the use of AI requires an awareness of ethical aspects towards employees, customers and the environment. Here are the most important ones (the first three repeat points from the risk list above for the sake of completeness):
Addressing these ethical considerations requires a multifaceted approach, including ethical guidelines, stakeholder engagement, transparency in AI development and deployment processes, and adherence to relevant laws and regulations. It also involves a commitment to continuous learning and adaptation as AI technologies and their societal implications evolve.
Where to Start?
As urgent as it is for companies to address this topic, the task can be overwhelming. Getting insights from articles like this one - and those that follow - is certainly a good start. Earning a certificate in AI from one of the many providers is also time well spent, if you can spare it.
From a business development perspective, however, it is better to work with experts - like us. We have designed a special offer for this, which provides a lot of hands-on knowledge within 7 days and sets the most important foundations for a robust AI strategy, based on real-world experience:
It should also be mentioned that any use of AI stands or falls with the quality of the available data. The development of a robust data strategy is therefore usually the first step and should be tackled independently of the use of AI. Of course, we are happy to help with this as well.
Conclusion
This article has looked at the beginnings and development of AI, given an overview of the current state of the technology and ventured a look into the future. Even though the impact on society as a whole is considerable, we have focussed here on the impact for companies, on the opportunities, but also on the risks that need to be considered. And finally, we have emphasised the importance of dealing with this topic at an early stage.
AI is part of our society and economy, and will remain so forever. Only companies that embrace this will be able to survive in the long term.
To quote an unknown source, the best time to start was yesterday, the second best time is today!