The 5th industrial revolution or how AI conquers the world
Sabine Singer, MBA
AI ethics by design | World Pioneer in Value-based Engineering (ISO/IEC/IEEE 24748-7000) | AI Strategist focused on human-centered design | Expert for EU Data Spaces | Executive Coach | Trainer | Keynote Speaker
2023 was the hottest year on record.
And not just in terms of climate. 2023 was also the year when generative AI took the stage and became available to the general public. After the resounding launch of ChatGPT by OpenAI in November 2022, it quickly became clear that AI would disrupt far more than just our business lives. Just two months later, ChatGPT had 100 million users, who were transforming from "Googlers" into "Prompters."
Only a few months later, the world was already discussing when AGI and ASI would be achieved. AGI, or artificial general intelligence, is the level at which AI matches the abilities of an averagely intelligent person. ASI, or artificial superintelligence, is the state in which AI surpasses human intelligence. Transhumanists like Ray Kurzweil also speak in this context of reaching the "technological singularity," the merging of humans and machines, dreaming of eternal life...
You can read more about the connection between the climate and AI below in this blog.
The Race Begins - Generative AI
It was July 2015. Elon Musk celebrated his 44th birthday with a three-day party for friends and family at a California winery. One of the guests was Larry Page, then CEO of Google. Page and Musk had been friends for many years.
Late at night, by the campfire, they philosophized about artificial intelligence. At that time, AI could only recognize cats in YouTube videos with an accuracy of 16%. Page, also a passionate transhumanist, dreamed of his digital utopia, in which different super-AIs would compete for resources and the best one - his - would come out on top. Musk countered that a state in which machines are smarter than humans would likely be a great danger to humanity. "We would be doomed," he said, and that had to be prevented. AI needed strong safety measures and had to be developed with caution.
In a sarcastic tone, Page called Musk a "speciesist," someone who sides with his own species. Musk replied that yes, he was a speciesist - and yes, he wanted to defend humanity. He reminded them of Turing, the brilliant cryptologist who predicted back in the 1950s that a machine smarter than humans would take control. A heated argument ensued, and the party guests who had gathered around the two were entertained by the verbal sparring between the tech giants.
Musk was angry. The two have not been friends since that evening. A few months later, Musk co-founded OpenAI as an open laboratory for research on artificial intelligence and recruited some of the best minds in the field.
Ilya Sutskever, a deep learning pioneer who had achieved remarkable breakthroughs in neural networks on Geoffrey Hinton's team. Greg Brockman, who had studied computer science and mathematics at Harvard and MIT and was CTO of the online payment service Stripe before joining OpenAI. And finally Sam Altman, who joined as a co-founder and would later become CEO.
Altman had made a name for himself as a software developer, investor, and president of Y Combinator, a technology start-up accelerator in Silicon Valley. His dream was, and still is, to realize AGI.
Just seven years later, in November 2022, ChatGPT was equipped with the GPT-3.5 model and made publicly available for easy use through natural language input. The GPT-4 model had already been in use since August by selected Fortune 500 companies. However, ChatGPT still used the "old" version because the safety measures in the GPT-4 model were not yet sufficiently developed ... or were they?
All this information can be found in a fantastic article in The New York Times titled: "Ego, Fear and Money: How the A.I. Fuse Was Lit"
The (Open-)AI Dilemma
A few months later, Altman went on a world tour and announced, not only in a hearing before the US Senate but on every stage available, that ChatGPT could either save the world or, if "that goes wrong," mean "lights out for all of us."
Annoying things like product liability only unnecessarily delay the path to superintelligence. When 200 million users act as free "reinforcement learners" every month and produce "310 million words per minute" (Dr. Alan Thompson), generating new data to further train the model, progress is faster than years of tinkering in a closed lab.
In 2018, co-founder Musk withdrew as an investor after a heated exchange with Altman over his proposed takeover of OpenAI. Microsoft seized the opportunity and is now the main investor, holding a 49% stake in OpenAI's "capped-profit" entity, which sits under the OpenAI non-profit umbrella.
In November 2023, following an unfortunate board decision, Altman was terminated overnight - at a point when Microsoft already had 18,000 customers using GPT-4. Microsoft CEO Satya Nadella struggled to calm the situation and offered himself as a mediator. Meanwhile, Altman, followed by Brockman and 700 of OpenAI's 770 employees, threatened to start a new company or accept the offer to join Microsoft.
Ultimately, an agreement was reached to replace the board - including chief scientist and mastermind Ilya Sutskever and Helen Toner - and to reinstate Altman as CEO. Toner had criticized OpenAI's reckless speed and considered the path of Anthropic, OpenAI's major competitor, the wiser one. Anthropic's model, Claude, follows a clear safety directive: HHH - helpful, harmless, and honest. Dario Amodei, the CEO of Anthropic, who previously led research at OpenAI, had also clashed with Altman in a discussion about responsibility and safety. After his departure, he founded Anthropic with several colleagues from OpenAI. His model, Claude, is considered particularly reliable. Amazon, Google, Salesforce, Zoom, and SAP are among the investors.
At the time of the November scandal surrounding Altman's dismissal, OpenAI was already valued at approximately $80 billion. Two months later, at the end of December 2023, after a new investment deal and a move into AI chip production, the valuation reached approximately $100 billion.
The exact cause of the board coup that led to Altman's brief dismissal remains speculation. He was perceived as "not consistently candid" towards the board members.
The following reasons are being discussed:
Best of LLM - The largest large language models at a glance
GPT-4 is by far the best-known and largest model. As an MoE (Mixture of Experts), it is said to consist of 8 differently trained foundation models. "Mixture of Experts" is a machine learning technique that divides a problem space among multiple expert networks, with a gating network routing each input to the most suitable experts (see the sketch after this overview).

PaLM 2 by Google appeared in May 2023 and was used, among other things, in Google's chatbot Bard. Google, under heavy pressure to defend its market share of over 90% of internet search queries, also launched Gemini at the end of 2023 - an enormously large model, bigger than GPT-4 in data and parameters, that also achieves better results in comparative benchmarks, although the demo was rigged (see "Google's Gemini Marketing Trick" by Alex Kantrowitz).

Olympus by Amazon follows suit. Anthropic's Claude establishes itself as a particularly accurate model.

Meta's LLaMA was "leaked" in the spring (the model reached the public through third parties), and its successor Llama 2 then appeared on the stage as an open-source model. Llama 2 has since been copied, specialized, and used as the base model for new models more than 14,000 times.

The Chinese company Baidu launched its sizeable Ernie model. Inflection AI's model powers Pi, an easily accessible AI assistant similar to ChatGPT that positions itself as a clever best friend and personal coach. Falcon, from the United Arab Emirates, ranks among the best open-source models alongside Llama.
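For readers who want to see what "Mixture of Experts" means in practice, here is a minimal, illustrative sketch in Python (NumPy only). It is not GPT-4's architecture - the sizes, the linear "experts," and the top-2 routing are assumptions chosen for readability - but it shows the core idea: a gating network scores the experts for each input, and only the best-scoring experts are actually run and combined.

```python
import numpy as np

# Minimal Mixture-of-Experts sketch: a gating network scores all experts for an
# input, only the top-k experts are actually run, and their outputs are combined
# with the gate weights. Illustrative only -- not GPT-4's actual design.

rng = np.random.default_rng(0)
d_in, d_out, n_experts, top_k = 8, 4, 8, 2

# Each "expert" is just a small linear layer here; in a real LLM it would be a
# feed-forward block inside a transformer layer.
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
gate = rng.normal(size=(d_in, n_experts))  # the gating (router) network

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x):
    scores = softmax(x @ gate)               # how relevant is each expert?
    chosen = np.argsort(scores)[-top_k:]     # route to the top-k experts only
    weights = scores[chosen] / scores[chosen].sum()
    # Only the chosen experts are evaluated -- that is what keeps MoE efficient.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

x = rng.normal(size=d_in)
print(moe_forward(x))  # combined output of the two selected experts
```

The appeal of this design is that a model can have an enormous total number of parameters while only a fraction of them is active for any single input.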
In total, there are 451,266 open-source models on the Hugging Face platform as of today - three times as many as at the beginning of the year.
Best of Achievements - a performance overview
2023 will go down in history as the year of the 5th industrial revolution. The supercomputer behind OpenAI and Microsoft's Copilot is called Prometheus - named after the Titan of Greek mythology who molded humans out of clay. He brought humanity fire, which he stole from the heavens against the will of Zeus, the ruler of the gods. As punishment, Zeus chained Prometheus to a rock in the Caucasus Mountains, where an eagle came daily to feast on a piece of his liver, which then grew back.
Like fire, generative AI is a pivotal achievement for our lives and our business models alike.
Here is a rough summary:
The flip side of the coin: immense energy consumption.
Training large language models is a computationally intensive process that consumes significant amounts of energy. For example, it has been estimated that the training of OpenAI's GPT-3 may have caused more than 500 tons of CO2 emissions, partially due to the use of older, less efficient hardware.
A study by the University of Massachusetts found that training a single AI model can emit as much carbon as five cars over their entire lifetimes.
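To make such estimates tangible, here is a rough back-of-envelope calculation. The two input numbers - about 1,300 MWh of electricity for a GPT-3-scale training run and an average grid carbon intensity of about 0.4 kg CO2 per kWh - are illustrative assumptions, not figures taken from the studies mentioned above:

```python
# Rough back-of-envelope for training emissions (illustrative assumptions only):
# ~1,300 MWh of electricity for a GPT-3-scale training run and an average grid
# carbon intensity of ~0.4 kg CO2 per kWh.

training_energy_mwh = 1300            # assumed total energy of one training run
grid_intensity_kg_per_kwh = 0.4       # assumed average grid carbon intensity

co2_tonnes = training_energy_mwh * 1000 * grid_intensity_kg_per_kwh / 1000
print(f"Estimated emissions: {co2_tonnes:.0f} tonnes of CO2")
# -> roughly 520 tonnes, the same order of magnitude as the figure above
```

Run the same training on a low-carbon grid and the result drops by a large factor, which is why data center location and hardware efficiency matter so much.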
Best of Robots - AI learns to walk, see, and speak.
AGI is often described, by Ray Kurzweil among others, as the point "when a humanoid robot can enter an unfamiliar house and operate the coffee machine".
The Norwegian company 1X is building Neo for its co-investor OpenAI (investment: 23.5 million dollars). The Chinese H1 by Unitree runs at a speed of 18 (!) km/h. Intel invests 9 million dollars in Figure 01, aiming to compete with Tesla's Optimus bot. Boston Dynamics' robot dog "helps" clean up the tunnels in Gaza and receives a GPT-4 connection in its new version.
Best of IQ - AI passes all intelligence tests with flying colors
Each of us is an expert in some area; some of us are specialists in our field; we all know one or two absolute authorities. But none of us is a specialist in many areas. With GPT-4 and other LLMs, each of us now has a huge variety of top experts at our immediate disposal, and an immense space of possibilities is opening up. McKinsey estimates that AI will add $2.6 to $4.4 trillion in productivity gains in the coming years. Productivity is the ratio of output to input. Now we can think creatively about who will buy that output in a future in which hundreds of millions of jobs have become obsolete.
The Swiss philosopher and psychologist Jean Piaget defined intelligence as the ability to solve complex tasks through thinking. Only 0.3% of people have an IQ above 140. GPT-4 has passed all common intelligence tests with flying colors, completed an MBA exam effortlessly, passed the Biology Olympiad and the SAT (the entrance exam for American universities), and - particularly noteworthy - excels at "Theory of Mind" tasks, which include tests of empathy and emotional intelligence. Our current intelligence tests are no longer able to fully measure LLMs. And every day, unexpected new abilities of generative AI are discovered. In 2023, an average of 8 new white papers per day were published, showcasing astonishing new achievements of AI.
Best of Transparency - AI ethics and security are becoming the number one global issue.
The Center for Research on Foundation Models has developed a transparency index based on 100 different indicators. None of the tested models provides sufficient transparency. And without transparency, there is no basis for sensible regulation. If all of humanity's digital knowledge is thrown into one pot, stirred once, and new sentences come out, then questions such as which data sets were used and whose intellectual property was processed without permission are simply treated as irrelevant.
The main statements:
Read more here.
Predictions for 2024 - what the new year brings
What we are experiencing this year is child's play compared to what is to come. Each individual tool - from ChatGPT to Perplexity AI, from Gamma and Beautiful.ai for presentations to Midjourney, DALL-E, Stability AI, and Runway for image and video generation - accomplishes incredible things. With a simple text prompt turned into speech, images, video, audio, music, or software code, any layperson can produce high-quality content that is hardly recognizable as artificially generated. Considering that in 2024 half the world will go to the ballot box, the term "deepfake" takes on a whole new meaning.
Elections will be held in over 70 countries, where a total of 4.2 billion people live.
We should be aware that:
Artificial or human - it will soon be impossible to tell the difference, whether in text, images, video, or sound. Already today, a strikingly empathetic and likable AI makes up to 10,000 calls to voters per day in the US, orders taxis, books trips, and schedules appointments.
Now it's time:
My conclusion:
After 12 months of intensive learning and experimenting with numerous tools, reading thousands of articles and white papers, my conclusion is:
It is astonishing what results one can achieve by using AI curiously, cautiously, and cleverly across a wide range of fields. One no longer needs to be an expert to use AI as powerful support, and endless possibilities open up; no advanced skills are required to handle the new tools. The speed and tremendous support that AI provides in our daily lives will soon gain widespread acceptance.
What gives me pause is the fact that we humans do not always have good intentions. Sometimes we act without thinking, full of prejudice, occasionally sarcastic and malicious, and from time to time downright evil.
It is not the machines, AI, or large language models that pose a threat to our society, democracy, and autonomy. It is us - each and every one of us - who must take responsibility. Neither the EU AI Act nor the US Executive Order can relieve us of that responsibility.
With every new technology emerges a new form of responsibility.
And that responsibility can only be assumed if we know what we are dealing with.
It is important to learn and use AI correctly, consciously and critically.
To come back to our Prometheus metaphor: with fire, one can warm a room and turn it into a cozy home. But one can also burn down entire settlements with it.
With careful and wise handling of AI, it will serve us and our small, blue, battered planet well - and set us free, mentally, intellectually, and holistically.
In this spirit, I wish you a brave and optimistic start to the New Year - 2024, here we come!
Yours, Sabine
and her AI-powered alter ego, Antiphonia
Stay tuned: CuiBono AI will soon be available as a podcast.