A dormant giant is shaking us rudely awake!
Spiral galaxy IC 342 - ESA, November 2023

The AI Safety Summit and some afterthoughts

Will AI be the new model that makes the existing model obsolete, or will it only make the current model faster?

A week ago, the AI Safety Summit for world leaders, knowledge institutes, and tech company leaders was organized by UK Prime Minister Rishi Sunak. It is a much-needed gathering of world leaders and experts. A little late? So much attention went to what ‘really’ mattered over the past few years, when Covid-19 held us in a stranglehold. Massive attention also went to climate change, wars, economic struggles such as inflation, logistics, and high energy prices, and to the rapid embrace of generative AIs, aka Large Language Models (LLMs). Examples are ChatGPT, BERT, LaMDA, Llama, Falcon, Pi, Bloom, and dozens more. Most readers are aware of these applications. However, we are generally less aware of the rapid development of Artificial GENERAL Intelligence (AGI). Generative and general are often used interchangeably, which can cause confusion.

Artificial General Intelligence

For those unfamiliar with AGI, I’ll describe it briefly: it is a hypothetical form of AI that can successfully perform any intellectual task that a human can - not just specialized tasks like recognizing images, playing chess, or driving a car. AGI could learn, reason, and solve new problems independently, much like humans. In theory, AGI can invent new technologies, plan long-term strategies, and even talk with you (with a voice of your preference, if you like). However, I’m still unsure how capable it will be of understanding the emotions that humans (and other living species) experience daily. Today, generative AI applications are not at that stage, but they are forming bridges and widening their horizons to escape from the cocoons of Artificial Narrow Intelligence (those algorithms that are excellent in one discipline or activity, such as playing “Go” or flying an aircraft).

AGI is conceived as a stepping stone toward Artificial Super Intelligence. While AGI capabilities will already go far beyond human (cognitive) intelligence (I’m still unsure whether “it” will feel happiness when “it” solves a specific task), Artificial Super Intelligence will change humanity entirely from the one we know today.

This notion is what the AI Safety Summit brought to the center of the discussions. A wake-up call to the world? Although three AI phenomena were mixed up (generative, general, and super intelligence) during the (after-)discussions, this shows, firstly, the interlinked stages (there is no such thing as one jump to super intelligence) and, secondly, its unprecedented speed, currently a proxy of five doublings per year.
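To make that proxy concrete, here is a minimal sketch (the five-doublings-per-year figure is the rough proxy quoted above, not a measured constant):

```python
# Rough illustration of "five doublings per year": under that proxy,
# capability (or compute) multiplies by 2**5 = 32 every year.
def growth_factor(years: float, doublings_per_year: float = 5) -> float:
    """Total multiplier after `years` at the given doubling rate."""
    return 2.0 ** (doublings_per_year * years)

for y in (1, 2, 3):
    print(f"after {y} year(s): x{growth_factor(y):,.0f}")
# after 1 year(s): x32
# after 2 year(s): x1,024
# after 3 year(s): x32,768
```

Even if the true rate were half that, the compounding would still outrun any annual policy cycle.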

Contemplations

Five obvious observations that triggered certain feelings, which in turn prompted some contemplations about acceptance:

  1. Am I afraid of generative intelligence (or artificial capable intelligence)? I go for moderate. After being exposed to sophisticated applications of Artificial Narrow Intelligence (ANI) such as AlphaGo, self-driving cars, or automatic lawnmowers, the liminal development of LLMs switched some awareness triggers on. I experimented extensively with LLMs, studied their implications, had many discussions, etc. I experienced the profound power of conversations. Those conversations can also go in directions where you can easily cross lines.
  2. Am I afraid of AGI? Since I don’t know the ramifications of AGI (and who does?), it is hard to say. What I do recognize here is inequality. As a super curious person, I’ve always been open to new things, experimenting, discovering, exploring, doing, and meeting opposite opinions. That made me, at times, recalcitrant, yet I kept my respect. Not easy, in any case. This adventure is of a different level. I am uncertain whether I would understand the arguments of such an entity, whatever the AGI appearance may be. It is not just that we might not be on the same page intellectually; we may not even be in the same book. Wouldn’t AGI be bored after 60 seconds? Similar to a professor of theoretical physics teaching at a kindergarten?
  3. Am I afraid of ASI? It depends. If ASI is so smart that we can have conversations at my level, with patience, I’d welcome that. That’s where inclusiveness comes to the front, and when that is the case, some eight billion of my fellow citizens will benefit from it as well. I won’t introduce the flip side here. Then I’d have to admit we failed to do our job in the early-stage developments. But there is a very thin line between failure and success!
  4. Am I afraid of the speed of AI? Here I admit: yes! I love speed! E.g., windsurfing in high-speed winds made the adrenaline pour out all over my body… at least, that’s how it felt. I think that I can handle speed. What I fear is that the speed of AI development is far higher than what the bodies that must keep up with regulations worldwide can handle! Before people reach agreements internationally, things will likely have changed dramatically.
  5. Am I afraid of control? This feels like the referee paradox, and paradoxes are usually confusing (look at some of Escher’s works). We need regulations and the following of rules; then we can play fair games. All games have referees: soccer, football, baseball, basketball, volleyball, field hockey, ice hockey, judo, rugby, tennis, etc. All games have different rules, times, penalties, etc. Imagine for a minute that an algorithm is a game. The referee could be a human checking the code, tests, and results. This referee must have a very good understanding of that algorithm so as not to be fooled. Now imagine that we have fifty games playing simultaneously next to each other, and balls or pucks may fly into other fields, etc. Only one referee oversees all that and has to decide in a split second who plays foul, who doesn’t follow the rules, etc. Most likely, a human being is out of the question, especially if this is extrapolated to a world scale; indeed, maybe only a super AI can do the job. We are in an era where AI is checking AI, and we are on the sidelines watching games that we don’t understand. Ergo, we need to control, but intellectually we can’t; therefore, we need AI. Who’s in control?


Representation

Regulation is needed, and saying that it must happen fast is an understatement. When I participated in deep discussions about AI and its potential consequences at Singularity University some twelve years ago, they started to reach a few open ears that were not only in awe of AI’s capabilities. More concerns have seen daylight since.

Looking at the countries, companies, and knowledge institutes that attended this summit, we can be happy that there is already good coverage worldwide. However, the representation of the Netherlands, where I’m from, most likely reflects how seriously we are taking this topic: no universities, no thought leaders, and only one representative from the government!

Given the pace at which AI is developing, it is easy to imagine that we will reach the AGI stage soon. Perhaps we have seriously underestimated the force with which AI has engulfed daily life in every country worldwide. Nation-state leaders could not and cannot ignore its rapidly increasing influence at all societal levels.

The gauge of power

Everything indicates that the realization is only now beginning to dawn that existing powers, such as government leaders and other levels of leadership, are being fundamentally shaken. The rhetorical question arises: is the urgency of the summit at this moment motivated by the notion of a possible loss of political power, in one's nation or at the world level, or by the realization that the world's survival is at stake? Hopes are pinned on human resilience, which is made up of more than just cognitive intelligence.

I’m bringing this phenomenon of power a bit to the front because, in a statement after the summit, Prime Minister Sunak stated: “The UK is once again leading the world at the forefront of this new technological frontier by kickstarting this conversation, which will see us work together to make AI safe and realize all its benefits for generations to come.” It is easily forgotten that the EU already has an AI Act; the US released its executive order on October 30, 2023, just a few days before the summit. In addition, the G7 has a declaration on AI, and China has had a law on generative AI in force since August 2023. What this declaration adds to the AI regulations arena is the start of a global political commitment. Claiming ‘leadership’ at the crossroads of humanity means that we are already 2-0 behind.

Meanwhile, we have three major annual events: the World Economic Forum (Davos), the Climate Change Conference of the Parties (e.g., COP-28 in Dubai), and AI Safety Summits. Countries are competing to be the organizers of the annual summits for the latter two.

Unfortunately, AI and climate change are on different agendas.

Control of Intelligence

To be clear: rules are needed! Russell, Yampolskiy, Tegmark, Musk, and Suleyman, to mention a few, have been talking and writing about this for a while, but yeah, as the saying goes: “Don’t ask experts, because they tell you why something can’t be done!” Maybe we should revisit that statement and start listening to AI experts, particularly when they say: ‘We have no clue what’s going on in the black box.’ To recall: during the development of AlphaGo, the developers were surprised by its capabilities, meaning that they could not have predicted that this would happen so fast and that it would have so much potential. That is something like building a supermodel car that suddenly decides to fly.

Do we want a world dominated by something magnitudes more powerful (cognitively) and intelligent than us? In roughly 100,000 years of human history, that has never occurred or been experienced. We were, and so far are, in charge. The question is: is the urgency of containing AI driven by the realization that something will dominate us intellectually, so that human power (our only distinctive factor is/was intelligence) becomes relative, or is the urgency driven by mitigating the effects of AI on our entire planet? Discussions about AI are generally limited to its intellectual part. AI is not yet more than an intellectual phenomenon, but its capabilities will likely affect the entire periphery - periphery as in our environment (animals, insects, trees, etc.). Although we would hate to be second in the intellectual order, something bigger is at stake: Spaceship Earth, our lifeline.

Underexposed aspects

Having landed at this point, some interesting questions arise. Let me introduce some groups of aspects that, so far, have not been highlighted much, as far as I’ve read, seen, or heard about the Summit.

1. Deep learning learns

All species that you can think of learn. Some learn faster and some very slowly. Humans are at the top of the pyramid when it comes to learning. We are so good at it that we learn something new every day, even those of us doing the same thing every day. Ostensibly, today is similar to yesterday, etc., but what is forgotten is that the context is always different. There are always variables that lead to different decisions, no matter how minuscule they may be. We can’t stop learning; even if we ‘unlearn’, we learn, which makes the phenomenon of “unlearning” debatable. Since algorithms mimic human thinking, reasoning, or decision-making, we can easily see that we can’t stop them, nor can we stop our own learning, which sprouts from thinking, the fuel for inventions. And what has been discovered can’t be de-discovered. Imagine that we would stop current deep learning algorithms from further learning, whereas they have been developed to learn from new data. Therefore, they must continue! Think of stopping their complicated tasks in mission-critical processes such as banking (FinTech), telecommunication, stock markets, biotech, weather predictions, electricity, or traffic control. It would become a bit messy.

In short, many of these algorithms have already been implemented and have a substantial economic value! This means that they have to continue their work. Would containing their work at this point be possible? Are we not a little too late since Pandora’s box is open?

2. All inclusive

In the second section, we read: “We recognize that this is, therefore, a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally” (italics: PE).

"Good for all" and "in an inclusive manner" are very interesting, as we "are including everyone" on the planet. Earlier in this paper, I was a bit harsh on my own country, but I’m tremendously proud of an unimaginably sophisticated technology on Dutch soil: machines that can make extremely powerful chips at the nano-size level. I’m talking about ASML. They ship their machines worldwide (at least if a country can afford them, of course!). Except to China, Russia, Iran, North Korea, for sure, and maybe more. The same story goes for the other AI chip powerhouse, NVIDIA. So, all-inclusive is a bit relative. If we want to set up a control system, we all have to be involved and imbued with the seriousness of the matter. A circle is not a little bit round! Work to do… and that leads to TRUST.

3. Trust (…the currency of the future)

Recent research about public trust in governments over the period 1958-2023 by Pew Research (https://tinyurl.com/4vktfyt7) shows a decline from about 80% to 16% in the US! The latest OECD Trust Survey (https://tinyurl.com/5ajsy8x4) doesn’t show overwhelming trust in governments either; specifically, younger citizens scored low on trust. Concerning integrity, the scores are even lower. The trust factor seems consistent with other research by Edelman (https://tinyurl.com/ya7cdm7y). In the latter, it is even more striking that there is slightly more trust in businesses than in governments! Now, representatives of these domains are talking about the ‘trustworthiness’ of AI. If something concerns me, then it is this insight! We can’t close our eyes to the current turmoil influencing most people’s thinking worldwide: the proven incapability, and at times unwillingness, of well-educated leaders to solve conflicts. If we don’t trust these leading players in the field of AI, should we trust (future) AIs? Based on what?

Despite the importance of the executive order, declaration, code of conduct, or whatever document, if ALL countries do not sign ONE TRUSTED agreement, things will be complicated. It is easy to develop a Trust Manifesto and sign it; it is stunningly more complicated to actually trust one another as a person. A stroll across the city centers of capitals worldwide showed me the many law buildings, populated with thousands of lawyers working around the clock to develop contracts, agreements, etc., because distrust is their business model! I can never resist thinking these are the places to which we outsource(d) trust. The higher the building, the less trust.

If ‘insourced trust’* doesn’t have the highest priority in the world, trustworthy AI is an illusion, mainly because it develops faster than a warm hug between Biden and Putin, a tango between Tsai Ing-wen and Xi Jinping, or a two-day off-grid hike, without a compass, by Ebrahim Raisi and Benjamin Netanyahu. Or imagine the shareholders of Alibaba, Tencent, Alphabet, Microsoft, Amazon, Yandex, Meta, and IBM agreeing to share their AI developments to solve the world’s biggest challenges. Use your fantasy for a combination of knowledge institutes… (it’s a fun exercise ;-) ). Remember, trust is the foundation.

Therefore, leaders should genuinely care for the people they serve and the causes they pursue. That kind of dedication, that passion for doing good, can lead to better outcomes for everyone. In a technology-driven world, we know that ‘techies’ have somewhat different orientations on life.

But wait a minute or two: we also have the GDP and AI. How does that work?

* Some of the attributes are:

  • Integrity: world leaders need to have consistent, transparent principles and values and be willing to stand by them in all their dealings.
  • Honesty: leaders need to be honest and straightforward in their communications, even when challenging or complicated.
  • Empathy: leaders should be able to put themselves in the shoes of others and have a genuine concern for their well-being.
  • Collaboration: leaders should be able to work cooperatively with others towards shared goals instead of engaging in zero-sum or adversarial approaches.
  • Consistency: leaders should be reliable and predictable in their actions and decisions.
  • Clean Communication: leaders should be open and accessible and clearly communicate their thoughts and intentions and not hide things.
  • Love: the most challenging part, yet the most powerful one.


4. The perfect Lock-in: the holy GDP

Bloomberg, McKinsey, and many other organizations rush in with predictions of the many trillions AI will contribute to the economy. Stock markets get overheated. Because of this ‘new oil’, who can resist the seduction of bringing their outstanding AI company to an IPO? After hard work and passing all the tests to be accredited as a trustworthy AI company, the dynamics of the stock market and the importance of the prosperity (and prestige) of the country change the dynamics of the game. Once the company enters this desired world (a little bit of extra funding can’t harm, can it?), it becomes prey to greedy shareholders pressing it to do more, expand faster, achieve higher profits, take shortcuts, etc. More is better, and if you don’t meet the predicted targets, you’d better do as we say, because otherwise…. What do you do? There are so many other AI companies waiting in line. Beware of the wise advice from some populists who say that there will be only two types of companies: one that is using AI, and the other is dead. (In my calculation, we thus have only one type to begin with, since a dead company isn’t a company!) Since the trillions are flying around and seem to come from nowhere, don’t you want to obey the rules of the GDP, accept the shortcuts, and contribute to increasing the nation’s prosperity?

You get my point, so I don’t need to dive into the rabbit hole of AI and weaponry by fleshing out what we are doing and how a big portion of algorithms is used for weaponry. I don’t know how that industry contributes to a better world.

Overall, the GDP “demands” an annual growth of about 3%. And we need AI to achieve that. Are we only bringing things to the markets that are good for people and our environment? The almighty GDP isn’t going to like that! Recession!!! Panic!!
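To see what that ~3% norm compounds to, a minimal sketch (the 3% figure is the approximate growth expectation mentioned above, not an official target):

```python
# What a fixed annual growth "demand" compounds to: at ~3% per year,
# GDP doubles in roughly 24 years (the classic rule of 72: 72/3 = 24).
def years_to_double(rate: float = 0.03) -> int:
    """Whole years until output doubles at a fixed annual growth rate."""
    level, years = 1.0, 0
    while level < 2.0:
        level *= 1 + rate
        years += 1
    return years

print(years_to_double(0.03))  # 24
print(years_to_double(0.07))  # 11
```

In other words, the “demand” is not a one-off 3% but a doubling of output roughly every generation, which is exactly the treadmill AI is expected to keep running.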

How does the GDP calculate ‘work’ that has been or will be performed by AI, and how does that contribute to its required growth? In light of this potential, we have a huge opportunity to change our current economy for a better one.

But haven’t we created something else that is worthwhile to explore a bit? What about the dependency on electricity?

5. Electricity

According to a report by Bloomberg Technology (https://tinyurl.com/bdhy2km2), power grids worldwide are becoming increasingly vulnerable to cyber-attacks. The report highlights that as utilities turn to sources of renewable energy and add millions of other components like smart meters, they’re rapidly multiplying the number of connections and sensors along their networks, widening the potential for intrusions. The disruptive potential of grid failures makes electricity a key target, particularly for state-based hostile actors.

The MIT Climate Portal (https://tinyurl.com/yckrzks7) states that the electric grid is also vulnerable to extreme weather events, like hurricanes and heat waves, which are becoming more common and intense as our planet warms.

Forbes reports (https://tinyurl.com/y6sjtnkx) that the US electric grid infrastructure is extremely vulnerable to physical incidents, cyber-attacks, and forces of nature.

It is important to note that the vulnerability of electrical grids worldwide is a complex issue that requires a multifaceted approach. Governments, utilities, and other stakeholders must work together to ensure the security and resilience of the power grid infrastructure.

The digital world is incredible. It brought, and will bring, us a lot of good things. One important caveat: it is entirely dependent on power, and so, as a consequence, are we. What will be the value of data sequences from IoT devices in peripheral systems when the networks that transport these data are down for a longer time? Yes, we have emergency power supply systems in mission-critical parts, but only for parts of the value chain. We rely on the assumption that there is always power. What is plan B for a massive power outage when we increasingly depend on AI? You can’t imagine a high-rise without an old-fashioned staircase, can you?

To conclude (for now)

Dear reader, AI is a wicked problem at this stage. It has grown into a systemic phenomenon that can’t be solved by focusing on single parts: regulations per nation-state, reliable code, proper data, ‘referees,’ trade limitations, ethical implications, current dependencies, education, awareness campaigns, open source or not, containment, money, stock markets, economic dynamics, and dozens more parts.

We need to peel the onion layer by layer, return to first principles, and focus on those. The cheapest one is to trust one another. We only have one Earth! Please don’t mess with it! Since trust is probably also the hardest one, we may need to work globally on regulations first, but do it fast: don’t wait until the next Summit for Safe AI, and don’t make it a legal document but use the format of a Trust Manifesto. At the next summit, these trusted regulations should be ready and celebrated with all world leaders. If we, particularly our leaders, cannot do this, we are not taking ourselves, each other, or AI developments seriously.

Let me end by echoing a few critical messages from Buckminster Fuller: “Dare to be naïve. You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.”

Thus, will AI be the new model that makes the existing model obsolete, or will it only make the current inherited model inevitably faster? After all, the current way of thinking is coded in algorithms, because we are still determining what that new model should look like. Would we allow AI to develop such a new model, a model that we can’t even imagine, one that is accurate and positive and whose precessional effect is that all people on the planet are equally important? In that model, AI and humans will co-create decisions and therefore be inclusive, and we will put collective intelligence at the center of life. If we already have so many faces in databases connected to personal identifiers, maybe it is not that difficult to add more variables (consent required!!), as well as all languages, written and spoken; that would make collective decisions even better. Technically doable! It makes leadership, as we know it today, relative.

My humble conclusion about what we are doing now is that things will only accelerate in all domains. Whether that will make the future better is hard to estimate.

And:

“I could commit an exclusively ego-suicide – a personal ego ‘throwaway’.” In other words, he did not want to work for a living or for his family’s advantage; he wanted to ‘contribute to changing the world and benefit all humanity.’

It is not about who has the power; it is about all humans collectively having it (in that new model)!


Paul Epping

November 2023
