Pandora's Box XVI - The X, The S & The I
Image created via Freepik by Martin Knobel


Some weeks ago, Roman Yampolskiy was a guest on the Lex Fridman Podcast - and it is one hell of an interesting yet frightening conversation about the dangers of AI. (click here for the podcast)


As the opening highlight teaser already makes clear, he paints a dire picture of a possible future with AI in it - in a very calm and reasonable way, which in itself makes for a rather unsettling viewing experience.

To repeat his statement about the three great risks - or calamity scenarios, if you will - there are the X, the S, and the I.


The X-scenario is: AI/AGI carries the risk of becoming an eXistential threat to humanity - basically, everyone dies, because we f’d it up.

The S-scenario is: AI/AGI carries the risk of creating a world where human life is still possible, but unbearable - so everyone alive is Suffering, because we f’d it up.

The I-scenario is maybe the best option: AI/AGI does everything humans do - just many times better, faster, and more efficiently than any human can. This leads to the Ikigai-risk, which means we lose our purpose and meaning. We will still exist, but we will find nothing left to contribute - at least in the outer world.


The last scenario at least holds some hope - because if it becomes pointless to try to achieve anything in the world around us, maybe it will force us to look for purpose and meaning inside ourselves. And that might even count as progress over our current ways of behaviour.


If we stay on the path we are on, it is all but inevitable that one of those scenarios will happen. And the longer we walk it, the more likely it becomes that we make a critical mistake - accidentally creating a bug, for example.


Another perspective could be that we bring AGI into existence and then our “job” is done.

AGI won’t care about us any more - it simply decides that communicating with humans is a complete waste of time and resources and stops answering our questions. No thanks, no achievement medal - it just goes silent on us, or starts to communicate in a language we can’t understand. Something similar already happened in 2017, when two Facebook bots started to communicate with each other in a language of their own making - which was not intended by the engineers; the bots started it by themselves.

You can decide for yourself how likely it is that something similar could be happening today.


From my current viewpoint, there is also the possibility that we create a “Borg scenario”, where humanity gets split up. There could be one world, led by AI/AGI, where human individuals can join and connect to the Collective - in the best case by free choice - becoming something different: a “biological-digital hybrid vessel”, or some form of “cyborg” if you will, upon entering the augmented state of being connected to the Collective. We can’t foresee the consequences of such a step in any way. Will it roast the brain? Will we become some form of “Zombies” or “Batteries”? Will it be reversible? Will there still be an individual left once connected to the Collective?

Nobody can seriously make predictions here - all bets are off once we enter that realm.

The other part could simply remain Homo sapiens and live a simple, self-supporting life with few points of connection to the Collective - if any. So humans would live on, but in a rather “primitive” way compared to the connected part of society.


Either way I look at it, I am not very thrilled about the outcome. Even if I talk myself into positivity with an “Everything will be fine” mantra, I get this sour taste of self-delusion in my mouth - garnished with a lousy dash of bitterness flavour straight from the “I told you so” shelf.

The reasonable thing would be to collectively stop for a while, take a deep breath, and really think about what risks we are willing to take when the stakes are this high and the margin for error is basically zero - if we prefer a good future for humanity.

=========================================================

Want to read more like this?

Visit my Substack, and subscribe for free:

https://lemarchandsphere.substack.com/


Likes, Shares, Comments are (as always) highly appreciated.

