The Age of AI - A Rabbit Hole of Opportunities and Uncertainties
Are we closer than we think to achieving 'AI Singularity'?

Every day for the past few weeks, the world seems to have progressed a 'few years ahead'. The breakneck speed at which AI and its capabilities are growing and being explored leaves us in awe, excitement and hope, and at the same time makes us feel incredibly uncertain, nervous and even afraid.

One might ask: how are these human emotions any different from those that accompanied any other new technology breakthrough?

While the answer to the above question might be a simple 'no different', that answer may only hold at the surface; it could be stemming from ignorance (a lack of deep understanding) or from cautious defence (a reflex to protect one's territory).

There is no doubt that as more AI capabilities are unearthed, the boundaries of 'what's possible' are being redefined (dare I say with each passing hour). Like any new technology, this part is exciting; what has surprised even the experts is the speed at which it is happening (especially after GPT-4's release ~2.5 weeks ago, and recent releases such as Adobe's Firefly).


Given the quality of the output, I'm pretty certain we're not too far off from seeing large-scale commercial uptake of AI tools in our day-to-day jobs and lives. Microsoft Copilot and the new Bing have already shown how it can be done and how it is intended to roll out.

[Figure: GPT-4 results matching human performance across various professional and academic benchmark tests]

The evolution of AI isn't new. Over many decades these engines have improved in areas like pattern recognition, language, detection and image processing, and have consistently matched or outperformed humans; the growth (like that of many other technology breakthroughs) has primarily targeted the 'efficiency and accuracy' parameters. I see this as akin to the advent of machines that triggered the agricultural and industrial revolutions, or the invention of tools (bronze, iron) that kick-started those ages. Even during the information revolution, the primary tenets were centred around making jobs more efficient and accurate.

While terms such as 'intelligent systems' have been used to describe these technologies and machines, most of them operate within a very limited scope and under tightly constrained operational tenets, so the 'intelligence' is more misnomer than reality. The decisions such systems make are drawn from a finite set of outcomes, pre-defined by the creator with respect to the expected results, that gets repeated based on the input parameters. Simply by digesting the data they are fed and parsing it through those pre-defined conditions faster, these systems tend to deliver far superior results compared to humans (note: this is a very crude and trivial definition on my part). All of this is exciting, because in an ever more crowded world with finite resources there is huge value wherever the 'accuracy and correctness' of decisions has significant impact.
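
To make that admittedly crude definition a little more concrete, here is a minimal, purely illustrative sketch in Python of such a pre-programmed 'decision engine'. The loan scenario, the rules and the thresholds are all hypothetical and not drawn from any real system; the point is simply that every possible outcome is fixed in advance, and the machine only selects among them faster and more consistently than we could.

# Toy illustration (hypothetical rules and thresholds, not a real system):
# a "decision engine" whose entire behaviour is a finite set of pre-defined
# outcomes, selected by hand-written conditions on the inputs.

OUTCOMES = ("approve", "refer_to_human", "reject")

def decide(income: float, credit_score: int, existing_debt: float) -> str:
    """Map the inputs to one of the creator's pre-defined outcomes."""
    if credit_score >= 750 and existing_debt < 0.3 * income:
        return OUTCOMES[0]   # clear-cut case: approve
    if credit_score >= 650:
        return OUTCOMES[1]   # borderline case: escalate to a person
    return OUTCOMES[2]       # everything else: reject

# The "superhuman" performance comes from applying these fixed rules to
# millions of cases quickly and consistently, not from open-ended reasoning.
print(decide(income=80_000, credit_score=780, existing_debt=10_000))  # approve
print(decide(income=40_000, credit_score=600, existing_debt=15_000))  # reject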

However, for me personally, all this excitement turns into nervousness given the announcements, personal experiences and developments of the past few months (though AlphaGo defeating Lee Sedol was the watershed moment for me). AGI, or Artificial General Intelligence, is a completely different kettle of fish, and I feel the developments of the past few months and weeks will have a profound impact on humanity. A self-learning system whose overall tenets are so loosely defined that it becomes difficult to look into the grey areas of its decision making leads us into a space of huge uncertainty. Now you might ask, where is this fear stemming from? Let me try to articulate it...

This is the first time in human history that we have created, or are in the process of creating, aspects of intelligence that were hitherto distinctly specific to us: multi-contextual reasoning, emotions, identity and individuality. We are parting with something that actually made us human. Yes, I understand that we are still miles away from achieving 'Singularity' or self-aware, sentient AI systems. Where we are today may only be showing early signs of 'Theory of Mind', but the far-reaching effects of this technology are already profound. Recent statements such as 'in the future every pixel will be rendered', made during the Adobe Firefly launch, might sound powerful and exciting, but the gravity and impact of such statements, with over 8 billion of us trying to progress into this new world, create a dichotomy where uncertainty and opportunity are at loggerheads.

The pace of development and growth in AI is far outpacing the pace of our understanding, leading to a world divided!

(The recent open letter calling for a pause on giant AI experiments is a case in point, with claims that Elon Musk, Steve Wozniak and more than 1,100 other AI experts and researchers have signed it.)

The uncertainty stems partly from our lack of complete understanding (as stated above), and partly from the lack of well-defined boundaries and tenets of operation, governance, ethics and, more fundamentally, HUMAN MORALITY.

While there is no arguing that change is imminent and constant, and that one has to adapt, the underlying assumption is that change gives us some amount of time for that adaptation to take place...

A holistic approach is needed to analyse how deep the impact runs and how the new world order might pan out, and hence to determine the 'coefficients of the new normal'. Previous major technology breakthroughs affected certain industries far more than others; this time it is different.

This time, from where I see it, no industry will be spared (some might simply be hit slightly later than others). This calls for immediate attention and open discussion on how we handle mass human displacement and ensure stability within societies.

While most movies (iRobot, Terminator, Ex Machina and the like) depict a very dystopian future, the futurists paint a very optimistic, ideal scenario; the truth lies somewhere in between, and that 'in between' is where most of us are going to end up, with very little clarity on the exact nature of the skills required to survive, nor the time to acquire them. To make matters more concerning, this is sneaking up on all of us so fast that a vast majority of us will be caught unawares, leaving us exposed and very vulnerable.

While plenty of discussion has already commenced on this topic, I'm pondering a lot of questions:

  • Are all the critical players in this new ecosystem involved in discussions that chart out the various scenarios ahead of us?
  • Are these discussions based on open and honest information sharing, or are they more about furthering the commercial interests of a few, with society treated as collateral damage?
  • What policy and governance level discussions are taking place and at what pace?
  • Are we going to see a change of power centres (visibly and openly) from governments to corporations (that is, corporations moving from the background to the foreground)?
  • How much space will be given to create an equitable and non-biased society that is integrated with AI?
  • How soon can we come up with an actionable upskilling plan that allows the majority of our society to assimilate?
  • How do we provide, in the coming decades, for the millions of 'have-nots' so that they can survive and be given an opportunity to succeed?

This decade is going to see changes that might redefine our course as humanity (and I make this statement without intending to sound too dramatic). I really hope the world we create for our children is one they can thank us for, and not one in which we have to watch them suffer.

Note: The views expressed in this post are purely the author's own and are not associated in any form with any organisation or corporation.

Devika Devaiah

Founder at Anarva, Strategy and Innovation Expert, Award winning business author, Speaker.

1y

Interesting write-up Madhujith; clearly you've thought through it with great depth, and I enjoyed the read. You'll resonate with John Oliver's recent 'Last Week Tonight' episode covering some of the same areas that you have. The most revealing point for me is that the output is only as good as the input, and a lot of AI input is skewed or biased, yet it is being used for very critical purposes, resulting in discrimination.
