#1: Fear of Generative AI - Warranted or Unwarranted?

Yuval Noah Harari (famed historian, philosopher and best-selling author) seems to be the latest to set the cat amongst the pigeons as far as Generative AI is concerned. He talks about how, by mastering the nuances of human language, AI has hacked the operating system of human civilisation.

After all, it is "language - words, sounds and images" that poets, leaders and messiahs have used to influence people through the ages, and now AI seems to have gained mastery of it.

We now run the danger of being beholden to AI masters who can control the way we think and hence the way we act. The future of humanity, he says, is at stake.

Great point.

But as I read that, I was reminded of the similar fearfulness of the ancient Greeks when they first confronted "writing", then a new skill that was quickly supplanting oratory and speech.

Socrates famously believed (as recounted by Plato) that writing would lead to an epidemic of forgetfulness:

"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."?

Socrates, in other words, feared that writing would breed forgetfulness and offer only the appearance of wisdom.

A similar fear, it seems to me, rings true right now: just replace "writing" with "generative AI" and "forgetfulness" with "loss of control".

Writing, however, did not turn out to be quite as destructive as the philosophers thought. In fact, to the contrary, it played a tremendous role in helping the true genius of human civilization bloom and thrive.

Perhaps that is how AI (in all its forms) will play out too, helping shape a new trajectory of human progress.

A philosopher (Socrates) got it wrong then; could another (Yuval Noah Harari) have got it wrong now?

What do I think?

In an interview in late 2020, I spoke at length about the "Fear of AI". Now I can see some of the same fears manifesting in the context of #generativeai (ChatGPT, Bard and others). What I said about AI in general then still holds true for generative #ai now:

First of all, this fear of AI is not actually a fear of AI per se. It is the fear that humans have towards anything new or different, especially on the technological front. It has happened through the ages, from when humans first discovered fire. When mechanized looms were introduced in England, a group of people termed "Luddites" went around breaking them because they thought the looms would take away their jobs as weavers. Horse-cart drivers thought their jobs would go away when cars arrived. Yes, the jobs did go away, but society evolved new kinds of jobs and new kinds of roles.

In the same continuum, AI has now taken the place of the new technology that we don't fully fathom; we don't know what it can do to us, so people are scared of it.

Every piece of technology that has emerged over the years has helped us become better humans, or at least we have strived to use it in a way that helps us become better humans. I'm sure artificial intelligence will help us become better humans, and will expose newer dimensions of the human experience which we have not experienced so far.

When there were no cars, there was some dimension of speed and connectivity missing. Then, when automobiles came about, we discovered it, and that helped us get better. When airplanes came in, they made the world smaller. We may not be aware right now of what doors or avenues artificial intelligence may open. But that doesn't mean we should take the view that all of it will be bad. As an optimist, I believe most of it will be good and some of it will be bad. Every technology is a double-edged sword. But hopefully, ultimately, it will all work out for the betterment of society.

What are your thoughts?

Is Generative AI something to be feared or is it one more thing that will complement human endeavors as many other technologies have through the ages?

(Related postscript: OpenAI's Sam Altman urges A.I. regulation in Senate hearing. The tech executive and lawmakers agreed that new A.I. systems must be regulated. Just how that would happen is not yet clear.)

Stay tuned. More to follow, next week!

Jean Louis Van Belle

Passionate management, systems and development professional

6 months

Hi - I saw this post only after I wrote mine (https://www.dhirubhai.net/pulse/ai-its-impact-society-why-yuval-noah-harari-makes-little-van-belle-log7e/) on Harari's views. I am a bit tougher than you in my judgment: AI is not "unfathomable". We cannot precisely say how it works, but then we can also not retrace each and every step of some computer program doing a lot of parallel calculations. I think Harari's views are plain neo-Luddite. :-/

Tom Carney

Digital Transformation / Creative Automation / Implementation / Process Improvement

1 year

Great first read for me here, Deepak. I haven’t felt afraid of the current generative AIs because I don’t see them as being robustly intelligent enough to pose a true threat. The volume of data they have access to is incomprehensibly large to my mind but I suspect if I could access a similar amount of data then my mind could probably produce some remarkable things as well… albeit nowhere near as quickly. I’ve been stunned by what ChatGPT has been able to create for me in a matter of seconds, but then again I’ve been equally so at how quickly a friend of mine could rattle off a bawdy limerick after a pint or two. I do wonder if our reliance on mathematical models will always pave the way for AI improvements. Perhaps the moment things will change is when a self-altering/self-improving AI begins to rewrite its own algorithms to accomplish things that we haven’t yet advanced mathematically enough to adequately describe. I’ve read about results of evolutionary programming experiments where the program that was eventually produced after x generations couldn’t be parsed out because it just seemed to have too many lines that were garbled or nonsensical, yet the values it produced were extremely accurate for what was being sought after.

Narayan Srinivasan

Founder | Leadership & Management Consultant | Career & Executive Coach

1 year

The fear of change and eventual adaptation and adoption - well captured in your article! I like how you have woven stories from the past on how the human mind (incl. great philosophers) reflected and reacted to a changing landscape. On one hand humans are reticent to change, on the other they are always innovating. So the best we can do is to learn to live in our ever changing world!

Gamiel Gran

Chief Commercial Officer, Mayfield | Empowering Entrepreneurs to Scale Successful Ventures | Accelerating Product-Market Fit and Early Customer Adoption | Connecting CIOs, CTOs, and CXOs to Drive Corporate Innovation

1 year

Deepak Seth - We indeed should be asking questions as you have posed, as is true with all new innovations. This case indeed may be more far reaching, touching multiple markets and industries simultaneously. As an eternal optimist, I anticipate great improvements for humans and our planet by advancing knowledge, productivity, and more. I also expect there will be hard learnings along the way. Good time for an open mind and learning.

Interesting point, Deepak. AI - in my view it's early days but the potential is huge; then again, tech is a great servant and a terrible master. Looking forward to the next one!
