Are We Conveniently Ignoring The Potential Threats of AI?

I was chatting to a friend about AI, as many of us are these days, and he shared links to a few videos he’d come across, one of them being: Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

https://youtu.be/41SUp-TRVlg

While I am using AI, and our team of Virtual Assistants is being trained in and using AI for their clients, I am equally concerned. As a father of two young daughters, I can’t help but think about the impact of AI on future generations.

In my office, I have toy figures of ‘Data’ from Star Trek and R2-D2 from Star Wars that symbolize the ‘light’ side of AI evolution. I also have a ‘Terminator’ figure, representing the ‘dark’ side of AI, that I sometimes use in videos on the subject, but which also serves as a reminder of the two paths ahead of us.

Having watched about 30 minutes of the interview, I decided to pose a question in one of the AI WhatsApp groups I am part of, and I was quite surprised by how dismissive many of the group members were of what I thought was a legitimate concern that needs to be considered.

Here’s what I asked the group:

“The rise and dangers of AI.

I’m sure most of us have heard of the call to pause the development of AI.

I read the headlines, but did not have the opportunity to read deeper into the story until the weekend.

To simplify: the request was to pause AI development and to rapidly work on HI.

HI = Human Intelligence

The realization was that Human Intelligence, morals, and ethics are NOT where they need to be to safely direct the development and use of AI.

Of the many possible outcomes for the future of Humanity, most, as of today, do not point to a Star Trek utopia with AI helping humanity to boldly go where no human has gone before.

On the contrary… the outlook, according to the experts, looks a little bleak.

Here’s a reality-check interview, not another ‘how to use ChatGPT to make some cool graphics or videos’ piece.”

Now, the video is four hours long, so I was not really expecting anyone to watch it; even I have not watched more than 30 minutes so far. But what caught me a little off guard was how dismissive they were of the topic.

Before I share their perspectives, here is a point-by-point summary of the video by ChatGPT:

“Eliezer Yudkowsky is an AI researcher and co-founder of the Machine Intelligence Research Institute (MIRI).

Yudkowsky begins the video by explaining his concern about the potential dangers of AI, stating that "AI will kill us" if it is not developed and controlled properly. He argues that AI has the potential to surpass human intelligence and that it could become a threat to human existence if it is not aligned with human values.

Yudkowsky discusses the concept of "alignment" and how it is crucial for ensuring that an AI's goals are aligned with human values. He notes that alignment is difficult because human values are complex and there is no straightforward way to program an AI to align with them. He suggests that alignment is an active research area and that it requires careful consideration of the potential risks and benefits of AI.

Yudkowsky talks about the nature of intelligence and how it differs from consciousness. He argues that intelligence is about optimization and that it is not the same as consciousness or emotions. He suggests that developing AI that is aligned with human values requires understanding the nature of intelligence and how it can be aligned with human goals.

Yudkowsky mentions various science fiction stories that explore the potential risks and benefits of AI. He suggests that science fiction can help us think about the future of AI in a more nuanced way and that it can inspire us to consider the potential consequences of AI development.

Yudkowsky emphasizes the importance of rationality and critical thinking in navigating the potential risks and benefits of AI development. He argues that we need to be careful not to be swayed by biases and emotions and that we need to think carefully about the potential risks and benefits of AI. He suggests that this requires a deep understanding of human values and an appreciation of the potential consequences of AI development.

Overall, the video provides a thought-provoking exploration of the potential risks and benefits of AI development, and highlights the importance of alignment, understanding the nature of intelligence, science fiction, and rationality in navigating this complex and rapidly evolving field.”
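
As an aside, the ‘alignment’ point in that summary can be made concrete with a small toy sketch. This is purely my own illustration, not anything from the interview: the scenario, the numbers, and the ‘cleanliness’ objective are all made up. What it shows is that an optimizer maximizes the objective it is actually given, not the intent behind it.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    cleanliness_gain: float  # what the written (proxy) objective rewards
    damage: float            # the unstated human value at stake

ACTIONS = [
    Action("tidy the shelves", 2.0, 0.0),
    Action("vacuum the floor", 3.0, 0.0),
    Action("throw everything away", 9.0, 8.0),
]

def proxy_objective(a: Action) -> float:
    # The objective we managed to write down: "maximize cleanliness".
    return a.cleanliness_gain

def intended_objective(a: Action) -> float:
    # What we actually wanted but never specified: clean the room
    # WITHOUT destroying our belongings. (The 10.0 weight is arbitrary.)
    return a.cleanliness_gain - 10.0 * a.damage

best = max(ACTIONS, key=proxy_objective)
print(best.name)                 # -> throw everything away
print(intended_objective(best))  # -> -71.0: best on the proxy, worst on intent

The toy version is trivially fixable, of course; the hard problem Yudkowsky describes is that real human values are far too complex to write down as a formula, and a more capable optimizer only gets better at finding the highest-scoring action wherever the written objective and our intent diverge.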

I am not going to post any individual answers, but here is a summary of the responses:

Person 1 is a heavy user of Generative AI and leads research on it. They know the opportunities and threats of these models, as well as their strengths and weaknesses. They are working with 40 scientists and machine learning experts to better understand the technology and find solutions.

Person 2 believes that the most important work one can do for the world is the work they do on themselves. They don't believe in the idea of everyone cooperating to help two people each, and instead advocate focusing on oneself to make oneself happy. They argue that unhappy people hurt people, and that the core of what causes problems in the world is busybodies trying to meddle in other people's affairs, an egotistical attitude of thinking only they have the ability to change certain things in the world, and/or a personal dissatisfaction with how things are. Person 2 thinks that if we focus on ourselves and develop ourselves to be unconditionally happy, then the personal requirements for happiness are lowered, and whatever happens, we will still be happy. They argue that we can't effectively and permanently control other people, and that being concerned about what other people will do is therefore unproductive. Person 2 suggests focusing on oneself and showing people what life could be like if they just let go and do the same.

Person 3 advises taking such discussions with a pinch of salt because there are many scientists and researchers who are quacks. They argue that scientists and researchers can be lobbied to say anything, and that one must follow the money to find the motives. Person 3 suggests being cautious about believing everything that scientists and researchers say.

Person 4 believes that there is more ego and politics in science than one might think. They argue that some people are in science for the truth, while others are in it for the power. Person 4 advises not to take all research at face value, as science has always been about testing the best hypothesis with an imperfect method. The method is imperfect because it is carried out by imperfect people in an imperfect system, but it is still a system of progress. Person 4 suggests being aware of the limitations of the scientific method.

In conclusion, the dangers of AI require careful consideration and attention. It is important to have a deep understanding of the opportunities and threats posed by AI models, as well as their strengths and weaknesses. At the same time, one should be cautious about believing everything that scientists and researchers say, as there may be lobbying and political motivations at play.

When discussing the implications of AI, it is essential to consider the potential risks and benefits, and to avoid naively assuming that everyone will cooperate for the greater good. Ultimately, we must remain vigilant and aware of the limitations of the scientific method and the need for ongoing adaptation and improvement in our understanding of AI.

With that said, as a father, I won’t BS you: I’m kinda concerned, if truth be told!


Beejel Parmar

CEO/Founder

www.BeeEPiCoutsourcing.com


Tech/AI-Enabled Human Virtual Assistant Services:

programs starting as low as $5 a day, part-time,

and a scalable home-based remote team.


