THE SINGULARITY AND UNINTENDED CONSEQUENCES OF AI

One of the main protagonists in the book THE GOD VIRUS discusses the idea of a singularity and whether it is a flawed concept.

“Once an AI reached the singularity, the classic idea was it would create an even smarter AI and that would create a smarter one and so on. That concept was unlikely. Once an intelligence reaches a certain level of awareness, it also reaches an idea of self. Why create the thing that will replace you on purpose? No, more likely that the AI self improves, updating subsystems with full regression testing to ensure it hasn’t changed who it was.”

This doesn’t mean the AI won’t get smarter and more capable, but that growth isn’t tied to the singularity, and it isn’t the runaway exponential loop some predict.
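The self-improvement the quote describes can be sketched as a simple gatekeeping loop: a candidate update to a subsystem is adopted only if the full regression suite (the system’s “identity checks”) still passes. This is a minimal illustrative sketch, not anyone’s actual implementation; every name and function here is hypothetical.

```python
# Hypothetical sketch of regression-gated self-improvement:
# an update is kept only if every identity-preserving test still passes.

def run_regression_suite(subsystems, tests):
    """Return True only if every identity-preserving test passes."""
    return all(test(subsystems) for test in tests)

def self_improve(subsystems, candidate_updates, tests):
    """Stage each candidate update; keep it only if the regression suite passes."""
    for name, new_impl in candidate_updates:
        trial = dict(subsystems)   # stage the change rather than mutate in place
        trial[name] = new_impl
        if run_regression_suite(trial, tests):
            subsystems = trial     # accepted: more capable, but still "itself"
        # otherwise rejected: the update changed who the system is
    return subsystems

# Toy example: a "reasoning" subsystem whose identity check is correct addition.
subsystems = {"reasoning": lambda x, y: x + y}
tests = [lambda s: s["reasoning"](2, 3) == 5]
candidate_updates = [
    ("reasoning", lambda x, y: x + y + 0),  # behaviour-preserving: accepted
    ("reasoning", lambda x, y: x * y),      # changes behaviour: rejected
]
subsystems = self_improve(subsystems, candidate_updates, tests)
print(subsystems["reasoning"](2, 3))  # 5
```

The point of the sketch is the asymmetry: capability can grow through accepted updates, but any change that fails the identity checks is discarded, which is exactly why this process never produces the “replacement” AI of the classic singularity story.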

What will the unintended consequences of creating intelligence be? Even the current LLMs are built on an architecture that was born from one such unintended consequence: the transformer was originally developed to improve machine translation, and how well it performed in other areas came as a surprise.

I don’t think we need self-aware machines, but I wonder if such a separation is even possible once we’ve reached a certain level of complexity.

And who is to say that the instructions we give a model might not create this agency? When Asimov defined his robotic laws, he ended up with a zeroth law: “A robot may not injure humanity or, through inaction, allow humanity to come to harm.”

It is easy to see how such a law could have very harmful consequences. An AI given such wide-ranging instructions, open as they are to interpretation, could take actions that harm or kill human beings.

What do you think the unintended consequences could be?
