Murderous AI is absurd

1. Of course nothing will happen to us. AI won't kill us. The whole idea is completely idiotic.

2. Eliezer's arguments are coherent and logical, but they make some assumptions that are debatable.

  • Once I read a short, dense book by Józef M. Bocheński entitled "The Methods of Contemporary Thought".
  • Bocheński explains there that Euclidean geometry is based on axioms.
  • Axioms are taken from outside the system itself; they are necessary to be able to reason about a given topic at all, and they define the key concepts of all further reasoning.
  • Axioms are treated as given and obvious, but of course you can build a coherent system on inverse or entirely different axioms.
  • You can create a geometry based on axioms other than Euclid's, and such geometries exist. In geometry and mathematics this is not a problem, because the subject is abstract: it does not have to reflect physical reality, it only has to be consistent at the level of the numbers.
  • In other fields we also build coherent, logical arguments on top of axioms. The problem arises at the level of the axioms' agreement with reality. If the axioms are wrong, then all the coherent logic we build on them is useless. Everything falls apart, because the world works differently.
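As a standard textbook illustration of the point above (this example is mine, not from the original post): changing a single axiom, Euclid's parallel postulate, yields three different but internally consistent geometries, each with a different answer to the same elementary question.

```latex
% Standard results: the sum of a triangle's interior angles
% depends on which form of the parallel postulate is assumed.

% Euclidean geometry (exactly one parallel line through an external point):
\alpha + \beta + \gamma = \pi

% Hyperbolic geometry (infinitely many parallels through an external point):
\alpha + \beta + \gamma < \pi

% Elliptic geometry (no parallels at all):
\alpha + \beta + \gamma > \pi
```

All three systems are coherent; which one describes physical space is an empirical question, not a logical one.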

3. I think that's the main problem with Eliezer's argument: it's coherent, but it's based on axioms that are fantasy.

4. The assumption that AGI is possible is debatable. ChatGPT is a great project, but it falls far short of AGI. Creating an AGI may turn out to be physically impossible, just as it has so far proved impossible to create a genuinely good VR experience, for example.

5. We often indulge in the fantasy that we can do anything we dream of if we put in enough effort. Unfortunately, sometimes we can't.

6. AGI doesn't exist. It's a fantasy, a dream, an object of philosophical and ethical debate. It's an answer to the question of what will happen at the end of the technical development of AI. None of that changes the fact that AGI doesn't exist; it's speculation.

7. We can dream and talk about what the world will look like after AGI is created, but that's fantasy.

8. Let's assume, however, that AGI is created. Why would it kill us?

9. Eliezer argues that it's because we are made of matter that the AGI will want to use for other purposes.

10. This is another of Eliezer's axioms, and it seems absurd once we actually think about it.

11. There is an unlimited amount of matter in the universe. There are certainly cheaper sources of matter and energy than converting people into anything.

12. Since the AGI is supposed to know everything, it will do things optimally, spending as little energy as possible.

13. From the perspective of an AGI that is supposed to be indifferent to people, killing them looks like a side project: energy-expensive and with a poor return on investment.

14. If we take these two axioms out of Yudkowsky's argument, the sad theory of the murderous AGI collapses. There is nothing to worry about; we can go back to building cool products on top of generative AI and feel good about what we do.

15. Generative AI is just computer programs that are supposed to make our lives easier. They won't kill us, and they won't even take our jobs. They simply let us do more things in less time.

16. The real problems with AI will be different from what Eliezer fantasizes about: fake news, deepfakes, new addictive and alienating apps, hidden biases, copyright, autonomous weapons, and so on. These are worth talking about seriously. Murderous AGI is a derailment of that discussion.

Maciej Szandar

CEO Invest It Sp. z o.o.

In my opinion, as long as we don't understand what consciousness is and how it works, it will be difficult to create a conscious AGI. Some argue that it is not possible at all: Roger Penrose, for example, propounds the theory that consciousness is related to quantum processes that we cannot simulate (https://www.sciencedirect.com/science/article/abs/pii/0378475496804769). The second problem seems to be a fundamental philosophical one: the problem of the meaning of life (omg!). Since we cannot objectively define what the meaning of life is (everyone perceives it individually, according to their own beliefs, religion, etc.), it is difficult to define such a meaning (a purpose of existence) for AGI, unless we assume that AGI will define its own. If so, what values will guide it? Are there objective good and evil? If so, can they be quantified or calculated? Is evolution the goal, and is the emergence of AGI a natural evolutionary step? Indeed, the sheer number of problems is overwhelming, let alone their solution. Anyway, thanks for your interesting posts! :) ps1. I recommend an interview with Wojciech Zaremba, co-founder of OpenAI: https://www.youtube.com/watch?v=U5OD8MjYnOM ps2. the amount of matter in the universe is not infinite

Marek Niezgoda

Senior Technical Specialist

A very insightful analysis, Mr. Jakub. I was already working on AI back in my student days. Only laypeople imagine that everything can be mechanically automated, even that intelligence can be turned into machine code. Best regards, Marek

More articles by Kuba Filipowski
