When will AI be ready to answer the tough questions - like is there a God?
image by Seanbatty @ Pixabay.com


What is a good life? Is there free will? And will we ever accept the answers from an AI as meaningful?

Looking at the state of AI development today, these questions have started to haunt me. I must admit, if I had had the option (a secure lifelong income), I would probably have chosen to study philosophy instead of computer science. But living in the 21st century in Eastern Europe, that did not seem like a viable survival tactic.

Working as an IP Video professional by day and listening to audiobooks about philosophy at night, in 2018 I am really wondering:

  • How far off are we from being ready to train an AI to engage in philosophical discourse?
  • Will we be brave enough to ask such an AI for its opinion on the tough questions?
  • Will we be able to accept the answers that we get - will they have any meaning?

A great article about the biases that we build into the things we create leaves me skeptical about what kind of "subjectivity" might be built into a philosophical AI. On the other hand, we must not forget that objectivity is just the sum of a large number of non-contradictory subjective ideas. If we can teach it most philosophical views, maybe an AI can give us a non-subjective answer to those ever-present questions. But will that answer make any sense, precisely because it is non-subjective?

In my mind, as long as an AI is trying to answer based on the things it has learned, we are not there yet. I will personally be impressed (and a tiny bit scared for the purpose of humans) once we reach a point where AI starts to ask new philosophical questions - questions not yet raised by humans. I truly hope it happens in my lifetime. That would be a crowning achievement of the human intellect, at least from my point of view, and it would surely open up a completely new philosophical field.

Any AI experts out there who can share their views or progress?

Thank you for any and all comments.

Amir Smajević

Head of Professional Services at NFON | Engineer | Telco/IT professional | Co-Founder


Reading the sentence "A great article about the biases that we build into the things we create leaves me skeptical about what kind of 'subjectivity' might be built into a philosophical AI," a somewhat funny question occurred to me... namely, assuming that two AIs were built by two different teams, e.g. team A consisting only of Chinese engineers and team B only of US engineers (setting aside the effects of globalization) - would it be safe to assume that those two AIs might differ in something we (humans) call "mentality"?
