Will machines eventually have the answers to all of our questions?

Computer Science has had an incredible impact on many fields. As an undergraduate you can take a course in computational geometry or computational biology. But what about computational philosophy? At first, this might seem absurd. After all, what could one of the oldest fields of study have in common with one of the newest?

Given the recent popularity of Alan Turing, I thought I'd write about one of those observations that made an impression on me when I was young. It continues to fascinate me, perhaps even more so because of the recent advances in the Q&A space: IBM Watson beating Jeopardy! champions and seemingly having more 'knowledge' than any human, Wolfram Alpha answering complex questions, and Apple, Google, Microsoft and Amazon all building software that can answer questions. How far can we push this? How powerful will computers become?

The Human Question In this post I will give you the answer to the above question. When you think about it, the human mind is just a computer with a very different architecture. So answering the above computer science question helps us answer arguably one of the most profound philosophical questions of our species: how much can humans ever understand? This is probably a 10,000-year-old question, and seemingly one of those questions we could argue about forever and never settle. But this is exactly where computer science has made tremendous progress in the last century, and I'll give you that neat insight into the answer. No background in computer science, math, logic or philosophy is needed.

Understanding Understanding First, let's think about what we mean by human understanding. When learning something new, say whether a number is prime, one might say 'I do not understand why the number 42 is not prime.' To that, we could respond that a prime number is one that cannot be divided by any number other than itself and 1, but 2 divides 42, and therefore 42 is not prime. What we did was take a new concept and break it down into parts you already understand. In other words, we offered a proof of a statement. That statement can now be used in proofs of other statements. We understand by logically proving something; understanding and proofs are intimately tied together. So we can also ask: how much can we prove?
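The argument above, that 2 divides 42 so 42 is not prime, can be turned directly into a short program. A minimal sketch in Python (trial division, chosen for clarity rather than speed):

```python
def is_prime(n):
    """Check primality by trial division: n is prime if no number
    between 2 and sqrt(n) divides it."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # found a divisor; e.g. 2 divides 42
        d += 1
    return True
```

Running `is_prime(42)` finds the divisor 2 immediately, which is exactly the proof step spelled out above.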

Hilbert & Computability In the 1920s, Hilbert, one of the top mathematicians of his day, claimed in effect that everything is computable: he believed that a mechanical procedure could, in principle, settle every precise mathematical question, and his famous Entscheidungsproblem asked for exactly such a procedure. As a result, you might think that humans have a shot at understanding anything. This, intuitively, is what we have grown to believe over many millennia.

Gödel & Incompleteness In the early 1930s, Gödel stunned the world with his incompleteness theorems. A system of logic is consistent when we cannot arrive at a contradiction, that is, a statement proved to be simultaneously true and false. We expect consistency to hold in life; you don't question it. What he showed is that if such a system is consistent (which we take for granted, and I am being only a little bit lax here), then the system is incomplete: not everything can be proven in it. This is a severe blow to our entire way of thinking. We are used to asking why and how, but he showed that there are statements that can never be proven. Incredible.

Turing & the HALTING problem In the late 1930s, Turing, likely inspired by Gödel, showed a similar result in computation. He exhibited a problem that no computer will ever solve. We are talking about a precise, logical problem (of the same flavor as testing whether a number is prime) for which one can never write a computer program that answers correctly on every input. The problem is called the HALTING problem: given a program and its input, decide whether the program eventually stops. Every computer science undergraduate learns it because it marks the limits of computability, the problems that we will never solve. How many times a day do we encounter a problem and never pause to think that perhaps it is unsolvable? Side note: when I was an engineer at Facebook, a co-worker was building HPHP, Facebook's PHP-to-C++ compiler, and I asked him how much HPHP sped up the PHP code. He responded, half jokingly: 'remember the halting problem?' That was the one and only time in my life someone used it in real life to tell me he couldn't answer a question because the question cannot be answered.
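To make the statement concrete: no program can decide halting in general, but we can always run a program for a bounded number of steps. The catch is that "did not stop yet" never becomes "will never stop". A small Python sketch (the generator-based step counting is my own illustrative setup, not anything from the original result):

```python
def halts_within(program, arg, max_steps):
    """Run `program` (written as a generator that yields once per step)
    on `arg` for at most `max_steps` steps.
    True  -> it definitely halted within the budget.
    False -> inconclusive: it might halt one step later, or never."""
    execution = program(arg)
    for _ in range(max_steps):
        try:
            next(execution)
        except StopIteration:
            return True
    return False

def countdown(n):        # always halts, after n steps
    while n > 0:
        n -= 1
        yield

def loop_forever(_):     # never halts
    while True:
        yield
```

No matter how large we make `max_steps`, a `False` answer never distinguishes "slow" from "never"; that gap is exactly what Turing proved cannot be closed by any program.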

At this point you might think that we are done. We have demonstrated the limits of logic and shown that science, math and logic have serious limits, since you cannot prove everything. We have shown that computers cannot solve everything. Both results are mind-blowing, nothing short of classics. But we are not done yet. What I am about to tell you is something that typically no one learns in school.

Kleene & Two Maps Let's talk about a couple more things first. First, we can translate provability into computation: a statement in a logical system is provable exactly when a certain computer program halts, namely the program that mechanically enumerates all possible proofs and stops the moment it finds one for the statement. So deciding provability is as hard as solving the HALTING problem. This bridges the results of Gödel and Turing, and in a way bridges the destinies of math, logic and computer science into one. Second, we can define problems and programs so that each computer program solves a single, unique problem; the correspondence is one-to-one.
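The first bridge can be sketched concretely. In a toy logical system of my own invention (one axiom, '0=0', and one rule: from 'X=Y' you may derive 'sX=sY'), a breadth-first proof search halts exactly on the provable statements:

```python
def search_for_proof(statement):
    """Enumerate all theorems of a toy system, oldest first.
    Axiom: '0=0'.  Rule: from 'X=Y' derive 'sX=sY'.
    Halts and returns the derivation iff `statement` is provable;
    on an unprovable statement (e.g. 's0=0') it runs forever."""
    frontier = ['0=0']   # theorems waiting to be explored
    derivation = []
    while True:
        theorem = frontier.pop(0)
        derivation.append(theorem)
        if theorem == statement:
            return derivation
        left, right = theorem.split('=')
        frontier.append('s' + left + '=s' + right)
```

Deciding whether this search ever returns is precisely a halting question, and the same construction works for a real proof system; that is where Gödel's and Turing's results meet.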

The Answer After centuries of thought, with the baton of theorems passed from one big thinker to another throughout the human race, we now have everything we need. We can try a simple counting argument: count the number of tasks one can ask a computer to solve, and the number of computer programs one can write. If, for example, there were twice as many tasks as programs, then by the pigeonhole principle at least half of all problems could never be solved by any computer.

It turns out that the number of tasks you can ask a computer to solve is vastly larger than the number of computer programs that can be written. So much larger that if I played God and (uniformly) randomly picked a single task from the entire set of possible tasks, the probability that a computer program can be written to solve it is infinitesimal (essentially 0). If the entire universe were the size of the set of all tasks, the portion we will ever be able to solve (even in a billion quadrillion years) would forever be smaller than a single grain of sand. In the very grand scheme of things, the norm is not that we can solve problems; it is extremely abnormal that we can.
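Why are there so many more tasks than programs? Every program is a finite string of symbols, so all programs can be listed out one by one; Cantor's diagonal trick then manufactures a task that disagrees with every entry on any such list. A small illustration in Python (the three sample 'problems' are my own, chosen just to have something to diagonalize against):

```python
def diagonal(problems):
    """Given any numbered list of decision problems (functions int -> bool),
    return a new problem that differs from the n-th one on input n."""
    return lambda n: not problems[n](n)

# Three sample decision problems, numbered 0, 1, 2.
problems = [
    lambda n: n % 2 == 0,   # problem 0: is n even?
    lambda n: n > 10,       # problem 1: is n greater than 10?
    lambda n: n == 0,       # problem 2: is n zero?
]

new_problem = diagonal(problems)
```

The diagonal problem disagrees with problem n on input n, so it appears nowhere in the list. Since this works against any listing, including the listing of all programs, solvable problems are the rare exception.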

R. Ryan Williams

Professor at Massachusetts Institute of Technology

9y

Fun essay, Sri! If only they took my course, they would learn this in school ;)

Vamshi Ambati, PhD

Entrepreneur in Enterprise AI & Data | 2x Founder | 3x Exits | @Predera, @Base CRM, @PayPal, @Carnegie Mellon, @IIIT, @ISB

9y

A very thoughtful piece, Srinath!
