Artificial Intelligence: Going to the basics
I came across a few questions related to Artificial Intelligence, the most basic ones being:
- What is intelligence?
- What makes machines intelligent?
- Do you judge a machine's intelligence by how closely it can simulate human intelligence?
These questions are either very tough or very simple, depending on the perspective. Case in point: I can't really explain 3D if I am sitting in 2D, but from the perspective of 3D, 2D is quite simple.
Let me explain: all intelligence is geared towards an end. It could be the act of picking up a cup, or it could be fulfilling an internal desire to do something - which, by the way, can be explained by a set of core desires and needs (Maslow, maybe?). But that's getting into philosophy, and I'll avoid discussing it. So for the purpose of this discussion, we will assume that humans have needs and desires, and that they set goals to fulfil those needs/desires. Now we, as a species, since time immemorial, have been working towards fulfilling those needs and desires (and sometimes we mix the two!). Of course, there are goals in terms of health etc., but then the question comes - health for what? Similarly, goals could be related to avoiding or predicting certain things - terrorist attacks, accidents etc. But it basically comes down to the needs/desires and experiencing life.
Intelligence, in the current context, is assumed to be how smartly and quickly we can achieve those goals and then stay there for the maximum amount of time (more on this in the next three paragraphs) - and sometimes, groups of people get there together. A system of collective intelligence takes shape automatically as people adjust for one another's desires/goals and capabilities. Currency is a medium through which some of these transactions occur.
Some anomalies which the above assumptions miss (but without these assumptions, we can't really have a logical discussion):
- Someone might figure out that running after desires is actually the cause of suffering and stop running after them! (Buddha?)
- Collective intelligence is assumed to be more 'intelligent' than individual intelligence. Mob behaviour, however, points to the fact that there are conditions under which collective intelligence degrades to a level below individual intelligence.
What AI is doing is helping us reach the point of fulfilling our desires/needs faster... it's making machines part of our collective intelligence pool. And as technology grows, by the logic of macroeconomics, it replaces the less efficient system. It's not a direct replacement, and there is nothing to be scared of, as it is all for us and to help us fulfil our needs/desires. There are, of course, the advances indicated and understood by the case of AI drones, which are built with a specific purpose in mind. These aren't considered to be 'PURE' AI, as they are used as tools and can think only of sub-problems, e.g., how to navigate a terrain.
(By the way, there are some deep insights on collective intelligence in the Indian spiritual texts, and reading some of them makes me sure they had all this figured out long ago. But that's not part of this discussion.)
At the same time, just like in a competition, it's all about one person achieving a particular desire, getting bored of it and passing the opportunity to another person to fulfil a similar desire. We see this behaviour exhibited by billionaires who are now going on a charity spree. And not everyone wants to be a billionaire; desires are different for different people, and as humans, we understand that if we were fed the same dish every day, however much we liked it, eventually we would want a change.
Seen from this perspective, machines are as intelligent as they are helpful in achieving our desires/needs (which help us set goals), which are different for different people but can be traced back to a set of core desires/needs. And since machines can't/don't get bored, their intelligence cannot be compared to human intelligence. Also, some goals might relate to things where machines can, at best, guide or provide pointers, but nothing more! This skips a level of actually predicting what is going to happen, and even if you could do that, there has to be a goal, and it comes down again to the desires/needs. An AI system has to assume a goal, a purpose. A human being sometimes exhibits traits that border on actions done without a purpose - sometimes just to get an experience. Of course, there is an internal purpose behind every action, driven by needs/desires; it works at a causal plane and cannot be expressed in words. It is outside the scope of this discussion.
A theory for which I don't have answers, which keeps popping up in my head, and on which I would be grateful to get pointers:
The threat of AI arises when, while wishing the best for us (humans) collectively, the systems decide something that makes us move away from the core definition of being human, or decide that we don't have rational sense, or decide not to fulfil our desires because they go against the long-term best interests of 'humans'. That is, they forget evolution and focus on the task of making the system better and more efficient.
By the way, not being completely rational is what makes us human!
Varun Bhutani
Managing Director, WorkKey.com