Artificial Intelligence - Is It Intelligent?

This article is not really about the debate on whether AI will reach artificial general intelligence (AGI); it is about the poor awareness of what AI is, and even ignorance of what human intelligence is. AI is a tool, and like any other tool it needs responsible, supervised usage. Where AI differs from other technologies is that its impact (good and bad) can be massive, with secondary and tertiary ripples, and the control we have over it as it gets embedded into our daily life, and over the choices we face (like: is this site selling my data?), decreases with time, despite the regulations.

AI is a combination of hardware, software, and data, and it is shaped by human ingenuity. If we go back to the term coined in 1955, the problem a small group of researchers attempted to address was whether machines can be trained to be intelligent: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

The conference did not end well, but it further fuelled the promise of AI, which has historically been driven by defence money. The problem here is that you first need to know what the different types of human intelligence are, and then train machines to do them. Human intelligence cannot be precisely defined yet!

Machines are not good at doing different types of things at the same time. They can recognise pictures (with human training), and then we say they can "see". They can spew out a sentence in answer to a question by looking at huge amounts of data already tagged by humans, using statistics, and we say the machine "converses". So-called "smart" machines like an autonomous car can have 150 Electronic Control Units (ECUs) inside them (and a huge carbon footprint because of the data they collect). One autonomous car is equivalent to 2,600+ internet users; you do the math on sustainability (and this does not even look at e-waste).
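The "2,600+ internet users" figure is a simple ratio of daily data volumes. A minimal back-of-the-envelope sketch, assuming Intel's widely cited estimates of roughly 4 TB of data generated per autonomous car per day and about 1.5 GB per average internet user per day (both numbers are assumptions for illustration, not figures measured in this article):

```python
# Back-of-the-envelope check of the "one autonomous car = 2,600+ internet
# users" data-volume claim. Both constants are assumed figures (Intel's
# widely cited estimates), not measurements of any specific vehicle.

CAR_DATA_GB_PER_DAY = 4_000   # assumed: ~4 TB generated per autonomous car per day
USER_DATA_GB_PER_DAY = 1.5    # assumed: average internet user's daily data footprint

equivalent_users = CAR_DATA_GB_PER_DAY / USER_DATA_GB_PER_DAY
print(f"One autonomous car = roughly {equivalent_users:,.0f} average internet users")
```

With these assumptions the ratio works out to around 2,700 users, consistent with the "2,600+" figure in the text; the point stands even if the exact constants shift.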

Humans think differently: we have empathy, situational awareness, and we can do so much more. My brain can do multiple things at one time and auto-regulates tasks such as eating, breathing, digestion, circulation, healing my body, and preventing infections, and every cell in my body stores more data than can fit in my computer (see this incredible report by Nature)! This is an incredible body (it is not a machine), and it does all this with a fraction of the power your computer uses. Machines are faster at some things than humans are, and we are not in competition. If a machine is to serve a human, then we need to complement each other before it can augment us.

The so-called advanced AI machines need so many other machines and programs behind the scenes to work. You can't see it, but it is there: the data cables under the sea, the energy and cooling plants that keep the "cloud" servers running, the huge amounts of data changing hands in ways that blur the lines of privacy, and the hardware components that criss-cross the world.

AI systems convincingly mimic human intelligence, but they are NOT intelligent. They give us data-curated responses (thanks to the algorithm), and ideally this should help us make better decisions. Here are some questions I would like to ask:

  1. The Delegation Question: What decisions should we be responsible for and what should we delegate to a machine and why?
  2. The Meaningfulness Question: What decisions are meaningful for the human? I am not sure just approving a machine decision adds value to my life.
  3. The Physical Interaction Question: If humans are social beings and we are facing an epidemic of loneliness and trust issues, how can we use AI to bring the community physically together to interact? Which events and opportunities require physical presence, and what should we keep rather than remove (neighbourhood stores, office centres) to foster social interaction?
  4. The Knowledge Question: If most knowledge is tacit (in people's heads) - how do we encourage sharing of this information? Data that we have is often a small curation of this wisdom and experience people have (and before you ask - no, brain computer interfaces will not solve that problem yet and there are many ethical questions on this).
  5. The Accountability Priority Question: How can we hold businesses and governments accountable to people at these levels: employees first, then customers, then shareholders and investors (we seem to have got the order wrong)? ESG metrics should ask: how many people did you fire, and why? Each person fired is a person with a family, responsibilities, and obligations. Think of the impact when you are not firing them for poor performance but to beef up your Q1 report because the AI technologies you invested in are more expensive than you thought!
  6. The Learning Experience Question: By using AI machines, am I also blunting the human capacity to learn? Remember, we learn experientially (and sometimes meaningful experience takes time; life takes time).
  7. The Problem Identification Question: Sometimes we use AI because the system is broken. Emergency doctors work long shifts, so let's use AI. The real problem is: why can't they work shorter shifts, and what changes do we need in our regulations and education curricula? So often we intuitively solve for the wrong problem and think AI is a solution when it really is not.

If we want to make machines "more intelligent", then humans need to get more intelligent about the problems we are facing and the solutions required. This means creating an answer that works best for the person (the individual, and no, I do not mean the Board, the Senior Manager, or the Investors). It means looking at the community: if people prefer talking to a person, why are there so many chatbots? We hate them... (at least most people I talk to hate them).

If we are worried about privacy: why are you recording customer conversations, how are you using this data, where is it stored, and who deletes it? Why don't you tell me this?

How many third-party vendors are you using who have access to my data, and what data do you give them access to? Unfortunately, the Terms of Service are so complicated (thanks to the army of lawyers) that I really don't know what my rights are and what I am giving up.

How do these AI systems change human brains (especially children's?), and what does this mean for humanity?

Let's start a discussion and reach out if you want to know more.

More on www.melodena.com



"If we want to make machines "more intelligent" then humans need to get more intelligent about the problems we are facing and the solutions required." Precisely where we need to focus the AI discussions. Thanks Melodena Stephens.

Dr. Rajendra Rajuskar

Associate Professor at Abasaheb Garware College, Karve Road, Pune, 411004


Very helpful

Nikhil Varma, PhD

Professor | Blockchain Expert | Business Coach for Web3.0 business model transformation | Speaker | Author


Great insights Melodena! The challenge is: how can we ensure AI tools enhance social interactions and ethical responsibilities while balancing technological advancement with genuine human connection and accountability? As a researcher, one of my current concerns is that AI is validating work produced by AI, especially in areas like technology and computer science. Does that leave people to just be viewers and consumers in the future?
