A.I. oh A.I.
Credit: JESUSSANZ/ISTOCK/GETTY IMAGES PLUS

1.0 INTRODUCTION

Hello everyone. It has been a year since my last article in February 2024. Wow, a year has already passed; how time flies. And I guess some of you probably know where my mind has been for the past 3.5 years. Anyway, I came across an interesting topic, one that has remained close to my heart since my college days: Artificial Intelligence (A.I.).

I first studied A.I. and Machine Learning over 30 years ago during my college days in the USA. It was a very exciting course, but at that time we could only imagine its practical applications in our lives. Today, these concepts and theories have become reality and are improving by the day.

What I am about to write is based on my personal observation of what is going on in my country pertaining to the development of A.I. and its current applications. Technological advancement is good, but when you open it up to the masses, that is where the danger lies. I know that players in the A.I. industry are working on A.I. governance, but it seems the speed of mass adoption has outpaced these efforts, leaving governance as an afterthought, often only considered after widespread damage has already occurred. However, as with any issue, it is never too late to take prompt action.


2.0 WHAT IS A.I.?

Well, to start off, let me share with you a definition of A.I. in the simplest and most concise way.

A.I. (Artificial Intelligence) is the simulation of human intelligence in machines, enabling them to perform tasks that typically require human thinking, such as learning, problem-solving, decision-making, and understanding language.

So, is that easy to interpret? Maybe, or maybe not. When I say A.I., I also include robots built with some intelligence for a specific task. So, let’s look at the types of A.I.

There are 3 types of A.I.:

  1. Narrow A.I. (Weak A.I.) – Designed for specific tasks (e.g., Siri, ChatGPT, Google Translate).
  2. General A.I. (Strong A.I.) – Hypothetical AI with human-like reasoning across various tasks.
  3. Super A.I. – A theoretical AI surpassing human intelligence (not yet developed).

Some of the key A.I. Technologies are:

  1. Machine Learning (ML) – A.I. learns patterns from data.
  2. Deep Learning (DL) – Uses neural networks to process complex data (e.g., image recognition).
  3. Natural Language Processing (NLP) – Enables A.I. to understand and generate human language (e.g., ChatGPT).
  4. Computer Vision – Allows A.I. to interpret images and videos (e.g., facial recognition).
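To make the first item concrete, here is a minimal sketch of what "learning patterns from data" means. This is my own illustration in plain Python, not from any particular ML library: the program is never told the rule y = 2x + 1; it recovers it from noisy examples via ordinary least squares.

```python
# Minimal machine-learning sketch: recover the hidden pattern behind noisy
# data with ordinary least squares (closed-form linear regression).

def fit_line(xs, ys):
    """Return slope a and intercept b minimizing the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Training data" generated by the hidden rule y = 2x + 1, plus small noise.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

a, b = fit_line(xs, ys)
print(f"learned pattern: y = {a:.2f}x + {b:.2f}")  # close to y = 2x + 1
```

Real machine learning scales this same idea, minimizing an error over data, to millions of parameters, but the principle is unchanged: the model induces patterns from examples rather than being told the rule.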

Well, I guess that’s good enough for a short introduction to A.I. Before we explore the impact of A.I. on our lives, let me bring up some of the issues with A.I.

The list of A.I. issues is as follows:

1. Hallucination
2. Bias and Discrimination
3. Lack of Explainability (Black Box Problem)
4. Ethical Concerns
5. Security Risks
6. Dependency and Job Displacement
7. High Energy Consumption
8. Lack of Common Sense and Emotional Intelligence
9. Data Privacy and Ownership Issues

Let’s look at each of the issues here.

1. Hallucination.

Hallucination refers to instances when a Large Language Model (LLM) or A.I. generates false, misleading, or nonsensical information that appears plausible but is not based on real data.

Why Do LLMs Hallucinate?

  1. Lack of Ground Truth – LLMs generate responses from patterns in training data but do not “know” facts.
  2. Overgeneralization – If an LLM lacks relevant data, it may create an answer based on similar but incorrect patterns.
  3. Incomplete Training Data – If the model hasn’t seen certain facts, it might fill in the gaps incorrectly.
  4. Prompt Ambiguity – Vague or complex questions can lead to guesswork.
  5. Fluent but Wrong – LLMs prioritize coherence and readability, sometimes producing confident but incorrect answers.
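The points above can be sketched with a toy next-word predictor. This is my own illustration; real LLMs are vastly more sophisticated, but the failure mode is analogous: the model learns only which word tends to follow which, so it produces fluent continuations with no notion of whether the resulting claim is true.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" of word sequences.
corpus = ("the court ruled in favor of the plaintiff . "
          "the court ruled in favor of the defendant . "
          "the plaintiff cited a precedent .").split()

# Bigram counts: for each word, how often each next word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, steps=8):
    """Greedily emit the most frequent next word at each step.
    Fluent by construction, but with no ground truth behind it."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

# The model confidently "rules" on a case that exists nowhere in the data,
# then loops back into grammatical nonsense.
print(complete("the"))
```

Which party "wins" here depends on counting artifacts, not facts. Scaled up, this same pattern-without-grounding mechanism is one root of hallucination.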

LLM hallucination is a major challenge in AI development, especially for applications in medicine, law, and research, where accuracy is critical.

Example: In 2023, a lawyer used ChatGPT for legal research, only to discover that the A.I. fabricated court cases. This highlights the danger of blindly trusting AI-generated content.

2. Bias and Discrimination.

A.I. learns from historical data, which can contain biases related to race, gender, politics, or culture. This can lead to unfair decision-making in applications like hiring, lending, or law enforcement, reinforcing existing inequalities rather than eliminating them.

Example: A 2018 study found that A.I. hiring algorithms favored male candidates due to biased training data. This demonstrates the importance of ethical A.I. development.
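"Bias in, bias out" can be sketched with hypothetical numbers (my own illustration, not data from the 2018 study): a naive model that simply replays historical hiring rates will keep rejecting one group, regardless of individual merit.

```python
# Hypothetical, deliberately skewed hiring history: (gender, was_hired).
history = ([("M", True)] * 80 + [("M", False)] * 20
           + [("F", True)] * 30 + [("F", False)] * 70)

def hire_rate(records, gender):
    """Fraction of applicants in this group who were historically hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

def naive_model(gender):
    """Predict 'hire' whenever the group's historical rate exceeds 50%.
    The model never looks at the candidate, only at the biased base rate."""
    return hire_rate(history, gender) > 0.5

print(naive_model("M"))  # True
print(naive_model("F"))  # False: rejected by history alone
```

Real hiring models are more complex, but whenever protected attributes (or proxies for them) correlate with skewed historical outcomes, the same effect can appear.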

3. Lack of Explainability (Black Box Problem).

Many A.I. models, especially deep learning systems, make decisions in ways that are hard to understand even for experts. When an A.I. system denies a loan or diagnoses a medical condition, it may not be able to provide a clear explanation of how it reached its conclusion. This lack of transparency makes it difficult to trust AI in critical applications such as finance and healthcare.

Example: A.I.-driven credit scoring systems often deny loans without explaining the rationale, making it challenging for individuals to contest decisions.

4. Ethical Concerns.

A.I. can be misused for purposes like deepfakes, misinformation, and surveillance, raising significant privacy and ethical issues. For instance, AI-generated deepfakes can be used for political manipulation, blackmail, or fraud, making it difficult to distinguish between real and fake content. The increasing use of A.I. in surveillance also raises concerns about individual privacy and civil liberties.

Example: In 2022, deepfake technology was used to impersonate a CEO, tricking employees into transferring millions of dollars to fraudsters.

5. Security Risks.

A.I. is vulnerable to hacking, manipulation, and adversarial attacks. Small changes to input data can trick A.I. into making incorrect decisions, which can have serious consequences. For example, a carefully placed sticker on a stop sign can cause a self-driving car’s A.I. to misinterpret it as a speed limit sign, leading to potential accidents. This highlights the need for robust security measures in A.I. development.
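The stop-sign scenario can be sketched with a toy linear classifier. This is my own illustration; real attacks target deep networks, but the principle (the idea behind the fast gradient sign method) is similar: nudging every feature slightly against the sign of the model's weights flips the decision, even though no single feature changes dramatically.

```python
# Toy "stop sign detector": a linear model scores feature vectors;
# a positive score means "stop sign".
weights = [0.9, -0.5, 0.4, -0.7, 0.6]

def score(features):
    return sum(w * f for w, f in zip(weights, features))

def classify(features):
    return "stop sign" if score(features) > 0 else "not a stop sign"

x = [0.6, 0.4, 0.5, 0.3, 0.5]  # a genuine stop sign (score 0.63)

# Adversarial perturbation: move each feature a little in the direction
# that lowers the score (opposite the sign of its weight).
eps = 0.25
x_adv = [f - eps if w > 0 else f + eps for w, f in zip(weights, x)]

print(classify(x))      # stop sign
print(classify(x_adv))  # not a stop sign
```

Defenses include adversarial training and input sanitization; the key point is that models optimized only for average accuracy can be brittle against worst-case inputs.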

6. Dependency and Job Displacement.

As A.I. continues to automate tasks, many industries are experiencing job displacement. AI-powered chatbots, for instance, are replacing human call center agents, reducing employment opportunities in customer service. While A.I. creates new jobs, the transition can be difficult for workers who lack the necessary skills to shift into AI-related roles.

7. High Energy Consumption.

Training large A.I. models like GPT-4 requires enormous computational power, leading to significant environmental impact. Data centers that support A.I. operations consume millions of watts of electricity, contributing to carbon emissions. This raises concerns about the sustainability of A.I. as its demand continues to grow.

Example: GPT-4’s training reportedly consumed as much energy as powering an entire city for weeks. Sustainable A.I. solutions are needed.

8. Lack of Common Sense and Emotional Intelligence.

Despite their ability to process vast amounts of information, A.I. systems lack true human understanding, emotional intelligence, and the ability to apply real-world knowledge flexibly. They struggle with recognizing sarcasm, humor, and cultural context, making them less effective in nuanced human interactions. This limitation affects their ability to engage meaningfully in conversations and decision-making processes.

9. Data Privacy and Ownership Issues.

A.I. models rely on large amounts of user data, leading to concerns about who owns the data and how it is used. Many A.I. companies train models using user interactions without explicit permission, raising ethical and legal questions about privacy. If not properly regulated, A.I. could lead to mass data exploitation, affecting user rights and security.


3.0 IMPACT AND RISKS OF A.I.

Based on the above issues, let me give my opinion on what they may translate into. While it is good to embark on A.I., we need to ensure it is used properly, ethically, and professionally. These are the impacts and risks I have personally observed over the past couple of years.

1. Use It Only as a Tool

Let me take you back to when the car was first invented. It was meant as a means of transporting people and goods. Not many people drove cars initially, since they were expensive. Then, as roads expanded and more people could afford cars, rules and regulations were developed to cater to the masses. Drivers even had to pass driving tests before they were qualified to drive. So, all drivers abide by those rules and regulations, making driving safe and enjoyable.

So, A.I. is also a tool, but unlike the car, we have deployed it in a rash manner. We have opened it to the masses without solid rules and regulations, or even a license to use A.I. Furthermore, there were still many unresolved issues, which many A.I. developers discovered only after the floodgates were opened.

As of now, it is fine for educated, experienced users to use A.I., but the masses are using it at their own expense and risk.

2. Distortion of Young Mind Development

There are many stages of development, from baby to childhood, teenager, young adult, and adult. Since most kids nowadays have access to smartphones, they may also have access to A.I., especially young adults and above.

In fact, it may start in high school if their use of A.I. is not monitored. I can see it being widely used by college students and fresh graduates nowadays. Our institutions need to teach young adults the proper use of A.I. I do not have visibility into how institutions enforce A.I. usage, if there is any enforcement at all.

But in the working environment, this is what I observed. We have hired many interns, and we monitored how they do their work. Some were A.I. literate and some were not. Those who were literate, I discovered, used it as a tool to find the answers to their tasks or assignments. The answers may or may not be correct. When they find the answers, they copy and paste them into the assignment and submit it. It looks nice on paper if they have good formatting skills. The problem comes when they have to present the paper or justify what was written: they are not able to explain, describe, or justify what is on the paper. They have become so dependent on A.I. that they do not know how to find a solution in the traditional way, let alone read a book. By being too dependent on A.I., they have also lost their problem-solving skills, since A.I. took that role from them.

In a nutshell, we need to:

a. Educate them while they are still in college on how to use A.I. as a tool.

b. Eliminate the culture of copy-and-paste from A.I.

c. Increase their comprehension of the subject matter in the traditional way, and then use A.I. to assist them in QA/QC or enhancement.

d. Increase their problem-solving skills in the traditional way, and then use A.I. for verification.

Without those actions, we are endangering our young generation’s minds by depriving them of problem-solving skills, intellect, and resourcefulness.

Recommendation: Universities and institutions should integrate A.I. literacy and code-of-conduct courses and enforce academic integrity policies.

3. Employment Risks

As described in the issues section, many tasks will be replaced by A.I. where human intervention is not required. Therefore, educational development also has to be tailored toward moving away from the skills A.I. now covers as a tool, and focusing more on the higher skillsets that the industry requires.

This also requires collaboration with employers in deciding which areas they plan to apply A.I. to, and implementing it gradually so that employees are ready to take up new roles using A.I. Proper training to equip employees in using A.I. would help ensure a win-win situation.

It is undeniable that adopting A.I. as a tool would increase productivity, sales, and profit, but it has to be balanced with civil and societal needs.

Recommendations:

1. Educational institutions should collaborate with businesses in determining the type of A.I. skillsets they need.

2. Businesses should invest in A.I. training programs to ensure employees can leverage A.I. rather than be replaced by it.

4. Music and Arts

For those who love music and the arts, it would be a great disappointment if the industry relies on generative A.I. to produce music and art. As with item 2, we would either stunt the development of musical and artistic minds or replace them altogether.

Even though A.I. may generate music or art, without emotional input and creative thought the output may not be appreciated by true music and art lovers. Besides, if generative A.I. can produce fantastic output without requiring musical or artistic skill, how would we honor songwriters or artists after they die if the masterpieces were created by A.I.? Should we honor them, or the A.I.?

Every masterpiece has a story behind it and only its true creator would be able to reveal it. So, we should not treat the production of arts and music like a factory product. It should come with a creative process and delicate touches to be a masterpiece.

Recommendation: The music and art industry must establish clear ethical guidelines for AI-generated content.

5. Security and Safety

There are many aspects of security beyond the insecurity of the A.I. tool itself. Generative A.I. is prone to being used by criminals to generate fakes (images, text, video, or voice) to commit many crimes. This is already happening, and without proper A.I. governance it would become an epidemic.

It looks like A.I. is a double-edged sword, and it is all because we have opened it up to the masses without proper regulation.

To address this type of crime, enforcement and financial personnel need to be retrained to adapt to A.I. advancements. Similarly, parents and children need to be educated about this issue.

6. Emotionless and Insensitive

We know that A.I. has no emotion, human comprehension, or empathy. Just like a robot, it can only perform tasks it is programmed to do. When a person is exposed to too many interactions with A.I., their thinking may also assimilate robot-like thinking.

Although some of the advice or solutions from A.I. may sound logical to us, we cannot take them at face value. We need to filter them and consult human expertise when it comes to major advice or solutions.

Good examples would be the fields of law, medicine, science, health, mental health, and faith, where we should not rely on the output without proper checks, because at the end of the day the final decision will always be ours. So, please educate all our family and friends about the importance of this subject.


4.0 CONCLUSION

One of the major questions asked since my college days over 30 years ago was “Can a Computer Think?” To me, the answer is still “No”. Thinking is a broad term covering interpretation, computation, emotion, history, facts, experience, belief, common sense, and the list goes on.

A.I. researchers are still working tirelessly to make computers think like humans, but it may take a long time just to get close to, say, 20% (a number that just popped into my mind) of what humans can do. So that’s the good news for all of us humans, right?

But don’t celebrate yet, because although A.I. cannot replace every human, it can take away some of the jobs humans are doing. And that is getting more obvious nowadays. It may take a few more years before A.I. takes away many lower- to mid-level jobs, and we humans will be lucky to be their supervisors or managers in that sense, but only if we are qualified. So, it is not a honeymoon period for us, especially for new graduates and lower-level employees.

If we are to go full speed on A.I. in this country, we need to look at it from the perspective of a) producing the right graduates, i.e. education; b) gradual industry deployment of A.I.; c) proper retraining or upskilling; and d) the economic, social, and environmental impacts. I hope these will be addressed as part of the A.I. masterplan.

So, what is the key takeaway here?

1. Be prepared.

2. Be educated in what you are supposed to do.

3. Be educated about A.I.

4. Be ahead of the game.

5. Be Human.

Thank God we are Human.

Fuad A.
Author, ICT and BIM-FM