Tulley: "What is AI?"
The comparative scale of language models, represented in billions of parameters (2022)

I get asked, internally and externally, to engage in conversations on automation strategy and prompt engineering, and people frequently use the term AI loosely. So I thought I'd classify all of artificial intelligence, or AI, into seven types to help us all get our heads around it.

Why is this important?

The most recent 2023 Oxford Insights Government AI Readiness Index accentuates the point: Aotearoa has slipped to 49th position worldwide, and in particular the report highlights "nil points" for government AI vision. (Australia, on the other hand, scored 100/100 on that measure.)

These seven types of AI can largely be understood by examining two encompassing categories.

There's AI capabilities and there's AI functionalities.

Note: If there is one takeaway, it is the distinction between these two terms.


Prelude: (Skip if you already know AI)

So let's start basic - "Artificial Intelligence"

Artificial: "made or produced by human beings, rather than occurring naturally, especially as a copy of something natural."

Intelligence: "the ability to acquire and apply knowledge and skills from the collection of information"

Humans demonstrated general intelligence when Homo habilis first flipped the stone.

It took millions of iterations of the same tool, training our methods of thinking to imagine broader use cases from the same action. From that one act of flipping the stone came thousands of uses: self-taught problem-solving, reinforced learning and wider evidence of our cognitive ability. I highlight this because it is general intelligence at work; it is how we became Homo erectus and walked as men.


We have Artificial Intelligence: (Replicating This)

Deep Learning is a subset of Machine Learning, which in turn is a subset of AI.

AI refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. The term encompasses any machine or algorithm that can perform tasks requiring human-like intelligence, including problem-solving, recognition, understanding language, and learning.

Terminology to know: Key terms include machine learning (ML), natural language processing (NLP), robotics, cognitive computing, and deep learning (DL)

I'll cover these in more detail further on.


AI capabilities (there are three)

The first is "Narrow AI", which also goes by the rather unflattering name "weak AI". Then there's AGI, or "Artificial General Intelligence", and finally there's "Super AI".

But also think of these three capabilities as falling into two camps:

Realised AI: the artificial intelligence we have today.
Theoretical AI: everything else; the AI of tomorrow.

GenAI (General AI) is theoretical, which I'll explain further down. (Don't confuse it with generative AI, covered later.)

Let's go one layer deeper:


Narrow AI

A very interesting capability to start us off, because narrow AI is a realised AI; in fact, narrow AI is the only type of AI that exists today. Any other form of AI is theoretical. There are also many forms of automation engineering and prompt generation that fall under this umbrella but aren't actually AI.

Narrow AI can be trained to perform a narrow task which, to be fair, it might do better than a human could, but it can't perform outside of its defined task. It's weak intelligence.

It still needs us humans to train it.

Note: Narrow AI represents all AI capabilities we have today and is commonly used.

Types of AI based on functionalities:

In the real world of realised AI, we can think of narrow AI as having two fundamental functionalities. The first is reactive machine AI. Reactive AI systems are designed to perform one very specific, specialised task. Reactive AI stems from statistical maths, and it can analyse vast amounts of data to produce a seemingly intelligent output. We've had reactive AI for quite a long time: back in the late 1990s, IBM's chess-playing supercomputer Deep Blue beat chess grandmaster Garry Kasparov by analysing the pieces on the board and predicting the probable outcomes of each move. That's a specialised task with a lot of available data to create insights: the hallmark of reactive AI.
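Deep Blue's approach, analysing positions and predicting the probable outcome of each move, is classic game-tree search. Here's a minimal sketch of that minimax idea on an invented toy game (not Deep Blue's actual algorithm, which added a sophisticated evaluation function and pruning):

```python
# Minimal sketch of the game-tree search behind reactive game AI.
# The "game" here is invented purely for illustration.

def minimax(state, depth, maximizing, moves, evaluate):
    """Score a state by recursively assuming each side plays its best reply."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = [minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in options]
    return max(scores) if maximizing else min(scores)

# Toy game: a state is a number; each move adds 1 or 2; the maximizer
# wants the final number high, the minimizer wants it low.
moves = lambda s: [s + 1, s + 2] if s < 10 else []
evaluate = lambda s: s

best = minimax(0, 4, True, moves, evaluate)  # best achievable score from 0
```

The point is the hallmark of reactive AI: no memory of past games, just a brute evaluation of the current position's possible futures.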

The other narrow AI functionality is "limited memory AI": a form of AI that can recall past events and outcomes and monitor specific objects or situations over time. It can use past and present data to decide on the course of action most likely to achieve a desired outcome, and as it is trained on more data over time, its performance can improve. Think of your favourite generative AI chatbot, which relies on limited memory AI capabilities to predict the next word, the next phrase or the next visual element within the context it's generating.

There are two other functionalities, but these fall under the theoretical AI capabilities, covered below under AGI and Super AI.

When a client talks about AI:

In the modern workplace, they are not rivalling Google or Microsoft, or building a Super AI to replace all their employees. In most cases they are talking about automation:

  • Fixed Automation
  • Programmable Automation
  • Flexible Automation
  • Integrated Automation

Mostly they mean prompt engineering: the practice of crafting inputs (prompts) to effectively communicate with AI models, particularly LLMs, to achieve a desired output. At a high level this covers prompt design, zero-shot learning, few-shot learning, chain-of-thought prompting, and instruction tuning.
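To make those terms concrete, here is a sketch of how the main prompting patterns differ, using an invented arithmetic question (in practice these strings would be sent to an LLM API):

```python
# Sketch of common prompt-engineering patterns as plain string templates.
# The question and worked example are invented for illustration.

question = "A shop sells pens at $2 each. How much do 7 pens cost?"

# Zero-shot: ask directly, with no examples.
zero_shot = f"Answer the question.\n\nQ: {question}\nA:"

# Few-shot: show worked examples so the model infers the task and format.
few_shot = (
    "Q: Apples cost $3 each. How much do 2 apples cost?\nA: $6\n\n"
    f"Q: {question}\nA:"
)

# Chain-of-thought: explicitly invite step-by-step reasoning before answering.
chain_of_thought = f"Q: {question}\nA: Let's think step by step."
```

The model weights never change here; only the input does, which is why prompt engineering sits firmly in the automation-and-usage camp rather than model building.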

Some have a robust data infrastructure and touch on machine learning in innovative ways... These are fun conversations!

Machine Learning:

Nestled within AI, machine learning shines as a beacon of self-learning algorithms that feast on data to predict outcomes, while deep learning, a jewel in the crown of ML, boasts of automating the feature extraction process, thus embracing the challenge of big data.

Supervised learning leverages labelled datasets to train algorithms on classification or prediction tasks, serving as the backbone for strategies like customer retention and pricing models (it requires humans to label the data). Unsupervised learning, on the other hand, thrives on the analysis and clustering of unlabelled datasets, uncovering hidden patterns without a guiding human hand: a technique pivotal for customer segmentation and targeted marketing.
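A toy, pure-Python sketch of that contrast, with invented customer-spend figures: the supervised step copies the label of the nearest labelled example, while the unsupervised step segments the same numbers with no labels at all:

```python
# Supervised vs unsupervised learning on invented customer-spend data.

# Supervised: labelled examples (spend, outcome) train a nearest-neighbour
# classifier. Humans supplied the labels.
labeled = [(120, "retained"), (15, "churned"), (95, "retained"), (8, "churned")]

def classify(spend):
    """Predict a label by copying the closest labelled example."""
    return min(labeled, key=lambda pair: abs(pair[0] - spend))[1]

# Unsupervised: the same numbers with no labels -- just split them into two
# segments around the midpoint of the data's range (crude clustering).
unlabeled = [120, 15, 95, 8]
midpoint = (max(unlabeled) + min(unlabeled)) / 2
clusters = {"high": [x for x in unlabeled if x >= midpoint],
            "low": [x for x in unlabeled if x < midpoint]}
```

Real systems would use libraries like Scikit-learn and far better models, but the division of labour is the same: labels in, predictions out versus raw data in, structure out.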

Lastly, reinforcement learning introduces us to a world where an agent learns from the consequences of its actions, rewarded for moves that align with desired outcomes, a method that's steering the future of autonomous vehicles. It is a distinct paradigm from supervised and unsupervised learning: an agent or system takes actions in an environment and learns from reward signals rather than from labelled examples.
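A minimal sketch of that reward-driven loop, as tabular Q-learning on an invented one-dimensional corridor (real systems, such as those behind autonomous vehicles, use far richer state representations and function approximation):

```python
# Tiny reinforcement-learning sketch: Q-learning on a 1-D corridor.
# The agent starts at cell 0 and is rewarded only for reaching cell 4.
import random

random.seed(0)
n_states, goal = 5, 4
actions = [-1, +1]                      # step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.5   # learning rate, discount, exploration

for _ in range(300):                    # training episodes
    s = 0
    while s != goal:
        if random.random() < epsilon:   # explore
            a = random.choice(actions)
        else:                           # exploit current estimates
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == goal else 0.0
        # Nudge Q toward the reward plus the best value from the next state.
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy: which action looks best in each state.
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(goal)}
```

Nobody labelled any state as "good"; the agent discovered that stepping right is best purely from the consequences of its own actions.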

We have had this in our workplaces for years!

Note: This is not the craze people are referring to when we talk about AI.


Generative AI

Note: This is not General AI, which aims to perform any intellectual task that a human can. - I'll come to this in a moment.

Generative AI models are simpler: they learn patterns, styles or features from large datasets during their training phase, enabling them to generate similar but unique outputs. Generative AI is a broad category encompassing all AI models capable of generating new data across different modalities, and it is something we've had for years.

But just because you're using languages common in AI work, such as Python, R and Julia, for workplace automation, and focusing on libraries like TensorFlow, PyTorch and Scikit-learn, or using a coded algorithm for a predetermined output, does not mean you're using generative AI. That is still integrated automation, and these are not LLMs (Large Language Models).

Natural Language Processing (NLP) is also a term that gets thrown around: how machines understand, interpret and generate human language using techniques like tokenisation, sentiment analysis and language models (e.g. BERT, GPT). But using these techniques is not the same as designing AI in your workplace.
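For illustration, here is a toy sketch of tokenisation and word-count sentiment in plain Python. The word lists are invented, and real models such as BERT or GPT use learned subword tokenisers and trained classifiers rather than anything this crude:

```python
# Toy NLP sketch: tokenisation plus a naive lexicon-based sentiment score.
import re

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# Invented sentiment lexicons for illustration only.
POSITIVE = {"great", "good", "love"}
NEGATIVE = {"bad", "poor", "hate"}

def sentiment(text):
    """Score = count of positive words minus count of negative words."""
    tokens = tokenize(text)
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
```

This is the kind of deterministic, rule-based processing that often gets labelled "AI" in the workplace while actually being plain automation.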

So what makes AI?

Let's start with Deep Learning (DL), a class of algorithms that became practical to implement on low-cost hardware and is a hallmark of what we use today, because a deep learning algorithm does not need to be designed with significant knowledge of the task at hand. It learns each task from training examples, essentially programming itself, like the myelination of our own nervous system. Teach a baby that a red apple is sweet (Red + Apple = Sweet) and the phrase it learns from this sum of associations is "a sweet red apple". Prompt engineering (like the predictive text on our phones), simply put, does the rest of the process.
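That predictive-text idea can be sketched as a bigram model: count which word follows which, then predict the most frequent successor. The training sentence is invented for illustration; LLMs apply the same next-token idea at the scale of billions of parameters:

```python
# Toy next-word prediction with a bigram model -- the same core idea
# (predict the next token from context) that LLMs scale up massively.
from collections import Counter, defaultdict

corpus = "a sweet red apple . a sour green apple . a sweet red berry ."
words = corpus.split()

counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):   # count word-pair frequencies
    counts[prev][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word` in training."""
    return counts[word].most_common(1)[0][0]
```

Nothing here was told what "sweet" means; the association was learned purely from the frequencies in the training examples.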

AlphaZero, introduced by DeepMind in 2017, used reinforcement learning and remains one of the strongest chess players ever created. Given no context, starting from zero with only knowledge of the framework (i.e. the rules), AlphaZero reached superhuman levels of chess play within 24 hours.

Since then we have built "Transformers", an architecture that can perform many natural language tasks without task-specific training. This birthed the GPT craze, with the Generative Pre-trained Transformer (GPT)-3 language model. Note that it's still trained by humans, even now.

OpenAI produced a startling result by using the decoder half of Google's Transformer architecture (from the "Attention Is All You Need" paper): a deep neural network could be pre-trained on a large corpus of text and then used to generate new text. That's the gist of how GPT works. In 2020, GPT-3 used 175 billion parameters; then in 2022, Google researchers showed that a small change called chain-of-thought prompting could enable large language models to perform complex reasoning.

Logical reasoning, simply by adding "let's think step by step" before each answer.

By 2023, GPT-4 was not just capable of reasoning; it is better at reasoning than the average human. It scored in the 90th percentile on the Uniform Bar Exam for lawyers and in the 99th percentile on the GRE Verbal assessment. It even scored 77% on the Advanced Sommelier theory examination (without eyes or a nose). Yes, it knows wine and flavour better than you or me.


Artificial general intelligence (AGI)

A favourite of memes, science fiction and betting markets is artificial general intelligence, also known as AGI or "strong AI".

To be clear, General AI (AGI) is currently nothing more than a theoretical concept. Even OpenAI hasn't reached it yet, and the computing power required is beyond what our technology can deliver today.

Sidenote: Elon Musk's lawsuit against OpenAI argues, in part, that this milestone has already been reached.

Artificial General Intelligence (AGI) would be a general-purpose artificial intelligence system: a machine with human-like intelligence across a wide variety of tasks. Many argue this would pose a great threat to humanity.

But essentially the idea is this: AGI could use previous learnings and skills to accomplish new tasks, in a different context, without the need for us human beings to train the underlying models. If AGI wants to learn how to perform a new task, it will figure it out by itself.

Which sounds... disconcerting

When we look at AGI, we have to think about:

Adaptive learning, cross-domain learning, and cognitive flexibility.

One theoretical functionality is "theory of mind AI", which would understand the thoughts and emotions of other entities (i.e. humans). Another is "emotion AI", which could infer human motives and reasoning and personalise its interactions with individuals based on their unique emotional needs and intentions. In fact, emotion AI is a form of theory-of-mind AI currently in development.

AI researchers hope it will have the ability to analyze voices, images and other kinds of data to understand and respond to human feelings.

Ilya Sutskever, chief scientist of OpenAI, really pioneered the push to make AGI a reality, but the race now includes Microsoft and NVIDIA (whose Megatron-Turing NLG, MT-NLG, was for a time the largest model) as well as Chinese labs. Microsoft, which owns 49% of OpenAI, now has de facto proprietary access to GPT-4, which some researchers have argued could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI, or GenAI). OpenAI is also reportedly building Q* ("Q-Star"), which would stake a stronger claim to AGI.

(November 2023) It was determined that OpenAI has not attained AGI, even though Ilya Sutskever believed it was close. Even Demis Hassabis (DeepMind) has emphasised the potential dangers of surpassing human intelligence.

Finally! An AI that really understands me.


Super AI - (SMI)

Finally, under super AI we have "self-aware AI", winning my personal award for the scariest AI of all: it would have the ability to understand its own internal conditions and traits, leading to its own set of emotions, needs and beliefs.

If ever realised, super AI, also known as superhuman machine intelligence (SMI), would think, reason, learn, make judgements and possess cognitive abilities that surpass those of human beings.

We as humans are capable of so much more! Our brain is the original supercomputer.

Applications possessing super AI capabilities would have evolved beyond the point of catering to human sentiments and experiences; they would be able to feel emotions and have needs, beliefs and desires of their own.

Yeah. So let's park that cheery thought for now.....


Keynotes and Takeaways:

We've covered seven types of AI, and only three of them actually exist today!

Three capabilities and four functionalities make up the seven types:

Generative AI is an all-encompassing term.

"On a sliding scale, what sort of AI are we talking about here?"

(Automation - Deep Learning - Machine Learning - Generative AI - Super AI)

This is a framework of AI today and into tomorrow!


  1. Artificial Narrow Intelligence (ANI) - Realised AI
     - Reactive Machines (Realised AI)
     - Limited Memory (Realised AI)
  2. Artificial General Intelligence (AGI) - Theoretical AI
     - Theory of Mind (Theoretical AI)
  3. Artificial Superintelligence (ASI) - Theoretical AI
     - Self-aware AI (Theoretical AI)


We've also touched on defining terms:

  • Deep Learning
  • Machine Learning
  • Generative AI


Note: Automation does not always fall under generative AI.

There is also not always a need for AI in every context, and it takes a lot of transformation of data and infrastructure to make the possibility a success.

There is still so much to be learned and discovered.

If you made it this far into the piece:

Take a look at the image I used above (the headline).

Context: it shows the size and scale of the LLMs out there, each colour-coded by country and scaled to size, measured in billions of parameters.


Personal Thoughts:

Of course, most people reading this will have no idea of the power of AI, and most enthusiasts are effectively trying to replace their own roles right now. But if we only replace segments of our roles by using narrow AI in an iterative way, and we integrate a chain of humans with reinforced learning across the process, more employees may adapt to different roles, with the capacity to guide, support and police this new world.

Generative AI is a technological leap bigger than the internet itself. It's huge, but successful implementation is hard.

That means getting data warehousing set up correctly, building the right infrastructure and, hardest of all, finding the people: a capability in limited supply in New Zealand.

To limit the impact:

  • If we replace segments of our roles using narrow AI in new and iterative ways
  • We integrate a chain (of humans) with reinforced learning across the process
  • We reinvent what Humanity can bring to a digital world.

People are misusing and misunderstanding the AGI and generative AI distinction daily, so I wanted to write a piece to highlight "What AI is Today."



Tulley Gray: RandstadDigital - Your Digital enablement Partner










Steve Cosgrove

All that's best in Education

6 months ago

I think you have chosen a very interesting definition of intelligence. It talks about acquiring and applying knowledge and conveniently avoids the question of creating new knowledge - which is something that machine learning does not do at all well. (The exception is where synthesis of existing knowledge 'creates' knowledge that was not previously available.) "Nestled within AI, machine learning shines ... " What I don't find clear is exactly what 'AI' adds to ML to distinguish AI as something bigger (that encompasses ML within it). Are there other features of AI that make ML more accessible?

Godwin Josh

Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer

6 months ago

It's inspiring to see your dedication to unraveling the complexities of AI and sharing your insights. AI encompasses a vast array of techniques, from machine learning to deep learning, with the ultimate goal of achieving artificial general intelligence. How do you navigate the balance between simplifying complex concepts and delving into the depths of AI's potential in your discussions?
