AI Buzzword Bingo

Unless you have been living under a rock, you know that artificial intelligence is, or will be, impacting all aspects of our lives sooner rather than later. There are many buzzwords floating around that may be confusing, misunderstood, misused, or just plain scary. The words "subset," "subfield," "type," "category," and "branch," along with all the diagrams out there, just make things more confusing. Some words have multiple meanings depending on the context. I was scratching my head and needed to sort all of this out, so let's untangle some of it.

The Basics

Big Data: Let's start at the foundation of artificial intelligence: data. Big data refers to extremely large datasets that are difficult to store, process, and analyze using traditional data processing methods. Big data is characterized by the three V's: volume, variety, and velocity. It is often the raw material for AI applications. Some examples of big data sources are social media posts, online reviews, sensor data, and web logs.

Artificial Intelligence (AI): What actually is Artificial Intelligence? It is technology that aims to simulate human cognitive abilities such as reasoning, learning, and problem-solving. It is a broad field encompassing many different technologies and techniques. Some examples of systems that fall into AI include identifying people in pictures, performing work in factories, and even doing taxes.

AI Learning: AI systems learn through mathematical and statistical algorithms that process data, whereas human learning involves complex cognitive processes, reasoning, and understanding. Nonetheless, the term "learning" in AI refers to the ability of the system to acquire knowledge and improve its performance based on data and experience. In other words, it depends on data.

  • An AI system's capabilities depend heavily on the quality and amount of data it is trained on. Providing poor-quality or insufficient training data will limit the system's performance. So: "garbage in, garbage out."
  • To be very technical for a moment: "artificial intelligence" right now refers to two subcategories, Artificial Narrow Intelligence (ANI, or "weak" AI) and Artificial General Intelligence (AGI, or "strong" AI). ANI refers to AI systems designed to perform a specific task or a narrow range of tasks. Think Alexa, self-driving cars, and web searches. AGI, by contrast, refers to AI systems that would possess human-like intelligence and could understand, learn, and apply knowledge across a wide range of tasks, much like humans. In this article, AI refers to ANI. This doesn't matter a lot; however, when such-and-such is referred to as a type/category/subset/branch of artificial intelligence, ANI and AGI are actually the two categories of artificial intelligence.

Prompt: Input or instruction provided by a user to generate a desired response. The prompt sets the context or guiding information for the model to generate a relevant and coherent output. It can be a question, a statement, or any text that helps frame the conversation with the AI model.

  • A basic example of a prompt is a user asking a question or making a statement to elicit a response from an AI language model like ChatGPT. A few examples:
  • Prompt: "What is the capital of France?" Response (from AI model): "The capital of France is Paris."
  • Prompt: "Tell me a joke." Response (from AI model): "Sure! Here's a joke: Why don't scientists trust atoms? Because they make up everything!"
  • Prompt: "Translate 'hello' to Spanish." Response (from AI model): "In Spanish, 'hello' is translated as 'hola'."
  • In these examples, the prompts are specific instructions or questions given to the AI model, guiding it to generate a relevant and informative response. The model interprets the prompt and generates the corresponding output based on its training (see below) and learned patterns.

The Technical Jargon

Machine Learning (ML): This is a technique used for computers to "learn" from data and improve their performance on specific tasks without being explicitly programmed for those tasks. This is the "how" for building an AI system. Unlike conventional programming, where rules are explicitly written, in machine learning, computers learn patterns and rules from a large amount of data. The complexity of machine learning requires careful design and testing to ensure useful, reliable, and unbiased models.

  • To be extra technical: Machine learning techniques can be classified into three types: supervised learning, which involves learning from labeled data; unsupervised learning, which involves learning from unlabeled data; and reinforcement learning, which involves learning from trial and error by interacting with an environment. A few specific types of machine learning tasks are spam detection, recommendation systems, sentiment analysis.
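To make "learning patterns from data instead of hand-written rules" concrete, here is a deliberately tiny supervised-learning sketch in plain Python. The "spam detector," its single message-length feature, and all the numbers are made up for illustration; real systems use far richer features and models. The point is that the rule (a threshold) is learned from labeled examples rather than programmed:

```python
# A minimal supervised-learning sketch: the "model" is a single threshold on
# message length, chosen to best separate the labeled training examples.
# (Illustrative only -- real spam filters learn from many richer features.)

def train_threshold(examples):
    """examples: list of (length, label) pairs, label 1 = spam, 0 = not spam.
    Tries each candidate threshold and keeps the one that classifies the
    most training examples correctly."""
    best_threshold, best_correct = 0, -1
    for threshold in sorted({length for length, _ in examples}):
        correct = sum(1 for length, label in examples
                      if (1 if length >= threshold else 0) == label)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

def predict(threshold, length):
    return 1 if length >= threshold else 0

training_data = [(12, 0), (15, 0), (20, 0), (80, 1), (95, 1), (120, 1)]
threshold = train_threshold(training_data)
print(predict(threshold, 10))   # short message -> 0 (not spam)
print(predict(threshold, 100))  # long message  -> 1 (spam)
```

Nobody wrote an "if length >= 80" rule; the program found it in the data, which is the essence of supervised learning.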

Model: A mathematical representation of a system or a phenomenon that can be used to make predictions or decisions based on data. It consists of algorithms that learn patterns and relationships from data during the training process. A model's performance can be evaluated using metrics like accuracy, precision, recall, etc.

  • Creating a model involves several steps or processes, such as collecting and preprocessing the data, selecting and training the model (we'll get to that), testing and evaluating its performance, and deploying and updating the model. These steps may vary depending on the type and purpose of the model.
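The evaluation metrics mentioned above (accuracy, precision, recall) can be computed directly from a model's predictions. A plain-Python sketch with made-up labels, just to show what each number measures:

```python
# Computing accuracy, precision, and recall for a binary classifier,
# given the true labels and the model's predicted labels (1 = positive).

def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return {
        # fraction of all predictions that were right
        "accuracy": correct / len(y_true),
        # of the items predicted positive, how many really were
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        # of the items that really are positive, how many were found
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]
metrics = evaluate(y_true, y_pred)
print(metrics)  # accuracy 0.75; precision and recall both 2/3
```

Notice that the three numbers answer different questions, which is why a single metric is rarely enough to judge a model.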

Neural Network: A concept used in machine language modelling that draws inspiration from the structure and function of biological neurons in the human brain. Neural networks consist of layers of interconnected nodes (also called neurons), and each neuron processes and transmits information to the next layer. These networks can learn complex patterns from data through a process called training. Neural networks are highly versatile and can perform a wide range of tasks, including image recognition, speech synthesis, language translation, and more.
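As a rough illustration of "layers of interconnected nodes," here is a forward pass through a tiny two-layer network in plain Python. The weights and biases are arbitrary numbers chosen for the example; in a real network, training would adjust them to fit data:

```python
import math

def sigmoid(x):
    """A common nonlinearity that squashes any number into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each neuron weights the inputs, adds a bias, and applies
    the nonlinearity. weights has one row of input-weights per neuron."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# 2 inputs -> hidden layer of 3 neurons -> 1 output neuron.
# These weights are made up; they are not learned from anything.
hidden = layer([0.5, -1.0],
               weights=[[0.1, 0.8], [-0.4, 0.2], [0.7, -0.6]],
               biases=[0.0, 0.1, -0.2])
output = layer(hidden,
               weights=[[1.2, -0.9, 0.3]],
               biases=[0.05])
print(output)  # a single value between 0 and 1
```

Each neuron's output feeds the next layer, exactly as described above; "deep" networks simply stack many such layers.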

Deep Learning (DL): A technique in machine learning that uses multiple layers of nodes (neurons) to learn complex patterns from data. The term "deep" in deep learning refers to the significant number of layers in the neural network. Deep learning models have achieved remarkable performance across various tasks, including natural language processing, computer vision, generative AI (we'll get to that), and more.

Large Language Model (LLM): A specific type of machine learning model that is trained on large amounts of natural language text and can generate natural language text or speech. Large language models can be used for various tasks, such as summarization, translation, question answering, etc. One specific large language model is GPT (this isn't the same as saying ChatGPT - keep reading), which can generate coherent and diverse texts on various topics and styles.

Training and Inference: There are two phases of machine learning, training and then inference. Training is the process of feeding data to a model and adjusting its parameters to optimize its performance. Inference is the process of using a trained model to make predictions or decisions on new data. An example of training and inference is when you train a model to recognize handwritten digits using a dataset of labeled images and then use the model to classify new images of digits.
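The two phases can be sketched with the simplest possible model, a nearest-neighbor classifier on made-up 2-D points (a stand-in for the digit-image example above, where the "points" would be images):

```python
# Training phase: for nearest-neighbor, "training" is simply memorizing the
# labeled examples. Inference phase: classify a new point by copying the
# label of its closest training example.

def train(examples):
    """examples: list of (point, label) pairs. The stored data IS the model."""
    return list(examples)

def infer(model, point):
    def dist2(a, b):  # squared distance is enough for comparing closeness
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = min(model, key=lambda ex: dist2(ex[0], point))
    return nearest[1]

model = train([((0, 0), "zero"), ((0, 1), "zero"),
               ((5, 5), "one"), ((6, 5), "one")])
print(infer(model, (1, 0)))   # near the "zero" cluster -> "zero"
print(infer(model, (5, 6)))   # near the "one" cluster  -> "one"
```

Training happens once on the labeled data; inference then runs cheaply, over and over, on new inputs the model has never seen.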

AI Concepts

Generative AI: This is a concept within artificial intelligence that focuses on creating new content or data, such as text, images, sound, and video. Generative AI applies to many areas, such as generating artistic works, enhancing entertainment experiences, enabling educational tools, image synthesis, music composition, data augmentation, and more. It uses machine learning to generate new content based on its existing data - machine learning is the "how".

Conversational AI: This is a specialized area within Generative AI that focuses on enabling machines to have human-like conversations with people. Generative AI encompasses the broader concept of creating content, such as generating artwork, writing stories, composing music, and creating realistic images of non-existent objects or people. However, Conversational AI is specifically for generating text-based responses in a conversation-like manner.

Natural Language Processing (NLP): This is a confusing term, as it can mean many things. Natural Language Processing can be a capability of an AI system, a method of processing language, or the concept within AI as a whole. Let's start with the easy one. AI systems can include the ability to understand, interpret, and generate human language in a way that is meaningful and contextually relevant; this capability is called Natural Language Processing. How is this done? Via Natural Language Processing: NLP also refers to the methods used to enable this processing of human language. And lastly, Natural Language Processing also refers to the concept as a whole, the broader idea of enabling computers to work with human language. This includes the techniques, algorithms, and approaches that collectively contribute to the processing and understanding of natural language.

  • Conversational AI typically uses Natural Language Processing and machine learning models trained specifically for dialogue interactions.

Computer Vision (CV): This term is similarly confusing to Natural Language Processing. Computer Vision can mean a concept within AI, the broader idea of enabling computers to "see" and understand the visual world in a manner similar to human vision (techniques, algorithms, etc.), or it can mean the ability of an AI system to understand visual information. CV enables computers to recognize objects, faces, scenes, emotions, etc. in images or videos. Computer Vision capabilities can be used for various purposes, such as medical diagnosis, self-driving cars, security, etc. For example, facial recognition systems use computer vision to identify and verify faces in images or videos.

AI Ethics

Hallucination: A phenomenon in which an AI system produces false or misleading outputs that do not match the reality or the input data. An example of hallucination is when a generative AI model produces an image of a cat with two heads or a text that contains false or contradictory information. Hallucination can occur in various AI applications, such as generative AI models producing unrealistic images or texts; NLP models producing nonsensical sentences or facts; CV models misclassifying objects or faces; etc. Hallucination can be caused by various factors, such as insufficient or noisy data; overfitting or underfitting of models; lack of robustness or generalization of models; etc.

Algorithmic Bias (or just Bias): An error or a prejudice that results from flawed data or algorithms that causes unfair or discriminatory outcomes in AI systems. Algorithmic bias can affect various domains, such as hiring, lending, healthcare, criminal justice, etc. Algorithmic bias can be caused by various factors, such as human bias in data collection or labeling; lack of diversity or representation in data or algorithms; inappropriate or inaccurate metrics or objectives; etc. An example of algorithmic bias is when a facial recognition system performs poorly on people of color due to the lack of diversity in the training data or the algorithm.

Deepfake: Content created by AI techniques such as deep learning, which can manipulate or impersonate human faces or voices for malicious purposes. An example of deepfake is when a video of a politician or a celebrity is altered to make them say or do something they did not. Deepfake can pose serious threats to the credibility and security of individuals and organizations.

Interpretability: The aim of making AI model outputs and decisions explainable and comprehensible to humans. In AI, interpretability involves the ability to understand and explain how a model or algorithm makes predictions or classifications. An interpretable AI model provides clear reasons for its outputs, enabling humans to grasp the connections between input features and the model's decisions. Interpretability is crucial for ensuring that AI systems are not regarded as "black boxes" and can be used effectively in domains where explanations are required, such as healthcare or legal settings.

Transparency: It focuses on making AI systems understandable and accountable. A transparent AI system allows users to comprehend how it arrives at its conclusions or recommendations, enhancing trust and accountability. Transparent AI models provide insight into the factors considered during decision-making and help identify biases or errors.

Other Key AI Topics

Generative Pre-trained Transformer (GPT): A type of large language model (LLM) that can generate human-like text based on patterns it has learned from a vast amount of data. The "pre-trained" part means that the model has been trained on a huge collection of text from books, articles, websites, and more.

ChatGPT: An application of the GPT model developed by OpenAI. It is based on the GPT (Generative Pre-trained Transformer) architecture and trained on a vast amount of text from the internet. ChatGPT is designed to engage in conversation and generate human-like responses to user input, making it useful for chatbot applications and interactive dialogue systems. https://openai.com/ . As of the date of this article, it only has data up to 2021 (GPT-4). It suffers from the same challenges as other large language models, such as data quality, which means the accuracy and completeness of the data it was trained on; ethics, which means the fairness and accountability of its responses; and scalability, which means the computational and storage resources required to run it.

Bard: An application developed by Google that provides human-like responses to prompts given by the user. Bard is powered by a lightweight and optimized version of the LaMDA large language model and is able to complete tasks such as generating text, writing different kinds of creative content, translating languages, and answering questions in an informative way. Similarly to ChatGPT, it has the same LLM challenges, such as accuracy, completeness, ethics, fairness, and accountability.

Graphics Processing Unit (GPU): A specialized computer chip designed to handle complex computations. GPUs are used in AI because they can perform thousands of calculations all at once, which allows for faster performance, the ability to handle complex models and tasks, and more efficient use of energy and resources.


I hope these definitions help you understand AI a little better.


Sources: ChatGPT, Bing, Claude.ai
