AI Buzzword Bingo
Mary-Catherine G.
Certified Project Manager (PMP) | Certified Product Owner (PSPO) | Business Analyst | Problem Solver!
Unless you have been living under a rock, you know that artificial intelligence is, or will be, impacting all aspects of our lives sooner rather than later. There are many buzzwords floating around that may be confusing, misunderstood, misused or just plain scary. The words "subset", "subfield", "type", "category" and "branch", along with all the diagrams out there, just make things more confusing. Some words have multiple meanings depending on the context. I was scratching my head and needed to sort all of this out, so let’s untangle some of this.
The Basics
Big Data: Let's start at the foundation of artificial intelligence - data. Big data refers to extremely large datasets that are difficult to store, process, and analyze using traditional data processing methods. Big data is characterized by the three V's: volume, variety and velocity. It is often used as the raw material for AI applications. Some examples of big data sources are social media posts, online reviews, sensor data, and web logs.
Artificial Intelligence (AI): What actually is Artificial Intelligence? It is technology that aims to simulate human cognitive abilities such as reasoning, learning, and problem-solving. It is a broad field encompassing many different technologies and techniques. Some examples of systems that fall under AI include identifying people in pictures, performing work in factories, and even doing taxes.
AI Learning: AI systems learn through mathematical and statistical algorithms that process data, whereas human learning involves complex cognitive processes, reasoning, and understanding. Nonetheless, the term "learning" in AI refers to the ability of the system to acquire knowledge and improve its performance based on data and experience. In other words, it depends on data.
Prompt: Input or instruction provided by a user to generate a desired response. The prompt sets the context or guiding information for the model to generate a relevant and coherent output. It can be a question, a statement, or any text that helps frame the conversation with the AI model.
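To make that concrete, here is a minimal sketch of what "sending a prompt" looks like in code. The ask_model() function is purely hypothetical - a stand-in for whichever AI model or service you actually use.

```python
# A minimal prompting sketch. ask_model() is hypothetical - a stand-in for a
# call to a real AI model or service.
def ask_model(prompt: str) -> str:
    # A real application would send `prompt` to a model and return its reply.
    return "...model-generated answer..."

# The prompt frames the task and the context for the model.
prompt = "In two sentences, explain what a large language model is."
print(ask_model(prompt))
```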
The Technical Jargon
Machine Learning (ML): This is a technique used for computers to "learn" from data and improve their performance on specific tasks without being explicitly programmed for those tasks. This is the "how" for building an AI system. Unlike conventional programming, where rules are explicitly written, in machine learning, computers learn patterns and rules from a large amount of data. The complexity of machine learning requires careful design and testing to ensure useful, reliable, and unbiased models.
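As an illustration, here is a tiny machine learning sketch in Python using the scikit-learn library (assuming it is installed). Notice that we never write a rule like "more than three hours of studying means pass" - the model learns that pattern from the examples.

```python
# A minimal machine learning sketch: the model learns a rule from examples
# instead of us programming the rule explicitly. Assumes scikit-learn is installed.
from sklearn.linear_model import LogisticRegression

hours_studied = [[1], [2], [3], [4], [5], [6]]   # input data
passed_exam = [0, 0, 0, 1, 1, 1]                 # labels: 0 = failed, 1 = passed

model = LogisticRegression()
model.fit(hours_studied, passed_exam)            # "learning" from the data

print(model.predict([[2.5], [5.5]]))             # predictions for new inputs
```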
Model: A mathematical representation of a system or a phenomenon that can be used to make predictions or decisions based on data. It consists of algorithms that learn patterns and relationships from data during the training process. A model's performance can be evaluated using metrics like accuracy, precision, recall, etc.
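For example, here is a small sketch of evaluating a model's predictions with a few of those metrics, using scikit-learn's metric functions (the numbers are made up for illustration).

```python
# Comparing a model's predictions to the actual outcomes with common metrics.
# Assumes scikit-learn is installed; the data below is made up.
from sklearn.metrics import accuracy_score, precision_score, recall_score

actual = [1, 0, 1, 1, 0, 1, 0, 0]      # what really happened
predicted = [1, 0, 1, 0, 0, 1, 1, 0]   # what the model predicted

print("accuracy :", accuracy_score(actual, predicted))
print("precision:", precision_score(actual, predicted))
print("recall   :", recall_score(actual, predicted))
```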
Neural Network: A concept used in machine learning that draws inspiration from the structure and function of biological neurons in the human brain. Neural networks consist of layers of interconnected nodes (also called neurons), and each neuron processes and transmits information to the next layer. These networks can learn complex patterns from data through a process called training. Neural networks are highly versatile and can perform a wide range of tasks, including image recognition, speech synthesis, language translation, and more.
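Under the hood, each neuron is just arithmetic: a weighted sum of its inputs plus a bias, passed through a simple activation function. Here is a bare-bones sketch of a single layer in NumPy (the weights here are random; a real network learns them during training).

```python
# One neural-network layer: each neuron computes a weighted sum of the inputs
# plus a bias, then applies an activation function. Weights are random here;
# training is what tunes them. Assumes NumPy is installed.
import numpy as np

def relu(x):
    return np.maximum(0, x)                 # a common activation function

inputs = np.array([0.5, -1.2, 3.0])         # 3 input values
weights = np.random.randn(3, 4)             # 3 inputs feeding 4 neurons
biases = np.zeros(4)

layer_output = relu(inputs @ weights + biases)
print(layer_output)                         # one number per neuron, passed on to the next layer
```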
Deep Learning (DL): A technique in machine learning that uses multiple layers of nodes (neurons) to learn complex patterns from data. The term "deep" in deep learning refers to the significant number of layers in the neural network. Deep learning models have achieved remarkable performance across various tasks, including natural language processing, computer vision, generative AI (we'll get to that), and more.
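To show what "deep" means in practice, here is a sketch of a small stack of layers using the Keras API (assuming TensorFlow is installed). The layer sizes and the yes/no output are arbitrary choices for illustration.

```python
# "Deep" just means several layers stacked together. The sizes below are
# arbitrary; assumes TensorFlow/Keras is installed.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                      # 20 input features
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # a single yes/no output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()   # prints the stack of layers and how many parameters they hold
```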
Large Language Model (LLM): A specific type of machine learning model that is trained on large amounts of natural language text and can generate natural language text or speech. Large language models can be used for various tasks, such as summarization, translation, question answering, etc. One specific large language model is GPT (this isn't the same as saying ChatGPT - keep reading), which can generate coherent and diverse texts on various topics and styles.
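As a hands-on illustration, here is a sketch of generating text with a small, freely downloadable language model (GPT-2) through the Hugging Face transformers library, assuming transformers and a backend such as PyTorch are installed. GPT-2 is used only because it is small and open; it is nowhere near the quality of today's large models.

```python
# Generating text with a small open language model. Assumes the transformers
# library (and a backend such as PyTorch) is installed; the model downloads
# on first use.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=30)
print(result[0]["generated_text"])
```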
Training and Inference: There are two phases of machine learning: training and then inference. Training is the process of feeding data to a model and adjusting its parameters to optimize its performance. Inference is the process of using a trained model to make predictions or decisions on new data. An example of training and inference is when you train a model to recognize handwritten digits using a dataset of labeled images and then use the model to classify new images of digits.
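That handwritten-digit example can be sketched in a few lines with scikit-learn's built-in digits dataset (assuming scikit-learn is installed): training happens in fit(), inference happens in predict().

```python
# Training vs. inference on handwritten digits. Assumes scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()                                   # 8x8 images of digits plus labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)           # training: adjust parameters to fit the labeled images

print(model.predict(X_test[:5]))      # inference: classify images the model has never seen
print(y_test[:5])                     # the true labels, for comparison
```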
AI Concepts
Generative AI: This is a concept within artificial intelligence that focuses on creating new content or data, such as text, images, sound, and video. Generative AI applies to many areas, such as generating artistic works, enhancing entertainment experiences, enabling educational tools, image synthesis, music composition, data augmentation, and more. It uses machine learning to generate new content based on the data it has already seen - machine learning is the "how".
Conversational AI: This is a specialized area within Generative AI that focuses on enabling machines to have human-like conversations with people. Generative AI encompasses the broader concept of creating content, such as generating artwork, writing stories, composing music, and creating realistic images of non-existent objects or people. However, Conversational AI is specifically about generating text-based responses in a conversation-like manner.
Natural Language Processing (NLP): This is a confusing term as it can mean many things. Natural Language Processing can be a capability of an AI system, a method of processing language, or just the concept within AI as a whole. Let's start with the easy one. AI systems can include the ability to understand, interpret, and generate human language in a way that is meaningful and contextually relevant. This capability is called Natural Language Processing. How is this done? Via Natural Language Processing. NLP also refers to the method used to enable the ability of processing human language. And lastly, Natural Language Processing also refers to the concept as a whole - the broader idea of enabling computers to work with human language. This includes techniques, algorithms, and approaches that collectively contribute to the processing and understanding of natural language.
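One of the very first NLP steps is breaking raw text into pieces a computer can count and compare, called tokenization. Here is a deliberately oversimplified sketch in plain Python; real NLP libraries (spaCy, NLTK, Hugging Face tokenizers) do this far more carefully.

```python
# A deliberately crude tokenization sketch: split text into words, then count them.
# Real NLP tools handle punctuation, word forms, and context much more carefully.
from collections import Counter

text = "AI systems can read text. AI systems can also write text."
tokens = text.lower().replace(".", "").split()

print(tokens)                              # the individual word tokens
print(Counter(tokens).most_common(3))      # the three most frequent tokens
```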
Computer Vision (CV): This term is similarly confusing to Natural Language Processing. Computer Vision can mean a concept within AI that refers to the broader idea of enabling computers to "see" and understand the visual world in a manner similar to human vision - techniques, algorithms, etc. OR Computer Vision can mean the ability of an AI system to understand visual information. CV enables computers to recognize objects, faces, scenes, emotions, etc. in images or videos. Computer Vision capabilities can be used for various purposes, such as medical diagnosis, self-driving cars, security, etc. Facial recognition systems, for example, use computer vision to identify and verify faces in images or videos.
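Part of what makes this hard is that, to a computer, an image is just a grid of numbers. Here is a tiny illustration using a made-up 4x4 grayscale "image" - CV models learn to map grids like this to labels such as "cat" or "stop sign".

```python
# To a computer, an image is a grid of pixel values. This fake 4x4 grayscale
# image is made up for illustration; assumes NumPy is installed.
import numpy as np

image = np.random.randint(0, 256, size=(4, 4))   # pixel brightness values 0-255
print(image)
print("brightest pixel:", image.max(), " average brightness:", image.mean())
```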
AI Ethics
Hallucination: A phenomenon in which an AI system produces false or misleading outputs that do not match the reality or the input data. An example of hallucination is when a generative AI model produces an image of a cat with two heads or a text that contains false or contradictory information. Hallucination can occur in various AI applications, such as generative AI models producing unrealistic images or texts; NLP models producing nonsensical sentences or facts; CV models misclassifying objects or faces; etc. Hallucination can be caused by various factors, such as insufficient or noisy data; overfitting or underfitting of models; lack of robustness or generalization of models; etc.
Algorithmic Bias (or just Bias): An error or a prejudice that results from flawed data or algorithms that causes unfair or discriminatory outcomes in AI systems. Algorithmic bias can affect various domains, such as hiring, lending, healthcare, criminal justice, etc. Algorithmic bias can be caused by various factors, such as human bias in data collection or labeling; lack of diversity or representation in data or algorithms; inappropriate or inaccurate metrics or objectives; etc. An example of algorithmic bias is when a facial recognition system performs poorly on people of color due to the lack of diversity in the training data or the algorithm.
Deepfake: Content created by AI techniques such as deep learning that can manipulate or impersonate human faces or voices for malicious purposes. An example of a deepfake is when a video of a politician or a celebrity is altered to make them say or do something they did not. Deepfakes can pose serious threats to the credibility and security of individuals and organizations.
Interpretability: It aims to make AI model outputs and decisions explainable and comprehensible to humans. In AI, interpretability involves the ability to understand and explain how a model or algorithm makes predictions or classifications. An interpretable AI model provides clear reasons for its outputs, enabling humans to grasp the connections between input features and the model's decisions. Interpretability is crucial for ensuring that AI systems are not regarded as "black boxes" and can be used effectively in domains where explanations are required, such as healthcare or legal settings.
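One simple interpretability technique is asking a trained model which inputs mattered most to its decisions. Here is a sketch using scikit-learn's feature importances (the loan-style data and feature names are made up for illustration).

```python
# Asking a trained model which features influenced its decisions the most.
# Assumes scikit-learn is installed; the data and feature names are made up.
from sklearn.ensemble import RandomForestClassifier

features = ["income", "debt", "years_employed"]
X = [[50, 10, 2], [20, 15, 1], [80, 5, 10], [30, 20, 0], [90, 2, 12], [25, 18, 1]]
y = [1, 0, 1, 0, 1, 0]                     # 1 = loan approved, 0 = declined

model = RandomForestClassifier(random_state=0).fit(X, y)
for name, importance in zip(features, model.feature_importances_):
    print(f"{name}: {importance:.2f}")     # higher = more influence on the decision
```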
Transparency: It focuses on making AI systems understandable and accountable. A transparent AI system allows users to comprehend how it arrives at its conclusions or recommendations, enhancing trust and accountability. Transparent AI models provide insight into the factors considered during decision-making and help identify biases or errors.
Other Key AI Topics
Generative Pre-trained Transformer (GPT): A type of large language model (LLM) that can generate human-like text based on patterns it has learned from a vast amount of data. The "pre-trained" part means that the model has been trained on a huge collection of text from books, articles, websites, and more.
ChatGPT: An application of the GPT model developed by OpenAI. It is based on the GPT (Generative Pre-trained Transformer) architecture and trained on a vast amount of text from the internet. ChatGPT is designed to engage in conversation and generate human-like responses to user input, making it useful for chatbot applications and interactive dialogue systems. https://openai.com/. As of the date of this article, it only has data up to 2021 (ChatGPT-4). It suffers from the same challenges as other large language models, such as data quality, which means the accuracy and completeness of the data it was trained on; ethics, which means the fairness and accountability of its responses; and scalability, which means the computational and storage resources required to run it.
Bard: An application developed by Google that provides human-like responses to prompts given by the user. Bard is powered by a lightweight and optimized version of the LaMDA large language model and is able to complete tasks such as generating text, writing different kinds of creative content, translating languages, and answering questions in an informative way. Like ChatGPT, it faces the same LLM challenges around accuracy, completeness, ethics, fairness, and accountability.
Graphics Processing Unit (GPU): A specialized computer chip designed to handle complex computations. GPUs are used in AI because they can perform thousands of calculations at once, which means faster performance, the ability to handle complex models and tasks, and greater efficiency in energy and resource use.
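Here is a small sketch of what "using the GPU" looks like in code with PyTorch (assuming PyTorch is installed; the code falls back to the regular CPU when no GPU is present).

```python
# Moving a calculation onto the GPU with PyTorch. Assumes PyTorch is installed;
# falls back to the CPU if no GPU is available.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print("running on:", device)

a = torch.rand(1000, 1000, device=device)
b = torch.rand(1000, 1000, device=device)
c = a @ b                  # a million-entry matrix multiply, done in parallel
print(c.shape)
```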
I hope these definitions help you understand AI a little better.