TOP 100 AI GLOSSARY
Personal research, Vello, ChatGPT, Claude, Perplexity, Google Academics, Google, Bing, DeepLearning AI, Andrew Ng, Geoffrey Hinton, ...

10 Basic Definitions:

  1. Artificial Intelligence (AI) is the simulation of human intelligence in machines programmed to think, learn, and make decisions. AI systems can perform tasks that typically require human intelligence, such as understanding language, recognising patterns, solving problems, and making decisions.
  2. Machine Learning (ML): A subset of AI that enables systems to learn and improve from experience without being explicitly programmed.
  3. Deep Learning (DL): A subset of machine learning based on artificial neural networks with multiple layers.
  4. Neural Network: A computing system inspired by biological neural networks consisting of interconnected nodes (artificial neurons).
  5. Algorithm: A set of rules or instructions given to an AI system to help it learn and make decisions.
  6. Supervised Learning: A type of machine learning where the algorithm is trained on labelled data (a Python sketch follows this list).
  7. Unsupervised Learning: Machine learning where the algorithm finds patterns in unlabelled data.
  8. Natural Language Processing (NLP): The ability of machines to understand, interpret, and generate human language.
  9. Computer Vision: The field of AI that trains computers to interpret and understand visual information from the world.
  10. Big Data: Extremely large datasets that can be analysed to reveal patterns and trends.
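
Supervised Learning (definition 6 above): a minimal sketch, assuming scikit-learn is installed; the iris dataset and logistic regression are illustrative choices only, not a recommendation.

  # Supervised learning: fit a model on labelled examples, then predict on unseen data.
  from sklearn.datasets import load_iris
  from sklearn.model_selection import train_test_split
  from sklearn.linear_model import LogisticRegression

  X, y = load_iris(return_X_y=True)            # features and labels
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  model = LogisticRegression(max_iter=1000)    # the algorithm being trained
  model.fit(X_train, y_train)                  # learn from labelled data
  print(model.score(X_test, y_test))           # accuracy on data it has never seen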


The Next 90


  1. Activation Function: A function used in artificial neurons to define the output of that node given an input or set of inputs (a Python sketch follows this list).
  2. Adversarial Machine Learning: A technique used to fool machine learning models by inputting malicious data.
  3. Alignment: Adjusting an AI system to produce the desired outcomes consistently. This can involve anything from content moderation to ensuring positive and beneficial interactions with humans.
  4. AI Ethics: The branch of ethics that deals with the moral issues surrounding the use and development of artificial intelligence.
  5. AI Governance: The frameworks and guidelines for managing the development and use of AI technologies.
  6. AI Safety: The field concerned with ensuring that AI systems do not cause harm to humans or the environment.
  7. Anomaly Detection: Identifying rare items, events, or observations that raise suspicions by differing significantly from most of the data.
  8. Anthropomorphism is humans' tendency to attribute human characteristics to nonhuman objects or systems. In AI, this often involves perceiving a chatbot or AI system as more humanlike or conscious than it is, such as believing it experiences emotions like happiness, sadness or even sentience.
  9. Artificial General Intelligence (AGI): A hypothetical type of AI that would have the ability to understand, learn, and apply intelligence in a way similar to humans.
  10. Attention Mechanism: A technique mimicking cognitive attention, allowing neural networks to focus on specific input parts.
  11. Autonomous Systems: Systems that can perform tasks without human intervention.
  12. Backpropagation: An algorithm for training artificial neural networks.
  13. Batch Normalisation: A technique for improving artificial neural networks' speed, performance, and stability.
  14. Bias (in AI): Systematic errors in AI systems that can lead to unfair outcomes for specific groups.
  15. Chatbot: A computer program designed to simulate conversation with human users, especially over the internet.
  16. Cloud Computing: The delivery of computing services over the internet, including servers, storage, databases, networking, and software.
  17. Clustering: The task of grouping a set of objects so that objects in the same group are more similar to each other than to those in other groups.
  18. Confusion Matrix: A table used to describe the performance of a classification model (a Python sketch follows this list).
  19. Convolutional Neural Network (CNN): A class of deep neural networks most commonly applied to analysing visual imagery.
  20. Cross-entropy Loss: A loss function used in classification tasks that measures the difference between predicted probabilities and the true labels (a Python sketch follows this list).
  21. Cross-validation: A resampling procedure used to evaluate machine learning models on a limited data sample.
  22. Data Augmentation: Techniques used to increase the amount of data by adding slightly modified copies of already existing data.
  23. Data Drift: The phenomenon where the properties of the model's inputs change over time, potentially degrading model performance.
  24. Data Labeling: The process of identifying raw data and adding meaningful labels to provide context for machine learning.
  25. Data Mining: The process of discovering patterns in large datasets.
  26. Data Preprocessing: Transforming raw data into a more suitable format for machine learning models.
  27. Decision Tree: A tree-like model of decisions and their possible consequences.
  28. Dropout: A regularisation technique for reducing overfitting in neural networks.
  29. Edge Computing: A distributed computing paradigm that brings computation and data storage closer to the location where it is needed.
  30. Ensemble Learning: A machine learning paradigm where multiple models are used to solve a problem.
  31. Ensemble Method: A machine learning approach that combines several base models to produce one optimal predictive model.
  32. Ethical AI: The creation and use of AI that aligns with moral principles and societal values.
  33. Expert System: An AI system that emulates the decision-making ability of a human expert.
  34. Explainable AI (XAI): AI systems that can explain their decisions or outputs in a way humans can understand.
  35. F1 Score (F-score): The harmonic mean of precision and recall, providing a single score that balances both metrics (a Python sketch follows this list).
  36. Fairness in Machine Learning: Ensuring that AI systems do not discriminate against particular groups or individuals.
  37. Feature Extraction: The process of deriving a smaller set of informative features from raw data, reducing the resources required to describe a large dataset accurately.
  38. Feature Selection: The process of selecting a subset of relevant features for model construction.
  39. Federated Learning: A machine learning technique that trains an algorithm across multiple decentralised devices holding local data samples.
  40. Fuzzy Logic: A form of many-valued logic in which the truth values of variables may be any real number between 0 and 1.
  41. Generative Adversarial Network (GAN): A model consisting of two neural networks, a generator and a discriminator, that compete with each other until the generator produces new data that is indistinguishable from real data.
  42. Generative AI: AI systems that can create new content, such as text, images, or music.
  43. Genetic Algorithm: A search heuristic inspired by Charles Darwin's theory of natural evolution.
  44. GPU (Graphics Processing Unit) is a specialised hardware component initially designed to accelerate the rendering of images and video for display. Over time, GPUs have also become essential for tasks requiring high parallel processing power, such as deep learning and artificial intelligence.
  45. Gradient Boosting: A machine learning technique for regression and classification problems.
  46. Gradient Descent: An optimisation algorithm that minimises a function by iteratively moving in the direction of steepest descent (a Python sketch follows this list).
  47. Human-in-the-loop: An approach that leverages human and machine intelligence to create machine learning models.
  48. Hyperparameter: A parameter whose value is set before the learning process begins.
  49. Imbalanced Data: A situation in machine learning where the classes are not represented equally.
  50. Inference: Using a trained machine learning model to make predictions.
  51. Internet of Things (IoT): The interconnection of computing devices embedded in everyday objects, enabling them to send and receive data.
  52. K-Nearest Neighbors (KNN): A non-parametric method used for classification and regression.
  53. Knowledge Representation: The field of AI concerned with representing knowledge in forms that a computer system can use.
  54. Large Language Model (LLM): An artificial intelligence model designed to understand and generate humanlike text. These models are trained on vast amounts of data, often billions or even trillions of words, and use deep learning techniques, particularly neural networks, to process and produce language.
  55. Long Short-Term Memory (LSTM): A type of RNN capable of learning long-term dependencies.
  56. Model Deployment: Making a machine learning model available in production environments.
  57. Model Versioning: The practice of tracking different versions of machine learning models.
  58. Named Entity Recognition (NER): A subtask of information extraction that seeks to locate and classify named entities in text into predefined categories.
  59. Narrow AI (Weak AI): AI systems designed to perform specific tasks rather than exhibit general intelligence.
  60. One-Hot Encoding: A process by which categorical variables are converted into binary indicator columns that can be provided to machine learning algorithms (a Python sketch follows this list).
  61. Optical Character Recognition (OCR): The electronic or mechanical conversion of images of typed, handwritten, or printed text into machine-encoded text.
  62. Overfitting: When a model learns the training data too well, including noise and fluctuations, leading to poor performance on new data.
  63. Perceptron: The simplest type of artificial neural network.
  64. Precision and Recall: Metrics used to evaluate the quality of results in classification tasks.
  65. Predictive Analytics: Using data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes.
  66. Prompt: Refers to the input or instruction given to an AI model to generate a response, typically in text. It serves as the starting point for the AI to process and produce relevant output, whether answering a question, completing a sentence, or performing a specific task. The quality and clarity of the prompt can significantly impact the effectiveness of the AI's response.
  67. Prompt Engineering: The practice of designing and refining input prompts to guide an AI model toward generating more accurate or desired responses. This involves carefully crafting the prompts' wording, structure, and content to optimise the AI's output.
  68. Quantum Computing: A type of computing that uses quantum-mechanical phenomena to perform operations on data.
  69. Random Forest: An ensemble learning method for classification, regression and other tasks that operates by constructing many decision trees.
  70. Recommender System: A subclass of information filtering systems that seeks to predict the "rating" or "preference" a user would give to an item.
  71. Recurrent Neural Network (RNN): A class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence.
  72. Reinforcement Learning: A type of machine learning where an agent learns to make decisions by acting in an environment.
  73. Responsible AI: Designing, developing, and deploying AI systems with good intentions, so that they empower employees and businesses and impact customers and society fairly.
  74. Robotics: The branch of technology that deals with robot design, construction, and operation.
  75. Semantic Web: An extension of the World Wide Web that aims to make internet data machine-readable.
  76. Sentiment Analysis: Using natural language processing to identify and extract subjective information from source materials.
  77. Singularity: A hypothetical future when artificial intelligence surpasses human intelligence.
  78. Speech Recognition: The ability of a machine or program to identify words and phrases in spoken language and convert them to text.
  79. Style Transfer: The capability of AI to apply one image's visual style to another's content. This involves interpreting the artistic attributes, such as colours and textures, from one image and blending them with the structure or subject of a different image. For example, a Rembrandt self-portrait can be recreated using the distinct style of Velázquez.
  80. Support Vector Machine (SVM): A supervised learning model used for classification and regression analysis.
  81. Swarm Intelligence: The collective behaviour of decentralised, self-organised, natural or artificial systems.
  82. Tensor: A generalisation of vectors and matrices to potentially higher dimensions.
  83. Text-to-speech (TTS): A technology that converts written text into spoken words. It artificially produces human-like speech, allowing computers and devices to "read aloud" any text input. TTS systems rely on speech synthesis techniques to generate natural-sounding speech, and they are commonly used in applications such as accessibility tools, virtual assistants, and language learning. (Examples: Siri, Alexa, and so forth.)
  84. Token: A token in AI is a small unit of text, such as a word or part of a word, used by models to process language. AI splits text into tokens to understand and generate responses effectively (a Python sketch follows this list).
  85. Transfer Function: The mapping of input to output in an artificial neuron.
  86. Transformer: A deep learning model that adopts the self-attention mechanism, differentially weighting the significance of different parts of the input data.
  87. Turing Test: A test of a machine's ability to exhibit intelligent behaviour equivalent to or indistinguishable from a human's.
  88. Underfitting: When a model is too simple to capture the underlying structure of the data.
  89. Virtual Assistant: An AI-powered software agent that can perform tasks or services for an individual.
  90. Word Embedding: The representation of words and documents as dense numerical vectors that capture semantic relationships (a Python sketch follows this list).
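
Illustrative Python Sketches

A few of the terms above lend themselves to short worked examples. The sketches below are minimal and illustrative only: the data values, parameters, and library choices (NumPy, pandas, scikit-learn) are assumptions made for demonstration, not definitions drawn from the sources listed at the top.

Activation Function (term 1): a sketch of two common activation functions, assuming NumPy is installed; ReLU and sigmoid are just two illustrative choices.

  import numpy as np

  def relu(x):
      # ReLU passes positive values through unchanged and zeroes out negatives.
      return np.maximum(0, x)

  def sigmoid(x):
      # Sigmoid squashes any real number into the range (0, 1).
      return 1 / (1 + np.exp(-x))

  z = np.array([-2.0, 0.0, 3.0])   # a neuron's weighted inputs
  print(relu(z))                   # [0. 0. 3.]
  print(sigmoid(z))                # approx. [0.119 0.5 0.953]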
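
Confusion Matrix, F1 Score, Precision and Recall (terms 18, 35 and 64): a small worked example on made-up binary predictions, assuming scikit-learn; the labels and predictions are invented for illustration.

  from sklearn.metrics import (confusion_matrix, precision_score,
                               recall_score, f1_score)

  y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # actual labels (made up)
  y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # a model's predictions (made up)

  print(confusion_matrix(y_true, y_pred))   # [[3 1]
                                            #  [1 3]]  rows = actual, columns = predicted
  print(precision_score(y_true, y_pred))    # 0.75: of predicted positives, how many were correct
  print(recall_score(y_true, y_pred))       # 0.75: of actual positives, how many were found
  print(f1_score(y_true, y_pred))           # 0.75: harmonic mean of precision and recall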
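
Cross-entropy Loss (term 20): the binary form is loss = -[y*log(p) + (1 - y)*log(1 - p)], averaged over examples. A minimal NumPy sketch with made-up predicted probabilities:

  import numpy as np

  def binary_cross_entropy(y_true, p_pred):
      # Penalises confident wrong predictions heavily; 0 would be a perfect score.
      p_pred = np.clip(p_pred, 1e-12, 1 - 1e-12)   # avoid log(0)
      return -np.mean(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))

  y_true = np.array([1, 0, 1, 1])           # actual labels (made up)
  p_pred = np.array([0.9, 0.2, 0.7, 0.4])   # predicted probabilities (made up)
  print(binary_cross_entropy(y_true, p_pred))   # approx. 0.40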
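
Gradient Descent (term 46): a sketch that minimises f(x) = (x - 3)^2 by repeatedly stepping against the gradient; the learning rate and step count are arbitrary illustrative values.

  def gradient_descent(start=0.0, learning_rate=0.1, steps=100):
      x = start
      for _ in range(steps):
          grad = 2 * (x - 3)          # derivative of f(x) = (x - 3)**2
          x -= learning_rate * grad   # step in the direction of steepest descent
      return x

  print(gradient_descent())   # approx. 3.0, the minimiser of f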
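
One-Hot Encoding (term 60): a sketch that maps a categorical variable to binary indicator columns; pandas is an assumption here, and the same idea can be coded by hand.

  import pandas as pd

  colours = pd.Series(["red", "green", "blue", "green"])
  print(pd.get_dummies(colours, dtype=int))
  #    blue  green  red
  # 0     0      0    1
  # 1     0      1    0
  # 2     1      0    0
  # 3     0      1    0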
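
Token (term 84): a deliberately naive illustration of splitting text into tokens; real LLM tokenisers (for example, byte-pair encoding) split text into subword units rather than whole words.

  text = "Tokens are small units of text."
  tokens = text.lower().replace(".", " .").split()   # naive whitespace tokenisation
  print(tokens)        # ['tokens', 'are', 'small', 'units', 'of', 'text', '.']
  print(len(tokens))   # 7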
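
Word Embedding (term 90): a sketch comparing made-up word vectors with cosine similarity; real embeddings are learned from large corpora and typically have hundreds of dimensions.

  import numpy as np

  def cosine_similarity(a, b):
      # 1.0 means identical direction; values near 0 mean unrelated.
      return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

  # Tiny made-up 3-dimensional "embeddings".
  king  = np.array([0.9, 0.8, 0.1])
  queen = np.array([0.8, 0.9, 0.1])
  apple = np.array([0.1, 0.1, 0.9])

  print(cosine_similarity(king, queen))   # high (approx. 0.99): similar meanings
  print(cosine_similarity(king, apple))   # low (approx. 0.24): unrelated meanings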

Really useful resource! I have one suggestion, if I may: in machine learning, there are generally two classes of problems, classification and regression. Your glossary includes terms like confusion matrix, recall, and precision, which are used to evaluate the performance of classification models. I would suggest adding evaluation metrics for regression models as well, such as MSE, RMSE, and MAE...but it is a great glossary.
