Types of Machine Learning Models From Basics to Advanced

Machine learning models can be categorized by their learning approach, function, and application. Below is a broad (though not exhaustive) catalogue of ML models:


1. Supervised Learning Models

These models learn from labeled data.

Regression Models (Predict Continuous Values)

  1. Linear Regression
  2. Polynomial Regression
  3. Ridge Regression
  4. Lasso Regression
  5. ElasticNet Regression
  6. Bayesian Regression
  7. Quantile Regression
  8. Support Vector Regression (SVR)
  9. Decision Tree Regression
  10. Random Forest Regression
  11. Gradient Boosting Regression (GBR)
  12. AdaBoost Regression
  13. XGBoost Regression
  14. LightGBM Regression
  15. CatBoost Regression
  16. Gaussian Process Regression (GPR)
  17. Huber Regression
  18. Theil-Sen Estimator Regression
  19. Tweedie Regression

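As an illustration, the first and simplest of these, Linear Regression, can be fit in a few lines of scikit-learn. This is a minimal sketch on synthetic data — the slope and intercept (3 and 2) are made up for the example:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: y = 3x + 2 plus Gaussian noise (values chosen for the example)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.5, size=100)

model = LinearRegression().fit(X, y)
# The fitted coefficient and intercept should land close to the true values
print(model.coef_[0], model.intercept_)
```

Most of the other regressors in the list above (Ridge, Lasso, SVR, the tree ensembles) expose the same `fit`/`predict` interface, so they can be swapped in with a one-line change.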
Classification Models (Predict Discrete Classes)

  1. Logistic Regression
  2. Naïve Bayes (Gaussian, Multinomial, Bernoulli)
  3. K-Nearest Neighbors (KNN)
  4. Support Vector Machines (SVM) - Linear & Non-linear
  5. Decision Tree Classifier
  6. Random Forest Classifier
  7. Gradient Boosting Classifier
  8. AdaBoost Classifier
  9. XGBoost Classifier
  10. LightGBM Classifier
  11. CatBoost Classifier
  12. Bagging Classifier
  13. Extra Trees Classifier
  14. Quadratic Discriminant Analysis (QDA)
  15. Linear Discriminant Analysis (LDA)
  16. Perceptron
  17. Ridge Classifier
  18. Passive Aggressive Classifier

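To make the comparison concrete, here is a minimal sketch (on synthetic data from `make_classification`) fitting two of the classifiers above — a linear model and a tree ensemble — with identical code apart from the estimator:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data, split into train and test sets
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Fit a linear model and an ensemble model on the same split
log_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
rf_acc = RandomForestClassifier(random_state=42).fit(X_tr, y_tr).score(X_te, y_te)
print(log_acc, rf_acc)
```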

2. Unsupervised Learning Models

These models learn patterns from unlabeled data.

Clustering Models

  1. K-Means Clustering
  2. Hierarchical Clustering (Agglomerative & Divisive)
  3. DBSCAN (Density-Based Spatial Clustering)
  4. OPTICS (Ordering Points to Identify Clustering Structure)
  5. Mean Shift Clustering
  6. Gaussian Mixture Model (GMM)
  7. BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies)
  8. Affinity Propagation
  9. Spectral Clustering

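The defining trait of these algorithms is that no labels are used during fitting. A minimal K-Means sketch on synthetic blob data (the true labels from `make_blobs` are discarded and only used to generate the points):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Three well-separated synthetic blobs; labels are thrown away before fitting
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=0)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
# Each point is assigned to one of the 3 learned cluster centers
print(km.cluster_centers_.shape, set(km.labels_))
```

Note that K-Means needs the number of clusters up front, whereas density-based methods such as DBSCAN and OPTICS infer it from the data.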
Dimensionality Reduction Models

  1. Principal Component Analysis (PCA)
  2. Kernel PCA
  3. Incremental PCA
  4. Truncated SVD (Singular Value Decomposition)
  5. t-SNE (t-Distributed Stochastic Neighbor Embedding)
  6. UMAP (Uniform Manifold Approximation and Projection)
  7. LDA (Latent Dirichlet Allocation)
  8. Factor Analysis
  9. Independent Component Analysis (ICA)
  10. Autoencoders (for Representation Learning)

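A minimal PCA sketch: the synthetic 5-D data below is constructed to lie in a 2-D subspace, so two principal components recover essentially all of its variance:

```python
import numpy as np
from sklearn.decomposition import PCA

# 5-D synthetic data that actually lives in a 2-D linear subspace
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)
# Two components should capture essentially all of the variance here
print(X_2d.shape, pca.explained_variance_ratio_.sum())
```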
Anomaly Detection Models

  1. Isolation Forest
  2. Local Outlier Factor (LOF)
  3. One-Class SVM
  4. Elliptic Envelope
  5. Autoencoder-based Anomaly Detection

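A minimal Isolation Forest sketch, with a synthetic dataset where the anomalies are planted far from the main cluster for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# 200 "normal" points near the origin plus 5 obvious outliers far away
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),
               rng.uniform(6, 8, size=(5, 2))])

iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
pred = iso.predict(X)  # +1 = inlier, -1 = anomaly
print((pred == -1).sum())
```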

3. Semi-Supervised Learning Models

These models use a small amount of labeled data with a large amount of unlabeled data.

  1. Self-training Models
  2. Label Propagation
  3. Label Spreading
  4. Graph-Based Semi-Supervised Learning


4. Reinforcement Learning Models

These models learn from interaction with an environment.

  1. Markov Decision Process (MDP)
  2. Q-Learning
  3. Deep Q Networks (DQN)
  4. SARSA (State-Action-Reward-State-Action)
  5. Actor-Critic Methods
  6. Policy Gradient Methods
  7. Proximal Policy Optimization (PPO)
  8. Trust Region Policy Optimization (TRPO)
  9. Deep Deterministic Policy Gradient (DDPG)
  10. Twin Delayed DDPG (TD3)
  11. Soft Actor-Critic (SAC)
  12. Monte Carlo Control
  13. Evolutionary Strategies

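The simplest of these to demonstrate is tabular Q-Learning. The sketch below uses a hypothetical 5-state chain environment invented for the example: moving right from the last state pays reward 1, everything else pays 0. Because Q-learning is off-policy, even a purely random behavior policy converges to the optimal values:

```python
import numpy as np

# Toy 5-state chain environment (hypothetical, for illustration only)
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9             # learning rate and discount factor
rng = np.random.default_rng(0)

for _ in range(5000):               # episodes with random starts and actions
    s = int(rng.integers(n_states))
    for _ in range(10):
        a = int(rng.integers(n_actions))
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if (a == 1 and s == n_states - 1) else 0.0
        # Q-learning update: bootstrap from the best action in the next state
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))  # greedy policy: move right in every state
```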

5. Deep Learning Models

These models use neural networks for feature extraction and learning.

Feedforward Neural Networks (FNN)

  1. Multi-Layer Perceptron (MLP)
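
A minimal MLP sketch on scikit-learn's two-moons data, which is not linearly separable, so the hidden layer is what makes the problem solvable (layer size and iteration count are arbitrary choices for the example):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Two interleaving half-moons: not linearly separable
X, y = make_moons(n_samples=400, noise=0.15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One hidden layer of 16 units is enough to learn the curved boundary
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
acc = mlp.fit(X_tr, y_tr).score(X_te, y_te)
print(acc)
```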

Convolutional Neural Networks (CNN)

  1. LeNet
  2. AlexNet
  3. VGGNet
  4. GoogLeNet (Inception Networks)
  5. ResNet (Residual Networks)
  6. DenseNet
  7. EfficientNet
  8. MobileNet
  9. Vision Transformers (ViTs)

Recurrent Neural Networks (RNN)

  1. Simple RNN
  2. Long Short-Term Memory (LSTM)
  3. Gated Recurrent Unit (GRU)
  4. Bidirectional LSTM/GRU

Transformer-Based Models

  1. Transformer (Original by Vaswani et al.)
  2. BERT (Bidirectional Encoder Representations from Transformers)
  3. GPT (Generative Pre-trained Transformer, GPT-1, GPT-2, GPT-3, GPT-4, etc.)
  4. T5 (Text-to-Text Transfer Transformer)
  5. XLNet
  6. RoBERTa
  7. ALBERT
  8. DistilBERT
  9. BART (Bidirectional and Auto-Regressive Transformers)
  10. Whisper (Speech-to-Text by OpenAI)


Generative Models

  1. Autoencoders (Vanilla, Variational, Denoising)
  2. Generative Adversarial Networks (GANs): Vanilla GAN, Deep Convolutional GAN (DCGAN), Conditional GAN (cGAN), StyleGAN, CycleGAN, Pix2Pix
  3. Normalizing Flows
  4. Diffusion Models (DALL-E, Stable Diffusion, Imagen, etc.)

Graph Neural Networks (GNN)

  1. Graph Convolutional Networks (GCN)
  2. Graph Attention Networks (GAT)
  3. GraphSAGE
  4. ChebNet (Chebyshev Graph CNNs)
  5. GNN Explainers

Self-Supervised Learning Models

  1. SimCLR (Simple Framework for Contrastive Learning)
  2. BYOL (Bootstrap Your Own Latent)
  3. MoCo (Momentum Contrast)
  4. DINO (Self-Supervised Transformers)
  5. MAE (Masked Autoencoders for Vision Tasks)


6. Hybrid Models & Meta-Learning

  1. Stacking Models
  2. Blending Models
  3. Bayesian Optimization-Based Learning
  4. Neural Architecture Search (NAS)
  5. Few-Shot Learning Models (Siamese Networks, Prototypical Networks, Matching Networks, etc.)
  6. Federated Learning Models
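A minimal stacking sketch using scikit-learn's `StackingClassifier` on synthetic data: two base models feed their out-of-fold predictions to a logistic-regression meta-learner (the particular base estimators are arbitrary choices for the example):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),  # the meta-learner
)
acc = stack.fit(X_tr, y_tr).score(X_te, y_te)
print(acc)
```

Blending is the simpler cousin: the meta-learner is trained on a held-out split rather than on cross-validated predictions.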



More articles by Naresh Maddela