MLOps Components & Workflows
Data Ingestion & Versioning via GitHub
Our MLOps pipeline is defined in code and covers data ingestion, versioning, and ETL processing. It is triggered on every merge to the main branch, which keeps the pipeline reproducible and every code change meticulously tracked.
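One common way to pin the data side of that reproducibility is content addressing, the approach tools like DVC popularise: hash the ingested data and record the hash next to the commit SHA. A minimal stdlib sketch (the records and the 12-character id length are illustrative choices, not a real tool's format):

```python
import hashlib
import json

def dataset_version(records):
    """Content-address a dataset: identical data always yields the same
    version id, so a pipeline run can be reproduced from code SHA + data id."""
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:12]

# On merge to main, CI would compute this id and store it alongside the
# commit SHA so code and data versions are tracked together.
raw = [{"user_id": 1, "clicks": 3}, {"user_id": 2, "clicks": 7}]
version_id = dataset_version(raw)
```

Because the id depends only on content, re-running ingestion on unchanged data produces the same version, making cache hits and lineage checks trivial.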
Feature Preprocessing
In the ML pipeline, we retrieve, validate, and forward features from the Feature Store to subsequent stages. Feature metadata, a crucial ingredient for reproducing the trained model, is logged and stored in the Experiment Tracking System. This paves the way for smooth feature processing and straightforward model reproduction.
Model Training
We train and validate our model on preprocessed data, storing all associated metadata in the Experiment Tracking System to ensure traceability and reproducibility. This opens doors for easy identification of the best-performing model and smooth deployment to production.
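A toy version of "train, evaluate, log": fit a one-feature linear model in closed form and record params, coefficients, and metrics in a dict that stands in for the tracking system (in practice this would be calls to something like MLflow; the data here is made up):

```python
# Toy training data (illustrative).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

# Ordinary least squares for a single feature.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Validation metric on the same data (a real pipeline would hold out a set).
mse = sum((slope * x + intercept - y) ** 2 for x, y in zip(xs, ys)) / n

# Stand-in for an Experiment Tracking System record.
run_metadata = {
    "params": {"model": "linear_regression"},
    "coefficients": {"slope": slope, "intercept": intercept},
    "metrics": {"mse": mse},
}
```

Everything needed to reproduce or compare the run lives in `run_metadata`, which is what later stages (registry, deployment) key off.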
Model Registry
After validation, the model artifact is dispatched to the Model Registry, our 'museum' of all model versions. This allows us to trace the model's evolution and ensure the right version is deployed to production.
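The registry's two jobs (keep every version, mark the production one) fit in a small sketch. This is a minimal illustration, not the API of any particular registry product:

```python
class ModelRegistry:
    """Minimal model registry sketch: every validated artifact is stored
    under an auto-incremented version, and a pointer marks which version
    production should use."""

    def __init__(self):
        self._versions = {}
        self._production = None

    def register(self, artifact, metadata):
        version = len(self._versions) + 1
        self._versions[version] = {"artifact": artifact, "metadata": metadata}
        return version

    def promote(self, version):
        if version not in self._versions:
            raise KeyError(f"unknown model version: {version}")
        self._production = version

    def production_model(self):
        return self._versions[self._production]["artifact"]

registry = ModelRegistry()
v1 = registry.register({"slope": 1.90}, {"mse": 0.031})
v2 = registry.register({"slope": 1.94}, {"mse": 0.020})
registry.promote(v2)
```

Old versions are never overwritten, which is what makes rollback and evolution tracing possible.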
Model Containerization
Once validated, the model is containerized and prepared for exposure as a REST or gRPC API. This guarantees easy model deployment to production and lets the model scale to meet varying loads.
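Inside the container, the entrypoint is just a server wrapping the model. A bare-bones REST sketch using only the standard library (the `/predict` route and the toy linear model are illustrative; production services typically use a framework like FastAPI behind a proper WSGI/ASGI server):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy model coefficients baked into the image (illustrative values).
SLOPE, INTERCEPT = 1.94, 0.15

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(
            {"prediction": SLOPE * payload["x"] + INTERCEPT}
        ).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=0):
    """Bind the server; port=0 lets the OS pick a free port."""
    return HTTPServer(("127.0.0.1", port), PredictHandler)
```

Because the API surface is just HTTP, the same container can be replicated behind a load balancer to absorb traffic spikes.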
Experiment Tracking
Metadata from the Experiment Tracking System is tied to the Model Registry for each model artifact. This preserves traceability and eases the identification of the best-performing model.
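That link is, at its core, a join between run ids and registry versions. A small sketch (run ids, metric names, and values are made up for illustration):

```python
# Each tracked run records which registry version it produced, so the
# best model can be looked up by metric rather than by memory.
runs = {
    "run-001": {"model_version": 1, "metrics": {"mse": 0.031}},
    "run-002": {"model_version": 2, "metrics": {"mse": 0.020}},
    "run-003": {"model_version": 3, "metrics": {"mse": 0.027}},
}

def best_model_version(runs, metric="mse"):
    """Return the registry version whose run minimises the given metric."""
    best_run = min(runs.values(), key=lambda run: run["metrics"][metric])
    return best_run["model_version"]
```

With the mapping in place, "promote the best model" becomes `registry.promote(best_model_version(runs))` rather than a manual spreadsheet exercise.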
API Deployment
As the model shifts to production, a webhook is triggered, deploying a new version of the containerized API. This ensures the right version of the model is always available to users.
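The webhook's effect can be reduced to one idea: on a promotion event, flip the serving pointer to the new image. The payload shape, event name, and image tags below are all hypothetical:

```python
# Current serving state (illustrative image tag naming).
active_deployment = {"image": "model-api:v1", "model_version": 1}

def on_model_promoted(payload):
    """Webhook handler sketch: roll the API to the newly promoted version,
    ignoring unrelated events."""
    if payload.get("event") != "model_promoted":
        return active_deployment
    active_deployment.update(
        image=f"model-api:v{payload['model_version']}",
        model_version=payload["model_version"],
    )
    return active_deployment

on_model_promoted({"event": "model_promoted", "model_version": 2})
```

In a real setup the "update" would be a call to the orchestrator (e.g. a Kubernetes rolling update) rather than a dict mutation, but the contract is the same: promotion event in, new version serving out.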
Model Inference
Product applications send requests to the API. The Real-Time Feature Serving API supplies features for inference, and inference results are returned to the application, delighting our end users.
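End to end, a request carries only an entity id; the feature lookup and scoring happen server-side. An in-process sketch (the online store contents, weight values, and entity ids are invented for illustration):

```python
# Hypothetical real-time feature serving lookup, keyed by entity id.
ONLINE_FEATURES = {
    "user-42": {"clicks_7d": 3.0, "age": 31.0},
}
# Toy linear model weights over the same feature names.
WEIGHTS = {"clicks_7d": 0.5, "age": 0.01}

def predict(entity_id):
    """Fetch online features for the entity, score, and return the result
    the product application would receive."""
    features = ONLINE_FEATURES[entity_id]
    score = sum(WEIGHTS[name] * value for name, value in features.items())
    return {"entity_id": entity_id, "score": round(score, 4)}

result = predict("user-42")
```

Keeping feature retrieval behind the API means the application never needs to know which features the current model version expects.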
Model Monitoring
Models in production are under continuous scrutiny. If performance dips, automatic retraining is triggered, ensuring our models are always on their A-game, meeting user needs.
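The "dip detection" part can be as simple as a rolling window over per-request outcomes with a threshold. A sketch, with the window size, threshold, and trigger mechanism chosen arbitrarily for illustration:

```python
from collections import deque

WINDOW, THRESHOLD = 5, 0.80          # illustrative monitoring settings
recent = deque(maxlen=WINDOW)        # rolling window of 1.0/0.0 outcomes
retrain_triggered = []               # stand-in for kicking off the pipeline

def record_outcome(correct):
    """Record whether a prediction was correct; fire a retrain trigger
    when windowed accuracy falls below the threshold."""
    recent.append(1.0 if correct else 0.0)
    if len(recent) == WINDOW and sum(recent) / WINDOW < THRESHOLD:
        retrain_triggered.append(True)

for outcome in [True, True, False, False, True, False]:
    record_outcome(outcome)
```

Real systems would also watch input drift and latency, and would debounce the trigger so one bad window does not launch several overlapping training runs.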