Scaling machine learning experiments + other resources
It’s time to wrap up May's content! Let's dive in: this issue covers how to fine-tune hyperparameter searches, scale up ML experiments, and build out solid MLOps capabilities.
Enjoy!
---
Case studies & practical MLOps
> Building MLOps Capabilities at GitLab As a One-Person ML Platform Team - Let's start with the story of Eduardo Bonet, an incubation engineer at GitLab building out their MLOps capabilities. In this article, he shares how he integrates CI/CD with ML workflows and builds a native experiment tracker.
---
Guides & tutorials
> How to Optimize Hyperparameter Search Using Bayesian Optimization and Optuna - Gourav Bais knows how to streamline hyperparameter tuning with Bayesian optimization and Optuna. In this post he shares the ideas behind it, how it works, and how to perform Bayesian optimization for any of your machine-learning models.
> Scaling Machine Learning Experiments With neptune.ai and Kubernetes - Scaling ML experiments efficiently is a challenge for ML teams, and this is where experiment trackers and orchestration platforms come in. In this article, you will learn how to scale experiments effectively using Neptune and Kubernetes, including practical tips and a detailed step-by-step guide.
---
ML Platform Podcast
> And we're still going strong with the ML Platform Podcast!
In this new episode, Olalekan Elesin shares insights on building the ML Platform at Scout24 SE, including problems to be solved, reasons for going multi-cloud, point solutions vs. end-to-end platforms, data platforms vs. ML platforms, and more.
To get notifications about new episodes, follow us on Spotify or Apple Podcasts, or subscribe to our YouTube channel.
---
Okay, that's it for today. Hope you find these useful!
If you do, feel free to forward this newsletter to your friends and communities!
Cheers!