People Management for AI: Building High-Velocity AI Teams

This article describes how ML infrastructure, people, and processes can be melded together to enable MLOps that works for your organization. It offers practical advice to managers and directors looking to build a robust AI/ML practice with high-velocity AI teams.

The advice we share in this post is based on the experience of the Provectus AI team, which has worked with multiple customers at different stages of their AI journeys.

What Is a Balanced AI Team?

A balanced AI team is a cross-functional, high-performing team with multiple roles, each handling its own portion of the ML infrastructure and MLOps. Such a team can deliver viable AI/ML projects to production quickly, efficiently, and at scale.

The Role of a Data Scientist

In a modern AI team, a data scientist or citizen data scientist is an essential member. Data scientists are subject matter experts who understand the data and the business holistically.

They are hands-on in data mining, data modeling, and data visualization. They stay on top of data quality and data bias issues, analyze experiments and model outputs, validate hypotheses, and contribute to the ML engineering roadmap.

The Role of an ML Engineer

A balanced AI team should also include an ML engineer, whose skill set differs from a data scientist's. ML engineers should have deep expertise in specific AI and ML applications and use cases.

Every ML engineer should have MLOps expertise, but the ML/MLOps infrastructure itself, including its tools and components, should be the domain of a dedicated MLOps professional.

The Role of a Project Manager

A project manager should be trained to execute ML and AI projects. A traditional Scrum or Kanban project workflow does not work for ML projects. For instance, at Provectus we have a specific methodology for managing an ML project’s scope and timeline, and for setting the expectations of its business stakeholders.

Management Challenges of Building AI Teams

AI teams need a balanced composition to enable MLOps and accelerate AI adoption. Beyond composition, effective management is vital to keeping AI teams in sync with the ML infrastructure and MLOps foundation.

Organizational Structure of a Typical AI Team

  • Business units and traditional software engineers who report to the VP of Engineering
  • DevOps professionals and infrastructure experts who report to the VP of Infrastructure
  • Data scientists who handle data and, as a rule, work directly with business stakeholders
  • Data engineers who build systems designed to convert raw data into usable information for data scientists and business analysts

Management Challenge #1

Companies’ understanding of ML workflows and AI project management is limited, which makes it challenging to translate business goals into an AI product launched in production. As a result, it becomes impossible to manage the project’s scope and KPIs, and the expectations of business stakeholders go unmet.

Management Challenge #2

Companies delegate AI projects to existing data science teams that have historically worked in their own silos and rely on data science approaches that do not work for AI/ML projects. They end up with unfinished products and projects that cannot be deployed to production.

Management Challenge #3

Companies assign AI projects to traditional Java and .NET developers, or rely on third-party ML APIs. These developers lack the deep understanding of the data and the underlying algorithms needed to use such APIs efficiently. They end up with growing technical debt in the form of data science code that will never see production.

Solution: A balanced AI team that utilizes an end-to-end ML & MLOps infrastructure to collaborate and iterate.

How a Balanced AI Team and MLOps Infrastructure Work Together

The synergy of specific roles in a balanced AI team and MLOps infrastructure can be visualized as a three-tiered ecosystem:

Tier #1 — The infrastructure backbone of MLOps

This tier is supported by Cloud & Security professionals and DevOps engineers. It hosts baseline infrastructure components such as access, networking, security, and CI/CD pipelines.
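As a rough illustration, parts of this backbone are often expressed as infrastructure-as-code. The sketch below is a minimal, hypothetical example using the AWS CDK in Python; the stack and resource names are made up, and a real backbone would also cover IAM roles, security controls, and the CI/CD pipelines themselves.

```python
# Minimal sketch of a Tier #1 backbone stack (AWS CDK v2, Python).
# Names such as "MlBackboneStack" and "MlArtifacts" are illustrative only.
from aws_cdk import App, Stack, RemovalPolicy, aws_ec2 as ec2, aws_s3 as s3
from constructs import Construct


class MlBackboneStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Private networking for training jobs and notebook environments.
        self.vpc = ec2.Vpc(self, "MlVpc", max_azs=2)

        # Versioned bucket for model artifacts and pipeline outputs.
        self.artifacts = s3.Bucket(
            self,
            "MlArtifacts",
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,
        )


app = App()
MlBackboneStack(app, "MlBackbone")
app.synth()
```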

Tier #2 — Shared and reused assets of MLOps

This tier is managed by ML engineers and MLOps professionals, and includes notebooks with various images, kernels, and templates; pipelines with components and libraries that are treated as shared assets; experiments; datasets and features; and models.
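To make these assets genuinely shareable, many teams maintain some form of registry for them. The snippet below is a minimal, standard-library-only sketch of that idea; the asset names and URIs are hypothetical, and in practice this role is usually filled by a feature store or model registry in a platform such as Amazon SageMaker or Kubeflow.

```python
# Hypothetical registry of shared MLOps assets (illustration only).
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class SharedAsset:
    name: str
    version: str
    owner: str   # e.g. the ML engineer responsible for the component
    uri: str     # location of the notebook template, dataset, or model


@dataclass
class AssetRegistry:
    assets: Dict[str, SharedAsset] = field(default_factory=dict)

    def register(self, asset: SharedAsset) -> None:
        self.assets[f"{asset.name}:{asset.version}"] = asset

    def resolve(self, name: str, version: str) -> SharedAsset:
        return self.assets[f"{name}:{version}"]


registry = AssetRegistry()
registry.register(
    SharedAsset("churn-features", "1.2.0", "ml-eng", "s3://shared/features/churn")
)
print(registry.resolve("churn-features", "1.2.0").uri)
```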

Tier #3 — AI projects

This tier is the responsibility of data scientists, full-stack software developers, and project managers. It is independent of the other two tiers, yet is enabled by them.

Mid-Tier Roles and Functions

  • Cloud & Security professionals own the infrastructure backbone and are also responsible for the reused-assets layer, ensuring that all components and checks are in place.
  • DevOps professionals handle the automation parts of the two bottom tiers, from automating builds to managing environments.
  • ML engineers have both MLOps infrastructure and project expertise. They are responsible for individual components of the reused asset tier.
  • MLOps specialists work hand-in-hand with ML engineers, but they own the entire infrastructure (e.g. Amazon SageMaker, Kubeflow).
  • Citizen data scientists can prioritize the implementation of an AI/ML project, working in notebooks. They own a specific part of the ML pipeline.
  • Full-stack engineers can work on the regular software portion of an AI product, ranging from UI to APIs.
  • Project managers who are trained to do AI/ML work are responsible for the product’s implementation.

ML Infrastructure Backbone

Data Scientists

Here we see that data scientists have the tools to work with raw data, perform data analysis in notebooks, and check hypotheses. They can easily run experiments in an experimentation environment managed by ML engineers.
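For illustration, a single experiment run tracked from a notebook might look like the sketch below. It assumes MLflow as the tracking backend and uses a synthetic dataset; the experiment name and model choice are hypothetical.

```python
# Hypothetical experiment run tracked from a notebook.
# MLflow logs to a local ./mlruns directory unless the experimentation
# environment points it at a shared tracking server.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-hypothesis-1")  # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("test_auc", auc)
```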

ML Engineers

ML engineers are responsible for productionalizing ML models, meaning that they prepare algorithm code and data pre-processing code to be served in the production environment. They also build and operationalize various pipelines for the experimentation environment.
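One common way to do this is to bundle the pre-processing code and the trained model into a single servable artifact, so the same transformations run in experimentation and in production. The sketch below shows the idea with scikit-learn on synthetic data; the file name and model choice are illustrative only.

```python
# Sketch: bundle pre-processing and the model into one servable artifact.
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)

# The same pipeline object is used in experiments and in serving, so the
# pre-processing logic cannot drift between the two environments.
serving_pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1_000)),
])
serving_pipeline.fit(X, y)

joblib.dump(serving_pipeline, "model.joblib")  # artifact handed to the serving layer
predictor = joblib.load("model.joblib")
print(predictor.predict(X[:1]))
```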

DevOps Professionals

DevOps professionals help to efficiently manage all of the infrastructure components. For example, in our reference architecture, steps one through four illustrate a CI workflow that is handled by a DevOps engineer.
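The reference architecture itself is not reproduced here, but such a CI workflow typically boils down to a short, scripted sequence of steps: run the tests, build the serving image, and publish it for deployment. The sketch below is a hypothetical Python wrapper around those steps; the image name and registry URL are placeholders.

```python
# Rough sketch of a CI job for an ML service: test, build, push.
# The registry URL and tag below are placeholders, not real endpoints.
import subprocess

IMAGE = "registry.example.com/ml/churn-service:1.0.0"


def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


run(["pytest", "-q"])                       # unit tests for model and serving code
run(["docker", "build", "-t", IMAGE, "."])  # build the serving image
run(["docker", "push", IMAGE])              # publish for the CD stage to deploy
```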

MLOps Enablement

MLOps is as much about people and processes as it is about actual technologies. It is not overly complicated if you organize specific roles and functions and match them to the corresponding components of your ML infrastructure.

People + Infrastructure = MLOps

Building High-Velocity AI Teams with Provectus

At Provectus, we help businesses build state-of-the-art AI/ML solutions while nurturing high-velocity AI teams, supported by a robust infrastructure for MLOps. Reach out to us to start assessing options for your organization!

If you are interested in building high-velocity AI teams and MLOps, we recommend that you also request this on-demand webinar. It is free!

The full version of the article is available here.
