The 3 Layers Of The AI Stack

"Should we build everything from scratch, use third-party machine learning services, or a combination of both?"

When you identify AI/ML use cases for your company (or your company's clients) that will lead to business value and ROI, you need to know the various implementation options.

Every company is different: Each has its own business objectives, customers, AI adoption roadmap, data strategy, AI monetization plan, timeline, and resources.

It's important that you and your team choose the right tools for each use case. There is no absolute right or wrong answer, and you must be open to change if the tool you initially selected turns out to be inappropriate for a given use case.

In practice, it's difficult to assess the 'perfect' combination of tools/libraries at the beginning, especially if you are implementing a brand new use case. The best approach is to make the most educated guess about what will work, get started, and iterate/pivot as your team moves from research to production.

These are the 3 layers of the AI stack:

1) Frameworks and infrastructure

The first level of the AI stack comprises machine learning libraries, such as TensorFlow, PyTorch, and others. We could get more technical by going below the ML libraries, but our focus is practical AI adoption from a business leader's perspective.

These machine learning frameworks provide your team with the highest flexibility to implement custom use cases and build intellectual property over time.

Some teams build their own in-house machine learning platform, creating every component from scratch with these libraries. These platforms are designed to meet specific short-term and long-term business needs.

Working at this level of the AI stack takes the most time because you are building from the ground up. It also requires the highest level of AI/ML technical expertise and a strong understanding of how the many components of a machine learning system fit together.
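To make this layer concrete, here is a minimal sketch of what building directly on an ML framework looks like, using PyTorch to define and train a tiny model. The architecture, data, and hyperparameters are placeholders for illustration only, not a recommended setup.

```python
import torch
import torch.nn as nn

# Placeholder model and synthetic data, purely for illustration;
# a real use case would define an architecture and dataset
# specific to the business problem.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

X = torch.randn(256, 10)   # 256 synthetic examples, 10 features each
y = torch.randn(256, 1)    # synthetic regression targets

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# At this layer, your team owns every step: the training loop,
# evaluation, checkpointing, deployment, and monitoring all have
# to be built and maintained in-house.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```

The trade-off is clear: maximum control and flexibility in exchange for owning every piece of this pipeline yourself.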

2) Machine learning platforms

The next level of the AI stack involves machine learning platforms, such as Amazon SageMaker and GCP's AI Platform. These platforms provide a centralized place to handle end-to-end machine learning workflows, from research to production.

These platforms have several components that make end-to-end machine learning easier, faster, and more efficient, such as:

  • Cloud computing and on-demand specialized hardware
  • Data labeling tools and services
  • Jupyter notebooks to write custom code (first layer of the AI stack)
  • Pre-built machine learning models and model training services
  • Automated hyperparameter tuning
  • Easy access to machine learning APIs and cloud storage

As you can see, these platforms provide great flexibility and efficiency for managing your machine learning workflows.

The AI adoption timeline for a given use case can be greatly reduced because the most complex components of the machine learning process have been built already. Also, these centralized platforms allow you to iterate quickly and keep track of all experiments with their associated metrics/KPIs.
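As a rough illustration of this layer, here is a hedged sketch of launching a managed training job with the SageMaker Python SDK. The container image, IAM role, S3 paths, and hyperparameters are placeholders, and parameter names can vary between SDK versions.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

# All values below are placeholders for illustration only.
estimator = Estimator(
    image_uri="<your-training-container-image>",
    role="<your-sagemaker-execution-role-arn>",
    instance_count=1,
    instance_type="ml.m5.xlarge",  # on-demand hardware managed by the platform
    output_path="s3://<your-bucket>/model-artifacts/",
    sagemaker_session=session,
    hyperparameters={"epochs": 10, "learning_rate": 0.001},
)

# The platform provisions the hardware, runs and tracks the training job,
# and writes the resulting model artifacts back to S3.
estimator.fit({"train": "s3://<your-bucket>/training-data/"})
```

Compare this with the first layer: the training loop, infrastructure, and experiment tracking are handled by the platform, while your team still controls the model code inside the container.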

3) API-driven services

The third layer of the AI stack involves adding machine learning capabilities to applications through API calls. This layer is where software engineers feel most comfortable because they don't have to understand the underlying data science behind the services.

These third-party ML services are most useful when you are implementing a simple use case that is common across companies/industries, such as:

  • Language translation
  • Speech-to-text
  • Text-to-speech
  • Caption generation
  • Generic image classification
  • Domain-specific ML services, such as those for healthcare or retail

Since these AI/ML APIs are pre-built and available out of the box, implementing these use cases takes the least time. This gives you opportunities to score quick wins that prove the value of AI to your organization with minimal risk and resources.
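For example, a language translation use case can be a single API call. The sketch below uses Amazon Translate through boto3 as one possible choice; the specific service, credentials, and region are assumptions you would adapt to your own stack.

```python
import boto3

# Assumes AWS credentials and a default region are already configured.
translate = boto3.client("translate")

response = translate.translate_text(
    Text="Machine learning creates business value.",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)

# The models, training data, and serving infrastructure
# all live behind the API.
print(response["TranslatedText"])
```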

Be careful when using these pre-built APIs, though. One of the biggest mistakes software engineers make when they first get into AI is thinking that machine learning capabilities are nothing more than API calls.

You need data scientists and machine learning engineers who understand what is truly happening 'behind the scenes' with these ML services.

Debugging a machine learning application is tricky, and often counterintuitive. If you blindly call APIs without having a true understanding of the datasets and machine learning algorithms encapsulated by these APIs, it will be difficult to understand why things don't work/perform as expected.

As a manager or leader of AI/ML teams, understanding these 3 layers of the AI stack helps you make better business assessments, understand what your team is doing, and guide them accordingly.

If you need help accelerating your company's machine learning efforts, or getting started with enterprise AI adoption, send me a LinkedIn message or email me at [email protected] and I will be happy to help you.

Subscribe to my blog to get the latest tactics and strategies to thrive in this new era of AI and machine learning.

Subscribe to my YouTube channel for business AI video tutorials and technical hands-on tutorials.

Client case studies and testimonials: https://carloslaraai.com/enterprise-case-studies/

Follow me for more content: linkedin.com/in/CarlosLaraAI

#ai #career #artificialintelligence #machinelearning #deeplearning #datascience #business #enterprise #leadership #careers #aicareer #aiadoption #projectmanagement #productmanagement
