Three Meaningful API Metrics

How can you improve an API if you’re not measuring its behavior?

There’s no way to improve what you can’t measure. Businesses measure customer satisfaction so they can fix what makes customers unhappy. Schools measure students’ knowledge so teachers know which areas need work. Hospitals measure wait times so they can improve how they treat patients. So, why should APIs be any different? Believing you can improve an API without measuring its behavior is a recipe for failure. What, then, should you measure? Read on to learn about three metrics that can help you understand how to improve your API.


This article is sponsored by Scalar.

Scalar is a suite of powerful, customizable developer tools that help you at every stage of API development. Create beautiful API documentation in a Notion-like editing experience. Try Scalar for free now!


Everything starts with metrics. But before learning about those metrics, let’s first see how we can use them to refine your API. What’s the point of having a metric if you don’t know how to use it? I like to follow a framework that guides my API governance activities. In this framework, the ultimate goal for any API is to expose it to consumers. To reach that goal, I follow a lifecycle composed of the design, implementation, release, and maintenance stages. The lifecycle is standardized across all APIs, so I can improve their quality uniformly. I assess the quality of each API by collecting metrics, calculating scores, and making them available through a shared dashboard.
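As a sketch of how those metrics could feed a single score on a shared dashboard, consider the following. The weights, thresholds, and the `api_quality_score` name are my own illustrative assumptions, not values from the framework itself:

```python
# Hypothetical sketch: combining the three metrics from this article into
# a single 0-100 quality score. Weights and normalization thresholds are
# illustrative assumptions you would tune for your own APIs.

def api_quality_score(ttfc_minutes, breaking_changes, uptime_pct):
    """Return a 0-100 score; higher means better API quality."""
    # Lower TTFC is better: full marks at <= 5 minutes, zero at >= 60.
    ttfc_score = max(0.0, min(1.0, (60 - ttfc_minutes) / 55))
    # Fewer breaking changes is better: zero marks at 10+ per year.
    breaking_score = max(0.0, 1.0 - breaking_changes / 10)
    # Uptime maps linearly: 99.0% -> 0.0, 100.0% -> 1.0.
    uptime_score = max(0.0, (uptime_pct - 99.0) / 1.0)
    weighted = 0.4 * ttfc_score + 0.3 * breaking_score + 0.3 * uptime_score
    return round(100 * weighted, 1)

print(api_quality_score(ttfc_minutes=10, breaking_changes=2, uptime_pct=99.95))
# → 88.9
```

A score like this is only as good as the metrics behind it, which is why the rest of this article focuses on what each metric means and how to measure it.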

Everything starts with metrics. And the first, most important metric to look at is the time a new consumer takes to make a call to your API. The “time to first call,” or simply TTFC, is a metric that tells you how much friction your API onboarding process has. It’s an actionable metric in the sense that the higher it is, the worse the onboarding experience. So, your goal is to decrease the TTFC as much as you can. How do you measure it? Good question.

Measuring TTFC can be tricky because it involves understanding the API consumer onboarding funnel. What you want to measure is the time between a user’s first interaction with your API documentation and the moment they make their first call to the API. Everything the user does between those two steps adds to the TTFC. In other words, the easier your API is to use, the shorter the TTFC will be. But how do you know when a user has made a call to the API? If you let users try your API from the documentation, that’s easy: you simply add a web event called “user called API,” for example, to your analytics platform. Otherwise, you need to add code to your API, or your gateway, to push a similar “user called API” event to your analytics platform. Once you have this event, you can build a funnel with it as the last step.
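The server-side version of that event push could look like the sketch below. The analytics client here is a fake in-memory stand-in; a real platform would have its own SDK, and the event name simply mirrors the “user called API” example above:

```python
# Sketch: emitting the "user called API" funnel event from the API or
# gateway. The analytics client and its `track` method are stand-ins;
# a real deployment would use your analytics platform's SDK.

import time

_seen_keys = set()  # in production this would be persistent storage

def track_first_call(api_key, analytics):
    """Emit the funnel's last step only once per consumer."""
    if api_key not in _seen_keys:
        _seen_keys.add(api_key)
        analytics.track(event="user called API",
                        user=api_key,
                        timestamp=time.time())

class FakeAnalytics:
    """In-memory stand-in for an analytics SDK."""
    def __init__(self):
        self.events = []
    def track(self, **event):
        self.events.append(event)

analytics = FakeAnalytics()
track_first_call("key-123", analytics)
track_first_call("key-123", analytics)  # repeat calls are ignored
print(len(analytics.events))  # → 1
```

Deduplicating per consumer matters because the funnel needs the *first* call, not every call; every later request would otherwise pollute the TTFC measurement.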

While the TTFC metric has to do with onboarding, the second metric we’re measuring identifies challenges related to maintaining an ongoing integration with the API. The metric I’m talking about is the number of breaking changes. With this metric, you can understand how often an API changes in a way that is not compatible with previous versions. Whenever that happens, consumers might need to rebuild their integrations. That’s why this metric is a good way to calculate the amount of maintenance that an API requires from a consumer perspective.

So, how do you measure the number of API breaking changes? First, you need a way to detect changes at all, breaking or non-breaking. Then you can categorize some of those changes as breaking and count them. Luckily, there are many tools that can do that for you. One of them is the open-source oasdiff. According to its documentation, it supports over 250 checks, categorized as breaking changes, potential breaking changes, and non-breaking changes. You get a human-readable report by default, or you can choose from a variety of formats such as JSON and YAML.
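As a toy illustration of the idea, and in no way a replacement for oasdiff’s 250+ checks, here is a sketch that flags one kind of breaking change, removed operations, between two OpenAPI documents represented as plain dictionaries:

```python
# Minimal sketch of counting one category of breaking change between two
# OpenAPI documents: operations that existed before but are now gone.
# Real tools such as oasdiff also cover parameter, schema, and response
# changes; this only illustrates the counting idea.

def count_breaking_changes(old_spec, new_spec):
    """Count operations present in old_spec but missing from new_spec."""
    old_ops = {(path, method)
               for path, methods in old_spec.get("paths", {}).items()
               for method in methods}
    new_ops = {(path, method)
               for path, methods in new_spec.get("paths", {}).items()
               for method in methods}
    removed = old_ops - new_ops  # a vanished operation breaks consumers
    return len(removed)

old = {"paths": {"/users": {"get": {}, "post": {}},
                 "/orders": {"get": {}}}}
new = {"paths": {"/users": {"get": {}}}}  # POST /users and GET /orders removed

print(count_breaking_changes(old, new))  # → 2
```

Tracked over time, a count like this tells you how much rework your API demands of its consumers between versions.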

Now we know how hard an API is to start using and how often consumers need to maintain the integrations they built with it. The last metric we’ll look at helps you identify challenges related to reliability. It’s important because it tells you how often the API has problems or, at worst, is down. The metric I’m referring to is uptime. The best way to measure it is to set up an API monitor. While that might sound simple, I want to highlight that you should monitor the API from as many locations as your consumers are in.

Additionally, you want your monitor to call operations that are meaningful and can represent the health of the API. For example, it’s not enough to create a monitor that periodically hits a /health endpoint that returns an ok response. You probably want to consume an endpoint that uses a database or other resources your API depends on. Ideally, you’ll monitor multiple endpoints, each giving you information about at least one important area of your API.
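Here’s a minimal sketch of that multi-endpoint idea. The endpoint paths and the probe functions are illustrative stand-ins for real HTTP checks issued from several locations; a check only counts as “up” when every meaningful endpoint responds:

```python
# Sketch of a multi-endpoint uptime check and the resulting uptime
# percentage. The endpoint paths and probe stubs are illustrative; a
# real monitor would issue HTTP requests from multiple regions.

def check_endpoints(probe, endpoints):
    """The API counts as up only if every meaningful endpoint is healthy."""
    return all(probe(e) for e in endpoints)

def uptime_percentage(checks):
    """Share of monitoring runs in which the API was fully up."""
    return 100 * sum(checks) / len(checks)

# Endpoints chosen because they exercise real dependencies, not just /health.
ENDPOINTS = ["/users?limit=1", "/orders?limit=1"]

def healthy_probe(endpoint):   # stand-in for an HTTP GET returning 2xx
    return True

def failing_probe(endpoint):   # simulates the /orders dependency being down
    return endpoint != "/orders?limit=1"

checks = [check_endpoints(healthy_probe, ENDPOINTS) for _ in range(9)]
checks.append(check_endpoints(failing_probe, ENDPOINTS))
print(f"{uptime_percentage(checks):.1f}%")  # → 90.0%
```

Requiring all endpoints to pass is a deliberately strict definition of “up”; a partial outage of one dependency still counts against uptime, which matches how consumers experience it.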

Everything starts with metrics. Right? Starting with metrics, as you’ve just seen, makes it easier to understand how to improve the quality of your API. These three metrics are just a starting point. The more meaningful metrics you understand, the better the quality of your API will be.


This post was originally published on the API Changelog newsletter as “Three Meaningful API Metrics.”


