[Contract Driven Development - Post 1] - The trouble with API mocks and stubs

Preface / Introduction

Over the last 6-7 years, I've been studying several microservices architecture transformations. What I've realised is that many of them are not delivering the returns their stakeholders were expecting. Many organisations expected to ship microservices, and eventually full features, faster. Instead, they had to repeatedly freeze feature development and deployments just to clean up the integration hell between their services. If you can relate to this, I hope the following series of posts will be extremely relevant to you. The goal is to share what my team and I have learned about solving these “integration hell” problems with microservices, and the insights we have gathered along the way.

Context

Let us start with an example to understand the root cause of the problem better. Consider the scenario below where we are building an e-commerce mobile application that requests product details from a service and displays them. The application requesting the data is the “consumer” and the service responding with the data is the “provider”. This looks like a fairly straightforward application. What could possibly go wrong with it? Let us explore this as we begin to build the consumer mobile application.
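
To make this concrete, here is a minimal sketch of the consumer side using Java's built-in HTTP client, assuming a hypothetical provider endpoint GET /products/{id} that returns product details as JSON (the host, path and payload shape are illustrative, not from any actual service):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ProductDetailsClient {
        public static void main(String[] args) throws Exception {
            // The consumer fetches product details from the provider and displays them.
            // Host, path and payload shape are hypothetical, for illustration only.
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://provider.example.com/products/10"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // e.g. {"id":10,"name":"Gas Oven","price":499.0}
        }
    }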

Building the consumer - The need for emulating the provider

Let us assume that the “provider” / API / service has not yet been built. A possible option is to wait for this service to be available before beginning development of the “consumer”. However, such a sequential style of development is not conducive to shipping features quickly. So a common workaround for the “consumer” application engineers is to stand up a mock server that emulates the “provider” in order to make independent, parallel progress.

Even when the actual provider is available, it may not always be practical/productive to leverage the actual provider to integrate during development or testing for the following reasons:

  1. It may be hard to get the actual “provider” application running on the “consumer” team’s local environment / developer laptops
  2. Access to “provider” applications running in remote environments can be slow / inconvenient because of network connectivity, access control, etc. Also several developers accessing a single shared “provider” application instance can unintentionally get in each other’s way through data corruption, etc.
  3. “Consumer” team might not have control over the “provider” application. This can cause surprises and confusion, for example when the “provider” team re-deploys the application or tries out a new configuration.
  4. We may want to set up test/canned responses, simulate faults, etc. to test the consumer application’s behaviour

For the reasons stated above, mocking and other strategies that isolate consumer application development from the provider are critical for productivity.
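
As a minimal sketch of point 4 above, this is roughly how a consumer team might stand up a local mock of the provider with a canned response using WireMock's Java API (the port, endpoint and payload are hypothetical):

    import com.github.tomakehurst.wiremock.WireMockServer;
    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    public class LocalProviderMock {
        public static void main(String[] args) {
            // A stand-in for the real provider, running on the developer's machine.
            WireMockServer mockProvider = new WireMockServer(8080);
            mockProvider.start();

            // Canned response for the product details endpoint (hypothetical shape).
            mockProvider.stubFor(get(urlEqualTo("/products/10"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"id\":10,\"name\":\"Gas Oven\",\"price\":499.0}")));
        }
    }

The consumer can now be pointed at http://localhost:8080 and developed or tested without the real provider running. Note, however, that nothing here verifies the canned body against the real provider, which is exactly the flaw we discuss next.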

The seductive promise of mocks/stubs and the real trouble with them

So what is the harm in leveraging mocks and stubs to make independent progress on consumer application development? While this looks like a great strategy, it has a fundamental flaw. The mock, as the name suggests, may not be truly representative of the real “provider” (for example, the number and data types of parameters may differ), and this can lead to integration issues late in the game when we deploy the consumer application against the actual provider / service in higher environments. Even if the mocks do not start out wrong, what stops them from drifting away from the actual API as it evolves?

Let's look at how this might happen with each of the mocking strategies.

Understanding provider emulation techniques

So far we have been using the term “mock” to refer to a wide category of provider emulation strategies; however, there are significant differences between these approaches. Mocks, stubs and service virtualization are not the same.

Popular provider / API emulation strategies fall into one or more of these categories.

  1. Consumer defined - These are techniques that consumer application teams may leverage to help them move forward independent of the provider.

1.1. Record and replay - Tools such as VCR and WireMock allow us to record interactions with the actual provider and subsequently use those recordings in place of the provider application. Since recording depends on the actual provider, the downside is that we need to wait for the provider to be built and available, which forces a sequential development style. More importantly, the recording / stub can become stale as the “provider” evolves and changes. Even with constant upkeep, there is no guaranteed way to know when a recording has fallen out of line with the “provider”.
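
As a rough sketch, the record-and-replay flow with WireMock running as a recording proxy looks something like this (the provider URL is hypothetical):

    import com.github.tomakehurst.wiremock.WireMockServer;

    public class RecordProviderInteractions {
        public static void main(String[] args) {
            WireMockServer proxy = new WireMockServer(8080);
            proxy.start();

            // Proxy traffic to the real provider and capture the interactions.
            // This only works once the actual provider exists and is reachable.
            proxy.startRecording("http://provider.example.com:9090");

            // ... point the consumer at http://localhost:8080 and exercise it ...

            // Persist the captured interactions as stub mappings for later replay.
            proxy.stopRecording();
            proxy.stop();
        }
    }

The saved mappings can then replay responses without the provider, but they are only as current as the day they were recorded.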

1.2. Hand rolled mocking - Here the consumer team either writes its own custom service that is hardcoded to return a response, or leverages a mocking framework to return canned / hardcoded responses. While this enables the consumer team to progress independently of the provider application, the technique is completely isolated from the actual “provider”, so the mock can be an incorrect emulation of the “provider” from the get-go.
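
A hand-rolled mock can be as small as a few lines on top of the JDK's built-in HTTP server. Here is a sketch, with the response hardcoded to whatever the consumer team believes the provider returns:

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class HandRolledProductMock {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/products/10", exchange -> {
                // Hardcoded response based purely on the consumer team's assumptions.
                // If the real provider returns, say, "price" as a string, this mock never notices.
                byte[] body = "{\"id\":10,\"name\":\"Gas Oven\",\"price\":499.0}"
                        .getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
                exchange.close();
            });
            server.start();
        }
    }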

2. Provider defined - Some provider teams address the problems with consumer defined emulators by taking ownership of creating and sharing tools that emulate their service. This promotes consistency across consumers and also eliminates the duplicated effort of each consumer team building its own emulator.

2.1. Service Stub - In this technique the provider shares a service stub with predefined canned responses. Since the provider team itself builds this utility, it is bound to be closer in behaviour to the actual application. However, this again creates a dependency on the provider team: consumer teams now have to wait for the service stub to be available, and usually cannot update the canned responses themselves. Also, even with this approach, unless there is a process to ensure that the latest version of the service stub is used, consumer teams may continue using outdated service stubs.

2.2. Virtualization - This can offer sophisticated capabilities such as simulating delays, errors, etc. However, it demands a significant investment of time, effort and cost from the provider teams. And depending on how the teams are set up, it may not give consumer teams granular control to update canned responses.
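
For illustration, here is how a delay and a connection fault might be simulated. The sketch uses WireMock's Java API as a stand-in for such capabilities (commercial virtualization tools offer similar, often richer, controls); the endpoints are hypothetical:

    import com.github.tomakehurst.wiremock.WireMockServer;
    import com.github.tomakehurst.wiremock.http.Fault;
    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    public class SimulatedFailures {
        public static void main(String[] args) {
            WireMockServer mockProvider = new WireMockServer(8080);
            mockProvider.start();

            // Simulate a slow provider: respond only after a 3-second delay.
            mockProvider.stubFor(get(urlEqualTo("/products/slow"))
                    .willReturn(aResponse().withStatus(200).withFixedDelay(3000)));

            // Simulate a network-level failure on another endpoint.
            mockProvider.stubFor(get(urlEqualTo("/products/flaky"))
                    .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));
        }
    }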

To recap, consumer defined emulators help consumer teams by giving them a good amount of control and isolation, but they fall short of being truly representative of the provider they are trying to emulate. Provider defined emulators are better at realistically representing the service they emulate, because the same team that builds the API also builds the mock; however, consumer teams then have to compromise on control over canned responses and the like. In both categories there is always a risk of deviation, whether through incorrect emulation, duplication or stale data / stubs, all of which leads to integration issues later in the development cycle.

Overall, such mocking techniques leave a lot of room for human error and only amplify any communication gaps and incorrect assumptions about the API design between teams, instead of helping them collaborate.

Keeping the providers in sync

So far we have covered how consumers need mocks that emulate providers in order to make independent progress. Provider applications, on the other hand, have a similar yet distinct problem. They are usually built in isolation, completely removed from how the consumer application will interact with them in higher environments and eventually in production. One could argue that API tests serve this purpose of emulating consumer behaviour. However, API tests are often not owned or contributed to by the consumer teams, and so do not truly represent the consumers' point of view. The first time providers actually get a taste of real consumer application behaviour is in higher test environments such as integration or staging.

The need for “Integration Testing”

To quickly recap, both “consumer” and “provider” applications do not have any way of testing compatibility early.

  1. Consumers leveraging API mocking techniques such as those mentioned above to isolate themselves from the provider may not have a truly representative emulation of the provider
  2. Providers again have no emulation of their consumers in the early stages of development

So it is impossible to say whether they will work well with each other when deployed together, which is why “Integration Testing” becomes absolutely necessary to verify compatibility before these two components can be shipped to “Production”.

In our next post we will cover why “Integration Tests” are not an effective mechanism to identify contract compatibility issues between microservices.




Further reading - API mocks play an important role in identifying contract compatibility issues early in the development cycle of API consumers / clients. Here is a blog comparing Specmatic and WireMock.

Stay tuned to know more. Sign up for our newsletter at https://specmatic.in.

