3 API Bugs to Watch For in 2018
APIs are the backbone of countless platforms and apps today. Yet only a fraction of the time and money spent testing websites and apps goes into testing the APIs behind them. Thankfully, in the past four years more tools and knowledge have come to the market.
I have been in the space since 2013, and can tell you the number of ways an API can fail is endless and always changing. Sometimes a new technology or practice leads to a kind of failure one year that we will not see the next. Drawing on some hard lessons learned in 2017, and with an eye to the future, here are some API bugs that we think will be unfortunately common in the coming year.
- 200 Is Not OK
- A common test we see is to ping an endpoint and validate that it returns a 200 status code. Unfortunately, there are countless ways in which this is not enough. Soft error codes (returning a 200 even when there is an issue), database issues, and schema inaccuracies are just the start. When you hit an API you should be validating the entire response: headers and payload. Every object and piece of data should be reviewed, and that's where automation comes in. Don't trust a quick manual test; use the tools now on the market. The first sketch after this list shows what a full-response check can look like.
- Use Real and Random Data
- Fake tests using fake data lead to fake results. Whenever possible, API tests should run against live data and databases. Too often a test is simply a handful of calls, made locally, against a small CSV of test data. With API tests, the first step can fetch a collection of live data, and subsequent calls can randomly select from that data. True, random, powerful testing like this helps catch the countless small bugs that lead to major losses in customers and money. The second sketch after this list shows the pattern.
- Architectural Flaws
- Issues such as memory leaks and race conditions are often caught only after an API program is live and starts receiving significant traffic. Load testing APIs is an important, but rarely performed, step. Hit those APIs, validate the payloads, and monitor the memory on the machines. Do it as part of your CI/CD process, and validate a good deployment every time. The third sketch after this list shows a lightweight version of this check.
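To make the full-response check concrete, here is a minimal sketch in Python using the requests library. The endpoint URL, field names, and expected values are hypothetical stand-ins for your own API's documented schema.

```python
# A minimal full-response check. The endpoint and expected fields below are
# hypothetical placeholders, not a real API.
import requests

def test_user_endpoint():
    resp = requests.get("https://api.example.com/v1/users/42", timeout=10)

    # The status code is only the first gate, not the whole test.
    assert resp.status_code == 200

    # Validate headers, not just the body.
    assert resp.headers.get("Content-Type", "").startswith("application/json")

    body = resp.json()

    # Catch "soft errors": a 200 that still carries an error payload.
    assert "error" not in body, f"Soft error returned: {body.get('error')}"

    # Validate the schema and types of every field the client relies on.
    assert isinstance(body.get("id"), int)
    assert isinstance(body.get("email"), str) and "@" in body["email"]
    assert body.get("status") in {"active", "suspended", "deleted"}

if __name__ == "__main__":
    test_user_endpoint()
    print("Full response validated, not just the status code.")
```

For the real-and-random-data pattern, here is a sketch with the same kind of hypothetical endpoints: step one pulls a live collection, and step two randomly selects a record from it for the follow-up call.

```python
# A data-driven test sketch: fetch live data first, then reuse a randomly
# chosen record in the next call. Endpoints and field names are hypothetical.
import random
import requests

BASE = "https://api.example.com/v1"

def test_random_product_detail():
    # Step 1: pull a live collection instead of a static CSV of canned IDs.
    listing = requests.get(f"{BASE}/products?limit=100", timeout=10)
    assert listing.status_code == 200
    products = listing.json()["items"]
    assert products, "Listing endpoint returned no data to test with"

    # Step 2: pick a record at random and feed it into the subsequent call.
    picked = random.choice(products)
    detail = requests.get(f"{BASE}/products/{picked['id']}", timeout=10)
    assert detail.status_code == 200

    body = detail.json()
    # The detail view should agree with the listing it came from.
    assert body["id"] == picked["id"]
    assert body["name"] == picked["name"]

if __name__ == "__main__":
    # Run a handful of iterations so each execution exercises different data.
    for _ in range(5):
        test_random_product_detail()
    print("Randomized, data-driven checks passed.")
```

And for catching architectural flaws, here is a rough sketch of a lightweight load check that could run as a CI/CD step. The URL, request counts, and latency budget are assumptions; dedicated load tools go much further, but the idea is to validate every payload and watch latency while the API is under pressure.

```python
# A rough load-test sketch: hit an endpoint concurrently, validate every
# payload, and record latencies. URL and thresholds are hypothetical.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://api.example.com/v1/health"
REQUESTS = 200
CONCURRENCY = 20

def one_call(_):
    start = time.perf_counter()
    resp = requests.get(URL, timeout=10)
    elapsed = time.perf_counter() - start

    # Validate the payload under load, not just "did it respond".
    ok = resp.status_code == 200 and resp.json().get("status") == "ok"
    return ok, elapsed

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(one_call, range(REQUESTS)))

    failures = [r for r in results if not r[0]]
    latencies = sorted(e for _, e in results)
    p95 = latencies[int(len(latencies) * 0.95) - 1]

    print(f"failures={len(failures)} "
          f"median={statistics.median(latencies):.3f}s p95={p95:.3f}s")

    # Fail the CI/CD step if payloads broke or latency degraded under load.
    assert not failures, "Some responses failed validation under load"
    assert p95 < 1.0, "95th percentile latency exceeded the 1 second budget"
```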
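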
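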
APIs are much more complicated than simply being up or down. There are many small, subtle issues that are costing you dearly every day. Do not just test APIs manually. It is time to acknowledge that automation is no longer a nice-to-have but an absolute need. Make it part of your deployment (CI/CD) pipeline and your monitoring coverage.
Read about more potential bugs in 2018 in our case study.
API Fortress is a complete performance and quality platform for companies with business-critical APIs. It is a web-based platform that helps teams evaluate API accuracy, monitor performance, and run load tests. Reduce costs with automated test generation, save time with an intuitive interface, and decrease risk by catching problems before your customers or partners do. To learn more about why companies are switching to API Fortress, visit API Fortress.
Please contact Vas Edelen ([email protected]) if you would like login credentials for API Fortress or a clearer understanding of the importance of testing and monitoring your APIs.
- #api, #testing, #apitesting, #monitoring, #apimonitoring, #performance, #apiperformance, #qa, #qualityassurance, #quality, #automated, #automatedtesting, #software, #platform, #apiplatform, #apim, #apimanager, #rest, #http, #soap, #lifecycle, #SoapUI, #smartbear, #runscope, #jmeter, #mule, #mulesoft, #mashery, #tibco, #apiary, #apigee, #oracle, #wso2, #layer7, #apidocs, #apifortress, #Patrickpouilin, #vasedelen, #desktopenvironments, #microservices, #services, #docker, #tag, #containers