You Can Still Have 100% Code Coverage With Untested Code!
Photo by Mohammad Rahmani on Unsplash

Code coverage has always been a metric that counts the number of lines successfully executed under a test procedure, but what does your code coverage percentage actually tell you?

I’ve previously tried to answer the question “Is 100% code coverage a metric for reliability and code quality?”. I demonstrated a testing pipeline using Codecov and GitHub Actions to visualize how coverage works, and used that as an attempt to answer the question. In this article, I will dive deeper into what your code coverage metric can tell you and what to expect from it.

In short, a high percentage of code coverage is only a measurement of your testing procedures having hit a high number of code lines. Does it measure reliability, security or compatibility? Not necessarily; it is a metric for understanding how much of your codebase is reached by its testing procedures. Putting that into perspective, what happens with 100% code coverage then?

Code Coverage As a Metric

I am not saying that code coverage is unnecessary. On the contrary, I think code coverage should always be measured in every project. Code coverage is an easy metric to collect, but it needs to be well-defined to serve as solid and stable ground for tested code. In other words, the metric is only as good as your programming language, testing procedures, business logic and integrations. I know that seeing a high percentage of coverage on a repository is catchy and interesting. You automatically think, “Wow! That’s a well-tested and reliable repo”. But you’ll probably still find some open issues, open pull requests and some bugs here and there. What does that mean? Why does that happen?

On the other hand, code coverage itself is not a testing method. It is a measurement of how many lines of code are executed after a specific function is called. Code coverage is the equivalent of hit rate and miss rate in computer architecture: how many hits (executed lines of code) versus misses (unexecuted lines of code). However, while the hit-and-miss rate in a computer architecture can determine the performance of the caching memory, the code coverage percentage does not determine the quality of your tests.
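To make the hit-rate analogy concrete, coverage is just the ratio of executed lines to total lines. A minimal sketch with hypothetical line counts (the numbers are invented for illustration):

```python
# Coverage percentage, analogous to a cache hit rate:
# hits = executed lines, misses = unexecuted lines.
executed_lines = 180    # hypothetical hit count
unexecuted_lines = 20   # hypothetical miss count
total_lines = executed_lines + unexecuted_lines

coverage_pct = 100 * executed_lines / total_lines
print(f"coverage: {coverage_pct:.1f}%")  # coverage: 90.0%
```

Nothing in that ratio says anything about what the executed lines were checked against, which is exactly the gap the rest of this article explores.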

I received some genuine and constructive feedback on some of my earlier articles, especially the previous code coverage article. Here are some scenarios that can occur while still having a high percentage of code coverage.

Untested Code

“You can still have 100% code coverage with untested code” — Amr Galal, Software Engineer. Remember that catchy, reliable-looking repo? Imagine a developer was tasked with implementing an API endpoint and, for some reason, it was never actually tested. The coverage in that case is deceiving, because the API was never invoked from a testing procedure; the entry point from which coverage would measure the hit-and-miss lines of code never actually executed.
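A minimal sketch of how coverage can deceive in this way (the handler and test names are illustrative, not from any real project): a test that merely calls the code executes its lines, so a coverage tool counts them as hit, even though nothing about the behaviour is verified.

```python
def get_user(user_id):
    """A hypothetical API handler."""
    if user_id < 0:
        raise ValueError("invalid id")
    return {"id": user_id, "name": f"user-{user_id}"}

def test_get_user_runs():
    # Executes every line of the happy path, so coverage reports them as hit...
    get_user(1)  # ...but there is no assertion: the result is never checked.

def test_get_user_checked():
    # A real test pins down the behaviour, not just the execution.
    assert get_user(1) == {"id": 1, "name": "user-1"}

test_get_user_runs()
test_get_user_checked()
```

Both tests produce identical coverage for the happy path; only the second one would catch a regression in `get_user`.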

Does that mean you can get away with untested code? Well, first of all, you should not have untested code. Whatever testing method you are using (unit tests, integration tests, etc.), you should not submit code blocks without testing the basis of the logic and any integrations the code block needs.

Data Retrieval

Coverage is not a metric for data retrieval and integration. Data retrieval customization is always a point of friction between backend and frontend developers. Developers agree on a certain customization that fits the design and then start exchanging data.

However, that customization falls under integration tests: the validation, across several entities, that a testing procedure passes successfully. Unless you extend your coverage across both entities, so the percentage metric can measure the validity of each entity on its own and the integration between them, you are still only measuring a unit test that assumes the customization.

To put that into a simple context, imagine a statically paginated API being tested, meaning the number of pages and items is fixed by design. On the backend side, the coverage will be a result of the test cases, right? And in that case you are testing the API, right? But does the testing procedure validate the data retrieval customization or not? If it doesn’t, it is untested code. Therefore, you can have high coverage of your code lines and still have a problem with data retrieval and integration.
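As a sketch of the pagination example (the names, the page size and the item count are all assumptions for illustration): a backend unit test that hard-codes the agreed customization fully covers the handler and passes, but only an assertion on the contract the frontend was designed around would catch a change in that customization.

```python
import math

PAGE_SIZE = 10           # the "customization" both sides agreed on
ITEMS = list(range(25))  # pretend database rows

def get_page(page):
    """Hypothetical backend handler for a statically paginated API."""
    start = page * PAGE_SIZE
    return {
        "items": ITEMS[start:start + PAGE_SIZE],
        "total_pages": math.ceil(len(ITEMS) / PAGE_SIZE),
    }

# A backend unit test: fully covers get_page and passes...
assert len(get_page(0)["items"]) == PAGE_SIZE

# ...but only an integration-style assertion validates the agreed contract.
# If PAGE_SIZE silently changed, these are the checks that would fail:
assert get_page(0)["total_pages"] == 3
assert get_page(2)["items"] == [20, 21, 22, 23, 24]
```

The unit-level assertion is true by construction (it uses `PAGE_SIZE` itself), which is exactly why it cannot validate the customization on its own.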

Failure Tolerance

Most projects aim to be failure tolerant. The fastest way to get there is to handle as many corner cases as you can. Error handling is a major factor in every project, and tests should reflect that.

But is that what testing is all about? Not really! Let me explain.

In a previous project, with 100% code coverage, I was conducting an internet speed test at intervals and saving each result as a data entry in the database. My testing methods were as basic as they could be. Using continuous integration, running tests on GitHub’s servers, the tests always passed, because of course there is always an internet connection on the pipeline runners. However, when I tested on my local machine without an internet connection, the project and its tests failed.

What does that mean? Is my project badly tested? But my coverage is high!

Eventually I added an error handler to the project to check the internet connection first, of course, and adjusted everything accordingly. The lesson is that testing design matters. So what do I use coverage for? Use coverage to validate that your testing procedures exercise every part of your implemented code; mainly, to validate that every corner case is actually reached by the testing blocks.
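A sketch of how such a fix might look (the connectivity check and the speed-test hook are assumptions, not the project’s actual code): by injecting the connectivity state, both the online and the offline branch become reachable by tests, on CI runners and on an offline local machine alike.

```python
import socket

def check_internet(host="8.8.8.8", port=53, timeout=1.0):
    """Return True if a TCP connection to a well-known host succeeds."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def run_speed_test(measure, online=None):
    """Run the measurement only when a connection exists; degrade otherwise."""
    if online is None:
        online = check_internet()
    if not online:
        return {"status": "offline", "mbps": None}  # the error-handling branch
    return {"status": "ok", "mbps": measure()}

# Both branches are now testable without a real network connection:
assert run_speed_test(lambda: 42.0, online=False)["status"] == "offline"
assert run_speed_test(lambda: 42.0, online=True)["mbps"] == 42.0
```

With the branch made injectable, coverage can honestly report whether the offline corner case is reached by the tests, instead of silently depending on the runner’s environment.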

Of course, there are many techniques for approaching problems like untested code, data retrieval and failure tolerance. A common one is system and data mocking: you mock functions, methods or even responses to yield a reliable and stable test infrastructure. However, that is not the aim of this article; the aim is to visualize what 100% code coverage says about your project. As mentioned before, code coverage is a representation of how much of the original codebase your testing, mocking and error handling functions reach.

Imagine a developer working on a new feature in a project. The developer then writes some testing procedures and submits their code. What should happen to the code coverage? Increase? Decrease? Stay the same?

As a developer, you can use code coverage as a flag for your testing procedures, new features and general logic. Code coverage can help you spot parts of the logic you haven’t tested, hence untested code, and it shows you where to amend the testing blocks. On the other hand, if the code coverage increases or stays the same, that is a relatively good sign, as the logic of the project is being reached by the tests. However, the quality of the tests themselves cannot be measured by changes to code coverage.

Finally, if you have tested every corner case and every integration for a new feature, and your code coverage stayed the same or increased, you are, relatively speaking, in the clear!

Thanks!
