Navigating the Balance of Automated Testing
Alexsandro Souza
Tech lead | Author | Instructor | Speaker | Opensource contributor | Agile coach | DevOps | AI enthusiast
Automated testing is a cornerstone of modern software development, essential for keeping our systems reliable and effective. It is the safeguard that ensures our systems operate as intended, even as we introduce new features or refactor existing code.
Why Invest in Automated Tests?
The primary motivation behind writing automated tests is to improve confidence. Whenever we change or add something, we want to be sure it doesn't break anything. Automated tests let us do just that, without having to check everything by hand every time. This confidence is not just for the developers or the product team; it extends to stakeholders and, ultimately, the users who rely on the software to perform flawlessly.
The Dilemma of Over-Testing
Martin Fowler, a luminary in software development, posits that while testing is indispensable, it's indeed possible to test too much. The litmus test for over-testing, according to Fowler, is if removing certain tests does not diminish the overall confidence in the system's functionality.
Kent Beck, another esteemed figure, known for introducing Test-Driven Development (TDD), echoes a similar sentiment. He articulates that the goal isn't to amass unit tests but to test just enough to achieve a good level of confidence. He succinctly states, "I get paid for code that works, not for tests," underscoring the philosophy of testing as little as possible while still maintaining confidence.
Consider getting rid of many of your unit tests. If you consistently need to fix them with every code refactor, it's worth questioning their utility. Unit tests should give you confidence that changes to your code haven't altered the desired behavior of the unit. If they're not serving that purpose and instead require frequent updates, you're only fooling yourself. It might be time to discard them and focus on writing tests that truly validate the unit's behavior.
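A hypothetical sketch of the difference: the `Cart` class and its tests below are invented for illustration, not taken from the article. The first test is coupled to internal state and will break under harmless refactoring; the second asserts only observable behavior, so it survives refactoring and earns its keep.

```python
class Cart:
    """A minimal shopping cart used to contrast two testing styles."""

    def __init__(self):
        self._items = []  # internal detail; a refactor might swap this for a dict

    def add(self, name, price, qty=1):
        self._items.append((name, price, qty))

    def total(self):
        return sum(price * qty for _, price, qty in self._items)


def test_internal_state():
    # Brittle: asserts on the private list layout. Renaming or restructuring
    # `_items` breaks this test even though the cart still behaves correctly.
    cart = Cart()
    cart.add("book", 10.0, 2)
    assert cart._items == [("book", 10.0, 2)]


def test_total_is_price_times_quantity():
    # Robust: asserts only on observable behavior, so it fails exactly when
    # the unit's contract is broken -- which is the confidence we pay for.
    cart = Cart()
    cart.add("book", 10.0, 2)
    assert cart.total() == 20.0
```

If a refactor forces you to rewrite tests like the first one while the second keeps passing, that is a strong hint about which tests were actually validating behavior.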
Optimizing for Confidence, Not Coverage
This introduces a crucial concept: optimizing for confidence rather than quantity. It's about striking a balance, ensuring that each test adds value and enhances confidence without redundantly covering the same ground across unit, module, and end-to-end (E2E) tests. Acknowledging the significant investment in writing and maintaining automated tests, it becomes clear that each test should be purposeful and carefully considered.
The Misconception of Code Coverage
Code coverage, often treated as a metric of testing completeness, can be misleading. High line coverage does not guarantee readiness for real-world scenarios, nor does it ensure the quality or defect-free operation of the system. The fallacy lies in equating code coverage with code quality—a correlation that has been statistically debunked.
The danger of code coverage metrics is their potential to distract from the essence of software development: addressing and fulfilling use cases. For instance, setting arbitrary thresholds, such as not shipping code with coverage below 80%, can lead to the proliferation of trivial tests that add little value or confidence but merely inflate coverage statistics.
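To make this concrete, here is a hypothetical example (the `format_user` helper is invented for illustration). Both tests execute every line of the function, so a coverage tool scores them identically, yet only the second can ever catch a regression.

```python
def format_user(name, email):
    """Render a display string like 'Ada <ada@example.com>'."""
    return f"{name} <{email}>"


def test_format_user_runs():
    # Inflates coverage: every line of format_user executes, but nothing is
    # asserted, so this test passes no matter what the function returns.
    format_user("Ada", "ada@example.com")


def test_format_user_output():
    # Adds confidence: pins down the actual contract of the function.
    assert format_user("Ada", "ada@example.com") == "Ada <ada@example.com>"
```

A suite full of tests like the first one can clear an 80% threshold while leaving the system's behavior effectively unverified.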
However, as Martin Fowler points out, code coverage is not without merit. It is a useful tool for identifying untested code, offering insight into areas that may need attention. The key is to use coverage as a guide for uncovering untested paths, not as the sole indicator of quality or completeness.
In summary, by embracing a balanced testing strategy, informed by the wisdom of industry experts, we can navigate the complexities of software development with assurance, focusing on delivering code that works, satisfies user needs, and stands the test of time.