Test the test: testing programming guides
Implementing an automatic coding-guidelines check produces code; in the case of the Axivion Suite, this is mostly Python code that uses the Axivion Suite API to flag certain places in the code as an "issue" and to report them accordingly.
Just as you should always test your code, you also have to test the programming guide. So we have to answer the question: which best practices should be followed when testing programming guides?
Strategy
1) For each and every programming rule, create a "unit" test with positive and negative test cases. In our case this means we need code that violates the rule as well as code that complies with it.
Creating code beyond the unit tests that follows all rules at the same time is very difficult. Therefore, the following approach is more efficient:
2) Create an overall test based on a stable or frozen branch of a project to assess the impact of changes to the coding rules in the delta views of the Axivion Suite. (Here we change the rules rather than the code.) If certain rules are not violated at all in that branch, additional violations can be created by "error seeding".
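Error seeding can be as simple as injecting one known violation into a copy of the frozen code and verifying that the check reports exactly one additional finding. The following is a minimal sketch of that idea; both the "check" and the seeded violation are illustrative stand-ins, not Axivion Suite APIs:

```python
def seed_violation(source: str, after_line: int, seeded: str) -> str:
    """Insert a known-bad line after the given line to provoke a finding."""
    lines = source.splitlines()
    lines.insert(after_line, seeded)
    return "\n".join(lines)

def count_findings(source: str) -> int:
    """Toy stand-in for a real rule check: count magic-number uses."""
    return sum("42" in line for line in source.splitlines())

clean = "int buffer_size = BUFFER_SIZE;\nint count = 0;"
seeded = seed_violation(clean, 1, "int magic = 42;  /* seeded violation */")

# The seeded copy must produce exactly one more finding than the clean code.
delta = count_findings(seeded) - count_findings(clean)
```

If the delta is not exactly one, either the rule check or the seeding is broken, which is precisely what this kind of test is meant to catch.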
Practical steps
For case 1) the Axivion Suite offers a test framework that allows creating code snippets in the respective source language, along with the configuration and the expected test results written as comments in that source language. The tests can be executed automatically as regression tests and are well suited for a CI/DevOps environment.
Example:
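The exact file format of the framework is Axivion-specific, but the underlying idea can be sketched in plain Python (all names here are hypothetical, not the real Axivion API): the expected findings are embedded as comments in the snippet, the check runs over it, and the two sets are compared.

```python
# Hypothetical sketch of a rule regression test: a C snippet carries its
# expected findings as "// expect:" comments, and a toy check is compared
# against those expectations.

SNIPPET = """\
void f(int n) {
    if (n < 0)
        goto error;        // expect: no-goto
    return;
error:
    handle();
}
"""

def check_no_goto(source: str) -> set[int]:
    """Toy check: flag every line that uses 'goto' (1-based line numbers)."""
    return {i for i, line in enumerate(source.splitlines(), start=1)
            if "goto" in line}

def expected_findings(source: str) -> set[int]:
    """Collect the line numbers annotated with an '// expect:' comment."""
    return {i for i, line in enumerate(source.splitlines(), start=1)
            if "// expect:" in line}

def run_regression_test() -> bool:
    """The test passes when actual and expected findings match exactly."""
    return check_no_goto(SNIPPET) == expected_findings(SNIPPET)
```

Because both the positive case (the `goto` line) and the negative cases (all other lines) live in the same snippet, one file exercises the rule in both directions.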
For case 2) you can use an old branch of your project that no longer changes. In principle, you can also make use of open-source projects, provided the rule sets applied there are not too different from your own. By means of the delta analysis, the differences can easily be used to assess changes.
To get more insights into my work at Axivion, register for our newsletter, visit our blog or follow us on LinkedIn and Twitter.