Code quality best practices. Part 1: Code review quality assurance
Introduction
Almost all development teams use code review in their daily practice, but few pay much attention to how effectively it works. Today, I’ll try to answer a very significant question: how can you make sure that code review in your team works properly, and how can you measure its effectiveness?
This article assumes that the reader is familiar with code review fundamentals and strategies. The material is mainly oriented toward team leaders and project managers, but it can also be very helpful for regular developers who aspire to higher positions on the project. So, let’s get started!
Code review quality assurance
Imagine that you have run up against a problem: your team has been conducting code reviews, but you suspect that some or all of the team members lack experience in this practice, or that their attitude leaves a lot to be desired.
In such a situation, the key to success is to organize training and optimize the process. But first of all, it is necessary to “measure code review” and obtain some numbers in order to figure out what is going wrong. One of the best approaches here is to gather and analyze metrics.
Basically, all the metrics can be split into the following types:
There is no need to memorize all the metrics that will be considered here, as different metrics are applicable to different teams and projects. However, it is important to understand their life cycle in order to make sure they are suitable for your projects.
Code-related quality metrics
This type of metric is usually based on the following input data:
LOC
LOC basically determines the size of a review. The code review literature suggests that developers should review no more than 200–400 LOC at a time.
1000 LOC = kLOC
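As a minimal sketch, the LOC and kLOC inputs above could be computed like this (assuming a simple LOC definition that skips blank lines and `#` comments; other definitions are equally valid):

```python
def count_loc(source: str) -> int:
    """Count non-blank, non-comment lines (one simple LOC definition)."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

def to_kloc(loc: int) -> float:
    """Convert LOC to kLOC (1 kLOC = 1000 LOC)."""
    return loc / 1000

snippet = "x = 1\n\n# a comment\ny = x + 1\n"
loc = count_loc(snippet)  # blank line and comment are excluded
```

Real tools (e.g. `cloc`) use much richer rules per language, but the idea is the same.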
IDA and MDA
The more defects identified and the fewer defects missed, the better the code review quality.
For instance, let’s say that a code review was conducted over a period “t”, and this period was divided into 3 parts. In each part, the IDA and MDA values were different:
Depending on the IDA and MDA parameter values, the code review quality metric will look like the following:
However, a large number of identified defects can also indicate poor code quality, and it might mean that something else is going wrong with the development process (for instance, the absence of coding standards or knowledge sharing).
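The article does not spell out the quality formula, but a common way to express it is the share of all defects that the review caught, IDA / (IDA + MDA). A sketch under that assumption, with illustrative numbers for the three sub-periods of “t”:

```python
def review_quality(ida: int, mda: int) -> float:
    """Fraction of all defects caught during review: IDA / (IDA + MDA).
    The closer to 1.0, the more effective the review."""
    total = ida + mda
    if total == 0:
        return 1.0  # no defects at all means nothing was missed
    return ida / total

# Hypothetical (IDA, MDA) pairs for the three parts of period "t":
periods = [(8, 2), (12, 4), (9, 1)]
quality = [review_quality(i, m) for i, m in periods]  # per-period quality
```

Note that MDA is only known in hindsight, once missed defects surface in testing or production, so this metric always lags the review itself.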
Defect density
This metric measures the number of defects per specified amount of code. It is calculated by dividing the IDA found during an inspection by the total number of lines of code inspected.
Different code can have a different defect density. Let’s consider some examples:
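The calculation itself is a one-liner; a sketch, normalizing to defects per kLOC:

```python
def defect_density(ida: int, loc_inspected: int) -> float:
    """Defects found per kLOC: IDA divided by the inspected size in kLOC."""
    return ida / (loc_inspected / 1000)

# e.g. 30 defects identified while inspecting 2000 LOC:
density = defect_density(30, 2000)  # 15 defects per kLOC
```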
IDA per kLOC
It is the most significant code review metric, showing us how effectively the review is going.
Industry average: 15–50 IDA / kLOC.
For instance, if there are 3000 LOC (3 kLOC) for review, this range predicts roughly 45–150 identified defects in total.
A value above the mentioned range can signal that the quality of the code is decreasing. However, if this value tends to zero, it can show that the review is not working properly.
This metric can also be calculated individually for each team member, so that a reviewer’s effectiveness can be compared with the team average and with other reviewers.
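A sketch of the per-reviewer comparison described above, with hypothetical reviewer names and numbers:

```python
from statistics import mean

# Hypothetical per-reviewer stats: (defects identified, LOC reviewed)
reviewers = {"alice": (60, 3000), "bob": (10, 2000), "carol": (45, 1500)}

ida_per_kloc = {
    name: ida / (loc / 1000) for name, (ida, loc) in reviewers.items()
}
team_avg = mean(ida_per_kloc.values())

# Flag reviewers well below the 15-50 IDA/kLOC industry range; a very
# low value may mean the review is not working properly for them.
below_range = [name for name, v in ida_per_kloc.items() if v < 15]
```

Such outliers are a starting point for a conversation, not a verdict: a reviewer may simply be handling cleaner code.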
MDA per kLOC
MDA per kLOC is another metric that can be both common and individual. It reflects how many defects were missed per 1000 LOC. This metric does not take into account defects that cannot be identified during code review and will only be caught during the testing phase of the SDLC.
The lower the value, the better the result. An increase in this metric can indicate that there are gaps in the code review strategy or guidelines.
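The “increase indicates gaps” check can be sketched as a simple trend test over hypothetical monthly values:

```python
def mda_per_kloc(mda: int, loc: int) -> float:
    """Defects missed during review per 1000 LOC (lower is better)."""
    return mda / (loc / 1000)

# Hypothetical (MDA, LOC reviewed) per month:
monthly = [mda_per_kloc(m, loc) for m, loc in [(4, 2000), (6, 2000), (10, 2000)]]

# A strictly rising trend may point at gaps in the review strategy.
is_rising = all(a < b for a, b in zip(monthly, monthly[1:]))
```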
Defect rate
This metric measures the speed of finding defects and can also be both common and individual. It is usually expressed in IDA / h.
The typical value cited in articles and code-review-related books is about 5–20 IDA / h. It also depends on many factors, such as the reviewer’s attitude, experience, and understanding of the business logic.
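A minimal sketch of the defect rate calculation and the range check, using illustrative numbers:

```python
def defect_rate(ida: int, review_hours: float) -> float:
    """Speed of finding defects, in IDA per hour."""
    return ida / review_hours

# 30 defects identified over 2.5 hours of review:
rate = defect_rate(30, 2.5)
in_typical_range = 5 <= rate <= 20  # the 5-20 IDA/h range cited above
```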
Code review rate
Basically, this metric shows a reviewer’s personal speed of conducting code review and can help plan the workload for reviewers (developers).
The unit of measurement is LOC / h.
The value of this metric depends on the type of review. Let’s consider some average values:
The average values also depend on the reviewer’s attitude, experience, and understanding of the business logic.
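Workload planning with this metric reduces to dividing the change size by the reviewer’s personal rate; a sketch with an assumed rate of 400 LOC/h:

```python
def review_hours_needed(loc: int, loc_per_hour: float) -> float:
    """Estimate reviewer time from change size and personal review rate."""
    return loc / loc_per_hour

# A reviewer with a measured rate of 400 LOC/h facing a 1000-LOC change:
hours = review_hours_needed(1000, 400)
```

Combined with the 200–400 LOC-per-session guideline mentioned earlier, this also suggests splitting such a change into several review sessions.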
Time-related metrics
Basically, the code review timeline can be divided into several stages, and time-related metrics will show the time spent on each stage of the review, including the so-called lead time.
All the metrics shown on the timeline above are expressed in time units. When calculating “Review time” and “Correction time”, the duration of each stage should be taken into account.
The basic rule for all time-related metrics sounds like “The lower the value of the metric, the better”.
These kinds of metrics are usually used when code needs to be released very often and the review process has to be optimized from a time point of view.
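The stage durations and lead time described above can be derived from timestamps of the review’s lifecycle events; a sketch with hypothetical stage names and dates:

```python
from datetime import datetime, timedelta

# Hypothetical timestamps for one change under review:
created  = datetime(2024, 5, 1, 9, 0)    # review requested
started  = datetime(2024, 5, 1, 15, 0)   # reviewer picks it up
finished = datetime(2024, 5, 2, 10, 0)   # review comments posted
merged   = datetime(2024, 5, 2, 16, 0)   # fixes applied, change merged

pickup_time     = started - created      # time spent waiting in the queue
review_time     = finished - started
correction_time = merged - finished
lead_time       = merged - created       # end-to-end, the "lead time"
```

In practice these timestamps come from the review tool’s API (pull request events, for example), and the durations are averaged over many changes.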
Organizational metrics
These kinds of metrics are rarely used; they usually focus on the organizational aspects of the code review process.
Possible organizational metrics are listed below:
Conclusion
Do you remember the question that was asked at the beginning of this article? I hope you found the answer.
All in all, code review metrics can help to “measure code review” and build a well-established code review process.
Code review quality metrics are specific to different teams, projects, and needs. Please don’t expect fast results: sometimes gathering the metrics takes months, and only after that time can you see the full picture. For managers and team leaders, code review metrics can help create strategies for improving this process. For regular developers, they can help identify areas of weakness.