Continuous Reviews for Clean Code Delivery - II

Part I of this series covered the importance of code reviews: Continuous Reviews for Clean Code Delivery - I

Now let's look at code review best practices: the aim of code reviews, when to do them, how to do them, what to look for, and a few other obvious questions related to code reviews.

Aim of code review

This is the most important thing to keep in mind when performing a review. One thing to be sure of: a review is not a tool to find fault with others or with the code; it is a tool to add value. Reviewers should understand this while reviewing and giving feedback. It adds value in three ways [3]:

Add value to the system:

  • Automated code review: There is a plethora of static and dynamic analysis tools out there for each technology, many of them open source. One can use them to customize the coding standards, rules, and quality gates, thereby adding value to the system by checking the code's compliance with industry standards (a minimal custom-rule sketch follows this list).
  • Manual code reviews: This is important, and the aim should be to add value to maintainability by making it easy for the operations team to manage the code: review whether exception handling is proper, check the readability of the code, variables, and methods, whether the logging is proper, and so on, all of which make sure the application has good performance and scalability. These are the kinds of things the reviewer should look for during the manual code review to add value to the system.
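
To make "customize the rules" concrete, here is a minimal sketch of a custom automated check, written in Python with the standard ast module. The snake_case convention and the sample source are assumptions chosen for illustration, not rules from any particular tool; real tools such as PMD or Checkstyle express the same idea as configurable rule sets.

```python
# A minimal sketch of a custom automated review rule (illustrative only):
# flag function and variable names that do not follow snake_case.
import ast
import re

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def check_naming(source: str):
    """Return (line, name) pairs for identifiers that break the convention."""
    violations = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and not SNAKE_CASE.match(node.name):
            violations.append((node.lineno, node.name))
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            if not SNAKE_CASE.match(node.id):
                violations.append((node.lineno, node.id))
    return violations

# Hypothetical sample input, just to show the rule firing.
sample = "def ProcessOrder():\n    TotalAmount = 0\n"
for line, name in check_naming(sample):
    print(f"line {line}: '{name}' does not follow snake_case")
```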

Add value to developers

  • The reviewer should also add value to the developer. Most of the time the reviewers are more experienced than the developers whose code is being reviewed, and the knowledge the reviewer possesses should be transferred to the developer. Basically, this is a value add to people: developers also get the opportunity to learn new things.

Add value to best practices/ identify best practices

  • The reviewer should observe and look out for patterns in what is committed to the source control tool, identify them, share them with the developer, and add the way such issues were handled to the team's best practices or lessons learned.

Overall, the aim of all your code reviews is to add value to the system and to the developer, and to make sure best practices are identified and shared with the developers whose code is being reviewed.

When do you do code reviews?

As early as possible, i.e., as soon as code is available for review. The sooner the better, because the code is fresh in the developer's mind and that is the best time to do the review. Code has to be checked into the system often, and before it is merged into the mainline, at least two levels of manual review have to be done (discussed later).

However, if a waterfall methodology is adopted, or the code lives in a version control system such as ClearCase or equivalent, then the following two situations also call for picking up the code for periodic review:

A new developer joins the team: Whenever a new developer joins the team, they will take some time to get accustomed to the standards the team follows, and it is very important that a senior team member helps them understand the code review aspects; the architect or a senior developer can sit with the new developer and do the review together. As written earlier, the important thing is to make sure they feel comfortable, and to tell them that the review is not meant to find fault with them but to help them get used to the project's coding standards and whatever else they need to know while coding in the project.

A new framework is introduced: The other high-focus time is whenever a new methodology, framework, or technology is being introduced, or a new important feature is being implemented. Those are the times for a high-focus code review.

How do you do code reviews?

There are no set guidelines. Reviews are achieved through two means: automated and manual.

https://www.sonarqube.org/

Automated code reviews: Use static and dynamic code analysis tools as part of continuous integration. SonarQube in particular is used by many for static analysis, to analyze and scan the code and report quality metrics indicating the health of the code. This is one part of the review and a really good thing; however, it is not the end-all-be-all. Tool selection should be based on the technology. Customize the rules and quality gates, and then you can rest assured on the tool. CAST, PMD, Checkstyle, and many more tools are familiar choices.
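
As a sketch of wiring such a gate into the pipeline, the following Python snippet asks a SonarQube server whether a project passed its quality gate, using SonarQube's api/qualitygates/project_status web service. The server URL, project key, and token are placeholder assumptions.

```python
# A minimal sketch of failing a CI build on the SonarQube quality gate.
# Server URL, project key, and token below are placeholders.
import sys
import requests

SONAR_URL = "https://sonar.example.com"   # placeholder
PROJECT_KEY = "my-project"                # placeholder
TOKEN = "your-sonar-token"                # placeholder

resp = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"projectKey": PROJECT_KEY},
    auth=(TOKEN, ""),  # token passed as username, empty password
    timeout=30,
)
resp.raise_for_status()
status = resp.json()["projectStatus"]["status"]
print(f"Quality gate: {status}")
if status != "OK":
    sys.exit(1)  # break the build so the merge is blocked
```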


Manual code reviews: These are a must, and really important, because automated static analysis has limitations: it cannot check whether the code, variables, methods, and so on are readable and properly named. Those are the kinds of things that automated review tools do not really find. Therefore manual code reviews are important, and the aim should be to add value to maintainability by making it easy for the operations team to manage the code: review whether exception handling is proper, whether logging is proper, and so on, to make sure the application has good performance and scalability. These are the kinds of things the reviewer should look for during the code review.

Static and dynamic analysis happens through the tools. For manual code review, pair programming ideally helps: one can spend time on reviews while development is in progress, or, as soon as the static analysis results are available, take a quick look and then sit with the developer to do a pair programming review. It is usually easier to refactor quickly than to put effort into writing comments, which the developer then needs to understand, and which may not carry the same meaning the reviewer intended. Doing pair programming reviews by sitting with the programmer helps them improve the code right then and there, whether that means renaming variables or methods, or fixing the issues reported by static analysis tools sooner.


A few good practices

Let's try to understand a few good practices:

  • One defect fix or one issue per pull request: Try to make the code in the scope of the pull request meaningful enough that it is independent and can be merged after approval without breaking other code. Limit the scope of the code under test or under review: size your stories appropriately and keep your branches as short-lived and small in impact as possible. Sometimes this can be done by shipping code that is required for a feature but does not yet have any UI changes.
  • Minimum two reviews/approvals before the merge: There are interesting trends in the relationship between the number of reviewers and the number of defects found: fewer reviewers find more defects. If many reviewers are assigned, the number of defects found by the reviewers goes down [1].
  • Another interesting relationship concerns the time reviewers spend reviewing the code. Do not assign too many reviewers. Typically one assigns one and a half to two and a half times the number of required reviewers; the reason is that if one reviewer is overloaded or away, someone else can step in and approve, which reduces overall cycle time. The trade-off is that the more reviewers there are, the less time each spends [1].
  • Size of code for the review: Assign a reviewer 10 lines of code and there will be 10 issues; assign 500 lines and the probability of the code being declared fine, with no issues, is very high. So if the user story or requirement being coded is large, look at options to break it up logically, and code and review in small chunks, so that you get the opportunity to increase the quality of the code. Looking at how much time people spend in reviews relative to the actual content, it turns out that for relatively small reviews developers spend a reasonable amount of time, but for insanely large reviews they suddenly gain a magical ability to review hundreds of lines of code per second, which obviously means they are not actually reviewing the code at all. So keep reviews as small as possible and make each change incremental [2] (see the CI guard sketch after the next two paragraphs).
Review fewer than 400 lines of code at a time : A SmartBear study of a Cisco Systems programming team revealed that developers should review no more than 200 to 400 lines of code (LOC) at a time. The brain can only effectively process so much information at a time; beyond 400 LOC, the ability to find defects diminishes.

In practice, a review of 200-400 LOC over 60 to 90 minutes should yield 70-90% defect discovery. So, if 10 defects existed in the code, a properly conducted review would find between seven and nine of them.
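One way to act on this guidance is a small CI guard that refuses oversized changes before anyone is asked to review them. Below is a minimal sketch: the 400-line limit follows the study above, while the base branch name is an assumption.

```python
# A minimal sketch of a CI guard for review size: fail when a change
# exceeds ~400 changed lines. The base branch name is an assumption.
import subprocess
import sys

BASE = "origin/main"   # assumed base branch
LIMIT = 400            # per the 200-400 LOC guidance above

# --numstat prints "added<TAB>deleted<TAB>path" for each changed file.
out = subprocess.run(
    ["git", "diff", "--numstat", f"{BASE}...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

changed = 0
for line in out.splitlines():
    added, deleted, _path = line.split("\t", 2)
    if added != "-":               # "-" marks binary files; skip them
        changed += int(added) + int(deleted)

print(f"{changed} changed lines against {BASE}")
if changed > LIMIT:
    print("Change is too large to review effectively; please split it.")
    sys.exit(1)
```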


  • Use checklists: It's very likely that each person on your team makes the same 10 mistakes over and over. Omissions in particular are the hardest defects to find, because it's difficult to review something that isn't there. Checklists are the most effective way to eliminate frequently made errors and to combat the challenge of finding omissions. Code review checklists also provide team members with clear expectations for each type of review and can be helpful to track for reporting and process improvement purposes [2] (a tracking sketch follows this list).
  • How to plan, or how much time to estimate for the review? Some teams have a problem where all issues get stuck in the review state towards the end of the sprint. Developers prefer to write new code rather than become reviewers of other people's code, so one way to handle this is to assign one or two days a week for reviews. Some organizations have a practice of picking a day or two and declaring them "inbox zero" days, where every developer on the team has to clear out all of their reviews by merging any pull requests that are already approved, and then going back and approving or providing feedback on any outstanding pull requests they are assigned to [1].
  • On the effort to accommodate in the plan, there is no right or wrong answer. As discussed earlier, it could be a number of days in a week. If the code review is done by an external team member (recommended), it might amount to approximately 30% to 35% of the development effort; for example, a feature estimated at 10 development days would carry roughly 3 to 3.5 days of review effort. If the review is done by another member of the same development team (not recommended), it can be completed in approximately 20% of the development effort. If it is done by an SME from a shared resource team, it is also approximately 20%.
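
As a sketch of making a checklist trackable for reporting, the snippet below models a review checklist in Python. The items listed are illustrative examples drawn from the points above, not a prescribed standard list, and PR-123 is a made-up identifier.

```python
# A minimal sketch of a trackable review checklist. The items are
# illustrative, drawn from the points discussed above, not a standard.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    description: str
    checked: bool = False

@dataclass
class ReviewChecklist:
    pull_request: str
    items: list = field(default_factory=lambda: [
        ChecklistItem("Exception handling is proper"),
        ChecklistItem("Variables and methods are named readably"),
        ChecklistItem("Logging is proper"),
        ChecklistItem("No omissions against the requirement"),
    ])

    def completion(self) -> float:
        """Fraction of items checked, usable for reporting."""
        return sum(i.checked for i in self.items) / len(self.items)

review = ReviewChecklist(pull_request="PR-123")  # hypothetical PR id
review.items[0].checked = True
print(f"{review.pull_request}: {review.completion():.0%} complete")
```
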
What are the metrics to be captured?
  • Defects per pull request (or per code artifact) is the key metric. A table with the following columns might help capture the measures. However, a script can be handy to pull the information from the logs without manual intervention (a sketch follows this list).
  • Columns: Date | Pull request | Time taken to review | Number of reviewers | Number of defects found | Number of defects fixed
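
Here is a minimal sketch of such a script, assuming the code is hosted on GitHub; the owner, repository, and token are placeholders, and the calls are GitHub's REST endpoints for listing pull requests and their reviews. Defect counts are not directly exposed by the API, so the number of review submissions stands in as a rough proxy here.

```python
# A minimal sketch of pulling the review measures automatically,
# assuming GitHub hosting. Owner, repo, and token are placeholders.
import requests

OWNER, REPO = "your-org", "your-repo"            # placeholders
HEADERS = {"Authorization": "token YOUR_TOKEN"}  # placeholder token
API = "https://api.github.com"

pulls = requests.get(
    f"{API}/repos/{OWNER}/{REPO}/pulls",
    params={"state": "closed", "per_page": 20},
    headers=HEADERS, timeout=30,
).json()

print("Date | Pull request | Reviewers | Review submissions")
for pr in pulls:
    reviews = requests.get(
        f"{API}/repos/{OWNER}/{REPO}/pulls/{pr['number']}/reviews",
        headers=HEADERS, timeout=30,
    ).json()
    reviewers = {r["user"]["login"] for r in reviews}
    print(f"{pr['created_at'][:10]} | #{pr['number']} | "
          f"{len(reviewers)} | {len(reviews)}")
```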

Other metrics [4]

  • Delivery metrics — how long pull requests are taking, with context around how they fit into the overall delivery process.
  • Code quality metrics — churn, frequently changed areas of code, language-specific static analysis, maintainability, security, technical debt, etc. These determine hot spots in the code, risk factors, and security findings, and are extremely useful when tied back into the delivery timeline (a churn sketch follows this list).
  • Organizational indicators — team topologies, knowledge clusters, comments on pull requests, etc.
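
As a sketch of computing churn hot spots directly from git history, the snippet below counts how often each file has changed; the 200-commit window is an arbitrary assumption.

```python
# A minimal sketch of finding churn hot spots from git history:
# files that change most often are candidates for closer review.
# The commit range scanned (last 200 commits) is an assumption.
import subprocess
from collections import Counter

# --name-only with an empty pretty format prints one changed file
# path per line, separated by blank lines between commits.
out = subprocess.run(
    ["git", "log", "-200", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter(line for line in out.splitlines() if line.strip())
print("Top 10 hot spots by change frequency:")
for path, count in churn.most_common(10):
    print(f"{count:4d}  {path}")
```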

Thank you.



References:

0. Code Review at Microsoft
1. Tips of Effective Code Review
2. Best Practices for Code Review
3. Code Review Best Practices
4. Code Metrics as Performance Indicators
