Code review mastery: 8 practical examples that will help you elevate your code review game

There aren’t many aspects of software engineering as crucial as code reviews. In this article, we’ll delve into 8 practical examples that will help you elevate your code review game. Let’s begin!


1. Reviewer (Nigel Nitpick): You should have used the K&R indentation style here and placed the bracket on the line above.

What do you think about this feedback – is it good or bad?

This is bad feedback.

Why? Matters of style should be settled by an authoritative style guide. Every organisation should have an established style guide, enforced automatically wherever possible (e.g. via pre-commit hooks). The issue in this case – indentation style – is something that can be automated away, and it is not worth spending time and effort discussing and correcting it manually.
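To make this concrete, here is a minimal sketch of what such automation could look like, assuming the pre-commit framework and a C-style codebase formatted with clang-format; the pinned revision and the presence of a .clang-format file describing the team's style are illustrative assumptions, not something prescribed by this article:

    # .pre-commit-config.yaml – illustrative sketch only
    # Assumes the team's agreed style (indent width, brace placement, etc.)
    # is described in a .clang-format file at the repository root.
    repos:
      - repo: https://github.com/pre-commit/mirrors-clang-format
        rev: v18.1.8          # pin to whichever clang-format version the team standardises on
        hooks:
          - id: clang-format  # reformats staged C/C++ sources before each commit

With a hook like this in place, brace placement and indentation are settled by the formatter before a reviewer ever sees the diff, and comments like Nigel's become unnecessary.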


2. Reviewer (Archie Analyzer): Nice catch here. Thanks!

What do you think about this feedback – is it good or bad?

This is good feedback!

Why? We often treat code reviews as a contest of "who can spot more flaws in the submitted code". Hence, we frequently overlook the fact that code reviews can also serve as a platform for conveying positive feedback! This offers numerous benefits: it reinforces positive behaviours, fosters a positive work environment, and boosts morale.

Next time you come across someone's outstanding work in a pull request, make sure to give them the recognition they deserve!


3. Reviewer (Jade Judge): Actually, I don't quite like this "#include" preprocessor directive idea here. Maybe we should rather have a real module system in C?

What do you think about this feedback – is it good or bad?

This is bad feedback.

Why? Imagine the situation: you've dedicated an immense amount of time and effort to implementing a feature that handles the "#include" directive in a C preprocessor, only to learn, after submitting your pull request, that the whole idea you implemented may be wrong at its core and that you should do a major rework.

This is not good feedback because pointing out a major design flaw after the feature has been implemented is already too late. The optimal time to identify major design flaws and make critical design decisions is during the design review process.


4. Reviewer (Henry Heuristic): This is some great work; thanks! While you're at it, could you refactor all of our 100 other pipeline jobs to use this new modernised Docker image?

What do you think about this feedback – is it good or bad?

This is bad feedback.

Why? While code reviews are indeed valuable for enhancing the overall health of your codebase, it's important to request only slight improvements each time a particular area of the code is touched. Introducing them gradually ensures that the codebase remains stable and manageable. Requesting major improvements and refactorings every time a specific area is touched can become a slippery slope and a source of frustration for PR (pull request) submitters. Remember that slow and steady wins the race.


5. Reviewer (Helena Holography): I'm not entirely sure if I understand this part. Could you please provide some clarification on the functionality of this code?

What do you think about this feedback – is it good or bad?

This is good feedback!

It is common to be afraid to ask for an explanation of what the code actually does – after all, many people may be viewing that pull request, and you might worry about appearing clueless. However, the fact that you cannot comprehend the code can be a clear signal that it needs to be rewritten for easier understanding. Of course, there are situations where you cannot make the code much more readable. In that case, the PR (pull request) author can either annotate the code with comments to make it more understandable, or respond to your comment with a written explanation, which will help you get back on track with reviewing the code and will also be persisted for future readers in your code review tool.


6. Reviewer (Griffin Grumble): Why didn't you just use Guava's LoadingCache for caching?

What do you think about this feedback – is it good or bad?

This is bad feedback.

Why? The comment raises a couple of concerns. First of all, it introduces personal bias into the discussion (note the use of "you"). It is highly unlikely that a single person has ownership over the codebase; more likely, it is owned by the entire team. In that case, we should refrain from the word "you" and replace it with "we" where possible. The comment also carries a passive-aggressive and judgmental tone. The reviewer should instead be courteous and respectful and phrase the comment kindly. A more effective version of the comment would also provide additional context about the concerns raised. For instance:

I noticed we're implementing manual caching in the current solution. Exploring the possibility of integrating Guava's LoadingCache as a replacement might be beneficial. It's designed to handle automatic cache population, invalidation, and cleanup, potentially simplifying the codebase and enhancing efficiency. LoadingCache also offers advanced features that could improve the resilience and maintainability of the application.

— Griffin Grumble
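To make the suggestion above more tangible, here is a minimal sketch of what a LoadingCache-based replacement for manual caching could look like in Java. The class name, key/value types, and the loadFromBackend method are purely illustrative assumptions, not code from the pull request being discussed:

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.TimeUnit;

    public class SettingsCache {

        // LoadingCache populates entries on demand via the CacheLoader below and
        // handles size-based eviction and time-based invalidation itself, so no
        // hand-rolled HashMap plus expiry bookkeeping is needed.
        private final LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(10_000)                     // cap memory usage
                .expireAfterWrite(10, TimeUnit.MINUTES)  // automatic invalidation
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        return loadFromBackend(key);     // called only on a cache miss
                    }
                });

        public String get(String key) throws ExecutionException {
            // Returns the cached value, or loads, caches and returns it transparently.
            return cache.get(key);
        }

        // Hypothetical slow lookup standing in for a database or remote call.
        private String loadFromBackend(String key) {
            return "value-for-" + key;
        }
    }

Phrased this way ("we could replace X with Y, and here is roughly how it would look"), the same technical suggestion lands as a collaborative proposal rather than a reproach.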


7. Reviewer (Felix Feedback): Nitpick (non-blocking): let's chain these.

What do you think about this feedback – is it good or bad?

This is good feedback!

Why? It is a good idea to mark the severity of a comment.

Is the issue you've spotted something that requires immediate attention, or can it be addressed at a later stage? In other words, is it a blocking issue or a non-blocking one? Is the comment a nitpick, an optional suggestion, a to-do item, a major showstopper, or perhaps just an FYI (for your information)? Make it clear in your comment, as in the example above!


8. Reviewer (Paris Patch): If we recursively query only for java_ rules here, it may give us a performance boost.

Author (Diana Deploy) RE: I don't think it will; I guess the Bazel server startup will consume more time than we could save from omitting non-java rules.

What do you think about this response from the PR (pull request) author?

This is a bad response from the PR (pull request) author.

Why? Guessing is rarely a good idea in engineering. It's far more effective to conduct experiments and present factual evidence to validate our hypotheses. Decisions should be made based on data.

An example of a good response from the PR (pull request) author could be as follows:

I did some tests using the "time" Linux command-line utility, and the results indicate that the current strategy using the ":all" aggregate target performs better than recursively querying only for java_ rules:
strategy ":all" (no cache, after bazel clean --expunge): real 1m22.039s, user 0m4.894s, sys 0m0.398s
strategy querying only for java_ rules (no cache, after bazel clean --expunge): real 2m7.919s, user 2m1.356s, sys 0m14.606s

— Diana Deploy
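For readers who want to reproduce such a measurement, the comparison could look roughly like the commands below; the exact target patterns depend on the repository layout, so treat these as an illustrative assumption rather than the commands Diana actually ran:

    # Strategy 1 – current approach: query every target in the workspace
    bazel clean --expunge            # drop caches so both runs start cold
    time bazel query //...:all

    # Strategy 2 – proposed approach: restrict the query to java_* rules
    bazel clean --expunge
    time bazel query 'kind("java_.*", //...)'

Either way, the broader point stands: a minute of measurement settles a debate that guessing never could.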

Other principles

Having explored those 8 practical examples, let’s top it off with some additional best practices for both PR (pull request) authors and code reviewers.

As a PR (pull request) author, you should:

  1. Perform a self-review before submitting a change. Even if it's not your regular practice, you might be surprised by the number of issues you can catch by reviewing your own pull request – especially after some time has passed since completing your work on it.
  2. Write a good pull request description. As a rule of thumb, you should make it as easy as possible for the reviewer to check your code. This is a win-win situation. The reviewer will have an easier job reviewing your PR (pull request). You will get better feedback on your pull request, and the merged code could end up in better shape than otherwise.
  3. Split up your PRs (pull requests) into a manageable size. A SmartBear study advises developers to review between 200 and 400 lines of code (LOC) in one sitting. This recommendation stems from the brain's capacity to process information effectively; the study found that beyond 400 LOC, a developer's ability to identify defects decreases.

Another practice you may try is crafting your commits so that reviewers can read them one by one and follow the same thought process you had while implementing the feature.

Conversely, as a PR (pull request) reviewer, you should strive for:

  1. Making the code review process quick. Why? According to the DORA research 2023, "teams with shorter code review times have 50% better software delivery performance". Long and tiresome code review processes can be a major source of developer frustration. To increase code review efficiency, consider automating manual tasks, keeping pull requests (PRs) small, and allocating dedicated time for reviews.
  2. Engaging junior members of the team. There is a common misconception that pull requests should be reviewed and accepted only by senior members of the team. However, having more junior members of the team review pull requests (or parts of them) can be a great way for them to get familiar with the codebase and gain additional experience. Transferring knowledge should be one of the main functions of the code review process, alongside ensuring the quality of the code before it is merged. You may also be surprised how much you can learn from the insights of junior team members!

Closing thoughts

Code review stands as a crucial pillar in maintaining high code quality and facilitating the transfer of knowledge among team members, fostering an environment of continuous learning and improvement. By applying the best practices of code reviews, teams can significantly enhance their engineering output, making it an indispensable tool in the arsenal of software development.

