Effective Code Review: An Enigma
Akhilesh M.
Senior Software Engineer @ Bloomberg LP | FX trading systems, distributed systems, C++ latency analysis. All views are my own.
Recently I was reading about the concept of the 10th man, a.k.a. the devil's advocate, and I marvelled at the sheer necessity of dissent, even when it seems illogical. For me personally, it is important to have a critical analysis of code before it goes to production. Herein lies the importance of the code reviewer: our own devil's advocate. As psychologists would have us know, we greatly over-value our own creativity. At worst, we are downright defensive; at best, we are oblivious to the problems in our creation. What we see as a harmonious inclusion of our code in the larger program could very well mask errors in another flow.
A code review is essential to writing effective code. We bring in another person to take a second look at our change; depending on their experience, they suggest modifications. Now two people are responsible for this piece of code. Sounds neat. If only the world of reviewing and coding were so straightforward.
But I have seen so many code reviews go wrong that I have started questioning the code review itself. Not that the process is wrong, but how we go about it is flawed. I have seen people dig deep into naming conventions while paying no due diligence to the correctness of the code. Some organisations have guidelines, and people adhere to them so rigidly that the whole program looks like it was created by one person. So I am writing this post to share my perspective on what an effective code review should look like.
An effective code review has four aspects:
- Functional aspect. Does the change do what it was created to do, while keeping other functionality working as before?
- Performance. Can it be improved? We thought a std::string would be fine, but perhaps a const char* would make the code faster. Or are we over-optimising?
- Design. Does the code follow SOLID principles? Here we enter a subjective arena, and we should expect a lot of discussion. Or are we over-engineering?
- Style. Does the change conform to your company's guidelines for variable naming, class naming, and so on?
It is easy to list in point form what an effective code review should cover, and quite another challenge to play it out for real. At least we now have some clarity on what to look for in a code review. But does that help? Not much. A code review is a process between individuals, and as with any interaction, there is a psychological dimension to it.
Looking from the creator's perspective, it is their code, their creation. A measure of pride goes into such creativity, and as a result we can become very resistant to fault being found in it. Most of us do not see this as a learning opportunity. Or maybe the reviewer is a new entrant to the company, or less experienced than us, or both. I think creativity and the concept of "I" are greatly intertwined. Most creativity happens when people go against the norm, which is why disruptors are so highly valued. But because we take pride in our creativity, and coding is a small form of creativity, it is difficult to welcome a fault-finding mission against it.
Looking from the reviewer's perspective, it is a highly tedious job that can jeopardise the social status quo. I have seen reviewers meekly submit to coders, either because the coders were more experienced or because they had been at the company much longer than the reviewers themselves. In that case, the reviewers take no responsibility if the program goes south. On the other hand, with experience, a reviewer can be shrouded in so much paranoia that they demand complete conformity to what they perceive good code should be. Here too, I believe the truth lies somewhere in between.
So what should we do? Given this set of problems, does a code review look like just another useless tool in practice? No. Given our capacity for learning, we can safely assume that people can be taught how to give, and take, a review. Maybe we can put this into a one-day workshop that even includes mock reviews.
- Before the review, discuss its scope: what the review will and will not encompass. This can also be codified in guidelines, but it is good that all the players are told the rules of the game upfront. That conversation can be started by anyone, reviewer or coder.
- A major portion of effective feedback is language. Language should be inclusive; it must not feel like we are pinning blame on someone. A small example is using "we" instead of "you".
- Where there are differences of opinion, let us first consult Google; most issues can be resolved at that level. If not, we can perform a small objective analysis of our own, and hail https://godbolt.org for this. If we still have not reached consensus, it is better to involve a more experienced programmer, perhaps through an unofficial discussion.
- We should abstract the recurring points and store them in a dedicated wiki, written in enough detail that a newcomer who opens it finds the information they need. This saves us from repeating the same things again and again. In fact, after coding, coders can be pointed to this section of the wiki to make sure they are adhering to the guidelines.
As emotional beings, we bring subjective anecdotes into what should be an objective analysis, and code review is no different. But there is a way forward. I have been implementing these techniques, modifying them as the need arises, and I have seen good progress in how my team-mates view the review process. I would welcome any comments on the same.