On Code Reviews
Marco Ramirez (MSc.)
Autonomous Mobility | AUTOSAR Expert @ Continental (WG-DIA Member) | Continental Ambassador | LSS Black Belt | Agile (ScrumAlliance CSM, SAFe POPM)
We've heard it; we know it. Humans are bad at so many things, and reviewing is one of them. We have a limited amount of attention, we're biased, and we're sometimes not comfortable challenging others. So why do we keep handing these activities to humans?
Our profession is still young. As more and more recently graduated software engineers join the workforce, the median age of teams does not increase, and neither does their experience. It is therefore almost certain that practices like code reviews were established long ago ("it was already like that when I came here"), and unfortunately these young engineers lack the experience to feel confident challenging the status quo.
So, how do we break this cycle? It's easy: free the human from as many activities as possible, as far as our resources allow.
Let's take code reviews. Code reviews check functional and non-functional aspects of the code: whether it delivers the functionality being implemented, and whether it aligns with the principles set by the architecture and the team.
On the functional side, moving things away from the human is easy if we take for granted that the requirement and its (requirements-based) tests have already been reviewed and approved. It helps if these tests are automated and easy to execute, so we can run them right after finishing the implementation.
The idea is to focus the reviewer's attention on the code itself. It helps a lot if the reviewer already knows that the intended functionality is accomplished, so they can focus on other aspects. The recommendations here are: only implement code that comes from reviewed requirements, and bring in tests as early as possible, making them easily accessible.
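One way to make that reviewer-visible is to link each automated test to the requirement it verifies. A minimal sketch in Python (the requirement ID, the decorator, and the `clamp_speed` function are all hypothetical, for illustration only):

```python
def requirement(req_id):
    """Tag a test with the requirement it verifies, for traceability."""
    def decorate(test_fn):
        test_fn.req_id = req_id
        return test_fn
    return decorate

def clamp_speed(speed_kmh, limit_kmh):
    """The (hypothetical) implementation under review."""
    return min(speed_kmh, limit_kmh)

@requirement("REQ-042")
def test_speed_is_clamped_to_limit():
    assert clamp_speed(130, 120) == 120

@requirement("REQ-042")
def test_speed_below_limit_is_unchanged():
    assert clamp_speed(80, 120) == 80

if __name__ == "__main__":
    # Run the requirement-tagged tests right after finishing the implementation.
    for test in (test_speed_is_clamped_to_limit, test_speed_below_limit_is_unchanged):
        test()
        print(f"{test.req_id}: {test.__name__} passed")
```

With tags like these, the reviewer can see at a glance which reviewed requirement each test traces back to, and can spend their attention on the code instead.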
Next, the non-functional aspects: is the code readable, maintainable, extensible, and so on? Take the rules from Clean Code. The idea is to agree on a set of principles and rules within the team and evaluate, for each of them, whether it can be checked through static analysis.
Replace magic numbers with named constants? Can be detected by a computer.
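To make that concrete, here is a rough sketch of such a check using Python's `ast` module (real linters such as pylint ship more thorough versions of this rule; the allow-list of harmless literals is an assumption a team would tune):

```python
import ast

ALLOWED = {0, 1, -1}  # assumption: literals the team considers harmless

def find_magic_numbers(source):
    """Return (line, value) pairs for numeric literals that look 'magic'."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Constant)
                and isinstance(node.value, (int, float))
                and not isinstance(node.value, bool)  # bools are ints in Python
                and node.value not in ALLOWED):
            hits.append((node.lineno, node.value))
    return hits

code = "def price_with_tax(net):\n    return net * 1.19\n"
print(find_magic_numbers(code))  # → [(2, 1.19)]
```

A check like this runs in the pipeline, so the human never has to spend review attention on it.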
Choose descriptive and unambiguous names? This almost certainly needs a human.
Functions have to be small? Do one thing? Comments shouldn't be redundant? Bring examples and be clear: enumerate these principles in a checklist along with their explanations, so anyone can evaluate each principle and know exactly what we're looking for.
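Even among these, some items can be partially machine-checked before a human judges the rest. A sketch of flagging oversized functions with Python's `ast` module (the 20-line threshold is an arbitrary assumption a team would agree on):

```python
import ast

MAX_LINES = 20  # hypothetical team-agreed threshold for "small"

def long_functions(source):
    """Return names of functions spanning more than MAX_LINES source lines."""
    return [
        node.name
        for node in ast.walk(ast.parse(source))
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and node.end_lineno - node.lineno + 1 > MAX_LINES
    ]

sample = "def tiny():\n    return 1\n"
print(long_functions(sample))  # → []
```

"Do one thing" and "comments shouldn't be redundant" stay with the human checklist, but size limits like this can gate the pipeline automatically.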
For the rules that, due to technological limitations, still need a human, at least make them clear enough that even junior engineers can assess them.
Please let me know your comments.