Use Guardrails for AI-Assisted Coding


AI-assisted coding is still in its early stages. This article by our Founder and CTO Adam Tornhill explores the immediate and future impacts of integrating AI assistants into the software development process.


Large language models and generative AI have enabled machines to write code. The resulting movement, AI-assisted coding, promises to improve developer productivity, shorten onboarding time, and may even elevate junior programmers to a skill level that traditionally took years to master.

As promising as it all sounds, AI-assisted coding is still in its infancy. This implies that we have to adopt it with caution. This article discusses both the short- and long-term implications of putting AI assistants into software developers' hands.


Can AI make us ship features faster?

AI-assisted coding has come far enough to be considered disruptive. Shorter development cycles mean quicker product iterations, a critical factor in today's competitive market. At some point, AI will fundamentally alter how software is developed, though we are not quite there yet.

Even if the marketed AI productivity gains of becoming "55% faster" materialize, it won't mean we can ship new features 55% faster. This becomes evident when looking at the software life cycle: maintenance accounts for over 90% of a typical product's life cycle costs, while the initial programming accounts for roughly 10%. So even if AI-assisted coding tools speed up the programming itself, they only accelerate a small slice of the total effort. An improvement for sure, but neither groundbreaking nor a free lunch. Those software life cycle numbers imply that the primary task of a developer isn't to write code, but rather to understand existing code. That critical objective is at potential odds with AI-assisted coding. Let's explore why.
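
To make that arithmetic concrete, here's a minimal back-of-the-envelope sketch in TypeScript. The 10% and 55% figures are the rough numbers quoted above, treated here as assumptions rather than precise measurements:

```typescript
// Amdahl-style estimate: how much does a faster coding phase speed up
// the whole software life cycle?
// Assumptions: coding is ~10% of life-cycle effort, and the assistant
// makes that part 55% faster (i.e. 1.55x throughput on coding alone).

function overallSpeedup(codingShare: number, codingSpeedup: number): number {
  // Total time with AI = unaffected share + coding share divided by its speedup
  const newTotal = (1 - codingShare) + codingShare / (1 + codingSpeedup);
  return 1 / newTotal;
}

const gain = overallSpeedup(0.1, 0.55);
console.log(`Overall speedup: ${((gain - 1) * 100).toFixed(1)}%`);
// Prints roughly "Overall speedup: 3.7%" -- a far cry from 55% end to end.
```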


Caution: Code Quality In The AI Age

AI-assisted coding today is imprecise and error-prone. A 2023 study found that popular AI assistants only generate correct code between 31.1% and 65.2% of the time. Similarly, in our Refactoring vs Refuctoring study, we found that AI breaks our code in two out of three refactoring attempts(!). A human developer shipping code of such low quality would have a hard time keeping their job, so why do we accept such a low performance rate when it comes to AI?

An easy answer is that the AI serves as a shortcut: a developer might know what code they want, and the AI might help them get there faster. A skilled developer can then inspect, tweak and correct the resulting code, hopefully in less time than if they’d started from scratch.

However, when working with code, the actual typing isn’t the hard part. Instead, the bottleneck is the effort required to understand existing code. As AI accelerates the pace of coding, human readers will have a hard time keeping up. The more code we have, the more difficult it becomes to understand a software system. Acceleration isn’t useful if it’s driving our projects straight into a brick wall of technical debt. These promising AI assistants will serve more as legacy code generators than genuine help unless we introduce proper guardrails. Let's cover them in more detail.


Apply Guardrails when adopting AI-assisted coding tools

The guardrails come in three forms: code quality, code familiarity, and strong test coverage to ensure correctness:

  1. Guardrail: code quality. High-quality code has always been a competitive business advantage, allowing shorter development cycles and fewer production defects. Such code is easier to comprehend, making it safer and more cost-effective to modify as needed. Maintaining the same bar for AI-generated code mitigates several risks.
  2. Guardrail: code familiarity. The second guardrail involves processes and practices for ensuring code familiarity. Research shows that developers might need 93% more time when solving a large task in code they haven't looked at before; that's the cost of onboarding. When embracing AI, we developers are constantly presented with new and unfamiliar code. Therefore, we must ensure that every developer builds a strong familiarity with the generated code, too.
  3. Guardrail: test coverage. As discussed above, an AI assistant frequently generates incorrect code. We often claim that large language models aren't truly creative, but after performing our AI research I beg to differ: there's no limit to the creative ways in which an AI breaks our code. Some of them are subtle, like negating a logical expression; others are downright nasty, like removing the 'this' keyword in JavaScript code and thereby fundamentally altering the meaning of the function (a short illustration follows this list). Strong automated tests offer much-needed protection here. And no -- the tests shouldn't be AI-generated from the code. Doing that misses the point of tests as double bookkeeping (who tests the tests?).
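
To see just how subtle these breakages can be, here's a small, hypothetical TypeScript illustration (not code from the study) of the two failure modes mentioned above, together with the kind of human-written checks that catch them:

```typescript
// Hypothetical illustration of the failure modes above -- not code from the study.
let rate = 0; // an unrelated module-level variable that happens to be in scope

class Discount {
  constructor(private rate: number) {}

  apply(price: number): number {
    return price * (1 - this.rate); // correct: uses the instance's rate
  }

  applyBroken(price: number): number {
    return price * (1 - rate); // `this.` dropped: silently resolves to the outer rate
  }
}

// A subtly negated expression is the other classic:
const isAdult = (age: number) => age >= 18;
const isAdultBroken = (age: number) => !(age >= 18);

// Human-written checks (the double bookkeeping) catch both regressions.
// The second and fourth assertions fail on purpose -- that's the test doing its job.
const d = new Discount(0.2);
console.assert(d.apply(100) === 80, "expected a 20% discount");
console.assert(d.applyBroken(100) === 80, "caught: dropping 'this' changed the result");
console.assert(isAdult(30), "a 30-year-old should pass the check");
console.assert(isAdultBroken(30), "caught: the negated condition flips the answer");
```

Both broken variants still compile and run; only a test with an independent expectation of the right answer reveals the change in behavior.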


The three fundamental guardrails when adopting AI-assisted coding tools: Code Quality, Code Familiarity, and Code/Test Coverage.


This might sound like a lot of effort, but the good news is that the core of these guardrails can be automated. The next figure shows the existing software metrics we use to implement these guardrails:


Automate GenAI guardrails via the CodeScene tool. Integrate them into the build pipeline via automated code review, and observe the trends as part of your daily work.
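
To give a feel for what that automation can look like in a delivery pipeline, here's a minimal, hypothetical quality-gate script in TypeScript. The report file, field names, and thresholds are all illustrative assumptions, not CodeScene's actual API; in practice you would wire the gate to whatever metrics your analysis and coverage tooling reports:

```typescript
// Hypothetical pipeline quality gate. Input format, metric names, and
// thresholds are assumptions for illustration -- adapt to your own tooling.
import { readFileSync } from "node:fs";

interface QualityReport {
  codeHealth: number;   // assumed 1..10 score for the files changed in this commit
  testCoverage: number; // assumed 0..100 percent of changed lines covered by tests
}

const report: QualityReport = JSON.parse(
  readFileSync("quality-report.json", "utf8")
);

const failures: string[] = [];
if (report.codeHealth < 9) {
  failures.push(`code health ${report.codeHealth} is below the bar of 9`);
}
if (report.testCoverage < 80) {
  failures.push(`test coverage ${report.testCoverage}% is below 80%`);
}

if (failures.length > 0) {
  console.error("Quality gate failed:", failures.join("; "));
  process.exit(1); // fail the build so the code gets fixed before it's merged
}
console.log("Quality gate passed");
```

The important design choice is that the gate runs on every commit and applies the same bar to human-written and AI-generated code alike.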



Tips For Succeeding With AI-Assisted Coding

The coming decades will see a hybrid model where code is written by both humans and machines. In that context, it's easy to mistake code-writing speed for productivity. To reap the benefits of AI-assisted coding while minimizing risks, we need to:

1. Set realistic expectations: AI-assisted programming helps with specific tasks, but its lack of consistent correctness means that it cannot replace human programmers. Acknowledge those limitations to focus on augmentation rather than substitution. Ensure all AI-generated code is covered via tests.

2. Make code quality a KPI: Enforce a minimum bar for both humans and machines. Implement automated quality checks in your delivery pipelines. Use a reliable and proven metric to maximize the signal and minimize the false-positive noise.

3. Conduct continuous code inspections: AI-generated code needs to be understood. Never accept code that the team doesn't grasp or hasn't reviewed. Visualize code familiarity across the whole team so that you can catch emerging knowledge islands in time.

4. Acknowledge the rising importance of understanding code over merely writing it: Comprehending AI-generated code will be a crucial skill, so ensure your processes, practices and training embrace this shift.

As we can see, there’s a common thread here. Succeeding with AI-assisted coding requires that we keep skilled humans in the loop while introducing dedicated tooling and processes to ensure a healthy codebase. Navigating this new frontier means we need to refocus on code quality and continuous learning. It's fundamental.

The guardrails introduced in this article act as important feedback loops to ensure your codebase remains maintainable even after a GenAI rollout. With the current hype, it's easy to mistake code-writing speed for productivity. Just remember that writing code is not the bottleneck in programming; understanding code is. Optimize with that in mind, and make it your organization's advantage.


Ready to test the guardrails for free? Sign up for the free trial.


Behzad Imran

Power BI | Tableau | Python | Data Science | AI | Machine Learner | Marketing

5 months ago

AI-assisted coding boosts speed and personalization, but high code quality, code familiarity, and strong test coverage are essential to mitigate risks. Human oversight is still crucial for reliable development.

Rikard Larsson

Co-founder & Partner at Decision Dynamics AB

5 months ago

As #AI is boosting the #digital transformation even further, it becomes even more important for organizations to assess #coding #quality in general, and the quality of AI-generated code in particular. Adam Tornhill and CodeScene are international leaders in the area of assessing and improving coding quality.
