[Software Development] Premature optimization: what it is and how to avoid it
Picture from Andrea Maietta @ Codemotion Milan 2016


Let's imagine two developers - Jack and Bob - working on a fresh project. One of the tasks is to implement a menu from which customers can easily navigate the project's features and content.

Jack thinks the best way to go about it would be to write a very simple menu that would require minimal effort, but may be hard to scale. Jack argues that it's unlikely the project will change enough over time that it will require a reimplementation of the feature, and that this will save time on the roadmap to release.

Bob thinks the best solution would be to create a fully scalable solution that would handle any situation. This would take at least double the time to implement and would be much harder to understand for new developers coming into the project later, but it would, almost without doubt, be able to handle anything that ever came its way, thus saving time in the long run. Bob argues that, someday, new and complex requirements will come along that such a system could handle easily, even if it takes longer to build.
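To make the contrast concrete, here's a minimal sketch (in Python, with purely illustrative names) of what each approach might look like for the menu. Jack's version is a flat, hard-coded list; Bob's models the menu as a tree so it can absorb nesting and future requirements:

```python
# Jack's version: a hard-coded flat menu. Quick to write, trivial to read,
# but adding nested sections later means rewriting it.
def render_menu_simple():
    items = ["Home", "Products", "Contact"]
    return "\n".join(f"- {item}" for item in items)


# Bob's version: a tree of menu nodes. Handles arbitrary nesting out of the
# box, at the cost of more code and indirection today.
class MenuNode:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

    def render(self, depth=0):
        # Indent each level by two spaces, then recurse into children.
        lines = [f"{'  ' * depth}- {self.label}"]
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines


def render_menu_scalable(root):
    # The root node is a container; render only its children.
    return "\n".join(line for child in root.children for line in child.render())


menu = MenuNode("root", [
    MenuNode("Home"),
    MenuNode("Products", [MenuNode("Widgets"), MenuNode("Gadgets")]),
    MenuNode("Contact"),
])
print(render_menu_scalable(menu))
```

Both produce the same output for today's requirements; the difference only pays off (or costs you) when the requirements change.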

So, who's right?

It depends on the situation. If Jack and Bob are working on a small project, like a presentation website or a simple weather app, is there really a need for a scalable solution? The clients most likely planned the project over a few months and see it either as a final version, or as one whose successor will require a full rebuild anyway. In this case, Jack's minimal (yet functional) implementation saves time on a project that doesn't have much of it to begin with.

But what if we're looking at a project that could plausibly still be alive and kicking, based on the current code, 5 or 10 years down the line? That's when Bob is right. If we went with Jack's minimal implementation here, we'd end up either rebuilding it over and over through the years, or eventually having to do it Bob's way anyway - for roughly the same time cost as if we had gone that way to begin with.

This is the problem of premature optimization: do we build something that can solve any problem because we're expecting the current problem to grow far beyond its original parameters? Or do we assume the problem will retain its shape for as long as the current solution is necessary? Getting a good idea about the project's expected lifespan and long-term complexity is key to making good decisions in this area.

The takeaway: if your project is being built as a one-off (hints include a team that disbands at the end, a relatively short total dev time, or another big project looming on the horizon), then as a developer or project manager you're probably better off favoring fast solutions that save time - as long as you make sure they're well implemented and, while not scalable, perform reliably.

If you're looking at code someone 5 years from now will be getting acquainted with for the first time, you're on the safe side forgoing fast releases (as much as you can - everyone wants everything done yesterday regardless) for scalability. Your future devs will be especially grateful if you set aside the time to write some documentation for it, too.

Of course, the real world of software development is rarely so simple that you can fully rely on these ideas - for example, you may be facing a project that will almost surely be alive 5 years from now but absolutely needs to release ASAP, so you might have to pick a fast solution that you know won't last - but when you have the luxury of choice, keep in mind these guidelines when deciding between release speed and scalability.

No use building a supersonic jet or space shuttle to visit your next-door neighbour. But what if they move to the Moon?

