Is Good Enough, Good Enough? (Part 2)

In my previous article, I discussed what “Good Enough” software is, the common pitfalls of chasing perfection, and why perfection is neither achievable nor desirable. In this article, I’ll describe several engineering practices that help us build software with solid quality and extensibility, delivered on time.

This is a two-part article, and this is the second part. You can find the previous article here: Is Good Enough, Good Enough? (Part 1)

But first, a personal story

Remember that legacy component that’s been around forever? Over time, it grew and grew, accumulating a significant amount of knowledge and becoming a cornerstone of the project. Chasing business goals and building the next big thing never allowed the team to improve it, and slowly it became an obstacle to development and evolution.

The team felt it was time for a rewrite. But what would that look like? Obviously, it had to be perfect. Over the years, we’d grown as engineers. We’d learned new frameworks and technologies, understood the limitations of the current setup, and knew how to make it amazing. We were quite confident in ourselves. We’d just take the old service and rewrite it completely. It would be perfect.

Presenting such a story to other decision-makers warrants a follow-up question: How long will it take?

Honestly, we had no idea. One year sounded sensible. Realistically, we’d have to stop development on everything else for that time and focus our efforts, but in the end, it would be perfect.

Unsurprisingly, ideas like this aren’t generally well-received by upper management and are seldom given the green light.

I know this story sounds didactic, but I’m also sure that almost all of us, at some point in our engineering careers, have felt very enthusiastic about a huge project that was supposed to end all suffering but ultimately didn’t materialize.

Let’s see what we’d miss if we went this way:

  • These projects are generally riddled with risk. A huge codebase that has evolved over many years contains a colossal amount of (often) hidden knowledge. Some of it will surely be lost, resulting in roadblocks or critical issues after deployment.
  • Closely tied to the previous point, engineers tend to underestimate the rewriting effort because they’re very confident in their understanding of the current status quo. During such high-profile projects, people tend to focus on the big picture. In reality, however, significant time is spent on implementation details.
  • Ignoring other business priorities for a prolonged period and stopping development efforts on other features is seldom feasible or preferable. Competitiveness is a real thing; engineering teams need to keep up with business needs.
  • We’d miss out on valuable early feedback from ourselves, other stakeholders, and/or customers. We might not learn early in our rewrite effort that the new architecture doesn’t perform well. We might also envision an amazing abstraction that our new setup depends on, but which turns out to be technically infeasible. The later we learn these things, the more likely we are to make decisions that will be very expensive to fix.
  • And my favorite: based on changing business priorities and overall company direction, it’s very likely that engineering efforts would need to be redirected mid-flight, resulting in a half-baked codebase with an uncertain future. This is an annoying but very natural phenomenon as priorities shift to react to market changes or business opportunities. Long projects are very vulnerable to this reality.

The above counterarguments aren’t only relevant for rewriting legacy codebases; they can easily be applied to undertaking huge projects in the hope of achieving perfection in one fell swoop.

So, if large-scale rewrites (and projects in general) are often problematic, what are some more effective strategies for improving existing systems or building new ones?

Strategies That Work

Prototypes

Prototypes are a great way to explore an idea. They capture core concepts inexpensively. It’s much easier to discuss a feature or idea if there’s something tangible. Prototypes should be easy to create (ideally, with as little code as possible) and should be considered ephemeral.

What could have been a prototype for the legacy component mentioned above? Given its size, probably many things:

  • An example of a new, modern version of an actual API
  • A barebones version of the application’s core logic that represents the overall architecture
  • An error-handling approach that covers the most common cases
  • A prototype of the desired architecture without much logic
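To make “as little code as possible” concrete: a barebones prototype of core logic might be nothing more than a handful of functions that test whether the proposed shape of a pipeline holds up. This sketch is entirely hypothetical (the record format and steps are invented for illustration), and like any prototype, it’s meant to be thrown away once it has answered its question:

```python
# Throwaway prototype: does a "normalize, then validate" pipeline
# shape work for our (hypothetical) record format?

def normalize(record: dict) -> dict:
    """Lowercase all keys so downstream steps see a uniform shape."""
    return {k.lower(): v for k, v in record.items()}

def validate(record: dict) -> bool:
    """A record is usable only if it carries an id."""
    return "id" in record

def process(records: list) -> list:
    """Normalize every record, keep only the valid ones."""
    out = []
    for r in records:
        n = normalize(r)
        if validate(n):
            out.append(n)
    return out

print(process([{"ID": 1, "Name": "a"}, {"Name": "b"}]))
```

A few minutes of code like this can settle a design debate far faster than a meeting, and it costs nothing to delete.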

We should treat the prototype as a scientific experiment.

  1. We have a theory.
  2. We build a prototype to scrutinize the theory.
  3. We see if our prototype holds up against our tests and actual use cases.
  4. We change the prototype if needed.
  5. We repeat the process until we’re satisfied or until we learn that our underlying theory is fundamentally wrong (and then, we scrap it).
  6. We use the learnings from the prototype for building the actual system.

And just to re-emphasize, prototypes should be:

  • Easy and cheap to build
  • Disposed of after they’ve served their purpose

To summarize: Prototypes are a great way to make sound decisions early in the development process.

MVPs

The Minimum Viable Product (MVP) concept can serve us quite well in our “battle against perfection.”

By defining an MVP, we focus on the following question: What’s the least we could build that would bring the most value? Or sometimes: What’s the least we could build that would still bring some value?

MVPs foster productive conversations because stakeholders need to figure out what’s most important for users. They can uncover misconceptions and disagreements. Ideally, MVPs should also be supported by data.

After defining a suitable MVP, the picture should become clearer. Ideally, the work ahead feels much less daunting and more achievable. Customers or users of the product also get something earlier than they would with the fully planned functionality. The feedback loop shortens as well: by releasing something usable (and not just usable, but highly useful), we receive feedback earlier, which could fundamentally change our plans for the future.

Prototypes and MVPs work well together: For example, a prototype might be used to validate a key assumption before building an MVP. An MVP can then be used to gather feedback and inform further iterations. That’s right: Iterations.

Iterations

I strongly believe that iterative development is paramount:

  • Higher quality: Each iteration allows for gathering and acting on feedback, making the product better with each cycle.
  • Faster time-to-market: Each iteration delivers a useful, standalone increment of the product. Customers can try new features and provide feedback earlier and more often.
  • Reduced risk: It’s easier to reason about smaller changes, so an increment is more likely to work reasonably well compared to a massive, one-time change. It’s also easier to roll back if things go south.
  • Improved flexibility: Iterative development allows for changes and adjustments throughout the process, accommodating evolving requirements and feedback. It’s also worth noting that in our current fast-paced world, we may need to shift focus frequently. Scrapping a huge, non-iterative project mid-flight is a devastating loss. Scrapping a smaller increment is less tragic because some value has already been delivered, and less effort was put into that specific increment to begin with.

The concept of MVPs almost mandates an iterative way of working. Although the MVP is just the first increment, it often acts as a gate for the entire effort: if it’s unsuccessful or poorly received, the project will likely be scrapped rather than continued through further iterations. But that’s a good thing, right?

Interfaces and Abstractions

Closely related to the concepts above, I’d also like to mention a more technical area.

Breaking down a large project can be intimidating for engineers. If we have to deal with so much uncertainty, how can we build something maintainable? If we don’t address all the corner cases or lack basic information, how can we be future-proof?

Here, we must return to software engineering essentials and focus on breaking up our applications into smaller, well-defined chunks. These small, independent modules would expose interfaces that others could use without needing intimate knowledge of the internals. It’s nothing groundbreaking; it’s object-oriented programming fundamentals. This way, we could limit the effects of changes to a narrower area instead of having rippling effects throughout our applications.
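As a minimal illustration of the idea, here is a Python sketch (all names hypothetical) where a module depends only on an interface, so the implementation behind it can be rewritten, incrementally, without rippling through its callers:

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """The interface other modules depend on; internals stay hidden."""
    def charge(self, amount_cents: int) -> bool: ...

class LegacyGateway:
    """Current implementation; can be replaced without touching callers."""
    def charge(self, amount_cents: int) -> bool:
        return amount_cents > 0

class Checkout:
    """Depends only on the PaymentGateway interface, not on any concrete class."""
    def __init__(self, gateway: PaymentGateway) -> None:
        self.gateway = gateway

    def pay(self, amount_cents: int) -> str:
        return "paid" if self.gateway.charge(amount_cents) else "declined"

print(Checkout(LegacyGateway()).pay(500))
```

When the legacy implementation is eventually rewritten, only the class behind the interface changes; `Checkout` and every other caller stay untouched. That is exactly the property that makes incremental rewrites tractable.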

Another key aspect is getting abstractions right. Just right, not perfect. Good abstractions capture essential domain knowledge, reflecting core concepts and relationships. This makes the system understandable and adaptable. As our domain understanding evolves, so should our abstractions. “Just right” means capturing current knowledge while remaining flexible for the future. This ties into principles like the Open/Closed Principle: abstractions should be open for extension but closed for modification.

Start with simple abstractions, refining them iteratively. Avoid premature complexity. Align abstractions with the domain’s ubiquitous language (from Domain-Driven Design) to ensure everyone shares the same understanding. Consider composition over inheritance when building these abstractions. Instead of creating a complex hierarchy, combine simpler interfaces. This often leads to more flexible and maintainable designs. Regularly revisit and refactor abstractions as needed. Embrace change and design for easy evolution, creating a maintainable and adaptable system.
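To show what composition over inheritance can look like in practice, here is a small Python sketch (the retry behavior and the `Flaky` service are invented for illustration): instead of baking retries into a base class hierarchy, we wrap a plain callable with the behavior we need.

```python
from typing import Callable

def with_retry(fn: Callable[[], int], attempts: int = 3) -> int:
    """Add retry behavior by wrapping a callable, not by subclassing."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except RuntimeError as e:
            last_error = e
    raise last_error

class Flaky:
    """A hypothetical service that fails twice before succeeding."""
    def __init__(self) -> None:
        self.calls = 0

    def fetch(self) -> int:
        self.calls += 1
        if self.calls < 3:
            raise RuntimeError("transient failure")
        return 42

flaky = Flaky()
print(with_retry(flaky.fetch))  # succeeds on the third attempt
```

Because `with_retry` composes with any callable, the same behavior can be applied to new services without extending a hierarchy, which keeps the abstraction open for extension but closed for modification.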

Summary

In this two-part series, I explored the pitfalls of chasing perfection in software development, advocating for a “good enough” approach.

The first article argued that perfection is a myth, often unattainable and undesirable. I defined “good enough” software as meeting core requirements, being reliable, maintainable, delivered on time, and providing a positive user experience — explicitly not being sloppy, buggy, or unmaintainable.

The second article outlined practical engineering practices for achieving this, including prototyping, building MVPs, embracing iterative development, and focusing on well-defined interfaces and “just right” abstractions.

My hope is that by embracing a “good enough” mindset, we can not only deliver better products faster, but also create a more sustainable and less stressful development environment for ourselves.
