Software Design: Necessity or Overhead?

In software engineering we are constantly faced with situations where we need to decide how best to implement a certain piece of functionality. Do we 'jump in' and code the first solution that comes to mind? Or do we take a step back, study the various possibilities, weigh the pros and cons of each, document the plan, think about how we're going to test it and how it integrates with the existing framework... and then make an educated pick? While the choice may seem obvious to many, it's not always as easy as it first appears.

If you're going to sort a list of records, there's no merit in evaluating various sorting algorithms, running benchmarks, and so on. Your standard library should offer a working sort function that will suffice in most cases. This may seem like a trivial example, but it underscores an important point: software reuse. Learning to build on what's already there is a critical lesson that is typically learnt rather late.
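To make this concrete, here is a minimal sketch (the record fields are hypothetical) of leaning on the standard library instead of hand-rolling a sort:

```python
# Sorting order records by total using the standard library.
# Python's sorted() uses Timsort: stable, O(n log n), and
# battle-tested -- no benchmarking needed for everyday use.
from operator import itemgetter

orders = [
    {"id": 3, "total": 12.50},
    {"id": 1, "total": 99.99},
    {"id": 2, "total": 4.25},
]

by_total = sorted(orders, key=itemgetter("total"))
print([o["id"] for o in by_total])  # -> [2, 3, 1]
```

One line of library code replaces an entire algorithm-selection exercise, and because the sort is stable you can chain it for multi-key ordering.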

On the other hand, let's say you are writing an online shopping application and want to build it to support many merchants. Do you design for multi-tenancy from the get-go? It's complex to get the logical separation right, but it can pay off in the long run by keeping your upgrades and deployments simple. It can also keep your costs low if you're able to share resources. But how do you know if you'll ever reach a scale large enough to warrant such concerns? Isn't it easier to just create isolated environments that don't interfere with each other?
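As a rough sketch of what "logical separation" means in practice (all names and the schema here are hypothetical), the common pattern is that every row carries a tenant identifier and every query is scoped to it:

```python
# Minimal multi-tenancy sketch: shared storage, logically
# separated by a tenant_id column on every table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (tenant_id TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [("acme", "widget"), ("acme", "gadget"), ("globex", "sprocket")],
)

def products_for(tenant_id):
    """Scope every read to one tenant -- the heart of logical separation."""
    rows = conn.execute(
        "SELECT name FROM products WHERE tenant_id = ?", (tenant_id,)
    )
    return [name for (name,) in rows]

print(products_for("acme"))    # -> ['widget', 'gadget']
print(products_for("globex"))  # -> ['sprocket']
```

The alternative (isolated environments) trades this query discipline for per-tenant infrastructure, which is simpler per tenant but multiplies deployment and upgrade work.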

These are questions that every software developer faces from time to time. One line of thought is to design everything to be extensible, flexible, scalable, and secure from the start. This may seem like an obvious choice, but I think it needs to be weighed carefully against business requirements. Do you think up all the pieces of the puzzle and smooth out all the kinks before you open your IDE? Or can you build in increments and get to market faster?

Software design principles offer some guidance, but these are constantly evolving. The open/closed principle, for example, is no longer held to be sacrosanct. I personally believe that as long as your external interfaces stay compatible, you should be able to change the internals as needed. This keeps us away from 'analysis paralysis' and 'speculative generality'. The latter is widely recognized as a code smell: for instance, adding fields to a model/class on the expectation that you may need them in future and won't have to change the class definition.
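A small, hypothetical illustration of that smell, contrasting a class padded with "just in case" fields against one that models only what the product needs today:

```python
# Speculative generality: fields added on a hunch, unused anywhere,
# versus the class the code actually needs right now.
from dataclasses import dataclass, field

@dataclass
class UserSpeculative:  # the smell
    name: str
    email: str
    # None of these are used anywhere yet -- pure speculation:
    loyalty_tier: str = "none"
    referral_code: str = ""
    preferences: dict = field(default_factory=dict)

@dataclass
class User:  # what the requirements call for today
    name: str
    email: str

u = User("Ada", "ada@example.com")
print(u)  # -> User(name='Ada', email='ada@example.com')
```

When `loyalty_tier` is genuinely needed, adding it then is a small, well-understood change; carrying it around unused just adds noise to every constructor call, test, and serializer in the meantime.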

The agile methodology of software design (not to be confused with Scrum) advocates building software in increments, and it is a very popular approach to writing code. But you still have to give enough thought upfront to what those increments are going to be and how they will work together. They need to be aligned towards the same goal.

This is where a detailed design can play a crucial role, especially if you have multiple people or teams working on the project. It acts as a platform for discussion and cost/benefit analysis, and it can define clear hand-off points between teams and modules/layers, such as APIs, middleware, and storage. It also doubles as documentation for future reference.

Balancing the decision to take a 'shortcut' and just do it versus spending time thinking through, prototyping, and designing before starting to code is the key that separates successful software engineering projects from the rest. Poorly designed software will require frequent rework, costing the team dearly in testing and patching. Over-engineering and ivory-tower pontification, on the other hand, will mean poor execution and getting beaten by the competition.
