The "Zero Principle" of software design
Before starting this article, I would like to recall the goal we set ourselves at the beginning of this series: to create an objective and systematic method, made up of rules and techniques, to be applied in the software design phase to obtain a “clean architecture”. We started by defining what a clean architecture is, and then we analyzed the five founding principles of clean architecture (the SOLID principles). We illustrated some real examples for each of the five SOLID principles, and in the case of the first, the "Single Responsibility Principle", and the fifth, the "Dependency Inversion Principle", we also found some rules associated with these principles, together with some transformation techniques to bring a design back into compliance with those rules.
In particular, in the case of the "Single Responsibility Principle" we found two measurable qualities, "cohesion" and "coupling", we defined how to calculate and represent them graphically, and from that we identified the "Good Design Zone" within the cohesion-coupling graph.
For the "Principle of inversion of dependencies", we found two rules: the “stratification rule” and the “dependence between concrete classes rule” (to be precise there could be a third rule relating to the dependence of creation that we will see in one of the following articles) and we also identified a number of transformation rules to apply to the design to make it meet these rules.
In the next article we will also look for principles for components. Remember that so far we have only talked about classes (by component we mean a set of classes/interfaces that work together to provide functionality to the outside and that are normally packaged inside a versioned binary unit). In the following articles we will then analyze the main patterns, and finally we will define a systematic way to create a clean architecture.
But now, talking to some of you, I realize that we have forgotten to state the fundamental principle, the most important one, which comes before the SOLID principles: a principle that we can call the "Zero Principle".
Once we have decided what we mean by software architecture, the “Zero Principle” will define the importance of structure in software design, and therefore will demonstrate the need to identify the right architecture.
After all, science teaches us that a zero principle is at the basis of almost every discipline. For Classical Chemistry the zero principle is Dalton's law (1.), for Thermodynamics it is the principle of thermal equilibrium (2.), for Classical Mechanics it is the principle of inertia (3.), for Special Relativity it is the constancy of the speed of light (4.), for General Relativity it is the geodesic equation on a generic curved space (5.), for Quantum Mechanics it is the Heisenberg uncertainty principle (6.), and for Analytical Mechanics it is the Principle of Least Action (7.). To be fair, the Principle of Least Action is more than a “Zero Principle”: it is the most compact form of the classical physical laws. The Principle of Least Action (it can be written in one line) sums it all up: not only the principles of classical mechanics, but electromagnetism, general relativity, quantum mechanics, field theory, everything that is known about chemistry, down to the last known constituents of matter, the elementary particles.
But let's go back to our goal, which is to try to give a definition of software architecture and then to define the “Zero Principle” of software design, which tells us what it means to have software with structure and what it means to have software without structure.
Let me start with a personal observation: it doesn't take a large amount of knowledge and skill to make a program work.
Many programmers grind through the large documents contained in huge problem-tracking systems and make their systems "work" by sheer force of will. The code they produce may not be pretty, but it works. It works because getting something to work once isn't that hard.
Getting it right is another matter. Getting the software right is difficult. It takes knowledge and skills that take years to acquire. But when we get the software right, something magical happens: we no longer need many programmers to keep it working.
When the software is done right, it requires few programmers to maintain. Changes are quick and easy. Defects are few. Effort is minimized and functionality and flexibility are maximized.
From this observation we can try to derive a good definition of architecture.
In my opinion the most interesting and appropriate definition of "software architecture" is the one given by Grady Booch (best known for developing the Unified Modeling Language together with Ivar Jacobson and James Rumbaugh) (8.):
“Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change”.
Time, money, and effort give us a measure of the goodness of an architecture. Not only does a good architecture meet the needs of its users, developers, and owners at a given point in time, but it also continues to meet them over time.
Also, when we talk about software architecture we must always keep the physical constraints in mind. Architecture can come from dreams, but it must be adapted to reality and context. Processor speed and network bandwidth, memory and storage can limit the ambitions of many software structures. Software can be made of dreams, but it must work in the physical world.
So, what is the goal of a software architecture, of a software structure? The goal of software architecture is to minimize the human resources required to build and maintain the required system. The measure of software design quality is simply the measure of the effort required to meet the needs of the customer. If that effort is low, and stays low throughout the lifetime of the system, the design is good. If that effort grows with each new release, the design is bad. It’s as simple as that.
Well, now that we have given an almost objective definition of software architecture, let's try to arrive at the definition of a “Zero Principle” for software design. For this purpose we must make some further considerations. Every software system provides two different values to its stakeholders: ‘behavior’ and ‘structure’. Software developers are responsible for ensuring that both of those values remain high. Unfortunately, they often focus on one to the exclusion of the other. Even more unfortunately, they often focus on the lesser of the two values, the behavior, leaving the software system valueless.
Many programmers believe that ensuring the behavior of the software system is their entire job. They believe their job is to make the software system implement the requirements and to fix any bugs. (Unfortunately, they are wrong.)
Now the real question is: Behavior or Architecture? Which of these two provides the greater value? Is it more important for the software system to work, or is it more important for the software system to be easy to change?
If you ask managers, they’ll often say that it’s more important for the software system to work. Developers often go along with this attitude. But it’s the wrong attitude.
- If we develop a program that works perfectly but is impossible to change, then it won’t work when the requirements change, and we won’t be able to make it work. Therefore the program will become useless.
- If we develop a program that does not work but is easy to change, then we can make it work, and keep it working as the requirements change. Therefore the program will remain useful.
Here is the ‘Zero Principle of Software Design’:
“The right structure (or architecture) of a software system is more important than its correct behavior”.
Always remember this: “If architecture comes last, then the system will become ever more costly to develop, and eventually change will become practically impossible for part or all of the system”.
Before seeing a real example of what can happen to a company that does not respect the "Zero Principle", I would like to make three considerations about software architecture:
Well, now let’s see a real case described by Robert Martin (9.), relating to a company that he preferred to keep anonymous. First, let's take a look at the growth of the technical staff. As we can see, this trend is very encouraging: growth like the one shown below (Figure 1) must be an indication of significant success.
Figure 1: Growth of the engineering staff
Now let’s look at the company’s productivity over the same time period, as measured by simple lines of code (Figure 2).
Figure 2: Productivity over the same period of time
Clearly something is going wrong here. Even though every release is supported by an ever-increasing number of developers, the growth of the code looks like it is approaching an asymptote. Now here’s the really scary graph: Figure 3 shows how the cost per line of code has changed over time. These trends aren’t sustainable. It doesn’t matter how profitable the company might be at the moment: those curves will drain the profit from the business model and drive the company into collapse. What caused this remarkable change in productivity?
Figure 3: Cost per line of code over time
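A simple back-of-the-envelope way to read Figure 3 (this is my own illustration, not data from Martin's case study): over a release, the cost per line of code is roughly the total cost of the staff divided by the lines of code actually produced, i.e. cost per line ≈ (number of developers × cost per developer per release) / lines of code added in that release. If the staff keeps growing (Figure 1) while the lines added per release stay nearly flat (Figure 2), the numerator grows while the denominator does not, and the cost per line inevitably explodes.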
When systems are thrown together in a hurry, when the number of programmers is the only driver of output, and when little thought is given to the cleanliness of the code or the structure of the design, then we end up with the situation shown in Figure 4.
Figure 4: Productivity by release
Figure 4 shows developer productivity by release. It starts with high developer productivity, but with each release productivity decreases. From the developers’ point of view this is tremendously frustrating, because everyone is working hard; nobody has decreased their effort. And yet, despite all their dedication, they become less and less effective. All their effort has been diverted away from features and is now consumed with managing the mess. Their job has changed into moving the mess from one place to the next, and the next, so that they can add one more little feature.

These developers accept a familiar lie: "We can clean it up later; we just have to get to market first!". Of course, things never get cleaned up afterwards, because market pressures never decrease. And so the developers never switch modes: they can’t go back and clean things up because they’ve got to get the next feature done, and the next, and the next. And so the mess builds, and productivity continues to decrease.

The biggest lie developers believe is the idea that writing messy code makes them fast in the short term and only slows them down in the long run. Developers who accept this lie show overconfidence in their ability to switch modes, from making messes to cleaning up messes, sometime in the future, but they also make a more fundamental mistake: the fact is that making messes is always slower than staying clean.
If we consider the “Zero Principle” as a postulate, we can derive some corollaries from it, as in a mathematical theory.
These corollaries are nothing more than antipatterns.
Antipatterns provide further depth for understanding the principles. Antipatterns describe what not to do, or solutions applied in the wrong context. I believe that antipatterns are useful because they are written in a manner that emphasizes the problem, allowing them to be easily recognized when such problems occur.
Let's start with the first one, which is called “Big Ball of Mud”.
A “Big Ball of Mud” is a software design antipattern in which a software system lacks any perceptible structure. To an outside observer the system has no distinguishable architecture, and as such it is a huge pain to maintain. There is also another form of this antipattern, that of "an architecture exists but it sucks and needs to be changed", which will probably never go away. For both forms the solution is the same: implement a flexible, appropriate architecture for the given system. This means doing research: see whether anyone has already done a similar project (the answer is almost always "yes"), what they did to solve that problem, and whether such a solution can work for our system as well. The problem arises when we don't, when we go straight to coding without understanding an appropriate architecture or framework to work in. In this scenario we end up with a big ball of mud and have to rewrite many things (deep refactoring). I've done a lot of deep refactoring and can confidently say that sometimes it's the only option left. The trick is, when planning your rewrite (which is why you should treat deep refactorings as new projects), to make sure you do the research needed to determine an appropriate architecture. "Hours of planning will save us weeks of coding".
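To make the idea concrete, here is a deliberately simplified Java sketch of my own (the class and method names are invented for illustration): a single class that mixes persistence, business rules, and presentation, so that every change, whatever its nature, ends up touching the same code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical miniature of a "Big Ball of Mud": one class with no perceptible
// structure, mixing persistence, business rules and presentation.
// Any kind of change (storage, pricing rules, output format) lands here.
public class OrderModule {

    private final List<String> database = new ArrayList<>(); // the "persistence layer"

    public String placeOrder(String customer, double amount) {
        // Business rule, hard-coded next to everything else
        double total = amount > 100 ? amount * 0.9 : amount;

        // Persistence concern, embedded in the same method
        database.add(customer + ";" + total);

        // Presentation concern, also embedded here
        return "<html><body>Order for " + customer + ": " + total + "</body></html>";
    }

    public static void main(String[] args) {
        System.out.println(new OrderModule().placeOrder("ACME", 120.0));
    }
}
```

A cleaner structure would separate these responsibilities into distinct classes behind interfaces (a repository, a pricing policy, a presenter), which is exactly the direction in which the SOLID principles discussed in the previous articles push us.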
A second antipattern, which we can consider the opposite of the “Big Ball of Mud”, is the one called “Gold Plating”.
Gold Plating is an antipattern that occurs when a developer continues to work on a particular task long past the point where any further work on it is useful. Alternatively, it can also mean developing features that were not originally requested by the customer.
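As a small, deliberately exaggerated Java illustration of my own (the requirement and all the names are invented): the customer asked for a flat 10% discount, but the task keeps being "improved" into a configurable rule engine that nobody requested and that now has to be documented, tested and maintained.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleUnaryOperator;

// Hypothetical sketch of Gold Plating: the requirement was a flat 10% discount.
public class Discounts {

    // What was asked for: one line of business logic.
    static double discounted(double price) {
        return price * 0.90;
    }

    // What gets built instead: a configurable "rule pipeline" nobody requested.
    static class DiscountEngine {
        private final List<DoubleUnaryOperator> rules = new ArrayList<>();

        DiscountEngine addRule(DoubleUnaryOperator rule) {
            rules.add(rule);
            return this;
        }

        double apply(double price) {
            for (DoubleUnaryOperator rule : rules) {
                price = rule.applyAsDouble(price);
            }
            return price;
        }
    }

    public static void main(String[] args) {
        System.out.println(discounted(100.0));                                        // 90.0
        System.out.println(new DiscountEngine().addRule(p -> p * 0.90).apply(100.0)); // 90.0
    }
}
```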
Another antipattern connected to the “Zero Principle” is the one called "Golden Hammer".
The Golden Hammer antipattern occurs when you use a familiar solution to attack an unfamiliar problem. Sometimes this might actually work out, but most of the time it is an inefficient way to solve problems. We developers must be aware of the strengths and limitations of all our tools, and know how to find the correct one when the situation arises. Generally, people fall into this antipattern when they try to use a particular tool, architecture, suite, or methodology to solve many kinds of problems (especially problems that could be better solved by other means). They often do this because they are very familiar with that tool set or framework and believe it can be used to solve every problem.
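Here is a tiny, hypothetical Java sketch of the Golden Hammer (the scenario and names are invented): a team that knows inheritance well reaches for it in every situation, even where plain composition would fit better.

```java
// Hypothetical illustration of the Golden Hammer: inheritance used as the
// familiar tool for every problem, even when composition is the better fit.

// The "hammer": reuse by extending, so the job inherits a full email API it
// mostly does not need and is now coupled to its parent's implementation.
class EmailClient {
    public void send(String to, String body) {
        System.out.println("Sending to " + to + ": " + body);
    }
}

class ReportJobWithHammer extends EmailClient {
    public void run() {
        send("boss@example.com", "report ready"); // works, but the design is forced
    }
}

// The alternative: the job *uses* an email client instead of *being* one.
class ReportJob {
    private final EmailClient mailer;

    ReportJob(EmailClient mailer) {
        this.mailer = mailer;
    }

    public void run() {
        mailer.send("boss@example.com", "report ready");
    }
}

public class GoldenHammerDemo {
    public static void main(String[] args) {
        new ReportJobWithHammer().run();
        new ReportJob(new EmailClient()).run();
    }
}
```

Both versions "work", but the first forces the design into the familiar shape of the tool, while the second chooses the relationship that actually fits the problem.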
The last antipattern connected to the "Zero Principle" that we consider is the one called "Spaghetti Code".
Spaghetti Code is a programming antipattern in which the code becomes almost impossible to maintain or modify because of ongoing changes, tangled dependencies between modules, or general untidiness. Usually this does not happen all at once; rather, it happens slowly over a long period, and only after some time do we notice the mess. In this case, too, the solution is deep refactoring.
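As a tiny, made-up Java illustration of how spaghetti starts: the first version drives the control flow with nested conditions that hide the rules, while the second expresses the same rules as small named steps. At system scale, deep refactoring is essentially this operation repeated many times.

```java
// Hypothetical before/after sketch of Spaghetti Code on a very small scale.
public class ShippingCost {

    // "Spaghetti" version: nested conditions obscure the pricing rules.
    static double costSpaghetti(double weight, boolean express, boolean abroad) {
        double c = 0;
        if (weight > 0) {
            if (abroad) {
                c = 20;
                if (express) {
                    c = c + 15;
                }
            } else {
                if (express) {
                    c = 10 + 15;
                } else {
                    c = 10;
                }
            }
        }
        return c;
    }

    // Structured version: the same rules, expressed as named steps.
    static double cost(double weight, boolean express, boolean abroad) {
        if (weight <= 0) return 0;
        double base = abroad ? 20 : 10;
        double expressSurcharge = express ? 15 : 0;
        return base + expressSurcharge;
    }

    public static void main(String[] args) {
        System.out.println(costSpaghetti(2.0, true, false)); // 25.0
        System.out.println(cost(2.0, true, false));          // 25.0
    }
}
```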
In conclusion we can say that the best option is for the development organization to recognize and avoid its own overconfidence and to start taking the quality of its software architecture seriously. To take software architecture seriously, you need to know what good software architecture is. To build a system with a design and an architecture that minimize effort and maximize productivity, you need to know which attributes of system architecture lead to that end.
In this series of articles we have talked and will continue to talk about software design principles, transformation rules and techniques and optimal solutions to recurring problems (patterns) in order to describe what good clean architectures and designs look like, so that software developers can build systems that will have long profitable lifetimes.
Well, we can now conclude this topic. In the next article I would like to start shifting the attention from object-oriented design to component design; in practice, we will move our analysis from the class diagram to the component diagram.
Also in this case we will find a set of principles that can guide us towards a correct design at the component level as well.
I remind you of a previous article, which I recently revised because it is closely related to the “Zero Principle” of Software Design:
Thanks for reading my article; I hope you have found the topic useful.
Feel free to leave any feedback; it is appreciated.
Stefano
1. Silvestroni, “Fondamenti di chimica” – Casa Editrice Ambrosiana, p. 133
2. Nobili, “Elementi di Termodinamica Classica” – Corso Editore Ferrara, p. 31
3. Feynman, “The Feynman Lectures on Physics”, Vol. I – Addison Wesley, p. 64
4. Susskind, “Special Relativity and Classical Field Theory” – Penguin Books, p. 9
5. Weinberg, “Gravitation and Cosmology” – John Wiley & Sons, Inc., p. 77
6. Susskind, “Quantum Mechanics: The Theoretical Minimum” – Penguin Books, p. 99
7. Susskind, “Classical Mechanics: The Theoretical Minimum” – Penguin Books, p. 105
8. Grady Booch, “Object-Oriented Analysis and Design with Applications” – Addison Wesley
9. Robert Martin, “Clean Architecture” – Prentice Hall
10. Jacobson, “Object Oriented Software Engineering” – Addison Wesley