How Software Is Made: The Ideal Gas Law – Part One

Somewhere back in my earlier schooling I was taught something called “The Ideal Gas Law”. Now, this is not a law you can break and get a ticket for, thankfully. This law describes the naturally observed relationship among several measurable attributes of a substance while in the gaseous state. More specifically, the ideal gas law relates pressure, volume, temperature and amount of gas, along with a universal constant.

There is a simple equation for the ideal gas law: PV = nRT, where P is pressure, V is volume, n is the amount of gas measured in moles (the molecular quantity), R is the universal gas constant and T is the temperature of the gas. If you recall from mathematics, the equality symbol is a clue about the relationships with regard to change. For the equation to balance, or to remain true, if something changes on one side, a corresponding change must occur on the other side.
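
To make that balancing act concrete, here is a minimal numeric sketch in Python; the specific numbers are mine and purely illustrative. Holding the amount of gas and the temperature fixed, halving the volume forces the pressure to double so that the two sides remain equal.

```python
# A minimal numeric sketch of the ideal gas law PV = nRT, in SI units.
# The numbers are illustrative only.

R = 8.314          # J / (mol * K), the universal gas constant
n = 1.0            # amount of gas in moles
T = 293.15         # temperature in kelvin (about 20 degrees C)

V1 = 0.024         # initial volume in cubic metres
P1 = n * R * T / V1
print(f"Pressure at V = {V1} m^3: {P1:,.0f} Pa")

# Halve the volume while n, R and T stay fixed: the pressure must double
# so that both sides of PV = nRT remain equal.
V2 = V1 / 2
P2 = n * R * T / V2
print(f"Pressure at V = {V2} m^3: {P2:,.0f} Pa")
print(f"P2 / P1 = {P2 / P1:.1f}")   # 2.0
```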

So, if a gas in a stable state has a change in pressure at constant volume, one can expect to measure a corresponding change in temperature, for example. In the practical world we have this phenomenon to thank for things such as refrigerators and, in the parts of the world that use them, air conditioning units.

So how does this relate to software? Don’t worry, we’re heading in that direction. First, some observations about the IT industry over the last 25 years or so. I remember a time when the only kind of computer used in large organizations was of the mainframe type. After that the industry moved to a client-server arrangement, which used large computer systems in a data center while the end users had PCs at their desks. Today we have PCs, tablets, phones and other devices. We still have data centers and networks, but somehow we’ve renamed them to “the cloud”.

In the data center, where one computer was called a “server” or a “host” we now have collections of computers that appear as a single “host”. Conversely, we have single machines that can appear as many “servers” to the outside world. We can group these servers together in various ways to deliver services to end users no matter what kind of consuming device they may have.

Why does this matter? Technology is always moving, they say. Change is the only constant, they say. I’ll accept that, but what changes specifically are responsible for what we have observed? Is there one? Two? Ten thousand? What follows is a mental model I’ve been pondering for some time and have just now decided to attempt to articulate to others, so please bear with its rough edges.

In modern computing, there are primarily two major components – hardware and software. Almost all computer and network services are delivered with varying proportions of both. Let us consider several aspects of these two components through an economic lens: cost of development, ease of change in the field, capability to do work, cost of labor and toleration for errors. For each of these aspects, the two components differ in magnitude relative to one another.

Let us look a bit closer at a hypothetical example. Consider what it takes to design, develop, manufacture and deliver a modern microchip. Consider what it takes to remediate an error should millions of chips in the field be found faulty. These two considerations alone could lead one to expect that developing microchips is very expensive, not only because of the skill and equipment necessary to do so but also because of the extensive testing and quality assurance that must go into ensuring precision and correctness in the design. A company has to invest a great deal in the development of chips, as well as other types of computing hardware. This means it has a high incentive to recoup that investment and generate profits by selling as many of those components as possible.

Now let us have a look at software. Historically, software development has been highly specialized and required tools and knowledge inaccessible to many people. As time has passed, that situation has changed. Along with access to information and tools, more people with the aptitude and interest to develop software have entered the field. Today, many of the tools are available at low or no cost, and the knowledge afforded by the internet to much of the developed world is more than sufficient to support many entry-level and early-career software developers beyond their post-secondary educations.

Let us now compare the development of hardware and software for a moment. Developing chips and other hardware is high cost, has long lead times, tolerates no errors and is costly to change in the field. Developing software is also high cost, but it is frequently less costly than developing hardware, it has shorter lead times, it is much easier to change in the field and it can have a greater toleration for errors. Naturally, one could see that the economic interest here clearly says: move as much functionality as possible to software.

While that is simple to see and exclaim, the truth is that much of what is touted today as new in computer science is not actually all that new. The problem is that while the abstract theory has been available in the computer science body of knowledge, the computer hardware has not been capable enough, at a tolerable price point, to make things like “big data” affordable to anyone but those with the deepest coffers.

In addition to the cost, the capability to do work, in terms of the computational capacity, storage capacity and memory of computer systems, was not enough to do the work required in a short enough time period to be valuable. Sure, you can have that marketing report; the job will take a week to finish running. No thanks.

The increase in computing capacity is primarily related to two things. First, the speed and capacity of compute, memory and storage. Second, the reduction in the cost of that capacity. Yes, it is the hardware that has mostly driven what the combination of hardware and software components has to offer. Now let us take another look at the software world.

As mentioned earlier, the primary requirements a person needs to develop software are an aptitude for it and access to tools and knowledge. So, if you are a hardware vendor and you want to sell more hardware, you need a vibrant and growing ecosystem of software developers and vendors to create and sell software for your hardware to run. Remember, however, that in the early days of software development this access to tools and knowledge was limited, and therefore the laws of supply and demand allowed software developers to be quite expensive.

The situation that I observe today when looking back at how this evolved is almost shocking. Some might say that I’m seeing an illusion; others might say that I’m naïve not to have seen it. Nevertheless, here is what I see.

In order to sell more hardware, we need software to run on it. In order to have software to run on the hardware, we need resources to develop the software. Those resources are expensive, and so the total system cost is still high. The speed and capability of the hardware keep increasing, and we need to be able to justify to customers spending their money on upgrades. Software components continue to add capability and features, which demand more hardware capability. Yes, this is starting to make the head spin. Perhaps some structuring is in order.

If the ideal gas law is PV = nRT, let us see how to use that to describe the acting forces involved in the hardware/software cooperative.

V = the number of software instructions required to complete a task

P = the cost of software development resources

T = the rate at which the hardware can complete instructions

R = the cost of hardware development

n = error rate constant

Understanding this will take some exploration, so let us explore. First we look at V and P. Working in software, I know a few things about the tools. Tools that are easier to use, and that make software development more accessible to less skilled resources, frequently generate inefficient software; that is, more computing instructions are generated to complete a task. So, with the right side held fixed, a decrease in the cost of software developers (P) must be offset by an increase in the number of software instructions produced (V) for the equation to remain balanced. This is evidenced by today’s proliferation of software abstraction frameworks and interpreted dynamic languages. Those tools are much easier to use, but their cost is obscured somewhat by the increase in compute horsepower necessary to run them and still deliver an acceptable response time.
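
To illustrate that left-side balancing with the variable assignments above, here is a minimal sketch; the numbers are hypothetical and unit-free, chosen only to show the direction of the effect rather than to model real costs.

```python
# A minimal sketch of the hardware/software "gas law" analogy described above.
# All quantities are hypothetical and unit-free; the point is only to show the
# balancing behaviour, not to model real costs.

def instructions_required(P, n, R, T):
    """Solve PV = nRT for V: the instruction count needed to keep the balance."""
    return n * R * T / P

n = 1.0     # error rate constant (held fixed here)
R = 100.0   # cost of hardware development (held fixed here)
T = 50.0    # rate at which the hardware can complete instructions (held fixed here)

# Expensive, specialized developers: fewer, carefully hand-tuned instructions
# are enough to balance the equation.
P_expensive = 10.0
print(instructions_required(P_expensive, n, R, T))   # 500.0

# Cheaper, more accessible development (abstraction frameworks, interpreted
# dynamic languages): with the right side unchanged, the instruction count
# must rise to keep PV = nRT balanced.
P_cheap = 2.0
print(instructions_required(P_cheap, n, R, T))       # 2500.0
```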

Now, looking at the right side of the equation, we have R and T. R represents the cost of hardware development. Here some assumptions are required: we have to assume that higher costs result in a higher-capability (T) component. We know, of course, that this cannot be universally true, but let us assume it for this illustration at least. In consideration of n, let us note that it is possible to compensate for some hardware design errors in software. While this is unusual and undesirable, there is precedent for it: Intel once produced a Pentium chip with a floating point error, which it corrected in later runs of the chip while software workarounds were used for units already in the field. In terms of the equation, this means that if n increases while R and T stay fixed, there must be a corresponding increase on the left side: more software instructions, more developer cost, or both.
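
Continuing the same hypothetical sketch for the n term: with P, R and T held fixed, a rise in the error rate constant has to be absorbed by a rise in the instruction count, which is roughly what shipping a software workaround for a fielded hardware flaw looks like.

```python
# Continuing the same hypothetical, unit-free sketch: if the error rate
# constant n rises while developer cost P, hardware cost R and hardware
# speed T stay fixed, the instruction count V must grow to compensate,
# e.g. software workarounds shipped for a hardware flaw already in the field.

P, R, T = 10.0, 100.0, 50.0

for n in (1.0, 1.2):                  # a 20% rise in the error rate constant
    V = n * R * T / P                 # solve PV = nRT for V
    print(f"n = {n}: V = {V}")        # V goes from 500.0 to 600.0
```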

Let us take an inventory of where we are at this point.

  • There is a relationship between hardware and software related to cost and capability
  • The relationship is somewhat symbiotic
  • There are economic forces at work that greatly influence the stability of the relationships
  • There appears to be an “ideal gas law” at work where, as the hardware increases in capacity, the gas (software) expands to fill the space

There are a number of hypotheses that I have drawn from the observations discussed in this article thus far. Before I take steps to articulate them I am going to conclude this article as a “part one” so that I can reflect on the topic further and perhaps present my observations in a more succinct manner in part two.

---

Chris Snowden is a philosopher, musician and a multidisciplinary information technology professional with experience in strategy, architecture, design, development and operations. Currently he is a Principal Solutions Architect and the President of Reflective Logic Systems, where he helps customers decide, design, develop and deploy solutions on AWS.

Learn more about Reflective Logic Systems at https://www.reflectivelogicsystems.com
