Defining software
Working across many projects led me to believe that defining software is no different from testing it. Here's why, and why it is so important.
Quality and progress in software really go hand in hand. Far too often the team and the business split over deciding "is it a bug or a feature?". Far too often it's hard to say: if you didn't nail down the features, you can't tell what was stated to be expected, what was assumed to be expected, and what is just plain unexpected.
Business and developers tend to live in different worlds, both ruled by assumptions that are often strange to the other group. Therefore the whole point of putting logic into software development is to state which features of the product are really expected. Anything else is a bug: either in the scope or in the product. This is about finding common ground between business and development.
Defining features is therefore the first step towards distinguishing features from bugs and establishing a common protocol for what business and developers want to build. Defining and counting features also gives you a very good indicator of progress, something that is just as hard to grasp as software quality itself. It's not easy though: even a simple "feature" like "have a shopping basket" can mean different things to different developers, and it is a different thing in different businesses.
How to define features?
In order to define features you really need to side-step into software testing. After all, quality assurance is the place to look for people who know the product really well and are supposed to measure its quality in an organised way. They do, and they do it by employing one or several well-known methodologies.
Software testing methodologies generally fall into one of the categories of unit testing, domain-driven testing, or behaviour-driven testing. Here is the common denominator for those, just to give you a quick start into software testing in general:
- Each piece of software is a sum of expectations called features. Each individual feature is a sum of conditions, here called tests, that need to be met. A feature is "done" and "good" once it meets all its conditions. The software itself is "done" and "good" once all features are complete. So far, so good.
- Each test starts with some preconditions, or initial assumptions, that set a starting point for the condition you'll be measuring. For testing a shopping basket application, the preconditions might be that there is a basket to test and that a specific type of user is testing it.
- Each test defines a list of steps that need to be taken in order to reach the expected results and conclude the test as either a success or a failure. The steps should be clear and reproducible, for instance: a) the user opens a specific page, b) the user finds a product worth $100 on the page, c) the user clicks the shopping basket icon on the page.
- Each test defines a list of expectations that are the direct outcome of the test steps and that should be easy to understand and always reproducible, e.g. the shopping basket icon is clickable, and clicking it updates the balance visible on the site to exactly match the value of $100. The sketch below puts these three parts together.
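
To make the anatomy of a single test concrete, here is a minimal sketch in Python. The `ShoppingBasket` class, the user type and the $100 price are hypothetical stand-ins invented for illustration, not a real API:

```python
# One test split into the three parts described above:
# preconditions, steps and expectations.

class ShoppingBasket:
    """Tiny in-memory basket, used only to make the example runnable."""
    def __init__(self, user_type):
        self.user_type = user_type
        self.balance = 0

    def add_product(self, price):
        self.balance += price


def test_adding_product_updates_balance():
    # Preconditions: a basket exists and a specific type of user is testing it.
    basket = ShoppingBasket(user_type="registered")

    # Steps: the user finds a product worth $100 and clicks the basket icon.
    basket.add_product(price=100)

    # Expectations: the visible balance matches the value of $100 exactly.
    assert basket.balance == 100
```

Run with a test runner such as pytest, the test either passes (the expectation is met) or fails (the software, or the test itself, needs another look).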
Measuring software projects
Once you think of it, there are an awful lot of tests to be written just to cover a simple e-commerce website. That is true, and this is the hard part. Even harder, the tests sometimes touch very subtle things that need to be right, and those also need to be covered by the testing procedure, with steps and expectations described so that both business and developers can understand them. Tests that logically exercise the same thing from a variety of angles should be grouped into features. That way each feature is defined by a closed set of tests, as sketched below.
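
One way to picture "a feature is a closed set of tests" is to group the related tests together, for example in one test class. This is only a sketch, reusing the hypothetical `ShoppingBasket` stub from the earlier example; the class and test names are illustrative:

```python
# The "shopping basket" feature, defined by exactly this closed set of tests.
# Assumes the ShoppingBasket stub from the earlier sketch is in scope.

class TestShoppingBasketFeature:
    def test_empty_basket_has_zero_balance(self):
        basket = ShoppingBasket(user_type="guest")
        assert basket.balance == 0

    def test_adding_product_updates_balance(self):
        basket = ShoppingBasket(user_type="registered")
        basket.add_product(price=100)
        assert basket.balance == 100

    def test_adding_two_products_sums_their_prices(self):
        basket = ShoppingBasket(user_type="registered")
        basket.add_product(price=100)
        basket.add_product(price=50)
        assert basket.balance == 150
```

The feature is "done" when every test in the group passes, and adding a new expectation to the feature means adding a new test to the group.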
The good thing about a large number of features, tests and steps is that... you don't have to get it right the first time. In fact no one does. If your organisation works according to an agile methodology, the definition of software features is expected to grow as your product grows. It's an evolving thing. With each iteration developers catch up with the features, and business catches up with a clearer and more complete way of explaining them, or even changes the expectations as the software evolves. That is perfectly normal.
Universal way of measuring software
Software is "done" and "good" once it passes all the tests. This sounds simple, but in fact it is the hardest part of the concept. As you'd quickly find out with any implementation, even in the simple shopping basket example above, there is a vast space for misses, both in the quality of the software itself and in the tests: their coverage and the details that define what a shopping basket application is expected to do, in a way that both business and developers can read and understand. Still, a rough progress measure falls out of this naturally, as sketched below.
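
If you track, per feature, which of its tests currently pass, counting features becomes a progress indicator. A minimal sketch, with made-up example data rather than real test results:

```python
# A rough progress indicator: a feature counts as "done" only when every one
# of its tests passes; progress is the share of done features.
# The feature names and pass/fail results below are invented example data.

feature_results = {
    "shopping basket": [True, True, True],   # all tests pass -> done
    "checkout":        [True, False, True],  # one failing test -> not done
    "user accounts":   [True, True],         # all tests pass -> done
}

done = sum(1 for tests in feature_results.values() if all(tests))
progress = done / len(feature_results)

print(f"{done} of {len(feature_results)} features done ({progress:.0%})")
# -> 2 of 3 features done (67%)
```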
Most teams find that a large number of assumptions made by developers don't meet the expectations of the business. This is exactly the gap in the definition of features: a potential "bug" in the product definition. Equally, each change to the software can trigger unexpected behaviour that doesn't match the specification of existing tests. These are "bugs" in the software itself.
In their early stages, projects tend to have more issues in the scope of their definition, whereas mature and stable projects usually focus on finding issues in the software itself.