Why Methodologies Fail to Bring Home the Bacon

In my career I have seen methodologies fall in and out of favour, with each new wave promising to solve the problems that plagued the previous methodology du jour. Don't get me wrong, I am not against using a methodology; whatever works for you is fine with me, as long as you are getting reliable results and everyone is happy with it. What I find problematic is that we move from one set of complaints to another: we moved the goal posts, but we still seem to be having problems.

In my view, the problem is larger in scope, and has to do with the relative novelty of software engineering. The field only took off after the Second World War, and it is still in its infancy in terms of practises and established traditions. Other engineering disciplines span at least back to the Age of Enlightenment, and in some cases have foundational books and documents that are much older. This means that some state of flux is to be expected in this field until there are commonly agreed rules that are proven to work reliably, and in which contexts.

The problems that methodologies try to address are complicated, and software by its very nature doesn't make them any easier...

Issues With Software Development

Unlike physical things like a building, a bridge, a toaster or a book, software doesn't have a physical existence that can be touched or seen in its totality. We are able to see its artefacts: screens, commands, files, physical media (disks, flash drives, etc.), but we can't fully verify its completeness. In this sense, software is a functional abstraction that implements a set of expectations, and these can only be verified through usage.

Modern computing languages allow a large degree of freedom in how to implement concepts, since these aren't constrained by physical limits, materials or manufacturing processes. For software, the limits and constraints are usually the available computation power, storage, and the ability to model solutions with the available tools. This means that for any set of similar requirements we can have a large degree of variability in implementation, depending on the languages or libraries/frameworks that are fashionable at any given time. There is also considerable variability due to human factors, like personal preferences, negative biases toward particular technologies, considerations about learning curves, and the odd lack of competence here and there.

Because implementations can vary so much, good design is at a premium. Traditionally, computer science has stressed the teaching of algorithms, data structures, and other low-level constructs. These are useful when developers are doing foundational work on operating systems, databases, device drivers, and the like, but most software is built around businesses, business applications, or products implemented in higher-level or scripting languages. Another issue is that most people are actually bad at design and tend to have little knowledge of, and exposure to, different exemplars of design solutions. That usually results in attempts to force design solutions onto problems they weren't meant to solve, which is like trying to build a rocket out of propellers and piston engines. Bad design choices can also have disastrous implications when new or ancillary requirements appear, making development more onerous and time-consuming, or requiring a start from scratch.

When it comes to implementation, the immaterial nature of software makes it problematic to verify its state of completeness while it is being developed. Most software developers strive to develop their code as close to the requirements as they can understand them, but their understanding of the problem might be incomplete. There might also be time constraints that limit the amount of analysis done, which can impact the quality and fitness of the code. In these cases, a developer might have an incentive to feel that the work is complete and only perform superficial tests; this is more likely for work he or she finds unpleasant or doesn't understand well. As such, the lack of clarity about what constitutes completion, given a particular understanding of a requirement, is a constant point of contention.

From the project management standpoint, the problem of verifying completeness is even larger; in most cases managers will not have the technical expertise to evaluate the source code. Managers will often press for control over deadlines and budget, and this creates a conflict of interest between developers and project managers. Developers who are being pressured to finish their tasks and meet deadlines have an incentive to say that their work is complete and worry about defects later, while managers under pressure to keep the budget and deadlines under control might be less willing to allow the time necessary to meet the project requirements. This coordination problem, derived from the uncertainties around requirement completion, is one of the main drivers for control processes and the adoption of methodologies.

Employing Quality Assurance Analysts / Testers to verify completion is not without issues: the same problem of understanding the requirements applies, and building test cases to cover them is no assurance that the software will actually be complete. The requirements might have omissions or hidden assumptions that are not evident even to the business analyst or the people who wrote them, and these can lead to conflict between testers and developers. Some cases also require testing a large degree of variation, which is often expensive and time-consuming to do manually. And even when test automation is available, achieving maximum coverage is highly dependent on the person modelling the test case generation.
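
As a hypothetical illustration of how quickly variation explodes, and why coverage depends on how the generation is modelled, consider enumerating input combinations for an imaginary pricing rule (the dimensions and values below are invented for the example, not taken from any real requirement):

```python
from itertools import product

# Hypothetical input dimensions for a pricing rule; the names and
# values are illustrative only.
customer_types = ["guest", "member", "vip"]
payment_methods = ["card", "voucher", "invoice"]
order_sizes = [1, 10, 100]

# Exhaustively combining just three small dimensions already yields
# 3 * 3 * 3 = 27 cases; every new dimension multiplies the count.
cases = list(product(customer_types, payment_methods, order_sizes))
print(len(cases))  # 27
```

The combinatorial growth is why manual testing of variation gets expensive so fast, and why the tester's choice of which dimensions to model (and which to leave out) determines the real coverage.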

A Very Short Story of Methodologies in Software Development

Initial attempts to devise software development methodologies centred on structured programming, which laid the foundation for many of the things we take for granted today. The focus was on developing code block concepts that could compartmentalize functionality and produce source code that was easy to read. This allowed concepts like flow diagrams to model programs, and from here things would get increasingly more complex.

The development of subsequent methodologies paralleled the processes common in big engineering organizations, where projects or products had reasonably long development cycles from concept to market. The computational tools available in the 60's and 70's didn't allow for fast development processes, and in many ways the waterfall process is a product of a time when software developers didn't have interactive debuggers and were forced to test their programs by batch runs in time-shared environments.

As operating systems and computers allowed for more interactive IO (Input / Output), there was an increasing push for more iterative and incremental methodologies with quicker release cycles. This, in a sense, was a measure of how computation technologies were becoming more ubiquitous: big organizations could afford long development cycles, but smaller organizations didn't have the same level of tolerance and resources.

The increasing availability of computers through the 80's and 90's allowed even more organizations to start developing software, this time with ever more powerful editors and development environments. Some copied some version of waterfall; others practised a more ad hoc or iterative process. Again, big organizations could afford the cost of using waterfall, RUP (Rational Unified Process) or other methodologies common at the time. But one thing was sure: the drive for faster development cycles was a persistent trend.

One thing was common throughout this whole time: rates of success for software projects were not encouraging. Large projects using waterfall methods could get bogged down between steps, and change management was a large, painful process that could scuttle a project. This meant missed deadlines, cost overruns, and other types of losses depending on the type and function of the project. But even incremental or ad hoc methods didn't fare much better, and there was a generalized dissatisfaction with the state of software development.

In a sense this was the result of the successes and large strides made up to the 90's and early 2000's: the tools were much better, and the available infrastructure was a big improvement over the days of time-share mainframes. Databases, queueing middleware, operating systems, and so on formed a rich ecosystem of platforms and tools, but the development of business applications had moved the goal posts. To stay in the same place, everyone had to run faster, which meant developers had to take more functionalities and concerns into account. This trend continues to this day, and doesn't seem to be slowing down.

The agile manifesto was a rallying cry against waterfall, especially its large overheads in documentation and analysis, its low flexibility to change, and its long cycles. Instead it proposed methods that had appeared in the second half of the 90's, like Scrum, XP and others. At its birth it was mostly a hodgepodge of borrowed methods, but it grew and gained momentum. It kept the habit of taking recent ideas and calling them agile, and nowadays it seems like a large buffet of methods and processes.

Agile is the new waterfall; it has become the dominant methodology buzzword of the day. It allows a lot of flexibility in what gets called agile, and this loose adherence to a range of integrated methods has been one of the key factors in its growth. That, and the sprint model that limits development cycles to a fixed number of days. If you are doing sprints, then it seems you are doing agile...

Issues With Any Methodology

Adhering to a methodology is not a sure-fire way to get better results; there are many aspects that might lead to a less effective adoption. There might also be business and regulatory considerations that conflict with the use of a particular methodology without getting rid of aspects of the previous one. Here are some for your reading pleasure...

Methodology Theater

Methodology theater is when an organization decides to buy into a methodology on a superficial level. It implements the parts that don't require any organizational change: arranging development cycles around sprints, allowing or requiring that some key members are certified, using buzzwords in documents and in the company's official communication. But overall, the development process hasn't changed, the responsibilities within the team are unchanged, and the previous status quo is kept.

The other issue with methodology theater is the increase in requirements for moving up the ladder; it is not unusual for some positions to be limited to people who have training or are certified in a particular methodology.

No True Scotsman Fallacy

This is the common situation where the failure of a project that adopted a methodology or process is treated as if the methodology wasn't really implemented. When people don't want to believe that the methodology is partly or wholly a cause of the failure, they double down and try to enforce a stricter adherence to the process. Complaints like "not agile enough" are a possible indication that deeper problems are at the root, and that methodology adherence by itself might not solve them.

Having Your Cake and Eating It Too

This is when companies set up agile methods but keep waterfall requirements, resulting in "scrumfall", the worst of both worlds. Developers are saddled with the same documentation requirements, but with less time available in the development cycle and worse requirement specs from the stakeholders. Here, organizations attempt to cut development times by constraining developers while letting the business off the hook in terms of specifying in detail what they need. At the same time, they might be legally bound by regulations to have documentation tracking the whole process.

Cheating on the Learning Curve

This is when an organization thinks that implementing a methodology will magically solve the issues of having properly trained and knowledgeable people, without building the organizational knowledge base needed to successfully finish projects. No methodology can replace the accumulated experience of the many people who work within a company; the informal ties that bind the organization into a functional organism are essential. Trying to skip and cut corners on the learning curve leaves outcomes to chance.

A Proper Way to Sort it All...

In my view, the proper way to check whether a methodology works, and in what context it is best suited, is science. Namely statistics, proper experimental design, surveys, and, in the best of worlds, verification by double-blind testing. Unfortunately that last option is not possible... I propose three ways to sort out which methods work better.

The Undergraduate Challenge

Have several teams of undergraduates distributed across several university campuses; projects would be randomly assigned, as would the methodology each team applies. Each team would have a taskmaster monitoring it, and would work 8 hours a day in the same room until project completion or until the available time was spent.

The projects would fall into different categories of problems, similar to the usual business applications rather than academic research projects. There would be variations within the same category to avoid cheating or collusion, and some maintenance cycles would be added so that the teams would be forced to live with the consequences of their design decisions.

Evaluation would be done by analysing the results: how many teams completed the challenge, how many development cycles were completed, time to completion, defect counts, defect resolution counts, and other metrics. These would be cross-tabulated by methodology and project category to look for indicators of each methodology's fitness.
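
The cross-tabulation step could be sketched as follows. The records below are invented placeholder data, and the methodology and category names are illustrative only; the point is the shape of the analysis, not the numbers:

```python
from collections import defaultdict

# Hypothetical per-team outcomes from the experiment; the field names,
# methodologies and categories are made up for illustration.
results = [
    {"methodology": "scrum",     "category": "crud-app",  "completed": True},
    {"methodology": "scrum",     "category": "crud-app",  "completed": False},
    {"methodology": "waterfall", "category": "crud-app",  "completed": True},
    {"methodology": "waterfall", "category": "reporting", "completed": True},
    {"methodology": "scrum",     "category": "reporting", "completed": False},
]

# Cross-tabulate completion rate by (methodology, project category).
totals = defaultdict(int)
successes = defaultdict(int)
for r in results:
    key = (r["methodology"], r["category"])
    totals[key] += 1
    successes[key] += r["completed"]  # True counts as 1, False as 0

completion_rate = {k: successes[k] / totals[k] for k in totals}
print(completion_rate[("scrum", "crud-app")])  # 0.5
```

The same loop extends to the other metrics (cycle counts, defects, time to completion) by accumulating them per key; with enough teams per cell, differences between cells would hint at each methodology's fitness for each category.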

Problems with this approach:

  • Undergraduates mostly lack work experience and are still too used to developing throwaway code for class projects.
  • The way the incentives are designed for the experiment might not match the reality of work in companies.
  • The sample projects might lack the sort of "experiences" that cause friction and generate what could be called an implementation fog.

The Mass Survey

This would be a large-scale survey that tracks several organizations over several years, following several project teams and the methodologies they use. It would check rates of success, how they vary over time, and what happens when teams change methodology. It would also have to control for team sizes, relative levels of experience within the teams, and the types of projects, technologies and platforms used. The data gathered would then be analysed to check whether there were any differences in outcomes due to the use of a methodology, or whether other factors had a larger weight.

Problems with this approach:

  • Gathering data would be tricky, companies don't usually like this kind of scrutiny and voluntary participation isn't a sure thing.
  • Survey modelling would be tricky, since we could expect a large degree of variability in each company and over time.
  • There might be confounding factors that skew the results, like hidden or unacknowledged innovations (teams or individuals that use methods or practises outside of the methodology that confer better than expected outcomes).

The Self Reported Survey

This would allow participant companies to answer a set of questionnaires over time, and the results would then be analysed to verify any trend. This is the easier option, and the most practised, usually done by sending surveys to IT managers or developers.

Problems with this approach:

  • Answers are often based on a subjective reading of events.
  • The survey is superficial and doesn't give enough context.

In Conclusion...

It is not my goal to promote a methodology or to discourage you from using one; it is you who needs to decide whether what you are using is working for you. What I tried to do is identify issues that lead to suboptimal outcomes. By acknowledging these, there might be a way to find a better path to successful and satisfying software development.
