Atomic User Stories: SAFe and Risky

Note

The following was written using terminology based on the Scaled Agile Framework. Some have compared SAFe to the Borg, usually due to heavy, uninformed application of the framework. The job of those of us working in the trenches is to make such frameworks useful to organizations that have decided to use them. No tool is a silver bullet, and few are so intrinsically evil that their good parts can’t be used apart from their inferior parts. So, don’t be put off right away if you are a SAFe-hater.

Executive Summary

When discussing splitting, a distracting topic is whether to take partial credit for work done in a sprint even though the story wasn’t completed. Opinions of industry leaders range from barely tolerant of this approach to openly hostile toward it. If this is done to evaluate velocity for successive sprints, it creates a minor clerical headache. In contract work situations, however, it encourages poor vendor behavior that becomes a major client headache. Our recommendation is to disallow in-sprint story splitting and simply move unfinished work to the next sprint.

The skills to decompose stories rapidly can be developed through training, but can only be mastered as Product Owners work with development teams. With practice, Product Owners will see how this process improves their communications with the developers and makes story acceptance easier.

Steps to break stories down to the atomic level may include the following (borrowed from the Scaled Agile Framework):

  1. Workflow steps – ensure each story represents a small, finite number of workflow steps
  2. Business rule variations – each variant on a business rule may call for a separate story
  3. Major effort – apply obvious splits, defer large difficult stories for possible splitting later
  4. Simple/complex – look for the simple core of a story, add complexities in new, related stories
  5. Variations in data – each variation may call for an additional, small slice of functionality
  6. Data entry methods – seek the basic, valuable data entry method, split the rest into follow-on stories
  7. Deferred system qualities – defer or split out non-functional system abilities into separate stories, if reasonable to do so
  8. Operations (example: Create, Read, Update, Delete, or CRUD) – consider each subset of functionality as a potential story
  9. Use-case scenarios – does viewpoint analysis reveal additional functionality based on who or what interacts with the story’s core intended functionality?
  10. Break-out spike - if still unsure, run a research spike

As product owners and developers become proficient at creating lean and thorough Acceptance Criteria, their ability to break stories down to the atomic level will improve. Product owners tend to provide more impetus for breaking work into Features and stories; developers naturally tend to understand and need atomic-level stories, and will provide more drive for decomposition efforts at that level.

The following pages provide additional details on this practice, as well as real-life examples of teams working towards atomic story definition.

Introductory Concepts

In about 400 B.C., the Greek philosopher Democritus formulated an idea that, if one had an infinitely thin knife, one could slice matter into smaller and smaller bits until a limit was reached. That limit, the smallest particle that can’t be physically separated, would be the atom (from the Greek “atomos,” meaning “indivisible” or “uncuttable”). Although today we know that an atom can be further divided into components (electrons, neutrons, protons), they are not stable, independent units of matter.

We use the atom as an analogy for breaking work down into small, meaningful chunks. In Agile development work, we consider User Stories to ideally lie at that ‘atomic’ level of detail -- the smallest demonstrable, working piece of useful functionality. The art of breaking down User Stories to the level where they can’t reasonably be broken down further can be referred to as “splitting,” “slicing,” “decomposition,” and other such terms.

The User Story statement and Description are the primary overarching definition of the story – the ‘atom’ overall. Like sub-atomic particles, there are sub-components within that User Story (normally called “tasks”), but they only have relevance as part of the User Story. Test cases, notes, interface identification, and other pieces are sub-components, much as neutrons, electrons, and protons are part of every atom.

[We will concede that Technical Stories may be necessary, but we find that they usually wind up being turned into tasks under a User Story. In this paper, consider "User Story" to encompass tech stories as well.]

The idea of Atomic stories matters (pun intended) because many issues with application software development stem from taking on too much functionality at one time – complex molecules of work, if you will. Taking on too much at one time can result in large, convoluted code blocks that are difficult to maintain and navigate. It encourages poor coding practices such as excessive nested “ifs” and direct calls through successive blocks of code without object-oriented discipline. Developing against compound stories complicates testing and debugging, making it difficult to determine which variable in the code, data store, or environment is responsible for how the software is behaving. Less obvious, but still critical, is the tendency for a single individual to become the sole keeper of a compound, complex requirement. This makes the code opaque to other developers, creating a development bottleneck and single point of failure. User Story decomposition to the atomic level helps manage these basic development risks.

[Decades after Edsger Dijkstra published his letter on the dangers of unstructured GoTo calls, spaghetti code continues to be slung -- partly due to poor User Story decomposition and poor coding standards.]

The key Story splitting skill is determining when a User Story is at the smallest practical level of decomposition. One hint for this evaluation is to think like a developer who has to test her own code: if she is writing tests and test data that are normalized to make sure a single piece of functionality works, then the story may be at the atomic level. If those tests are evaluating many interconnecting dependencies within the story, then the story may be a candidate to break out into smaller stories.
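To make that hint concrete, here is a rough sketch (ours, not from any particular project; all names and rules are hypothetical) contrasting the kind of test an atomic story tends to produce with the kind of test that hints at a compound story:

```python
import unittest

def apply_discount(total, rate):
    """The single piece of functionality an atomic story might deliver."""
    return round(total * (1 - rate), 2)

class TestAtomicStory(unittest.TestCase):
    def test_discount_applied_to_order_total(self):
        # One behavior, normalized data: a sign the story is at the atomic level.
        self.assertEqual(apply_discount(100.00, 0.10), 90.00)

class TestCompoundStory(unittest.TestCase):
    def test_checkout_discounts_reserves_stock_and_builds_receipt(self):
        # One test exercising pricing, inventory, and receipt formatting at once.
        # Needing this much interconnected setup hints the story should be split.
        inventory = {"widget": 5}
        total = apply_discount(100.00, 0.10)
        inventory["widget"] -= 1
        receipt = f"Charged {total}, {inventory['widget']} widgets remain"
        self.assertEqual(total, 90.00)
        self.assertEqual(inventory["widget"], 4)
        self.assertIn("90.0", receipt)

if __name__ == "__main__":
    unittest.main()
```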

As mentioned earlier, breaking down stories can encourage good developer behavior. Take, for example, a classic data entry screen with Create, Read, Update, and Delete (CRUD) functionalities. Many developers consider such a screen to be a single component, and approach it as if they are developing it as a unit. However, this example represents separate functionalities. The functionality to add data is slightly different from the functionality to edit, which is different from the functionality to delete. Competent developers will develop and unit test these elements separately; it’s just good practice. Developing and testing these entities separately reduces debugging time and promotes efficiency through compartmentalized and reusable code.
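As a minimal sketch (hypothetical, in-memory, and deliberately stripped of any user interface), the four CRUD operations can be written and unit tested as four separate functions rather than as one “screen”:

```python
records: dict[int, dict] = {}  # stand-in for a real data store

def create(record_id: int, data: dict) -> None:
    if record_id in records:
        raise ValueError("record already exists")   # no silent overwrites
    records[record_id] = dict(data)

def read(record_id: int) -> dict:
    return dict(records[record_id])

def update(record_id: int, changes: dict) -> None:
    records[record_id].update(changes)              # edits only existing records

def delete(record_id: int) -> None:
    del records[record_id]

# Each function can be exercised on its own, for example:
create(1, {"name": "Ada"})
update(1, {"name": "Ada Lovelace"})
assert read(1)["name"] == "Ada Lovelace"
delete(1)
assert 1 not in records
```

Each of these functions maps naturally to its own story (or slice), each with its own small, normalized tests.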

Atoms, as you know, are not all alike. For example, while a hydrogen atom contains only one proton, uranium has 92. During backlog grooming, we absolutely do not want stories with 92 pieces of sub-functionality; such stories should be considered dangerous -- radioactive -- and not allowed into the iteration. Our ideal model resembles Hydrogen, the simplest atom with only one proton – our goal is to have only one piece of functionality in each User Story.

[Throughout this paper I use the terms “iteration” and “sprint” interchangeably.]

Vertical Slicing

If we took each component of a story (each particle of the atom) and stacked them vertically, they would represent the combination of technical and development components and activities necessary to have a piece of product completed, tested, accepted, packaged, and ready to place into production. There are many versions of what that stack would contain; here is one:

Using the thin vertical slices discipline, each unit of shippable product should contain most or all of these layers. Some items (such as data design and user experience design) may be implemented early, in a technical spike, but their outputs would be integrated into all relevant subsequent User and Technical Stories.

Thin vertical slicing is a difficult concept for some; we tend to think in broad, horizontal batches and specialties. Those big batches feel more efficient to us, despite over a century of assembly lines, concurrent engineering, and small-team work demonstrating otherwise. Horizontal slices (batches) may be more efficient for creating niche components, but they hold up delivery of the vertical slice of functionality (which is where the real value lies).

For example, breaking out a broad “horizontal” slice such as User Experience into a separate Feature for the duration of the work has been proven to create bottlenecks. Some organizations amplify this error by creating segregated specialty teams, which increases over-specialization and bottlenecks.

Using the thin slicing concept addresses the common error of splitting along technical boundaries, which can result in “backend” versus “frontend” stories. Such stories may not be atomic; they could be too small -- sub-atomic -- and often they are tasks, not stories. Sometimes a tech story will come into being because it isn’t clear how it works into a vertical slice of functionality. Don’t worry too much about this happening; just keep an eye on tech stories to make sure most of them eventually roll up into User Stories.

A common challenge in the “vertical slice” approach is how to manage testing. We tend to develop and manage tests as large batch activities late in the life cycle. This stems from several sources:

  • Believing functional tests at the end of development can find all defects;
  • Over-specialization of testing as a separate discipline from the Development team skills; and
  • Using points and velocity quotas as a measure of success, which can pressure teams to downsize and delay testing.

[The Atom visualization doesn’t work as well when we discuss “vertical slicing.” Our illustration looks like lasagna; Alistair Cockburn uses a carpaccio analogy, and Gojko Adzic uses the “hamburger method.” Years of subsisting on high-carb food as software developers may explain why we gravitate to such similes.]

But testing, like everything else, needs to be done in-sprint so that the Sprint Demo contains fully done product (‘Done-Done’ in eXtreme Programming lingo) that could be placed into production the moment the demo is over – or even before then. If testing has to happen after the story has been demonstrated, it indicates a dysfunction.

Thin vertical slicing aligns with DevOps, which grew from efforts to make possible the “continuous deployment” model of eXtreme Programming. It involves the technical aspect of ensuring the development and test environments mirror Production, then promoting product into use through actions “untouched by human hand.”

Just as important as this tool-focused side of DevOps, the human aspect involves tight collaboration between key players from inception through successful operation of a piece of functionality. That tight collaboration applies to creating complete, thin vertical slices. Each small slice of valuable functionality has been created with the input of those who will support it in Production, those who will use it in the Customer community, and all those involved in moving it through the deployment pipeline. This collaboration avoids having product considered complete, only to have a relevant stakeholder pop up at the last minute and say, “Wait, that can’t go into production! Why wasn’t I consulted about this part?”

Who Splits Stories?

The Product Owner (PO) and the Developers are primarily, jointly responsible for decomposing business needs progressively into stories. Leadership of story decomposition may shift as stories get more atomic; this dynamic is natural, almost subconscious, on self-organizing teams. The Product Owner is ultimately responsible for accepting the completed work, however, and so should always be sure that someone is taking the lead and that Story splitting is getting done.

Product Owners will participate at all levels of splitting, but they typically have a less granular view of systems. Once Features have been defined, they may need prompting and questions from the Developers to help them understand why and how stories need additional breakout. The Product Owners’ strategic focus, balanced by the Developers’ deep technical focus, supports the progression from high-level needs down to detailed technical slices, and subsequently aggregating back up during development to a working system that satisfies customer needs.

(Hopefully, this early work will involve challenging existing assumptions, since automating a bad system can just help counterproductive things happen more often and more expensively.)

Product Owners, like system development sponsors throughout the history of IT, have to be willing to step up to the job. Failures in this area led Mark Schwartz to comment, in his book "The Art of Business Value," that Product Owners sometimes can be blockers to development success.

During early decisions to automate a manual or mechanical process, the system sponsors and subject matter experts may already have a “tree structure” view of what the system should contain. This tree structure may be referred to as a Work Breakdown Structure, and is the foundation for progressive breakouts into Features and Stories. This decomposition continues all the way down to the task level. Other roles such as business analysts may provide support, but the POs are responsible for participating in this early breakout and for considering how to convey the system needs to the Development team once it is engaged. Most importantly, the POs are responsible for ensuring that backlog grooming takes place effectively for the duration of the system development effort. (Microsoft Project has in the past had the label “WBS” in the upper left corner of its Schedule view. MS-Project’s schedule is not a good example of a WBS; it is a better example of a Gantt chart.)

When the developers are brought onto the program team, their first task should be to engage quickly and closely with the system sponsors. They should evaluate whether initial functional decomposition or design ideas have created potential problems. Rarely, they will find that the sponsorship team has deep knowledge of existing systems and will have already broken some needs out into atomic-sized stories. This can give teams a head start, but it requires sifting through design assumptions being pushed to the development team.

Early engagement with the POs and other stakeholders gives the developers an understanding of what really is needed; they will continue to refine that understanding until the final build is shipped and development is considered complete. Ever-evolving understanding will enable the development team to proactively, energetically help refine the backlog.

Energetic engagement is vital. Developers must not act passively, like bored fast food workers taking orders. They must elicit system needs from the product owners and subject matter experts, using various techniques to understand how a set of problems can be converted to testable, maintainable, reliable components. Some development teams employ business analysts to support elicitation and documentation of stories. This can be useful, but must not interfere with the evolution of the developers’ ability to perform elicitation.

Agile blogger and coach George Dinwiddie first published a description of the “Three Amigos” approach to backlog grooming. His backlog grooming sessions at Nationwide Insurance involved, at a minimum, three roles: the product owner, the development lead, and the principal test designer. Mr. Dinwiddie later stated that those three roles are considered a minimum; others may be invited to any given grooming session to provide input.

The developers usually drive story decomposition down from the “pretty small” level to the “atomic” level. This is normal, since the rationale for splitting at this level is often technical: the ability to code discrete, useful units of product that meet the definition of “done.” The PO team validates these atomic stories as being useful pieces of the larger product, and validates the defined stories as being sufficiently defined to allow their review and acceptance.

In summary:

  • The Product Owners and Subject Matter Experts are ultimately responsible for the stories, but need help as stories get more technically detailed.
  • The developers provide technical skills to make breakdowns support coding, data design, environment setup, interface management, and other tasks. These skills support their ability to elicit and provide guidance.
  • Business analysts (if they are present) help with elicitation and elaboration, but do not own responsibility for these activities.

Despite shifts in involvement throughout the life of a Story, no one may foist Story decomposition completely onto another role. It remains a collaborative effort.

Models and Steps for Splitting Stories

The Scaled Agile Framework (SAFe) recommends using ten different viewpoints to determine whether to split a story, in a rough order of progression. The splitting evaluation points are:

  • Workflow steps
  • Business rule variations
  • Major effort; Simple/complex
  • Variations in data; Data entry methods
  • Deferred system qualities
  • Operations (example: Create, Read, Update, Delete, or CRUD)
  • Use-case scenarios
  • Break-out a spike.

Richard Lawrence has summarized these steps in a diagram to help teams understand the context and considerations of these steps. These look like a lot of steps; add on a list of “ilities” to support step 7, and you can wind up with a big checklist. To make this work, teams must focus on a subset of splitting factors and master it. And then, master a couple more. Only practice and mastery will make this list seem like an ordinary part of work, reaching that point where people shift from thinking, “we have to do all THAT?” to thinking, “who on earth wouldn’t do this?” With mastery, these things become part of the natural flow of agile work.

We’ll revisit these evaluation points, in more detail, later in this document. Notice that none of the ten points discussed by SAFe includes failure to finish work. XP and Scrum (underlying methods for SAFe) don’t allow stories to span iterations. Agile thought leaders also discourage in-sprint splitting.

Story Splitting Anti-Patterns

“Coding patterns” is a term popularized by writers such as Martin Fowler to describe good coding practices that emerge as patterns; the underlying languages may differ, but the basic practices remain sound. The term “anti-pattern” emerged to describe the opposite: bad habits that crop up in various situations. Story splitting anti-patterns are behaviors that look like splitting stories, but tend to have negative effects instead of the usual benefits. Some anti-patterns are:

  • Splitting stories along specialty lines by default (data architecture, UX, testing, and others). This can hamper communications and build delays and rework into the effort.
  • Using wireframes as specifications, and consistently discovering mid-iteration that any given wireframe can imply a larger amount of functionality than previously considered. Wireframes should be used as visual layout guides and to help surface functionality; they should not be treated as User or Technical Story equivalents.
  • Elaborating stories too seldom and too late, resulting in compound stories escaping into iterations. This is like letting defects escape into production; in fact, escaped defects from chaotic work can result from this.
  • Failure to consider dependencies before the story is pulled into the iteration. Resulting “panic splitting” can be made worse by counterproductive productivity/velocity demands. Such stories should simply be placed back on the backlog for additional grooming, and an alternative piece of work pulled into the iteration.
  • Inability to pull an alternative story due to a poorly elaborated backlog, and therefore splitting a story that is for whatever reason not feasible to finish in the current iteration. Such stories should be moved into the next iteration’s backlog and re-pointed for remaining effort.
  • Splitting stories lazily based on a presupposition of who will work on them. A Story should be a unique piece of functionality, not just more work added to “Joe’s to-do list.”[1] Furthermore, a given story should be elaborated as though anyone on the team could possibly end up developing it.
  • Splitting mid-sprint, caused by struggling to account and get credit for effort spent on an incomplete story (calories burned, essentially). This is not what the agile community at large means by “splitting,” is a distraction, and is non-productive. (Some agile support tools unfortunately encourage this anti-pattern through their ‘story splitting’ functionality, which degrades into gaming and hair-splitting.) The goal is to complete the story appropriately, not account for every minute spent against it. A good Scrum Master will take resulting velocity anomalies into account for planning; striving to have a perfectly consistent velocity curve from iteration to iteration is wasteful.
  • Spillover, often from having over-sized stories.


[1] In the process, making Joe a bottleneck. Read “The Phoenix Project” for additional insights.


Besides oversized stories, additional causes of spillover include:

  • Quotas for velocity pushing the focus to filling capacity instead of creating valuable product.
  • There is too much Work In Progress (WIP), whether from quota pressure, poor iteration planning, or poor work management. Lots of user stories get started, not all of them get finished; people are working on too many things at once. Stories get split as a spillover reaction, not because the stories inherently are too big.
  • The developers are not operating as a team, but as a group of individuals. There’s not a lot of work sharing, hence not a lot of help being asked for or received. Everyone is working on their own individual user stories solo, resulting in low productivity.
  • Dependencies were not considered or aligned before starting. This relates to following a Definition of Ready. One legitimate reason to split a story mid-sprint is discovery of complexities that were difficult to detect earlier. Dependencies are a high-risk, high-focus aspect; extra attention should be devoted to technical and personal dependencies early and often. Teams should avoid taking a Story into a sprint when they suspect related dependencies haven’t been considered adequately.
  • Distractions are pulling focus away from the user stories. These distractions could include pulling people for other projects, too many meetings, or something else. The main question is this: Are we able to focus on the user stories without undue interruption?

Splitting Tradeoffs

Splitting stories provides many benefits, which we have mentioned. Like any technology development activity, there are some risks to consider and mitigate when deciding when and how to split:

(I shamelessly swiped the term "daily reputation management" from Michael Church.)

These concerns around story splitting illustrate that a development team also needs good system visualization skills, strong testing expertise, a good grasp of configuration management (not just version control), and strong design and architecture abilities. Lack of these skills can inhibit the ability to split stories effectively and benefit from atomic-level stories.


Evolving Story Splitting Skills

A Reasonable Pace for Decomposition

How soon should teams get serious about breaking stories down? During initial backlog grooming, during sprint grooming, or mid-sprint? When do we go for “atomic”?

The answer is to be patient and not rush. New POs may not have a strong sense of decomposing stories at first, and their initial ones may actually be Features (molecule-size to boulder-size) that could use first-pass decomposition into stories. On the other extreme, we have worked with technically savvy POs who write “Features” that are actually sub-atomic tasks, then have to work back to the User Story level. Either way, allow for experimentation and learning.


Whether you decompose top-down or aggregate bottom-up, by the time a User Story is pulled into an Iteration Planning session it should be pretty close to ‘atomic’ and yet be complete enough to represent a valuable piece of functionality. During sprint planning, the Development Team should ensure that any remaining compound stories are decomposed or put off to the following sprint. (We’ll talk about how to detect compound stories later.) A story may be further split during the iteration for technically practical reasons, but that should happen in the first couple of days of the iteration and should not involve much research and discovery. Too many stories being split for “A-ha!” reasons indicates a lack of understanding or a lax approach when grooming; it creates chaos.

Common First Mis-Step: Waterfall Elaboration

In the agile world, we constantly seek the balance between over-specifying too early and fixing sloppy User Story elaboration too late. Often we err on the side of too much precision early, get emotionally attached to it (or simply exhausted), and never want to change our work. This error can lead to large User Stories with sloppy Acceptance Criteria mixed with a slew of “technical” user stories that never evolve into their best elaboration until mid-sprint (which can be disruptive). The root cause is failure to become comfortable with iterating to a workable level of story detail. That discomfort can be aggravated by pressure to enter all data into an Agile Lifecycle Management (ALM) tool such as Jira or Rally, and to get it all perfect at first pass.

Such one-pass ALM-stuffing is taking a waterfall approach to creating an agile work product. In Waterfall, we tend to go for comprehensive finished correctness as early as we can; this can lead to unrealistic expectations that the User Story definition process will go linearly, as shown below:

Team members usually have varying levels of expertise in elaborating User Stories. Furthermore, business needs usually are evolving while stories are being elaborated. When POs have to do significant work before the development team comes on board, the process can get especially uneven. So, initially, the path of User Story decomposition may look more like the following diagram (please, do not use the diagram as a definitive prescription of how stories evolve!). PO teams that are working out their understanding of what constitutes an Epic, Feature, User Story, or Task will find themselves doing things in a supposedly out-of-order way. Features and User Stories may be defined that turn out to be “sub-atomic” level tasks.

Maturing the Cycle

Recursion and iteration are normal and even desirable, but you do want to minimize the amount of story refinement necessary once development starts; that can be a problem. As teams mature, team members will normalize on a common understanding of what constitutes an Epic, Feature, User Story, and Task. They will also develop a sense of how technical stories tend to become subordinate to User Stories. The large, early refinement loops will move more to the right, and become tighter, so that the path to atomic stories resembles the following flow:

When Tools Skew Story Splitting Behavior

Jira, Agile Central, AgileCraft, and other such tools have their place, but each one brings its own quirks and complexities. We often find ourselves trying to enter User Story data perfectly, meeting all definitions of ready, before we are even sure what we want the system to do. This struggling to appease a “Tool Tiki” can add stress and waste time.

As our PO team evolved the Stories, they moved away from the electronic tool. They migrated to using whiteboards; they tracked their work using sticky notes on physical Kanban posters; they taped sheets of paper with Epics, Features, and Stories to the wall and connected them with yarn. Later they moved back to using the tool, working in pairs to convert their “analog” collaboration work into User Stories within the tool. When refining Acceptance Criteria, they reverted to projecting Agile Central contents on the wall so they could collectively understand how to write acceptance criteria. They recognized the hazards of doing this for too long; once pending business rules are finalized, this team will go back to small groups working on remaining User Stories both at the marker board and in the electronic tool. Most critically, when developers come on board, this PO team will be able to spot when tool use dampens progress rather than supporting it.

PO teams learn to resist the perfection paralysis trap; as they iterate through the Features and Stories, they move towards perfection, but with the right timing. Upon approaching each iteration, when it’s time to start developing, you do want your Epic --> Feature --> Story --> Task[1] hierarchy worked out; early on, however, it’s going to be a bit messy because your understanding of the desired functionality still will be messy.

[1] Many agile proponents hate this breakdown. It’s from the Scaled Agile Framework (SAFe), which uses the term “Epic” to mean something big above a Feature. Our case studies for this paper were projects using SAFe, and their tool (Agile Central) also uses ‘epics’ this way; we were stuck. Our apologies to the agile community at large.

Another common problem is that tools can provide lovely, detailed metrics, graphics, charts, and reports. These measures look impressive and statistically valid, but can drive destructive behaviors. Productivity metrics can be the most damaging. Teams will spend inordinate amounts of time ensuring their cumulative story points meet capacity levels and velocity requirements, rather than simply getting the work done. A key failing of the conventional Software Development Life Cycle (SDLC) world is jumping through hoops to prove how accurate initial schedule and cost estimates were – even when we know at the outset that their accuracy is suspect.

“There are three kinds of lies: lies, [infernal] lies, and statistics.”
~ attributed to Benjamin Disraeli

The metrics capabilities of some of these tools tempt some teams to use them as proxy time sheets, a way to prove how busily productive individuals are. This worsens the temptation to split stories in-sprint, and also distracts from the purpose of the agile management tool. Worse, it detracts from the benefits of agile approaches by putting the focus back on tracking nits to validate a schedule or budget, and taking the focus away from delivering useful product.

Setting points and velocity quotas is the same bad behavior applied to agile iterations. Under this kind of pressure, teams will resist decomposing stories to the atomic level because the reward will be having more stories pushed into the iteration. Larger, more opaque stories become preferred to provide some protection from this “velocity abuse” by customers. Root causes for these behaviors must be fixed to enable effective story breakdown cycles.

During my appraisal and auditing years, I frequently saw organizations rely entirely on a new tool to fix their broken internal methods. Since they hadn’t figured out what made them succeed or fail, they were relying completely on whoever developed that tool to address their unique problems. They rarely considered that some tool companies were more interested in making sales than in fixing their clients’ problems. Those organizations lacked the context to use the parts of the tool that helped, and to avoid the parts that were useless or broken. One unkind comment I often heard about such groups was, “a fool with a tool is still a fool…”

Rather than blindly adopt a tool, teams should master the art of lean, agile, effective creation of value. Then they can evaluate whether tools are helping them effectively split stories, or distracting them into non-value-added complexity.



Story Splitting Model: Details and Examples

Recap of the Splitting Model Points

  1. Workflow steps – ensure each story represents a small, finite number of workflow steps
  2. Business rule variations – each variant on a business rule may call for a separate story
  3. Major effort – apply obvious splits, defer large difficult stories for possible splitting later
  4. Simple/complex – look for the simple core of a story, add complexities as related stories
  5. Variations in data – each variation may call for an additional, small slice of functionality
  6. Data entry methods – seek the basic, valuable data entry method, split the rest into follow-on stories
  7. Deferred system qualities – defer or split out “ilities” into separate stories, if reasonable to do so
  8. Operations (example: Create, Read, Update, Delete, or CRUD) – consider each subset of functionality as a potential story
  9. Use-case scenarios – does viewpoint analysis boil out different functionality based on who or what interacts with the story’s core intended functionality?
  10. Break-out spike - if still unsure, run a research spike

1 and 2 - Workflow Steps and Business Rule Variations

The first two breakdown areas of consideration (workflow steps and business rule variations) are often related. Workflow steps tend to reflect various business rules that a system has to carry out. Variants in business rules, and the workflow steps needed to embody those rules, often are the first User Story breakout considerations. They also are usually the most obvious.

With workflow steps, consider whether each story represents only a few steps that logically are clustered. For both business rules and workflow steps, consider who or what is impacted by the business rules and workflow -- including other systems. Each interaction or dependency can provide insights into how the system may evolve, and therefore how to break out stories. Failure to get down to atomic stories sometimes results from having too many interests represented in a single user story.

3 - Major Effort

The “Major Effort” principle is less a splitting technique than it is a discipline of holding off working on stories that are not atomic. Tackling smaller efforts that have been broken down to atomic levels allows time to understand the bigger ones. This tactic is referred to as holding off work until the “last responsible moment” so that it doesn’t have to be scrapped and done over.

The risk of scrap, rework, and technical debt can be massive, and accounts for many systems development efforts failing. Large, difficult-to-split stories can represent major blocks of effort. These represent unknowns, unresolved dependencies, and hidden design needs requiring “thought time” to resolve them. They also represent risk, as tackling a large block of problems[1] generates high rework and technical debt. Such complex problems obviously are not atomic; they often wind up containing multiple Features, not just multiple Stories.

The Major Effort situation also impacts how Backlog Grooming is done. People try to solve huge problems all up front to avoid waste and rework later. But they can exhaust themselves and create difficult-to-maintain, difficult-to-use suites of complex stories. Working on smaller stories while creating throwaway prototypes can give better insight into the system while developing architecture and design knowledge.

[1] Referred to as a “Big Ball of Mud” by Foote and Yoder, https://www.laputan.org/mud/

4 - Simple/Complex

Stories often have a simple, core functionality that provides significant value. Additional parameters or branches to that functionality may be valuable, but are much more complex and difficult to develop. Often we see a classic Pareto’s Law split – 80% of the value can be developed with 20% of the effort. When that remaining 20% of value could be put off until later, you have a strong candidate for a story split.

5 and 6 - Variations in Data, Variations in Data Methods

When evaluating a User Story, the team’s design principles will drive towards using methods to perform similar operations on different kinds of data. Splitting that story to work on one type of data, and successively adding other types in separate stories, will allow for unit and regression testing, as well as encouraging good code architecture.

When a story implies the same data will be acted upon in slightly different ways, then you have another opportunity to split that story. You may be creating variations on functionality, or developing separate functionality to manage the data.
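As a small, hypothetical sketch of splitting by data variation (the names and formats here are ours, purely for illustration), the first story might deliver only one output format, with other formats deferred to follow-on stories:

```python
import csv
import io

def export_records_to_csv(records: list[dict]) -> str:
    """Story 1: export a list of records as CSV text."""
    if not records:
        return ""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)
    return buffer.getvalue()

# Follow-on stories (deliberately not implemented yet):
#   export_records_to_json(records)
#   export_records_to_xml(records)

print(export_records_to_csv([{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]))
```

Each added format then gets its own small slice, its own tests, and its own acceptance.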

7 - Splitting Based on Deferred System Qualities, or "ilities"

User Stories and Requirements deal with two layers of attributes, sometimes called “the ilities” because most of them share that ending – as in “functionality,” “security,” “usability,” or “maintainability.” One layer deals with the system’s abilities other than functionality -- non-functional requirements -- which is where we use terms such as “recoverability” and “maintainability.” The other layer deals with the elaborated User Story itself, and includes such terms as “testability,” “clarity,” “stability,” “design-free,” and “complete/ready.” A key attribute of a Story is whether it has been broken down to the smallest chunk of valuable functionality, which we here refer to as “atomic.”[2]

Breaking down stories based on “ilities” relates to the 7th step in the SAFe list, deferring system qualities/performance aspects. With this splitting approach, the core functionality of a story may be developed under one broken-out User Story, with other attributes (such as error trapping) added in a subsequent story. (This approach can be similar to breaking out stories based on deferred complexity, as some “ilities” represent added sophistication.)

Using non-functional requirement checklists can help remember what the “ilities” are, but beware of some common risks:

  1. Some “ilities” (such as Security and Maintainability) are difficult and expensive to “bolt on” in later iterations, rather than building them into the product up front. Using non-functional requirements as a basis of splitting requires an understanding of tradeoffs. SAFe referring to them as "deferred system qualities" is misleading; you would be foolish to defer a good many of them. Defer with knowledge and thought. For example, some organizations will put off some regulatory requirements because they just seem to be a bother. When this catches up to them, however, they will expend iteration after iteration playing catch-up; little new functionality gets created, sponsors get upset, and it becomes a drag on morale.
  2. Splitting based on “ilities” too frequently can lead to laziness during elaboration. “Don’t worry about those, we always tack them on later,” is a risky practice.
  3. Following a checklist of “ilities” for every User Story may ensure they are considered and built in. But using such a checklist frequently can create “confirmation bias,” a tendency to make hasty splitting decisions to justify the creation of the checklist. The checklist is only a tool; it’s not a mandate, and it’s not an entity to be appeased.

[2] Wikipedia has a long list of “ilities;” my coaching team found that list to be daunting, and has created a much shorter list of key non-functional requirements as a job aid.

Splitting on Operational Lines: Stepping Through an Example

Let’s continue with our Create, Read, Update and Delete (CRUD) interface example. We tend to take the Read part for granted, but that is usually the first step in creating a user interface: designing a usable screen to view data as it is being read or manipulated. Laying out the screen in order to read the retrieved data, along with the ability to find and display that data, may merit a separate User Story.

Often the layout, look, and feel of one of the CRUD interfaces is inherited by all the rest. This creates a consistent User experience, making the system more intuitive. Developers can fall into the trap of developing “a screen,” however, rather than focusing on the functionality under the interface. This “screen think” is one way we wind up with compound User Stories; while developing that screen, we actually are developing four (or more) distinct pieces of functionality that happen to use it.

The ability to select an option to enter new information is a separate piece of functionality that could be developed, tested, and demonstrated separately. When demonstrating to the customer what was developed for Create functionality, the product being shown is not “a screen;” it is the capability to add data to storage without corrupting, duplicating, overwriting, or unintentionally damaging existing data. This may also include data entry validation and error trapping to ensure the new data conforms to basic rules. That is plenty to start with. Acceptable interface design matters, but should not be confused with the functionality.

When editing data already in the data store, some of the same validation rules could be re-applied from the code used to create a new entry. That’s economical coding, but it should be done separately to avoid hasty errors and wasted time hunting down unnecessary bugs. Additional rules for record locking and de-duplication have to be coded if the database environment does not natively provide such protections. If it turns out that the “use case” of the system needs such protections added to the Create functionality, then that could be another User Story. It should be coded after testing the Update functionality.

The Delete functionality appears to be the simplest. If one is creating absolutely primitive, non-validated CRUD, it may seem OK to leave it in a single User Story with Update. If, however, you include additional functionality such as soft deletes, “undelete/recover” capability, complex search-and-remove through dependent data stores, context-specific authority to delete (as opposed to blanket authority), context-specific criteria on whether data may be deleted at all, and context-sensitive help, then it might be best to split Delete out into a separate story. (Some of these sub-functionalities relate to Usability, one of the non-functional requirements mentioned in the Introduction.)

If there is enough of this supporting sub-functionality, it may call for splitting the User Story into even thinner vertical slices. There would be an implied progression:

  • Core Delete functionality is developed and unit tested, including checks on Authority to Delete and system contexts prohibiting the delete
  • User Interface developed (including 508 compliance); calls to Delete function unit tested
  • Feedback and ‘next navigation’ steps once Delete is successful or disallowed; unit tested
  • Cautionary prompts prior to Delete; unit tested and core Delete functionality regression tested
  • Context-sensitive help to inform users of why a Delete is disallowed, such as user authorization, missed prerequisite steps, or vestigial dependencies in other data.

In this example, a simple “Delete” functionality could reasonably be thin-sliced into five small, separate User Stories. For teams that practice collective ownership, pair work, and “swarming,” these five atomic stories could be completed faster than they could be by a single individual. The quality would be higher, the code architecture could be improved, defects and rework could diminish, and the overall value-for-effort could increase.
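To make the first slice of that progression concrete, here is a rough sketch (names, rules, and data are hypothetical) of core Delete functionality with an authority check and a context prohibition; soft deletes, cautionary prompts, and context-sensitive help would arrive in the later slices:

```python
records = {101: {"owner": "ada", "locked": False}}  # stand-in for a real data store

class DeleteNotAllowed(Exception):
    pass

def delete_record(record_id: int, requesting_user: str) -> None:
    record = records.get(record_id)
    if record is None:
        raise DeleteNotAllowed("record does not exist")
    if record["owner"] != requesting_user:      # authority to delete
        raise DeleteNotAllowed("user lacks authority to delete this record")
    if record["locked"]:                        # system context prohibiting the delete
        raise DeleteNotAllowed("record is locked by another process")
    del records[record_id]

# Unit-testable on its own, before any user interface exists:
delete_record(101, "ada")
assert 101 not in records
```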

9 - Use Case Scenarios and Acceptance Criteria

Acceptance criteria help provide specific, measurable conditions under which a piece of functionality will be accepted. They also help clarify the situations – Use Cases – that the story is addressing. While defining acceptance criteria, the need to split a given story can become clear.

The software community has realized over time that the real problems with requirements specifications were that they were too complex, too inflexible, and generally created too soon. They also tended to wander into Design too often. Acceptance criteria may be as stringent as good requirements specifications, but they are more compact and created “just in time, just enough” to allow for emerging needs, shifting environments, and evolving understanding of the system.

Acceptance Criteria provide a clear indicator of how well a User Story has evolved. Remember that acceptance criteria have the following general format:

Scenario n

Given <technical or business situation>

When <user action or technical activity>

  Or <variant on user action or technical activity>

Then <system behavior>

  And <constraint, such as performance, accuracy, usability>

  And <secondary system behavior>
As you can see above, the language of fully elaborated acceptance criteria is similar to an outline for developing tests. The more thorough the acceptance criteria, the more hints they can give as to whether a User Story may be compound and need to be split. In the example above, the line reading And <secondary system behavior> may be a hint that a new User Story is in order.

To look for such hints in the Acceptance Criteria, run through the following checks:

□ Do the scenarios suggest different viewpoints or approaches to the same piece of functionality, or do they suggest different subsets of functionality?

□ Are the scenarios clearly delineating single specific actions/interactions, or do they suggest compound actions?

□ Do the “Given” and “When” statements identify specific, non-compound conditions and actions taking place?

□ Can a single test case (with concise, normalized test data) exercise all the parameters within the scenarios?

□ Do the “Then” statements include a limited range of functional outcomes? (The range of non-functional outcomes such as “how fast,” “how accurate,” et cetera will vary by User Story.) Evaluate whether <and> and <or> conditions merit separate tests before being integrated.

Striving for well-defined Acceptance Criteria as early as possible will make the need to decompose Stories clear earlier, and will make breaking out separate stories easier to do as work progresses.

The following is an example of acceptance criteria written about a small table needing to have data added to it, modified, or deleted; it is one of the tables mentioned in our earlier Product Owner team case study. As you read through it, consider your impressions regarding whether it represents one, or more, User Stories.

Do you think this set of acceptance criteria represents one story, or three? Or perhaps fifteen? The “And” statements for each of the “Then” sections look daunting, and seem to imply separate user stories. At closer look, however, most of the “And” statements are identical between the Scenarios. What do you think the PO team decided? One story? Or Scenarios 1 and 2 in one story, and Scenario 3 in a separate story?

Our PO team decided that three separate stories would be needed, later needing four additional ones for the validation functionality and one for a logical branch in the “delete” scenario. They decided for now to leave all the functionality in a single story long enough for them to flesh out the various Scenarios. Once the scenarios are harmonized, de-duplicated, and otherwise tweaked, it will be cleaner to break the story out into separate stories shortly before the iteration where they will be developed.

The Decision Process in our Case Study

In the process of creating the above Acceptance Criteria, the PO and business team made several decisions:

  • The original step in story definition had been to treat this as a Technical Story to create the tables. The logic was that data entry would happen very infrequently (“slightly more often than never,” in their words) and could be done manually by Operations and Maintenance (O&M) personnel.
  • The second step in the evolution of this story was to treat this as a User Story, and to leave all these scenarios together in that one story while the POs considered what functionality is needed to manage the manipulation of these tables. They would split the story, but later. The acceptance criteria shown above come from that part of the elaboration.
  • The third move was to revert to the original idea that this is a technical story. Since updates will happen rarely, and the project budget is needed to cover other priorities, no User Interface will be created to support table maintenance; changes will be done entirely via O&M support. There will be no immediate entry validation, no error messaging, nothing of that sort. The result was to demote the table management to tasks.
  • Later on, these tasks will be folded into a “Data Pull” User Story, which would describe how the tables would support data requests from an existing data repository.
  • Finally, the PO team decided that a post-data-pull validation routine will be written later, under a separate User Story. Many of the acceptance criteria were moved into “placeholder stories,” which will support that data validation.

Demoting several stories to tasks worried the PO team at first, since they didn’t believe they were allowed to create tasks in the Agile Lifecycle Management tool. They then realized that they were not creating and assigning tasks; they were just describing business and technical constraints on a service call that will be needed. They went ahead with their decision to leave the table definitions in a User Story as a placeholder for now and delete the acceptance criteria. Once the “pull” story is defined later, the placeholder stories will be deleted. Creating the Acceptance Criteria was not wasted effort; it defined key attributes of the tables, how they must be populated, and some elements of how they would be used in the Pull functionality.

In considering non-functional requirements, the PO team had realized that real-time data validation while entering data into these data sets could trigger numerous search activities on the main data store. That would require new interfaces, and would place additional performance loads on the main data store system. The PO team made a design decision favoring simplicity: validate the contents of the table using daily after-hours checks of the pulled data. This functionality will be defined in a separate story triggered by the data pull story.
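A rough sketch of that deferred, after-hours approach might look like the following (all names, fields, and rules here are hypothetical, invented for illustration): rather than validating each manual entry in real time against the main data store, a nightly job checks the small reference table against the most recent data pull and reports findings for O&M review.

```python
from datetime import datetime

def nightly_validation(reference_table: dict[str, str], pulled_codes: set[str]) -> list[str]:
    """Return human-readable findings for the O&M team to review the next morning."""
    findings = []
    for code, description in reference_table.items():
        if code not in pulled_codes:
            findings.append(f"{code}: not present in the last data pull")
        if not description.strip():
            findings.append(f"{code}: missing description")
    return findings

# Example run, as the scheduled job might execute after hours:
reference = {"A1": "Active account", "Z9": ""}
pulled = {"A1", "B2"}
print(datetime.now().isoformat(), nightly_validation(reference, pulled))
```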

10 - Technical Spikes

Running technical spikes can help when the developers and Product Owners are not sure how the business needs match up with the potential technical solutions. Technical spikes can help boil out pieces of functionality bit by bit. They may involve some trial-and-error, and some work may wind up discarded. This is not necessarily waste; it is often the necessary process of taking a complex real world and making it work in a complex digital world.

There are entire textbooks written about exploratory engineering, expendable versus incremental prototyping, elicitation techniques, and many other facets of technical spikes. We won’t cover them all here. We will recommend that you take care: prohibit technical spikes from being used as cover for blowing off analysis and design work, and from defaulting to hack-and-ship programming.

In Conclusion

Once a Development team (which includes the Product Owner) masters the art of decomposing work into stories, its members will find that atomic decomposition allows rapid development and well-contained testing. Atomic-level stories also promote solid development techniques that help prevent defect escapes. A by-product of these benefits is that refactoring is simpler and faster, allowing teams to improve the robustness and maintainability of their work and lowering the overall costs of ownership of a system. Therefore, taking User Stories and Tech Stories to this level is an important skill to support the quality, cost, and value of developed systems.

[This article talks about Atomic Stories' benefits and risks. Michael Church's blog post goes a bit deeper into the potential risks: https://michaelochurch.wordpress.com/2015/06/06/why-agile-and-especially-scrum-are-terrible/]

References

Scaled Agile Framework, Splitting Stories - https://v4.scaledagileframework.com/story/

Alistair Cockburn, Elephant Carpaccio -- https://alistair.cockburn.us/Elephant+carpaccio

Agile For All, How to Split a User Story -- https://agileforall.com/resources/how-to-split-a-user-story/

Agile for All, Cynefin and Story Splitting -- https://agileforall.com/cynefin-and-story-splitting/

George Dinwiddie, “If you don’t automate acceptance tests?” https://blog.gdinwiddie.com/2009/06/17/if-you-dont-automate-acceptance-tests/

Wikipedia, “Non-functional requirement,” https://en.wikipedia.org/wiki/Non-functional_requirement

Gene Kim, Kevin Behr, George Spafford; The Phoenix Project; IT Revolution Press, October 2014

George Leonard; Mastery, Plume Publishing, February 1992.

Kent Beck; Extreme Programming Explained:?Embrace Change; 2nd Edition; Addison-Wesley, November 2004.

Images

Three sombreros images purchased from Shutterstock, usage outside of this white paper is prohibited.

Pointing fingers image used under Creative Commons license

Tiki mug image created by Achim Schleuning, downloaded from Wikimedia Commons

All other graphics are original works by Shawn Presson, reuse permitted with attribution.

