Quality Engineering: Beyond the Traditional Pyramid
Pyramid of Giza from Above

Functional Face of the Quality Pyramid.

The “traditional” pyramid

When people talk about the “Quality Pyramid,” this is usually the picture they have in mind. There’s an issue, though: it doesn’t paint the full picture of what goes into a well-rounded, quality product.

Pyramids are 3 Dimensional

For starters, pyramids are actually three-dimensional objects with four faces beyond the foundation. See the top-down view of Giza at the head of this article. The foundation itself is a topic for another day.

Faces

If we take the top-down view of a pyramid, we can apply a theme for each face.

The four faces of a full quality pyramid: Functional, Performance, Usability, Security

Each face asks us a core set of questions:

Functional

  • Does it work? - The results of tests validate this. We have an idea of how it should work and validate that through gathering evidence.
  • Can we provide value? - This feedback comes from the customers. We might believe we’re building a good product that solves a problem for our customers, but until we get it in front of them and they actually use it, we never really know. Agile says: do this as soon as possible, and iterate on their feedback as quickly as possible.

Performance

  • Does it work quickly? - Speed is relatively easy to measure. Do you have the observability built in to know whether a user’s perceived performance issue is a problem with your system or with theirs? What’s fast enough for one person might not be fast enough for another.
  • Can we do a lot of it? - How many customers can you serve at once? Can you scale up to handle the load fast enough? Will performance degrade after an hour, a day, or a week?

Usability

  • Does it work easily? For everyone? - This requires a diversity of perspectives and abilities in evaluating your product. Just because it works for me, or even for everyone on my staff, doesn’t mean it works for everyone. Don’t skip accessibility features because you think they’re a burden. Making things easier for some makes them easier for everyone, and that increases engagement.
  • Can we ease customers’ burden? - Let’s say we have a good product, and it provides value. But if it’s just as hard, or takes just as long, as the old way, why would anyone switch? I’ve had a couple of products where we went through multiple full rebuild iterations to get this right. Listen to your user testing. Listen to your customers when they tell you. Notice when they don’t use your product, even if it’s free. It might not be a problem with the features; it might be your usability.

Security

  • Does it work safely? - Cybersecurity issues are always in the news, and the threat landscape is growing increasingly complex. Make the right choices from the beginning to inspire trust in your product: good choices on data management and the plethora of technical decisions that go into providing a secure product. Security, like usability, should not be a bolt-on or an afterthought. It should be a core pillar of your feature set from the beginning.
  • Can we inspire trust? - What’s your risk mitigation plan? How do we explain it to customers? How can we ease any fears they might have? Security is not about restricting access to information. It’s about giving the right people the right information at the right time, for the right reasons and duration.

Functional

Let’s start at the foundation of the Functional face. Think of the size of each slice as the volume, or count, of tests. Refer to the graphic earlier for reference.

Unit Tests

The foundation of functionality is validating the smallest units of code through unit tests. These are usually internal to the service being built, and the tests run as part of the development and build processes. In my experience, this is also where we can most easily gather code-coverage metrics and manage (cyclomatic) complexity.

This is where TDD practices can really pay off. Even if you don’t do strict TDD by writing the test first, having a means to pin down what goes into and what comes out of your code lets you test your expectations when things change unexpectedly. Having those tests ready for “future dev/QA” is a lifesaver. You’re not writing them for present you. You’re writing them for future you, or whoever has to maintain the code when you’re long gone.
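
To make that concrete, here is a minimal pytest sketch of the idea: pin down what goes in and what comes out so that future changes get caught. The discount function and its rules are invented for illustration, not taken from any real product.

    # test_discount.py - a minimal sketch; apply_discount and its rules are
    # hypothetical examples, not code from the article.
    import pytest

    def apply_discount(price: float, percent: float) -> float:
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_applies_percentage():
        assert apply_discount(100.0, 15) == 85.0

    def test_zero_discount_is_identity():
        assert apply_discount(42.5, 0) == 42.5

    def test_rejects_out_of_range_percent():
        # Pinning down the failure mode protects "future dev/QA" too.
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)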

Integration Tests

The subsequent layers of tests generally run after build time, once the service is deployed. These tests verify that the promises made by your public interfaces hold, and that your ingestion and exhaust handling works once it’s actually in the system.

API

With API tests, we generally think of the promise made by a given service. What calls can I make into it? What data does it accept, and how does it respond? If you ship an SDK, think about running through all the ways someone might want to integrate with it.
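
As a hedged illustration, an API contract check might look like the sketch below, written with pytest conventions and the requests library. The base URL, endpoint, payload shape, and status codes are all assumptions made up for the example.

    # test_orders_api.py - a minimal sketch of an API contract check; the host,
    # endpoint, and expected responses are placeholders, not a real service.
    import requests

    BASE_URL = "https://api.example.test"  # placeholder host

    def test_create_order_returns_expected_shape():
        payload = {"sku": "ABC-123", "quantity": 2}
        response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)
        assert response.status_code == 201
        body = response.json()
        # The promise: an id comes back and the echoed fields match what we sent.
        assert "id" in body
        assert body["sku"] == payload["sku"]
        assert body["quantity"] == payload["quantity"]

    def test_rejects_missing_sku():
        response = requests.post(f"{BASE_URL}/orders", json={"quantity": 2}, timeout=5)
        assert response.status_code == 400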

Inter-service

These tests overlap with API tests in that you are making calls into the service under test, but the objective is different. Where API tests are concerned with a specific interface or service, here you expand your scope and watch how multiple services interact within your workflow. This takes us to the ingestion and exhaust pipelines through the system, as in the sketch below.
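
A minimal sketch of that idea: push a record in at the ingestion edge, then poll a downstream service until the processed result appears in the exhaust. The service names, URLs, status codes, and timings are assumptions for illustration only.

    # test_event_pipeline.py - a minimal inter-service sketch; both services
    # and their endpoints are hypothetical placeholders.
    import time
    import requests

    INGEST_URL = "https://ingest.example.test/events"        # placeholder
    REPORTING_URL = "https://reporting.example.test/events"  # placeholder

    def test_event_flows_through_to_reporting():
        event = {"event_id": "evt-42", "type": "signup"}
        assert requests.post(INGEST_URL, json=event, timeout=5).status_code == 202

        # Poll the downstream service until the event shows up or we time out.
        deadline = time.time() + 30
        while time.time() < deadline:
            resp = requests.get(f"{REPORTING_URL}/{event['event_id']}", timeout=5)
            if resp.status_code == 200:
                assert resp.json()["type"] == "signup"
                return
            time.sleep(2)
        raise AssertionError("event never reached the reporting service")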

UI Tests

At this point, as we climb the face of the pyramid, we enter the realm of the most expensive tests to maintain from an automation perspective. The earlier tests are all relatively straightforward to maintain and carry roughly the same maintenance overhead as the code under test: make a change to the code, then a matching change to the test. UI tests, however, look at the whole journey of the user rather than the subset of functionality offered by a method, service, or subsystem. All the complexities of persona mapping, external variables, and compatibility start coming into play.

As the SDET, there is value in automating some of the UI cases. You want clear baselines for your users’ journeys through your application. However, these are the most fragile and expensive tests to maintain, so don’t overdo it. Find the balance that works for your team, budget, and timeline.
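
One way to automate such a baseline journey is sketched below using Playwright’s Python API. The URL, selectors, and credentials are placeholders, and your team’s tooling may well differ.

    # test_signin_journey.py - a minimal sketch of a baseline UI journey;
    # the app URL, selectors, and credentials are invented placeholders.
    from playwright.sync_api import sync_playwright

    def test_user_can_sign_in_and_reach_dashboard():
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.goto("https://app.example.test/login")  # placeholder URL
            page.fill("#email", "qa@example.test")
            page.fill("#password", "not-a-real-password")
            page.click("button[type=submit]")
            # The baseline: the journey ends on the dashboard with a greeting.
            page.wait_for_selector("text=Welcome")
            assert "/dashboard" in page.url
            browser.close()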

Exploratory or Ad-Hoc Testing

Finally, the smallest number of tests, because here we leave the plan. In every other layer, there is a known expectation of functional results when tests are executed. We also know that as we build ever more complex applications, the number of possible test permutations approaches infinity. This is where the expertise and instincts of the quality professional come in and are at their most valuable. Anyone who has been in this industry for some time has known that one tester, or handful of testers, who always seem to find the really cool and interesting bugs. Those bugs were likely not in the test plan. They were often found through seemingly unrelated observations or previous bugs that said, “Hey, I should look here,” or “I wonder what happens when I do this.” Do not underestimate the power of instinct in these conditions.

Performance

Performance Face of the Quality Pyramid.

Like unit tests on the Functional face, the core performance tests look at the basic unit of the system or service under test. How long does it take to do the thing once? What impact does that task have on the system at this basic level? What observability metrics should we be gathering? We want the system to respond within an acceptable window of time and resource usage; what that means for your product will differ from others. There is also validation that results stay consistent across subsequent runs, which of course gets more challenging as the complexity of the system increases.
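
A minimal sketch of such a single-transaction baseline: time the operation repeatedly and report the mean, standard deviation, and a rough p95. Here do_the_thing() is a placeholder for whatever unit of work your service actually performs.

    # perf_baseline.py - a minimal timing-baseline sketch; do_the_thing() is a
    # stand-in workload, not code from the article.
    import time
    import statistics

    def do_the_thing():
        # Placeholder for the operation under test.
        sum(i * i for i in range(100_000))

    def measure(runs: int = 50):
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            do_the_thing()
            samples.append((time.perf_counter() - start) * 1000)  # milliseconds
        samples.sort()
        p95 = samples[int(0.95 * (len(samples) - 1))]  # rough nearest-rank p95
        print(f"mean={statistics.mean(samples):.1f}ms "
              f"stdev={statistics.stdev(samples):.1f}ms p95={p95:.1f}ms")

    if __name__ == "__main__":
        measure()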

Load

Alright, now let’s start turning up the heat. How many users do we think we can handle? (And how many can we prove we can handle?) What happens when the system starts working across multiple parallel transactions? Is it reasonable to model that behavior to test our scaling hypothesis?

One of the things load testing can reveal, if you construct your tests with this in mind, is proper handling of parallelization. Are there threading issues? Are those threads safe from cross-contamination? You might not be able to simulate that in simple functional tests, so you may have to use load tests to get the system into the state where these cases manifest.
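
Locust is one common way to model that kind of parallel traffic. The sketch below is a minimal example; the endpoints, task weights, and think time are all assumptions made for illustration.

    # locustfile.py - a minimal Locust sketch; endpoints, weights, and
    # think time are placeholders, not details from the article.
    from locust import HttpUser, task, between

    class ShopperUser(HttpUser):
        wait_time = between(1, 3)  # seconds of "think time" between actions

        @task(3)
        def browse_catalog(self):
            self.client.get("/catalog")

        @task(1)
        def place_order(self):
            # Parallel writers are where threading and cross-contamination
            # issues tend to surface under load.
            self.client.post("/orders", json={"sku": "ABC-123", "quantity": 1})

    # Run with, for example: locust -f locustfile.py --host https://app.example.test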

Stress

What is our tip-over point? How does the system react when it starts failing because of the excessive load? How long does it take to recover or adapt?

Now we’re trying to break things. I mean really break things. While load testing can reveal functional issues, you also want to know what happens as the system degrades. With the right observations, you can start finding the bottlenecks and work to resolve them.

Endurance

How long can it handle being at various load levels? What happens if you have an unknown memory leak that slowly consumes resources until the container needs to be reset? Is your endurance limit a matter of time or of transaction count?

As with so many things, what a good endurance metric means will be specific to your product; however, similar products should operate within similar thresholds. Also, how disruptive is an intermittent failure when your container or service must be reset because it has run out of resources? Can you mitigate that with failover contingencies? Maybe your customer base is very cyclical, and you can do a reset during those downtimes with minimal disruption.
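
A minimal soak-monitoring sketch of that idea, assuming the workload runs in the same process and that psutil is available (both are assumptions for illustration): run for hours and watch resident memory for steady growth over the baseline.

    # soak_monitor.py - a minimal endurance (soak) sketch; the workload and
    # the psutil dependency are assumptions, not details from the article.
    import time
    import psutil

    def do_the_thing():
        # Placeholder for one unit of the workload under test.
        sum(i * i for i in range(10_000))

    def soak(duration_s: int = 4 * 60 * 60, sample_every_s: int = 60):
        proc = psutil.Process()            # this process
        baseline = proc.memory_info().rss  # resident memory at the start
        deadline = time.time() + duration_s
        while time.time() < deadline:
            next_sample = time.time() + sample_every_s
            while time.time() < next_sample:
                do_the_thing()
            rss = proc.memory_info().rss
            growth = (rss - baseline) / baseline * 100
            print(f"rss={rss / 1e6:.1f}MB (+{growth:.1f}% over baseline)")

    if __name__ == "__main__":
        soak()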

Security

Security Face of the Quality Pyramid.

Threat Modeling / Risk Assessment

Every team and every system should know what it’s doing with the data it consumes and emits. It should also understand how someone might be able to use the system in harmful ways. Performing threat modeling exercises and conducting a risk assessment of the results can help shape architecture and development decisions. Executing these tasks early can save the team potentially months of rework and debugging should a vulnerability be found and exploited. In any case, you will have a better understanding of what could go wrong and a plan in place for how to deal with it.

Pre-Release Security Screen / Vulnerability Testing

Threat models and risk assessments can and should be done by the team closest to the code, because they understand what they’re dealing with. As things get more interesting, however, it helps to bring in specialists. Generally you can start with linting tools that review code for common mistakes at check-in. Those working in this space should have full access and be available to your team to lift the security mindset for everyone.
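
As one hedged example of a check-in gate, a small wrapper can run a static security scanner over the source tree and fail the commit when findings are reported. The choice of bandit and the src/ layout are assumptions, not a prescription.

    # checkin_security_scan.py - a minimal sketch of a check-in gate; the
    # scanner (bandit) and source path are assumed examples.
    import subprocess
    import sys

    def run_security_lint(source_dir: str = "src") -> int:
        # bandit -r walks the tree and flags common insecure patterns;
        # a non-zero exit code here means findings were reported.
        result = subprocess.run(["bandit", "-r", source_dir])
        return result.returncode

    if __name__ == "__main__":
        sys.exit(run_security_lint())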

Penetration Testing / Ethical Hacking / Security Audit

Depending on the size of your staff, this is often best left to bonded professionals. They will have a well-rounded set of tools to really ensure you’re protecting yourself and your customers. As with anything, it takes time to develop expertise in this area, so sometimes it’s worth the price to pay the experts.

Usability

Usability Face of the Quality Pyramid.

Customer Feedback

Your customers are the people you are serving. It’s their problem that your product or service is meant to solve. What do they have to say about what you’re building? The key here is to keep this feedback loop tight. As soon as issues come in, have a good way to get them in front of your team and prioritized in the backlog. This also means gathering the right information for what you’re going to build in the future. It’s rare that you have only a single persona or single customer type.

Balancing competing priorities can be challenging, and while not every customer knows the best way to solve their problem, they will give you feedback that refines what you have learned so far and helps you continue to get better. Keep asking, “What could be better?”

User Testing

While direct customer feedback is often about the feature list and based on what you have already built, user testing is meant to address the in-progress work. Those involved at this stage are trusted beta testers: individuals who fill a particular niche or persona in your development plan and are willing to give their time to provide feedback on some of your experiments. There is as much to be learned from what is directly said as from what is not said.

What struggles do they have as they work through your mock-ups and wireframes? Do they take longer than you would expect to navigate the system? Are they clicking through the UI in unexpected ways? Are your “breadcrumbs” providing the path so they can “fail into the pit of success”? As a colleague of mine used to say, make the right thing easy or effortless, and make the wrong thing hard.

Availability

We talked about this earlier. Make sure your system is available when your customers need it. To be specific, when ALL your customers need it, to the best of your ability. If you are serving a particular segment of the population, that segment may break down into different tasks at different times. If every business is open 9 to 5, and every worker is working 9 to 5, when do you actually have customers? In EdTech, students might need your tools for both schoolwork and homework. And when are the teachers and administrators using it? Maybe after their own kids have gone to sleep. So, are you running all your big maintenance jobs after 10 PM, right as those adults need access to the system for their reporting and planning? Be aware of the demographics involved in your usage statistics.

Accessibility

Speaking of demographics, don’t just build what works for you. Employ, and get feedback from, people who look at the world differently, literally and figuratively. The obvious considerations are things like color blindness or total blindness, reading challenges, and auditory challenges. Keep in mind that making things more accessible even helps those within “normal” ability ranges. Yes, this requires additional resources. Yes, these are additional features of your product. So, yes, they should be included in your estimates and groomed in your backlog. And ask yourself: if they aren’t “Musts” in the MoSCoW, why not? What’s more important, and when can they be added? It’s better to factor these features into planning than to try to bolt them on later.
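
Some accessibility checks can even live in your automated suite. As a minimal sketch, the snippet below computes the WCAG 2.x contrast ratio for a foreground/background pair and compares it against the 4.5:1 AA threshold for body text; the sample colors are placeholders.

    # contrast_check.py - a minimal WCAG 2.x contrast-ratio sketch; the
    # example colors are placeholders, not from the article.

    def _linearize(channel: int) -> float:
        # Convert an 8-bit sRGB channel to linear light per the WCAG formula.
        c = channel / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(rgb) -> float:
        r, g, b = (_linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg) -> float:
        l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (l1 + 0.05) / (l2 + 0.05)

    if __name__ == "__main__":
        ratio = contrast_ratio((68, 68, 68), (255, 255, 255))  # dark grey on white
        print(f"{ratio:.2f}:1 -> {'passes' if ratio >= 4.5 else 'fails'} WCAG AA body text")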

“Other”

A/B testing is often stealth, or unmoderated, user testing. The key is having the right observability tools in place so that the data you gather paints a realistic picture of user behavior and isn’t over-sensitive to a particular sub-section of users.
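
Once that observability data is flowing, evaluating an A/B split usually comes down to asking whether the observed difference is larger than noise. A minimal two-proportion z-test sketch is shown below; the conversion counts are made-up illustration values.

    # ab_significance.py - a minimal two-proportion z-test sketch for an A/B
    # experiment; the counts in the example are invented for illustration.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal distribution.
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p_value

    if __name__ == "__main__":
        z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=151, n_b=2430)
        print(f"z = {z:.2f}, p = {p:.4f}")  # compare p against your chosen threshold, e.g. 0.05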

Branding, on the other hand, is often handled by a team completely unrelated to QA. They don’t always have a quality professional’s eye for how a new color palette or logo scheme will play when actually applied to the product at large. What happens to your game and its very particular palette when viewed through a high-contrast or color-blind filter, or to the logo when the fonts are at 400%? Not every QA has an eye for the artistry involved, but many do, and many more can be taught and become excellent at providing useful, constructive feedback. Don’t leave them out in the cold on this one.

Conclusion

When I’ve talked about this with those I’ve coached and mentored, it has helped them bring a broader perspective to the different aspects that go into shaping a quality product. These conversations have happened at every level, from onboarding fresh-out-of-school grads to seasoned tech leaders who just haven’t had a good view into what goes into the quality side of the house. However, there is one piece still missing that deserves its own conversation: the foundation of the whole enterprise, Quality Processes. That’s the unseen face, the foundation I hinted at in the beginning. This is how movements like DevOps (which eventually turn into DevSecTestOps) grow. That last part rounds out the enablement toolkit for your project, large or small.

Jay Smayda

Chief Chaos Tamer | Executive Consultant | Leader of Teams

8 months ago

This is a very comprehensive quality assurance program for software development. I have found that a missing element is often process oversight and monitoring, usually in the form of transaction monitoring or direct observation through surveillance or audit. It is a fantastic way to ensure that the processes that are put in place are integrated and being followed by all team members. #riskreduction #devsecops