The Rise of TestOps
TestOps for DevOps

How do the best-funded, most sophisticated software test teams on the planet test everything? They don’t. There is a new way to test that delivers great coverage, finds important bugs early, and ensures that core functionality works on every new release. This new way isn’t well described because it feels like a capitulation to some, but it is a direct response to the demands of delivering high-quality software that ships every month, week, even every day. The key to delivering modern software with new features, without breaking existing functionality, is “TestOps”.

“TestOps”, much like DevOps, is the convergence of testing with operations and development. Most testing today is too slow and unreliable to keep up with development and operations, even in the best of situations. Yes, DevOps often covers API and unit tests that quickly verify isolated portions of a program. Still, we all encounter major bugs and regressions in production software. What has been missing is quick, automated, end-to-end testing, where the entire product is exercised to make sure it all works together, and for the end user.

The key to successful TestOps is a combination of principles and, often, the application of artificial intelligence and machine learning. The principles outline what to test, and the application of AI finally makes test automation robust enough for operational testing.

The TestOps principles are:

  1. Only automate the easiest, most reliable, and most important tests.
  2. Execute them as often as possible.
  3. Alert to any failures.
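The three principles above can be sketched as a tiny suite runner. This is a minimal illustration, not a prescribed implementation: the check names (`check_homepage_loads`, `check_search_returns_results`) are hypothetical placeholders for whatever your few easy, critical checks turn out to be.

```python
from typing import Callable

# Principle 1: only the easiest, most reliable, most important checks.
# Each check is a plain function that raises AssertionError on failure.
def check_homepage_loads() -> None:
    assert True  # e.g. fetch the homepage and assert HTTP 200

def check_search_returns_results() -> None:
    assert True  # e.g. run a known query and assert non-empty results

SMOKE_SUITE: list[Callable[[], None]] = [
    check_homepage_loads,
    check_search_returns_results,
]

# Principle 2: execute as often as possible (every build, deploy, or cron tick).
def run_suite(suite: list[Callable[[], None]]) -> list[str]:
    """Run every check and return the names of the ones that failed."""
    failures = []
    for test in suite:
        try:
            test()
        except AssertionError:
            failures.append(test.__name__)
    return failures

# Principle 3: alert on any failure; an empty list means ship.
if __name__ == "__main__":
    failed = run_suite(SMOKE_SUITE)
    if failed:
        print(f"ALERT: failing TestOps checks: {failed}")  # hand off to email/SMS/paging here
```

The point of keeping the suite this small and boring is reliability: a failure in this list should always mean the product is broken, never that the test is flaky.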

Only the most important tests should be automated, but not to a fault. Modern TestOps teams automate only the ‘easy’ and important tests. If a test is difficult to automate, more resources will be spent on it that could have been spent on other test cases. Difficulty and complexity are the enemies of reliability, and TestOps test cases need to be extremely reliable because they will be executed often, and with high visibility across the development and operations teams. For example, if you are testing a shopping app and sign-in is obviously important but difficult to automate, it is far better to build very reliable tests for other aspects of the application that are easier to test. This can seem counterintuitive, but test reliability matters more than exactly what is tested, because flaky test failures randomize the entire team.

Like the unit and API tests written by developers, TestOps tests should run as often as possible, and by many parts of the development and operations teams within their own processes, to catch bugs as early as possible. Too often, on agile and modern app teams, test automation runs as an afterthought, a background process that only testers monitor. Ideally, TestOps tests can execute on developer machines, but at a minimum they should execute on every new official ‘build’ or deployment, including production for monitoring. TestOps tests should strive to run everywhere to get the maximum value from the test automation investment. Fewer tests, running often and reliably, is a great thing.

Gone are the days of traditional test result dashboards; no one has the time to look at them. Not even testers want to stare at grids of green and red boxes, donut charts, and meaningless percent-passed summaries. People only want to know what has broken that warrants blocking the next release. Because TestOps covers only key functionality, by definition when these tests fail the development and operations teams should want to know about it immediately. If a test failure doesn’t warrant a text message to the development lead, or waking up some operations folks, the test isn’t important enough to be included in TestOps. TestOps failures should be shared in real time with custom emails, text messages, or more formal escalations through tools like PagerDuty.
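The escalation step can be sketched as a small function that turns a list of failed check names into a page. The payload shape below follows the public PagerDuty Events API v2 (`routing_key`, `event_action`, `payload`), but the routing key, the monitor name, and the decision to treat every failure as `critical` are assumptions you would adapt to your own tooling.

```python
import json
import urllib.request

PAGERDUTY_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

def build_alert(failed_tests: list[str], routing_key: str) -> dict:
    """Build a PagerDuty Events API v2 payload for failing TestOps checks."""
    return {
        "routing_key": routing_key,       # your service's integration key
        "event_action": "trigger",
        "payload": {
            "summary": f"TestOps: {len(failed_tests)} critical check(s) failing: "
                       + ", ".join(failed_tests),
            "source": "testops-monitor",  # hypothetical monitor name
            "severity": "critical",       # every TestOps failure is page-worthy
        },
    }

def send_alert(failed_tests: list[str], routing_key: str) -> None:
    """POST the alert; a green run sends nothing at all."""
    if not failed_tests:
        return  # no dashboard, no noise, no page
    body = json.dumps(build_alert(failed_tests, routing_key)).encode()
    req = urllib.request.Request(
        PAGERDUTY_EVENTS_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire the page
```

Note the asymmetry: success produces no output anywhere, which is exactly the “no dashboards” posture; only a failure generates any signal.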

TestOps is only now a realistic option thanks to the introduction of AI into the world of testing. Most TestOps teams leverage AI for test development, test execution, or test verification. Traditional test automation is often hard-coded to specific user interface elements or specific user flows, so those scripts are fragile and break whenever the look or flow of the application changes. Worse, nearly every team deploys on multiple platforms: iOS, Android, desktop, web, even embedded devices. That requires a different script for every platform, and increased complexity for every test case. AI changes all that, making test execution more resilient across platforms, UX changes, even differences between staging environments and A/B flights. Even better, AI is enabling folks who haven’t spent years debugging Java or Python code and learning frameworks to write far more reliable tests, and to write them more quickly. It is difficult to make tests reliable enough for a TestOps world with traditional tooling; AI is enabling a new, smarter type of testing.

How does TestOps relate to traditional testing? First, TestOps tests most often can and should be written by skilled testers; TestOps is a new way to think about testing. If a team is starting from scratch, it’s likely best to focus on TestOps first. Many large-scale, high-visibility projects have shipped recently with only a TestOps approach. Teams with existing manual or automated tests should prioritize a TestOps approach now, and only after TestOps is deployed to some level of satisfaction return to automating lower-priority test cases, more difficult test cases, or manual test cases, and continue to run those in the background. It’s just not worth interrupting the latest agile sprint, or continuous deployment, for a lower-priority bug. Focus on TestOps first.

TestOps has organically appeared across large and small organizations over the past 18 months. Sometimes teams use the term; most often they simply realize this is the best way to test modern applications. If your team is capable of writing incredibly simple and reliable tests using traditional programming environments, frameworks, test infrastructure, and services, you are one of the lucky few; in fact, you are a unicorn. For everyone else, the good news is that AI-powered test automation, which even non-programmers can train, is a great leveling factor, and all of us can now quickly jump to AI-powered TestOps.

Note: test.ai is rolling out a Community Edition of our AI-based training system for TestOps. Connect with this new community and technology to bring TestOps to your team: https://www.test.ai/product/aitf

— Jason Arbon

Thomas No?

I will make your company the next leader in technical innovation | Software quality strategies and in-company training

4y

I think this is indeed how many teams should approach their testing. But it's important that you have the right people to execute this. You can preach it all you want, but it needs to be executed properly. That is not easy and takes time and tweaking. You need the right data and the right emotional skills to be able to reach your goal.


Thanks for the post, I found the points very well explained and also useful for me when explaining or discussing these concepts within the team. The only thing I didn't like is the cover picture with the test phase highlighted. I would label this phase of running the most critical regression tests as "checking". Testing is something different, and it's not a phase; the testing activity happens during the whole development lifecycle of any software feature.

Ram Malapati

Test Delivery Specialist | Engineering Excellence & Quality at Scale

4y

Great article, thanks for sharing. Often, a few tests are complex to automate but critical for the business or for release confidence. Any pointers on how to approach complex-to-automate but critical tests in a world where release frequency matters?

