What Are We Really Testing?

Ever paused to ask yourself what exactly we are testing when we run our scripts and hit that 'execute' button?

This week, we're peeling back the layers to explore the heart of testing. Beyond finding bugs or checking boxes, testing goes deeper: it's about understanding behaviors, predicting failures, and ultimately delivering better experiences. Join us as we uncover the true purpose behind those green checkmarks and red flags, and rethink what quality really means in software development.


News

1. Agile Traceability: Connecting the Dots Without Slowing Down – Part 1

In part 1 of this blog series, Ilam Padmanabhan explores how to maintain traceability in Agile without sacrificing speed and flexibility. He highlights the importance of clear connections between requirements, code, and testing to manage dependencies, ensure compliance, and reduce technical debt in dev environments. Stay tuned for practical insights in part two!

2. Microservices Testing: Feature Flags vs. Preview Environments

Arjun Iyer discusses the pros and cons of feature flags and preview environments for testing microservices rollouts. He explains how combining both methods helps balance early bug detection with controlled production releases, ensuring better reliability without compromising speed.
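To make the trade-off concrete, here is a minimal Java sketch of a feature-flag gate; FlagClient, the flag name, and the pricing logic are hypothetical stand-ins, not from Iyer's article. The same build can serve a preview environment with the new path enabled while production stays on the legacy path until the rollout is widened.

// Minimal sketch of a feature-flag gate (hypothetical names, not from the
// article). The same build carries both code paths; the flag decides which
// one runs in each environment.
interface FlagClient {
    boolean isEnabled(String flagName);
}

record Cart(double total) {}
record Receipt(double charged) {}

class CheckoutService {
    private final FlagClient flags;

    CheckoutService(FlagClient flags) {
        this.flags = flags;
    }

    Receipt checkout(Cart cart) {
        // Evaluated at request time, so a rollout can be widened or rolled
        // back without a redeploy.
        if (flags.isEnabled("new-pricing-engine")) {
            return new Receipt(cart.total() * 0.95); // new path under test
        }
        return new Receipt(cart.total()); // proven legacy path
    }
}

public class FlagDemo {
    public static void main(String[] args) {
        FlagClient preview = flag -> true;  // preview environment: new path on
        FlagClient prod = flag -> false;    // production: legacy until verified
        System.out.println(new CheckoutService(preview).checkout(new Cart(100)).charged()); // 95.0
        System.out.println(new CheckoutService(prod).checkout(new Cart(100)).charged());    // 100.0
    }
}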

3. Measuring developer experience with the HEART Framework: A guide for platform engineers

Darren Evans explains how platform engineers can use Google's HEART framework to measure and improve developer experience (DX). He highlights key metrics like happiness, engagement, and retention to create a more productive and satisfying development environment.

4. Investigation of a Workbench UI Latency Issue

Hechao Li and Marcelo Mayworm share how Netflix's team investigated and resolved a latency issue affecting the JupyterLab UI within the Workbench platform. They walk through the debugging process, from UI performance to Linux kernel-level analysis.

5. Open Source and In-House: How Uber Optimizes LLM Training

In their recent blog, the Uber team shares insights into how they optimize LLM training by leveraging both open-source models like Meta's Llama 2 and in-house fine-tuning techniques. This approach enhances AI-driven services such as Uber Eats recommendations and customer support, ensuring scalability, speed, and efficiency at Uber's vast operational scale.


AI

6. How We Generated Millions of Content Annotations

Dana Puleo, Meghana Seetharam, and Kasia Drzyzga discuss how Spotify scaled its annotation platform to support ML and GenAI by automating workflows, integrating human expertise, and building flexible infrastructure. This approach increased annotation capacity tenfold, significantly improving model training efficiency and quality.

7. The technology behind Amazon’s GenAI-powered shopping assistant, Rufus

Trishul Chilimbi explains how Amazon's GenAI-powered shopping assistant, Rufus, uses a custom large language model, AWS chips, and retrieval-augmented generation (RAG) to deliver quick, accurate responses to customer questions. By leveraging reinforcement learning and advanced streaming architecture, Rufus continuously improves and enhances the online shopping experience.


Automation

8. Automated Exploratory Testing: Reality or Dream?

Gil Zur highlights the potential of AI to revolutionize automated exploratory testing by shifting the focus from manual test writing to dynamic AI-driven testing. He emphasizes the need for a mindset shift in automation architecture to fully leverage AI's capabilities and speed up test development without relying on traditional models like the Page Object Model.

9. What Are We Really Testing?

Gil Zilberfeld (TestinGil) emphasizes the importance of understanding what we are really testing in unit tests, pointing out that tests should provide valuable information rather than false confidence. He advocates for focusing on the purpose of each test, especially when testing simple code, to avoid wasting time on unnecessary or uninformative tests.
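As a hypothetical illustration of that point (our example, not Zilberfeld's): the first JUnit test below merely restates trivial code and can hardly fail for an interesting reason, while the second encodes a business rule a refactor could actually break.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class DiscountTest {

    // Low-value: mirrors the implementation and gives false confidence.
    @Test
    void getterReturnsWhatWasSet() {
        Discount d = new Discount(10);
        assertEquals(10, d.percent());
    }

    // Informative: encodes a business rule that a refactor could break,
    // so a failure here actually tells us something.
    @Test
    void discountIsCappedAtFiftyPercent() {
        Discount d = new Discount(80);
        assertEquals(50, d.percent());
    }
}

record Discount(int requested) {
    int percent() {
        return Math.min(requested, 50); // cap discounts at 50%
    }
}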

10. Best Practices for Designing a Test Automation Framework

Govinda S. outlines best practices for designing an effective test automation framework, emphasizing simplicity, modular design, and the importance of avoiding over-engineering. He advocates for leveraging design patterns, managing test data efficiently, and ensuring maintainability through regular reviews and adherence to principles like DRY and SOLID for optimal performance and scalability.
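As a small, hypothetical illustration of the DRY point (not from the article): a shared step scripts the login flow once, so a selector change is made in a single place rather than in every test.

// Stand-in for whatever driver the framework wraps (UI or API).
interface Driver {
    void type(String selector, String text);
    void click(String selector);
}

// Shared step: the login flow is scripted once (DRY). If a selector or the
// flow itself changes, only this method needs editing.
class AuthSteps {
    static void login(Driver d, String user, String password) {
        d.type("#user", user);
        d.type("#password", password);
        d.click("#submit");
    }
}

class CheckoutTest {
    void buyerCanCheckOut(Driver d) {
        AuthSteps.login(d, "buyer@example.test", "secret"); // reused step
        // ... checkout-specific actions and assertions go here
    }
}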

11. Make your Playwright tests run faster by using the Playwright API to wait

Optimize your Playwright tests by using specific waiting functions from the Playwright API instead of fixed-time waits. Mike Harris CITP FBCS highlights methods like waitFor(), waitForResponse(), waitForEvent(), and waitForFunction() to ensure tests wait only as long as necessary for conditions to be met, reducing flakiness and improving execution speed.
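Here is a compact sketch of that approach using Playwright's Java binding (the article discusses the Node.js API, but equivalent waits exist in Java; the URL and selectors below are hypothetical):

import com.microsoft.playwright.*;

public class ApiWaitsDemo {
    public static void main(String[] args) {
        try (Playwright playwright = Playwright.create()) {
            Browser browser = playwright.chromium().launch();
            Page page = browser.newPage();
            page.navigate("https://example.test/orders"); // hypothetical app

            // waitForResponse: pause only until the matching network call
            // completes, rather than sleeping for a fixed interval.
            Response response = page.waitForResponse("**/api/orders**",
                    () -> page.locator("#refresh").click());
            System.out.println("Orders API returned " + response.status());

            // Locator.waitFor: wait for the element to reach its default
            // "visible" state before the test proceeds.
            page.locator(".order-row").first().waitFor();

            // waitForFunction: wait for an arbitrary in-page condition.
            page.waitForFunction("() => document.querySelectorAll('.order-row').length > 0");

            browser.close();
        }
    }
}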


Tools

12. Playwright vs Puppeteer: Choosing the Right Browser Automation Tool in 2024

Compare Playwright and Puppeteer to find the right browser automation tool for your project. Shanika Wickramasinghe highlights their origins, key differences, and strengths, focusing on Playwright's cross-browser support and Puppeteer's Chrome integration. The article also addresses their performance in web scraping and community resources.

13. Assert with Grace: Custom Soft Assertions using AssertJ for Cleaner Code

Enhance your testing strategy with custom soft assertions using AssertJ. Elias Nogueira expands on creating custom assertions and introduces a custom soft assertion class, allowing for cleaner, more readable tests without sacrificing effectiveness. Learn how to implement this approach for better error management in your unit and integration tests.
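In the spirit of that approach, here is a minimal sketch (User, UserAssert, and isAdult() are hypothetical stand-ins, not Nogueira's code; in a real project each type would be a public class in its own file):

import org.assertj.core.api.AbstractAssert;
import org.assertj.core.api.AbstractSoftAssertions;

record User(String name, int age) {}

// Custom assertion: domain-specific checks read as one fluent sentence.
class UserAssert extends AbstractAssert<UserAssert, User> {
    public UserAssert(User actual) {
        super(actual, UserAssert.class);
    }

    public UserAssert isAdult() {
        isNotNull();
        if (actual.age() < 18) {
            failWithMessage("Expected user to be an adult but age was <%s>", actual.age());
        }
        return this;
    }
}

// Custom soft assertions: collect every failure instead of stopping at the first.
class UserSoftAssertions extends AbstractSoftAssertions {
    public UserAssert assertThat(User actual) {
        return proxy(UserAssert.class, User.class, actual);
    }
}

public class SoftAssertionDemo {
    public static void main(String[] args) {
        UserSoftAssertions softly = new UserSoftAssertions();
        softly.assertThat(new User("Ana", 16)).isAdult(); // fails, execution continues
        softly.assertThat(new User("Bo", 17)).isAdult();  // also fails, also recorded
        softly.assertAll(); // throws once, reporting both failures together
    }
}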


Other

14. Video: Leading the Charge in Software Quality with Zero Bug Revolution

Join Rupesh Garg, CEO & Chief Architect at Frugal Testing, and Kavya Nair as they discuss leading a zero-bug revolution in software quality. They share strategies to minimize defects, boost product reliability, and speed up delivery. Don't miss these insights; listen now!

15. Podcast: No Testers, No Problem?

In this episode of the Testing Peers podcast, Chris, Russell, Callum Akehurst-Ryan, and Leigh Rathbone dive into the provocative topic of "No Testers, No Problem." They explore industry trends regarding the shift away from hiring testers, discussing perceptions, responsibilities, and the cultural impact of these changes. Tune in now for their insights!


Events

16. Event: Testing United 2024

Join the Testing United Conference on November 7-8, 2024, at Palais Wertheim in Vienna, where the theme is "AI Augmented QA: Challenges, Opportunities, and Lessons from the Past." This conference brings together 18 international experts to share insights, engage in interactive workshops, and explore the transformative impact of AI on the testing community. Don't miss the chance to network with industry professionals and enhance your skills. Register now to secure your spot!
