Attack of the Clones, ChatGPT Test Data, WebDriver.io BiDi and more

Have you seen the new WebDriverIO Bidi Protocol Feature in action?

What can Copilot's earliest users teach us about Generative AI at work?

And how can ChatGPT be used as a test generation tool?

Find out in this newsletter recap of The Test Guild News Show for the week of November 19th.

So grab your favorite cup of coffee or tea and let's do this.

Visual Validation Testing Must Have

Are you looking to take your automation projects to the next level? Look no further than Applitools and their Visual AI Validation testing platform. Trust me; I've used it, and it is a game-changer. Plus, you can try it out by creating a free account using this special link now.

NightwatchJS 3.3.0

David Burns announced the release of NightwatchJS v3.3.0 last week.

The update improves how parallelism works in mobile testing and removes the upper limit on parallelism. It also updates a number of dependencies that NightwatchJS uses internally, especially Selenium.
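The announcement doesn't spell out the exact settings, but if you want to experiment with parallel runs, Nightwatch's existing test_workers option in nightwatch.conf.js is the place to look. Here's a minimal, illustrative sketch rather than an official 3.3.0 upgrade guide:

```javascript
// nightwatch.conf.js: minimal sketch of enabling parallel test workers.
// The specific limits changed in v3.3.0 aren't detailed in the announcement,
// so treat these values as illustrative defaults.
module.exports = {
  src_folders: ['tests'],

  // Run test files in parallel; 'auto' sizes the worker pool to the available CPUs.
  test_workers: {
    enabled: true,
    workers: 'auto'
  },

  test_settings: {
    default: {
      desiredCapabilities: {
        browserName: 'chrome'
      }
    }
  }
};
```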

You can get the latest goodness from this project, including updates to Selenium Manager. So if you're using NightwatchJS, it's highly recommended that you update to the latest version as soon as possible.

WebdriverIO Harnesses the Powerful Capabilities of the New WebDriver BiDi Protocol

Christian Bromann recently announced that WebdriverIO now harnesses the powerful capabilities of the new WebDriver BiDi protocol.

He shared a video that demonstrates using Google Maps as a playground for this innovative feature. Users can click a button to change their geolocation to anywhere in the world – in this case, the Pyramids of Giza. This is achieved through a new emulate command in the browser API, which allows users to set specific geolocation coordinates.

Users can also alter the color scheme of websites they visit, switch between light and dark modes, and even set custom user agents. This means that the browser can mimic different devices or operating systems, offering a more tailored browsing experience.

An exciting aspect of this technology is its ability to emulate online or offline status without actually disconnecting the browser from the internet. This feature is particularly useful for developers testing applications' behavior under different network conditions.
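Based on Christian's demo, here's a rough sketch of what driving these emulation features from a WebdriverIO test could look like. The scope names follow the public emulate docs, but double-check them against the WebdriverIO version you have installed:

```javascript
// Rough sketch of WebdriverIO's BiDi-backed emulate command.
// Requires a WebDriver BiDi-capable session; option names may vary by version.
describe('WebDriver BiDi emulation', () => {
  it('fakes geolocation, color scheme, and network status', async () => {
    // Pretend the browser is at the Pyramids of Giza
    await browser.emulate('geolocation', {
      latitude: 29.9792,
      longitude: 31.1342,
      accuracy: 100
    });

    // Force sites into dark mode
    await browser.emulate('colorScheme', 'dark');

    // Report the browser as offline without actually dropping the connection
    await browser.emulate('onLine', false);

    await browser.url('https://www.google.com/maps');
  });
});
```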

AI for Playwright (Attack of the Clone)

Last week I talked about a cool project that uses AI for Playwright, but it kind of stirred up a little controversy.

Someone contacted me later to point me to this next article that goes over something you need to know about if you tried that other project last week.

In this post, Todd McNeal talks about how their project was cloned and then reused without proper attribution. This is an example of a challenge I think many folks in the tech space face. The ZeroStep team experienced the highs and lows of launching a new project only to see it cloned, according to this article, within 48 hours.

If you don't know, ZeroStep is an AI-based library designed to enhance the Playwright test framework. It was launched a few weeks ago and marks a significant step toward AI-assisted testing.

ZeroStep is a unique combination of an open-source JavaScript library and a proprietary backend. It uses an AI interpreter to convert plain-text prompts into actual commands for browser automation and testing. This technology, initially part of their low-code testing platform Reflect, was spun out to reach a broader audience of developers.
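For context, a ZeroStep-powered Playwright test looks roughly like the sketch below. The URL and prompts are hypothetical, and you should verify the import path and ai() signature against the current @zerostep/playwright docs:

```javascript
// Sketch of AI-assisted Playwright steps with ZeroStep (hypothetical site and prompts).
import { test, expect } from '@playwright/test';
import { ai } from '@zerostep/playwright';

test('log in using plain-English steps', async ({ page }) => {
  await page.goto('https://example.com');

  // Each ai() call sends the prompt to ZeroStep's backend, which translates it
  // into concrete browser actions against the current page.
  await ai('Click the "Sign in" link', { page, test });
  await ai('Type "testguild" into the username field', { page, test });

  // ai() can also answer questions about the page and return the result.
  const heading = await ai('What is the text of the main heading?', { page, test });
  expect(heading).toBeTruthy();
});
```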

So what happened?

Well, just a few days after the launch of ZeroStep, the team discovered a Reddit post by a user promoting a project similar to theirs. The clone not only replicated the functionality of ZeroStep but also copied portions of its README and marketing copy, presenting it as an original creation.

As you know, this is problematic.

The JavaScript library of ZeroStep is open-sourced under the MIT license, allowing for legitimate forks and modifications. But Todd and the team believe this person's intent was not to provide an alternative, but to co-opt the ZeroStep project and market it as their own.

This raises significant ethical questions about the boundaries of open-source licensing and the responsibility developers have to respect original creations. The team remains committed to evolving ZeroStep and continues to invite developers to try their innovative solution for AI-based testing in Playwright.

So thank you, Todd, for this innovation. If you haven't tried it, definitely check it out using the link in that first comment down below and really level up your Playwright tests with AI-assisted testing.

Copilot Special Report

One of the AI technologies I think is helping a lot of testers and automation engineers when they develop their scripts is Copilot.

So I found this article from Microsoft that goes over some other key aspects of how Copilot is really changing the landscape of productivity and creativity for developers and testers. It's a special report on what Copilot's early users can teach you about Generative AI at work, and one of the key findings was around productivity and efficiency.

A staggering 70% of Copilot users reported increased productivity, with 68% noting improved work quality, and users experienced a 29% increase in speed across tasks like searching, writing, and summarizing.

Copilot users were also 44% more accurate in cybersecurity tasks and 26% faster in identifying cyber threats.

ChatGPT for generating test data

Based on that last study by Microsoft, what are some examples of how Generative AI can assist testers?

Well, I found an article that highlights this key aspect for you as well, with an example of how it's done for test data generation with ChatGPT.

So from my past experience as a tester, I often found creating effective test data to be a challenging task. Luckily, testing expert Wayne Roseberry, who's been on the show before, recently shared a compelling success story on his blog post about leveraging AI, specifically ChatGPT for generating test data.

The Challenge: Generating Test Data for Duplicate Failure Detection

Wayne describes how he was developing a tool to detect duplicate failures in test automation, which required test data with similar but not identical content. Traditional methods of creating such data are tedious and time-consuming. This is where AI stepped in as a game changer.

AI to the Rescue: ChatGPT's Role

Wayne turned to ChatGPT, a large language model, to generate the needed test data. He provided the AI with news stories and asked it to rewrite them in various ways, maintaining the essence but altering the words, sentence structure, and grammar. The AI's output was impressively efficient and perfectly met the project's requirements.
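Wayne did this interactively through ChatGPT, but the same idea is easy to script. Below is a small, hypothetical sketch using the OpenAI Node SDK to churn out similar-but-not-identical variants of a seed document; the model name and prompt wording are my own illustrative choices, not Wayne's exact setup:

```javascript
// Hypothetical sketch: generate "similar but not identical" test documents by
// asking a model to rewrite a seed text several times.
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const seedStory =
  'The city council voted on Tuesday to expand the bike lane network downtown...';

async function generateVariants(text, count) {
  const variants = [];
  for (let i = 0; i < count; i++) {
    const response = await client.chat.completions.create({
      model: 'gpt-4o-mini', // illustrative model choice
      messages: [
        {
          role: 'user',
          content:
            'Rewrite the following story so it keeps the same meaning but uses ' +
            `different wording, sentence structure, and grammar:\n\n${text}`
        }
      ]
    });
    variants.push(response.choices[0].message.content);
  }
  return variants;
}

const testData = await generateVariants(seedStory, 5);
console.log(`Generated ${testData.length} variant documents`);
```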

Why AI Worked Well

  • Flexibility in Output: The AI-generated data was effective because the project didn't require highly precise or rigorously structured data.
  • Efficiency and Speed: ChatGPT produced the required content much faster than manual methods.
  • Low Cost of Verification: The data was easy to check for correctness, saving significant time and effort.

The Larger Implication

This experience underscores AI's potential to simplify and streamline the test data generation process. It highlights how AI can be a powerful tool in scenarios where the definition of correct output is broad, and the cost of verifying the output is low. Wayne's story is a testament to the practical applications of AI in software testing, paving the way for more innovative uses in the field.

One Performance Specialist's experience with software optimization

Laurence Tratt, a seasoned programmer, delves into the intricate world of optimization in his latest blog post, "Four Kinds of Optimisation." Laurence's extensive experience leads him to challenge common assumptions and offer a nuanced perspective on making software run faster and more efficiently.

The Fourfold Path to Optimization

  1. Use a Better Algorithm: This emphasizes the importance of understanding the context in which an algorithm operates. Laurence illustrates this with examples like bubble sort and selection sort, highlighting how a seemingly simple choice can profoundly affect performance.
  2. Use a Better Data Structure: The choice of data structures is crucial, and it is demonstrated with examples of existence checks in lists (see the sketch after this list). He suggests that the most effective optimizations often come from selecting the right data structure for the task at hand.
  3. Use a Lower-Level System: Tratt discusses the common approach of rewriting code in a lower-level language for performance gains. However, he cautions against overlooking more straightforward solutions, such as using different language implementations or compiler optimizations.
  4. Accept a Less Precise Solution: Finally, the post explores the trade-off between precision and performance. It notes that sometimes accepting a "good enough" solution can be more practical than striving for theoretical perfection.
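To make the data structure point concrete, here's the kind of difference Laurence is describing with existence checks: an array scan does linear work per lookup, while a Set does a hash lookup. The timings are illustrative and will vary by runtime and data size.

```javascript
// Illustrative comparison of existence checks:
// Array.prototype.includes() scans the list (O(n) per lookup),
// while Set.prototype.has() is roughly O(1) per lookup.
const ids = Array.from({ length: 100_000 }, (_, i) => `user-${i}`);
const idSet = new Set(ids);
const probes = Array.from({ length: 10_000 }, (_, i) => `user-${(i * 17) % 150_000}`);

console.time('array.includes');
let arrayHits = 0;
for (const p of probes) {
  if (ids.includes(p)) arrayHits++;
}
console.timeEnd('array.includes');

console.time('set.has');
let setHits = 0;
for (const p of probes) {
  if (idSet.has(p)) setHits++;
}
console.timeEnd('set.has');

console.log('Same answer either way:', arrayHits === setHits);
```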

The Balancing Act of Optimization

These insights reveal a delicate balance in software optimization. Laurence argues that while it's tempting to pursue complex solutions, often the most effective optimizations are the simplest ones. He also stresses the importance of understanding the broader context of the software system and the nature of the data it handles.

A Cautionary Note on Complexity

The blog post serves as a cautionary tale against overcomplicating optimization efforts. Tratt's experiences suggest that a deep understanding of the basics and a pragmatic approach often yield the best results. This perspective is particularly relevant in a world of ever-increasing software complexity and high-performance pressure.

Grafana Labs has acquired Asserts.ai

Grafana Labs announced that it has acquired Asserts.ai, a technology that promises to help users understand and interact with observability data, and the acquisition is set to enhance Grafana Cloud.

If you don't know, Asserts.ai's technology provides a contextual layer for Prometheus metrics, offering a set of alerts and dashboards for effective root cause analysis and quicker issue resolution. This integration is expected to significantly reduce the mean time to resolution of complex application problems.

OWASP TOP 10 for LLMs

Integrating Large Language Models (LLMs) into applications and systems is becoming increasingly prevalent. However, this advancement brings with it a host of cybersecurity challenges.

The good news is that I just noticed that The Open Web Application Security Project (OWASP) has released a comprehensive guide addressing the top 10 security risks specifically for LLM applications, a crucial resource for developers and cybersecurity professionals.

Key Vulnerabilities in LLM Applications

  1. Prompt Injection: This leading vulnerability involves attackers manipulating LLMs through crafted inputs, leading to unintended actions or data breaches.
  2. Insecure Output Handling: Trusting LLM outputs without validation can lead to security issues like Cross-Site Scripting (XSS) or remote code execution (see the sketch after this list).
  3. Training Data Poisoning: When training data is tampered with, it can introduce biases and affect the model's effectiveness, potentially harming the brand's reputation.
  4. Model Denial of Service: Attackers can manipulate model resource consumption, causing service degradation and incurring high costs.
  5. Supply Chain Vulnerabilities: Issues in training data, deployment platforms, and third-party solutions can lead to biased results and security breaches.
  6. Sensitive Information Disclosure: LLM applications can inadvertently leak sensitive data, violating user privacy.
  7. Insecure Plugin Design: Developing LLM plugins without proper controls can lead to harmful actions like remote code execution.
  8. Excessive Agency: Giving LLM systems too much functionality or independence can lead to unforeseen and potentially harmful results.
  9. Overreliance: Users overly trusting LLM outputs without verification can lead to spreading misinformation or legal consequences.
  10. Model Theft: Leakage or copying of the model can result in loss of competitive advantage.
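As a concrete example of item 2 above, insecure output handling, the sketch below shows the kind of defensive step OWASP has in mind: treat model output as untrusted input and encode it before it reaches the DOM. The escaping helper is a simplified illustration, not a complete defense.

```javascript
// Simplified illustration of handling LLM output defensively:
// never inject raw model text into HTML.
function escapeHtml(untrusted) {
  return untrusted
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

function renderAssistantReply(container, llmOutput) {
  // Unsafe: container.innerHTML = llmOutput -> any <script> tag or event handler
  // the model (or a prompt-injecting attacker) emits would execute as XSS.
  // Safer: encode first, or assign to textContent instead of innerHTML.
  container.innerHTML = escapeHtml(llmOutput);
}

// Usage (in a browser context):
// renderAssistantReply(document.querySelector('#chat'), modelResponseText);
```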

Automation Awesomeness Book Now Available in PDF Format

My new book, Automation Awesomeness: 260 Actionable Affirmations to Improve Your QA and Automation Testing Skills, is now in PDF format.

I hope to inspire you through the insights of some of the smartest testers I've had the privilege to interview over nine years on my Test Guild podcasts.

My aim is for you to glean an actionable tip, tool, best practice, or mindset every business day that you can apply to your daily software testing and career. Drawing from over 600 interviews conducted across nine years, I've distilled bite-sized, actionable daily advice for your benefit.

That's a Wrap

So that's it for this Test Guild News Show Newsletter edition.

I'm Joe Colantonio, and my mission is to help you succeed in creating end-to-end, full-stack DevSecOps automation awesomeness.

As always, test everything and keep the good. Cheers!

