Week 52 - Test Code Refactoring Unveiled, an Improvement to TDD Efficiency, and Large Language Models in Detecting Test Smells
Test Code Refactoring Unveiled: Where and How Does It Affect Test Code Quality and Effectiveness?
Refactoring has been widely investigated in relation to production code quality, yet little is known about how developers apply refactoring to test code. Specifically, there is still a lack of investigation into how developers typically refactor test code and what effects this has on test code quality and effectiveness. This paper presents a research agenda aimed at bridging this knowledge gap by investigating (1) whether test refactoring actually targets test classes affected by quality and effectiveness concerns and (2) the extent to which refactoring contributes to improving test code quality and effectiveness. We plan to conduct an exploratory mining software repository study to collect test refactoring data from open-source Java projects on GitHub and statistically analyze it in combination with quality metrics, test smells, and code/mutation coverage indicators. Furthermore, we will measure how refactoring operations impact the quality and effectiveness of test code.
See full paper via ResearchGate (last accessed 14 Dec, 2024)
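To make the object of study concrete, here is a minimal, hypothetical sketch (ours, not taken from the paper) of one refactoring such mining could surface: duplicated fixture setup extracted into a JUnit 5 @BeforeEach method. The test class and fixture contents are invented for illustration.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

// Hypothetical before/after: fixture construction that was duplicated in
// every test method (a common smell) extracted into a shared @BeforeEach,
// one of the fixture-related refactorings a study like this could mine.
class ListFixtureTest {

    private List<String> items;

    @BeforeEach
    void setUp() {
        // Before the refactoring, each test repeated these lines inline.
        items = new ArrayList<>();
        items.add("apple");
        items.add("banana");
    }

    @Test
    void sizeReflectsFixtureContents() {
        assertEquals(2, items.size());
    }

    @Test
    void removalShrinksTheList() {
        items.remove("apple");
        assertEquals(1, items.size());
    }
}
```

A refactoring like this leaves test behaviour unchanged while removing duplication, which is exactly why its effect on quality metrics and mutation coverage is worth measuring rather than assuming.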
Testability-Driven Development: An Improvement to the TDD Efficiency
Test-first development (TFD) is a software development approach in which automated tests are written before the actual code. TFD offers many benefits, such as improving code quality, reducing debugging time, and enabling easier refactoring. However, it also poses challenges and limitations, requiring more effort and time to write and maintain test cases, especially for large and complex projects. Refactoring for testability means improving the internal structure of source code to make it easier to test; it can reduce the cost and complexity of software testing and speed up the test-first life cycle. Measuring testability is a vital step before refactoring for it, as this provides a baseline for evaluating the current state of the software and identifying the areas that need improvement. This paper proposes a mathematical model for calculating class testability based on test effectiveness and effort, together with a machine-learning regression model that predicts testability from source code metrics. It also introduces a testability-driven development (TsDD) method that steers the TFD process toward developing testable code. TsDD focuses on improving testability and reducing testing costs by measuring testability frequently and refactoring to increase it, without running the program. Our testability prediction model has a mean squared error of 0.0311 and an R² score of 0.6285. We illustrate the usefulness of TsDD by applying it to 50 Java classes from three open-source projects, achieving an average testability improvement of 77.81% in these classes. Experts' manual evaluation confirms the potential of TsDD to accelerate the TDD process.
See full paper via ResearchGate (last accessed 14 Dec, 2024)
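The abstract does not spell the model out, so the following is only a hedged sketch of how a testability score combining effectiveness and effort could be formed; the symbols and the functional form are our assumptions, not the paper's equation.

```latex
% Hypothetical illustration only; the paper's actual model may differ.
% Testability of class c as test effectiveness gained per unit of effort:
T(c) = \frac{\mathrm{Effectiveness}(c)}{1 + \mathrm{Effort}(c)}
% Effectiveness(c): e.g., coverage or mutation score reached by the tests.
% Effort(c): e.g., normalized test size (test LOC, number of assertions).
% The +1 keeps the score finite when test effort approaches zero.
```

A regression model trained on static source code metrics can then approximate T(c) without executing any tests, which is what lets TsDD measure testability "frequently and without running the program."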
Evaluating Large Language Models in Detecting Test Smells
Test smells are coding issues that typically arise from inadequate practices, a lack of knowledge about effective testing, or deadline pressure to complete projects. Their presence can negatively impact the maintainability and reliability of software. While tools exist that use advanced static analysis or machine learning techniques to detect test smells, they often require considerable effort to use. This study evaluates the capability of Large Language Models (LLMs) in automatically detecting test smells. We evaluated ChatGPT-4, Mistral Large, and Gemini Advanced on 30 types of test smells across codebases in seven different programming languages collected from the literature. ChatGPT-4 identified 21 types of test smells, Gemini Advanced identified 17 types, and Mistral Large detected 15. The LLMs demonstrated potential as valuable tools for identifying test smells.
See full paper via ResearchGate (last accessed 14 Dec, 2024)
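As a concrete illustration of the detection task, the hypothetical JUnit snippet below (ours, not from the paper's dataset) exhibits two widely catalogued smells, Assertion Roulette and Magic Number Test, of the kind the LLMs were asked to identify.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical input for the detection task, exhibiting two smells from
// the usual test-smell catalogs that an LLM could be prompted to flag:
//  - Assertion Roulette: multiple assertions without failure messages,
//    so a red test does not say which expectation broke.
//  - Magic Number Test: unexplained numeric literals in the assertions.
class RoundingTest {

    @Test
    void roundsValues() {
        assertEquals(3, Math.round(2.6f));   // magic numbers, no message
        assertEquals(-2, Math.round(-2.4f)); // which one failed? unclear
        assertEquals(0, Math.round(0.4f));
    }
}
```

A prompt for this task would typically hand the model the raw test class and ask it to name any smells present; dedicated detectors such as static analyzers encode the same rules, but need per-language tooling and configuration.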
Unveiling Cognitive Biases in Software Testing: Insights from a Survey and Controlled Experiment
Biases are hard-wired behaviours that influence software testers. Understanding how these biases affect testers' everyday behaviour is crucial for developing practical software tools and strategies that help testers avoid the pitfalls of cognitive biases. This research aims to assess the extent to which software testers are aware of the influence of cognitive biases on their work. The study was conducted in two incremental steps: a survey and a controlled experiment. First, we developed a questionnaire survey designed to reveal the extent of software testers' knowledge about cognitive biases and their awareness of these biases' influence on testing. We contacted software professionals in different environments and gathered valid data from 60 practitioners. The survey results suggest that software professionals are aware of biases, specifically preconceptions such as confirmation bias, fixation, and convenience; biases like optimism, ownership, and blissful ignorance were also commonly recognized. In line with other research, we observed that software professionals tend to identify more cognitive biases in others than in their own judgments and actions, indicating vulnerability to the bias blind spot. To build on these findings, we performed a controlled experiment with 12 participants to investigate the behaviour and biases exhibited when attempting to solve a hypothetical test problem. Through thematic analysis, we identified prevalent biases among participants, such as confirmation bias, pattern recognition and overreliance, the sunk cost fallacy, and anchoring bias. Additionally, we found that collaborative problem-solving was a prominent feature, often leading to biases like groupthink.
See full paper via mdu.se (last accessed 14 Dec, 2024)
Examining the Impact of Software Testing Practices on Software Quality in Batam Software Houses
This research investigated the impact of software testing practices on software quality in software companies in Batam, Indonesia. It focused on identifying key factors, namely Software Testing Knowledge, Software Testing Approach, and Software Testing Complexity, and analysing their correlation with software quality. Data was collected from 48 respondents, including project managers, developers, and QA teams, using a questionnaire distributed via Google Forms and convenience sampling. The questionnaire was designed based on related studies to ensure relevance to the respondents' roles. Regression analysis identified significant impacts of testing complexity, approach (p = 0.000), and knowledge (p = 0.003) on software quality, and the F-test result (F = 32.622) confirmed a strong relationship between testing practices and software quality. These findings emphasise the critical role of robust testing strategies in enhancing software quality. For companies in Batam, the study offers actionable insights, including adopting structured frameworks and prioritising a sound testing approach. Implementing these strategies can help organisations improve software outcomes and maintain competitiveness in the evolving software development landscape.
See full paper via jurnal.polbeng (last accessed 15 Dec, 2024)