Browsing by Author "Carla Bezerra"
Now showing 1 - 2 of 2
Item 1 of 2
Identifying and Addressing Test Smells in JavaScript: A Developer-Centric Study (2025)
Jamille Carmo Oliveira; Luigi Mateus; Gabriel Amaral; Tássio Virgínio; Carla Bezerra; Ivan Machado; Larissa Rocha

Test smells are poor practices in test code that can compromise maintainability, reliability, and clarity. While the concept has been widely studied in languages such as Java and Python, research on test smells in JavaScript remains limited—despite its prominence in modern development. To address this gap, we conducted a focus group study with JavaScript developers of varying experience levels to explore their perceptions of seven test smells. These smells—Anonymous Test, Comments Only Test, Overcommented Test, General Fixture, Test Without Description, Transcripting Test, and Sensitive Equality—are particularly relevant to the JavaScript ecosystem and had not been systematically examined in this context prior to our study. We applied thematic analysis to transcribed discussions, uncovering developers’ concerns, recognition patterns, and proposed mitigation strategies. Our results show that experience level strongly influences the ability to detect and refactor test smells, with junior developers often struggling to identify more subtle patterns. To the best of our knowledge, this is the first study to investigate JavaScript developers’ perceptions of test smells using a qualitative approach. Our findings reveal key challenges, offer practical insights for test improvement, and support the development of better training and tooling for JavaScript test quality.

Item 2 of 2
Improving JavaScript Test Quality with Large Language Models: Lessons from Test Smell Refactoring (2025)
Gabriel Amaral; Henrique L. Gomes; Eduardo Figueiredo; Carla Bezerra; Larissa Rocha

Test smells—poor design choices in test code—can hinder test maintainability, clarity, and reliability.
Prior studies have proposed rule-based detection tools and manual refactoring strategies, but most focus on statically typed languages such as Java. In this paper, we investigate the potential of Large Language Models (LLMs) to automatically refactor test smells in JavaScript, a dynamically typed and widely used language with limited prior research in this area. We conducted an empirical study using GitHub Copilot Chat and Amazon CodeWhisperer to refactor 148 test smell instances across 10 real-world JavaScript projects. Our evaluation assessed smell removal effectiveness, behavioral preservation, introduction of new smells, and structural code quality based on six software metrics. Results show that Copilot removed 58.78% of the smells successfully, outperforming CodeWhisperer’s 47.30%, while both tools preserved test behavior in most cases. However, both also introduced new smells, highlighting current limitations. Our findings reveal the strengths and trade-offs of LLM-based refactoring and provide insights for building more reliable and smell-aware testing tools for JavaScript.