Browsing by Author "Alexandre Bergel"
Now showing 1 - 16 of 16
**A domain-specific language to visualize software evolution** (Elsevier BV, 2018). Alison Fernandez Blanco; Alexandre Bergel.

**Analyzing dynamic information with Spy and Roassal: an experience report** (2015). Alison Fernandez Blanco; Diego Gabriel Nunez Duran; Alejandro Infante; Alexandre Bergel.

Dynamic analysis tools are seldom crafted by practitioners. This paper discusses the benefits of supporting practitioners in building their own ad-hoc tools, and presents our experience in lowering the barrier to gathering dynamic information. The experience we present is driven by the combination of the Spy profiling framework and the Roassal visualization engine, two frameworks used in industry and academia. We conclude with two questions to discuss at the workshop.
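The Spy and Roassal combination described above lends itself to small, scriptable visualizations. As a rough illustration of the style of script involved, here is a minimal sketch assuming Roassal2's RTMondrian builder and Pharo's reflective class metrics; it shows static rather than dynamic information, and the chosen classes and metric are illustrative only, not taken from the paper.

```
| b |
b := RTMondrian new.
"One circle per class; the circle size reflects the number of methods."
b shape circle
	size: [ :cls | cls numberOfMethods ].
b nodes: Collection withAllSubclasses.
b layout grid.
b build.
"Inspecting the resulting view in Pharo renders the visualization."
b view
```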
**Assessing textual source code comparison: split or unified?** (2020). Alejandra Cossio Chavalier; Juan Pablo Sandoval Alcocer; Alexandre Bergel.

Evaluating source code differences is an important task in software engineering. Unified and split are two popular textual representations supported by source code management clients. Whether these representations differ in how well they support the assessment of source code commits is still unknown, despite their ubiquity in software production environments.

**Effective Visualization of Object Allocation Sites** (2018). Alison Fernandez Blanco; Juan Pablo Sandoval Alcocer; Alexandre Bergel.

Profiling the memory consumption of a software execution is usually carried out by characterizing calling-context trees. However, the nature of this data structure makes it difficult to exploit adequately and efficiently in practice. As a consequence, most anomalies in memory footprints are addressed either manually or in an ad-hoc way. We propose an interactive visualization of the execution context related to object allocation. Our visualization augments the traditional calling-context tree with visual cues to characterize object allocation sites. We performed a qualitative study involving eight software engineers conducting a software execution memory assessment. As a result, we found that participants consider our visualization beneficial for characterizing memory consumption and reducing the overall memory footprint.

**Enhancing Commit Graphs with Visual Runtime Clues** (2019). Juan Pablo Sandoval Alcocer; Harold Camacho Jaimes; Diego Elias Costa; Alexandre Bergel; Fabian Beck.

Monitoring software performance evolution is a daunting and challenging task. This paper proposes a lightweight visualization technique that contrasts source code variations with the memory consumption and execution time of a particular benchmark. The visualization fully integrates with the commit graph, as is common in many software repository managers. We illustrate the usefulness of our approach with two application examples. We expect our technique to be beneficial for practitioners who wish to easily review the impact of source code commits on software performance.

**How Do Developers Use the Java Stream API?** (Springer Science+Business Media, 2021). Joshua Nostas; Juan Pablo Sandoval Alcocer; Diego Elias Costa; Alexandre Bergel.

**Improving the success rate of applying the extract method refactoring** (Elsevier BV, 2020). Juan Pablo Sandoval Alcocer; Alejandra Siles Antezana; Gustavo Santos; Alexandre Bergel.

**On the use of statistical machine translation for suggesting variable names for decompiled code: The Pharo case** (Elsevier BV, 2024). Juan Pablo Sandoval Alcocer; Harold Camacho-Jaimes; Geraldine Galindo-Gutierrez; Andrés Neyem; Alexandre Bergel; Stéphane Ducasse.

**Performance Evolution Matrix: Visualizing Performance Variations Along Software Versions** (2019). Juan Pablo Sandoval Alcocer; Fabian Beck; Alexandre Bergel.

Software performance may be significantly affected by source code modifications. Understanding the effect of these changes along different software versions is a challenging and necessary activity when debugging performance failures, and it is not sufficiently supported by existing profiling tools and visualization approaches: practitioners would need to manually compare calling-context trees and call graphs. We aim at better supporting the comparison of benchmark executions along multiple software versions. We propose the Performance Evolution Matrix, an interactive visualization technique that contrasts runtime metrics with source code changes. It combines a comparison of time series data and execution graphs in a matrix layout, showing performance and source code metrics at different levels of granularity. The approach guides practitioners from the high-level identification of a performance regression to the changes that might have caused the issue. We conducted a controlled experiment with 12 participants to provide empirical evidence of the viability of our method. The results indicate that our approach can reduce the effort of identifying sources of performance regressions compared to traditional profiling visualizations.
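At the core of the workflow described in the abstract above is the repeated timing of one benchmark against successive versions of the code. As a small, hypothetical illustration in Pharo (the platform several of these papers target), a block's execution time can be sampled with `timeToRun`; the benchmark body and the repetition count here are placeholders, not the paper's benchmarks.

```
"Hypothetical micro-benchmark: run a workload and record its duration.
In Pharo, BlockClosure>>timeToRun answers how long the block took to execute."
| duration |
duration := [
	| c |
	c := OrderedCollection new.
	1 to: 100000 do: [ :i | c add: i ] ] timeToRun.
Transcript show: duration printString; cr.
```

Repeating such a measurement on each version of the code yields the per-version series that the matrix visualization contrasts with source changes.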
**Performance Evolution Matrix: Visualizing Performance Variations along Software Versions** (Figshare (United Kingdom), 2019). Juan Pablo Sandoval Alcocer; Fabian Beck; Alexandre Bergel.

# Performance Evolution Matrix

This repository contains the artifacts needed to replicate the experiment in the paper "Performance Evolution Matrix".

# Video Demo

[download](https://github.com/jpsandoval/PerfEvoMatrix/blob/master/MatrixMovie.mp4)

# XMLSupport and GraphET Examples

To open the XMLSupport and GraphET examples (which appear in the paper), execute the following commands in a terminal.

**MacOSX.** We ran all the experiments on a MacBook Pro. To open the Matrix, execute the following command in the folder where this project was downloaded.

```
./Pharo-OSX/Pharo.app/Contents/MacOS/Pharo Matrix.image
```

**Windows.** You may also run the experiment on Windows, but depending on the Windows version you have installed, there may be some UI bugs.

```
cd Pharo-Windows
Pharo.exe ../XMLSupportExample.image
```

**Open the Visualization.** Select the following code, then execute it using the green play button (at the top right of the window).

```
ToadBuilder xmlSupportExample.
```

or

```
ToadBuilder graphETExample.
```

**Note.** There are two buttons at the panel's top left: In (zoom in) and Out (zoom out). To move the visualization, just drag the mouse over the panel.

# Experiment

This section describes how to execute the tools in order to replicate our experiment.

## Baseline

The baseline contains the tools and the project dataset needed to perform the tasks described in the paper (identifying and understanding performance variations).

## Open the Baseline

**MacOSX.** To open the Baseline, execute the following command in the folder where this project was downloaded.

```
./Pharo-OSX/Pharo.app/Contents/MacOS/Pharo Baseline.image
```

**Windows.**

```
cd Pharo-Windows
Pharo.exe ../Baseline.image
```

## Open a Project

There are three projects under study. Depending on the project you want to use for the task, execute one of the following scripts. To execute a script, press Cmd-d or right-click and select "Do it".

**Roassal**
```
TProfileVersion openRoassal.
```

**XML**
```
TProfileVersion openXML.
```

**Grapher**
```
TProfileVersion openGrapher.
```

## Baseline Options

For each project, we provide a UI that contains all the tools we use as a baseline. Each item in the list is a version of the selected project.

<img src="images/baseline.png" width="300">

- Browse: opens a standard window to inspect the project's code in the selected version.
- Profile: opens a window with a call context tree for the selected version.
- Source Diff: opens a window with the code differences between the selected version and the previous one.
- Execution Diff: opens a window with the merged call context tree gathered from the selected version and the previous one.

**Note.** All these options require you to first select an item in the list.

# Matrix

## Open the Matrix Image

**MacOSX.** To open the Matrix, execute the following command in the folder where this project was downloaded.

```
./Pharo-OSX/Pharo.app/Contents/MacOS/Pharo Matrix.image
```

**Windows.**
```
cd Pharo-Windows
Pharo.exe ../Matrix.image
```

## Open a Project

There are three projects under study. Depending on the project you want to use for the task, execute one of the following scripts. To execute a script, press Cmd-d or right-click and select "Do it".

**Roassal**
```
ToadBuilder roassal.
```

**XML**
```
ToadBuilder xml.
```

**Grapher**
```
ToadBuilder grapher.
```

# Data Gathering

Before each participant starts a task, we execute the following script in Smalltalk. To execute a script, press Cmd-d or right-click and select "Do it".
This allows us to track when a user starts the experiment and how many mouse clicks and movements occurred.

```
UProfiler newSession.
UProfiler current start.
```

After the participant finishes the task, we execute the following script. It stops recording the mouse events and saves the stop time.

```
UProfiler current end.
```

The last script generates a file with the following information: start time, end time, number of clicks, number of mouse movements, and number of mouse drags (we do not use this last one).

```
11:34:52.5205 am,11:34:56.38016 am,14,75,0
```

# Quit

To close the artifact, just close the window, or click in any free space of the window and select "quit".

**Performance Evolution Matrix: Visualizing Performance Variations along Software Versions** (European Organization for Nuclear Research, 2019). Juan Pablo Sandoval Alcocer; Fabian Beck; Alexandre Bergel.

A second deposit of the same replication artifact; its description is identical to the Figshare record above.
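The UProfiler log shown in the Data Gathering section above is a single comma-separated line. As a small, hypothetical convenience (not part of the artifact), such a line can be split in Pharo as follows; the field order is the one documented in the README.

```
"Parse one UProfiler output line:
start time, end time, clicks, movements, drags.
The sample line is the one shown in the README."
| fields clicks |
fields := '11:34:52.5205 am,11:34:56.38016 am,14,75,0' substrings: ','.
clicks := (fields at: 3) asNumber.   "14"
Transcript show: 'clicks: ', clicks printString; cr.
```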
**Prioritizing versions for performance regression testing: The Pharo case** (Elsevier BV, 2020). Juan Pablo Sandoval Alcocer; Alexandre Bergel; Marco Túlio Valente.

**Quality Histories of Past Extract Method Refactorings** (Springer Science+Business Media, 2021). Abel Mamani Taqui; Juan Pablo Sandoval Alcocer; G. Hecht; Alexandre Bergel.

**Reducing resource consumption of expandable collections: The Pharo case** (Elsevier BV, 2018). Alexandre Bergel; Alejandro Infante; Sergio Maass; Juan Pablo Sandoval Alcocer.
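The expandable-collections entry above concerns the cost of collections that grow on demand. One common mitigation, shown here as a minimal hypothetical sketch rather than the paper's technique, is to presize a collection when its final size is known, avoiding repeated growth of the backing array; in Pharo, `OrderedCollection class>>new:` takes an initial capacity.

```
"Growing on demand: the backing array is resized repeatedly as elements arrive."
| grown presized |
grown := OrderedCollection new.
1 to: 100000 do: [ :i | grown add: i ].

"Presized: one allocation of the backing array, no intermediate copies."
presized := OrderedCollection new: 100000.
1 to: 100000 do: [ :i | presized add: i ].
```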
**TestEvoViz: Visual Introspection for Genetically-Based Test Coverage Evolution** (2020). Andreina Cota Vidaurre; Evelyn Cusi López; Juan Pablo Sandoval Alcocer; Alexandre Bergel.

Genetic algorithms are an efficient mechanism to generate unit tests. Automatically generated unit tests are known to be an important asset for identifying software defects and defining oracles. However, configuring the test generation is a tedious activity for a practitioner, due to the inherent difficulty of adequately tuning the generation process. This paper presents TestEvoViz, a visual technique to introspect the generation of unit tests using genetic algorithms. TestEvoViz offers practitioners visual support to expose some of the decisions made by the test generation. A number of case studies are presented to illustrate the expressiveness of TestEvoViz in understanding the effect of the algorithm configuration.

Artifact: https://github.com/andreina-covi/ArtifactSSG

**Visually Exploring Object Mutation** (2016). Rodrigo Schulz; Fabian Beck; Jhonny Wilder Cerezo Felipez; Alexandre Bergel.

Object-oriented programming supports object mutation during a program execution. A mutation occurs whenever a value is assigned to an object field. Analyzing the evolution of object mutation is known to be difficult. Unfortunately, classical code debuggers provide little support for analyzing object mutations. The Object Evolution Blueprint is a visualization dedicated to exploring object mutation over time. Our blueprint visually and concisely represents sequences of field mutations, and the history of each field is shown with respect to the dynamic value types. We observed the use of our blueprint with three practitioners. Our visualization was well received and was used to complete two different software comprehension tasks. Moreover, our user study shows that the visualization is both intuitive and simple to learn.
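To make the notion of mutation in the last entry concrete: in Pharo, every assignment to an instance variable is one mutation event. A minimal, hypothetical illustration using the stock Point class; a mutation-tracking tool such as the blueprint above would record the setter call as two events in the fields' histories.

```
"Each assignment to an instance variable mutates the object."
| p |
p := 0 @ 0.            "create a Point; fields x and y start at 0"
p setX: 3 setY: 4.     "one call, two field mutations"
Transcript show: p printString; cr.   "prints 3@4"
```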