Andean Publishing ↗

Browsing by Author "Juan Pablo Sandoval Alcocer"

Now showing 1 - 20 of 20
  • Assessing textual source code comparison: split or unified?
    (2020) Alejandra Cossio Chavalier; Juan Pablo Sandoval Alcocer; Alexandre Bergel
    Evaluating source code differences is an important task in software engineering. Unified and split are two popular textual representations supported by source code management clients. Whether these representations differ in how well they support source code commit assessment is still unknown, despite their ubiquity in software production environments.
  • Blockly Voice: un entorno de programación guiado por voz
    (2019) César Iván Delgado Silva; Juan Pablo Sandoval Alcocer; Wendoline Arteaga Sabja
  • DGT-AR: Visualizing Code Dependencies in AR
    (2023) Dussan Freire-Pozo; Kevin Céspedes-Arancibia; Leonel Merino; Alison Fernandez-Blanco; Andrés Neyem; Juan Pablo Sandoval Alcocer
    Analyzing source code dependencies between components within a program is an essential activity in software development. While various software visualization tools have been proposed to aid in this activity, most are limited to desktop applications. As a result, the potential impact of augmented reality (AR) on improving dependency analysis remains largely unexplored. In this paper, we present DGT-AR, a node-link visualization tool for code dependencies in immersive augmented reality. DGT-AR extends the physical screen space of IDEs to the infinite virtual space. That is, developers neither have to sacrifice screen space nor leave the IDE to use third-party applications. We present the preliminary results of a pilot user study along with four key lessons learned. Additionally, we have made DGT-AR publicly available.
  • Effective Visualization of Object Allocation Sites
    (2018) Alison Fernandez Blanco; Juan Pablo Sandoval Alcocer; Alexandre Bergel
    Profiling the memory consumption of a software execution is usually carried out by characterizing calling-context trees. However, the plural nature of this data structure makes it difficult to exploit adequately and efficiently in practice. As a consequence, most anomalies in memory footprints are addressed either manually or in an ad hoc way. We propose an interactive visualization of the execution context related to object production. Our visualization augments the traditional calling-context tree with visual cues to characterize object allocation sites. We performed a qualitative study involving eight software engineers conducting a software execution memory assessment. As a result, we found that participants considered our visualization beneficial for characterizing memory consumption and for reducing the overall memory footprint.
  • Enhancing Commit Graphs with Visual Runtime Clues
    (2019) Juan Pablo Sandoval Alcocer; Harold Camacho Jaimes; Diego Elias Costa; Alexandre Bergel; Fabian Beck
    Monitoring software performance evolution is a daunting and challenging task. This paper proposes a lightweight visualization technique that contrasts source code variation with the memory consumption and execution time of a particular benchmark. The visualization fully integrates with the commit graph as common in many software repository managers. We illustrate the usefulness of our approach with two application examples. We expect our technique to be beneficial for practitioners who wish to easily review the impact of source code commits on software performance.
  • FlameGraph AR: Immersive Visualization of CPU Profiles in Augmented Reality
    (2025) Tiara Rojas-Stambuk; Luis Fernando Gil-Gareca; Juan Pablo Sandoval Alcocer; Leonel Merino; David Moreno-Lumbreras
    Performance analysis is essential to identify bottlenecks and improve software responsiveness. Flame graphs are widely used for this purpose, offering compact summaries of stack traces and execution times. However, as applications grow, flame graphs become large and dense, competing for space within IDEs already crowded with code editors and panels. We propose FlameGraph AR, a tool that offloads flame graph visualizations from the IDE to the physical environment using augmented reality. By integrating a Visual Studio Code extension with an AR application, developers can arrange interactive flame graphs on desks, walls, or in peripheral view. This immersive setup expands visualization space, supports gesture-based interaction, and enables parallel performance analysis without disrupting the coding flow. Video URL: https://vimeo.com/1089364433/e41cfa13c4
  • How Do Developers Use the Java Stream API?
    (Springer Science+Business Media, 2021) Joshua Nostas; Juan Pablo Sandoval Alcocer; Diego Elias Costa; Alexandre Bergel
  • Improving the success rate of applying the extract method refactoring
    (Elsevier BV, 2020) Juan Pablo Sandoval Alcocer; Alejandra Siles Antezana; Gustavo Santos; Alexandre Bergel
  • Mejorando la búsqueda de información de contacto en Facebook analizando emociones
    (2019) Daniel Alejandro Illanes Peredo; Wendoline Arteaga Sabja; Juan Pablo Sandoval Alcocer
  • On the use of extended reality to support software development activities: A systematic literature review
    (Elsevier BV, 2025) Tiara Rojas-Stambuk; Juan Pablo Sandoval Alcocer; Leonel Merino; Andrés Neyem
  • On the use of statistical machine translation for suggesting variable names for decompiled code: The Pharo case
    (Elsevier BV, 2024) Juan Pablo Sandoval Alcocer; Harold Camacho-Jaimes; Geraldine Galindo-Gutierrez; Andrés Neyem; Alexandre Bergel; Stéphane Ducasse
  • Performance Evolution Matrix: Visualizing Performance Variations Along Software Versions
    (2019) Juan Pablo Sandoval Alcocer; Fabian Beck; Alexandre Bergel
    Software performance may be significantly affected by source code modifications. Understanding the effect of these changes along different software versions is a challenging and necessary activity to debug performance failures. It is not sufficiently supported by existing profiling tools and visualization approaches. Practitioners would need to manually compare calling context trees and call graphs. We aim at better supporting the comparison of benchmark executions along multiple software versions. We propose Performance Evolution Matrix, an interactive visualization technique that contrasts runtime metrics to source code changes. It combines a comparison of time series data and execution graphs in a matrix layout, showing performance and source code metrics at different levels of granularity. The approach guides practitioners from the high-level identification of a performance regression to the changes that might have caused the issue. We conducted a controlled experiment with 12 participants to provide empirical evidence of the viability of our method. The results indicate that our approach can reduce the effort for identifying sources of performance regressions compared to traditional profiling visualizations.
  • Performance Evolution Matrix: Visualizing Performance Variations along Software Versions
    (Figshare (United Kingdom), 2019) Juan Pablo Sandoval Alcocer; Fabian Beck; Alexandre Bergel
# Performance Evolution Matrix

This repository contains the artifacts needed to replicate the experiment in the paper "Performance Evolution Matrix".

# Video Demo

[download](https://github.com/jpsandoval/PerfEvoMatrix/blob/master/MatrixMovie.mp4)

# XMLSupport and GraphET Examples

To open the XMLSupport and GraphET examples (which appear in the paper), execute the following commands in a terminal.

**MacOSX.** All experiments were run on a MacBook Pro. To open the Matrix, run the following command in the folder where this project was downloaded:
```
./Pharo-OSX/Pharo.app/Contents/MacOS/Pharo Matrix.image
```

**Windows.** You may also run the experiment on Windows, but depending on the installed Windows version you may encounter some UI bugs:
```
cd Pharo-Windows
Pharo.exe ../XMLSupportExample.image
```

**Open the visualization.** Select the following code, then execute it using the green play button (at the top right of the window):
```
ToadBuilder xmlSupportExample.
```
or
```
ToadBuilder graphETExample.
```

**Note.** There are two buttons at the top left of the panel: In (zoom in) and Out (zoom out). To move the visualization, drag the mouse over the panel.

# Experiment

This section describes how to run the tools in order to replicate the experiment.

## Baseline

The baseline contains the tools and the project dataset needed to perform the tasks described in the paper (identifying and understanding performance variations).

## Open the Baseline

**MacOSX.** To open the Baseline, run the following command in the folder where this project was downloaded:
```
./Pharo-OSX/Pharo.app/Contents/MacOS/Pharo Baseline.image
```

**Windows.**
```
cd Pharo-Windows
Pharo.exe ../Baseline.image
```

## Open a Project

There are three projects under study. Depending on the project you want to use for the task, execute one of the following scripts. To execute a script, press Cmd-d, or right-click and choose "Do it".

**Roassal**
```
TProfileVersion openRoassal.
```
**XML**
```
TProfileVersion openXML.
```
**Grapher**
```
TProfileVersion openGrapher.
```

## Baseline Options

For each project, we provide a UI that contains all the tools we use as a baseline. Each item in the list is a version of the selected project.

<img src="images/baseline.png" width="300">

- Browse: opens a standard window to inspect the project's code in the selected version.
- Profile: opens a window with a calling-context tree for the selected version.
- Source Diff: opens a window with the code differences between the selected version and the previous one.
- Execution Diff: opens a window with the merged calling-context tree gathered from the selected version and the previous one.

**Note.** All of these options require that an item in the list be selected first.

# Matrix

## Open the Matrix Image

**MacOSX.** To open the Matrix, run the following command in the folder where this project was downloaded:
```
./Pharo-OSX/Pharo.app/Contents/MacOS/Pharo Matrix.image
```

**Windows.**
```
cd Pharo-Windows
Pharo.exe ../Matrix.image
```

## Open a Project

Depending on the project you want to use for the task, execute one of the following scripts (Cmd-d, or right-click and "Do it").

**Roassal**
```
ToadBuilder roassal.
```
**XML**
```
ToadBuilder xml.
```
**Grapher**
```
ToadBuilder grapher.
```

# Data Gathering

Before each participant starts a task, we execute the following script in Smalltalk (Cmd-d, or right-click and "Do it"). It records the time at which the user starts the experiment, as well as mouse clicks and movements:
```
UProfiler newSession.
UProfiler current start.
```
After the task is finished, we execute the following script. It stops recording mouse events and saves the stop time:
```
UProfiler current end.
```
The last script generates a file with the following information: start time, end time, number of clicks, number of mouse movements, and number of mouse drags (this last one is not used):
```
11:34:52.5205 am,11:34:56.38016 am,14,75,0
```

# Quit

To close the artifact, close the window, or click in any free space of the window and select "Quit".
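The UProfiler log line described above (start time, end time, clicks, movements, drags) can be processed with a short script. A minimal sketch in Python, assuming the timestamps follow the `%I:%M:%S.%f %p` pattern shown in the sample; `parse_uprofiler_line` is a hypothetical helper written for illustration, not part of the artifact:

```python
from datetime import datetime

def parse_uprofiler_line(line):
    """Split one UProfiler record into task duration and mouse-event counts."""
    start_s, end_s, clicks, moves, drags = line.strip().split(",")
    fmt = "%I:%M:%S.%f %p"  # e.g. "11:34:52.5205 am"
    start = datetime.strptime(start_s, fmt)
    end = datetime.strptime(end_s, fmt)
    return {
        "duration_s": (end - start).total_seconds(),
        "clicks": int(clicks),
        "moves": int(moves),
        "drags": int(drags),  # recorded but unused in the experiment
    }

record = parse_uprofiler_line("11:34:52.5205 am,11:34:56.38016 am,14,75,0")
print(record)
```

Note that the timestamps carry no date, so a task that straddles noon or midnight would need extra handling.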
  • Performance Evolution Matrix: Visualizing Performance Variations along Software Versions
    (European Organization for Nuclear Research, 2019) Juan Pablo Sandoval Alcocer; Fabian Beck; Alexandre Bergel
  • Prioritizing versions for performance regression testing: The Pharo case
    (Elsevier BV, 2020) Juan Pablo Sandoval Alcocer; Alexandre Bergel; Marco Túlio Valente
  • Quality Histories of Past Extract Method Refactorings
    (Springer Science+Business Media, 2021) Abel Mamani Taqui; Juan Pablo Sandoval Alcocer; G. Hecht; Alexandre Bergel
  • Reducing resource consumption of expandable collections: The Pharo case
    (Elsevier BV, 2018) Alexandre Bergel; Alejandro Infante; Sergio Maass; Juan Pablo Sandoval Alcocer
  • TestEvoViz: Visual Introspection for Genetically-Based Test Coverage Evolution
    (2020) Andreina Cota Vidaurre; Evelyn Cusi López; Juan Pablo Sandoval Alcocer; Alexandre Bergel
    Genetic algorithms are an efficient mechanism to generate unit tests. Automatically generated unit tests are known to be an important asset for identifying software defects and defining oracles. However, configuring the test generation is a tedious activity for a practitioner due to the inherent difficulty of adequately tuning the generation process. This paper presents TestEvoViz, a visual technique to introspect the generation of unit tests using genetic algorithms. TestEvoViz offers practitioners visual support that exposes some of the decisions made by the test generation. A number of case studies are presented to illustrate the expressiveness of TestEvoViz in understanding the effect of the algorithm configuration. Artifact: https://github.com/andreina-covi/ArtifactSSG.
  • Una técnica de muestreo para categorizar videos
    (2018) Denis Leandro Guardia Vaca; Juan Pablo Sandoval Alcocer
  • Visualizing The Linux Kernel Performance with FlameGraph AR
    (2025) Tiara Rojas-Stambuk; Luis Fernando Gil-Gareca; Juan Pablo Sandoval Alcocer; Leonel Merino; David Moreno-Lumbreras
    In this challenge, we explore the evolution of the Linux kernel's performance during compilation by comparing versions 5.19.17 and 6.14 through sampling-based CPU profiling. We collect profiling data using perf, transform it into the Chrome-compatible .cpuprofile format, and analyze it through a novel spatial visualization called FlameGraph AR. FlameGraph AR extends traditional flame graphs beyond the limitations of IDE panels and conventional screens by rendering visualizations with augmented reality on a Microsoft HoloLens 2 device. By offloading the flame graph to physical space, the FlameGraph AR tool enables developers to walk through wide and deeply nested call stacks, examine function frames through gesture-based interactions, and gain spatial awareness of the runtime behavior of a software system. We found immersive visualization especially valuable for analyzing architectural changes between the two kernel versions. Version 6.14 exhibits a significantly higher number of samples in several functions, such as native_write_msr, indicating intensified low-level CPU interactions. In addition, functions such as intel_pmu_enable_all and x86_pmu_enable also increased in frequency, suggesting increased reliance on performance monitoring. The stack depth analysis revealed that certain functions in version 6.14, including fpregs_assert_state_consistent and account_user_time, appear at significantly deeper levels than in earlier versions; indeed, some reach the maximum stack trace depth of the profiling tool. The results indicate growth in both modularity and the depth of instrumentation within the kernel execution paths. Multiple performance changes become visible and interactive with FlameGraph AR. For example, time-consuming functions show up as wide frames that span over desks or walls, and deep call stacks are explored physically by approaching them or gazing upward. By mapping performance traces into the spatial domain, our tool provides a compelling method for understanding systemic evolution in large-scale software like the Linux kernel. Video URL: https://vimeo.com/1092935027/7d09676a83

Andean Library © 2026 · Andean Publishing
