Monthly Archives: November 2016

“Reducing Redundancies in Multi-Revision Code Analysis” @ SANER’17

We’re happy to announce that the paper

“Reducing Redundancies in Multi-Revision Code Analysis”

written by Carol V. Alexandru, Sebastiano Panichella and Harald C. Gall, has been accepted into the technical research track of SANER 2017.

Abstract:

Software engineering research often requires analyzing multiple revisions of several software projects, be it to make and test predictions or to observe and identify patterns in how software evolves. However, code analysis tools are almost exclusively designed for the analysis of one specific version of the code, and the time and resource requirements grow linearly with each additional revision to be analyzed. Thus, code studies often observe a relatively small number of revisions and projects. Furthermore, each programming ecosystem provides dedicated tools, hence researchers typically only analyze code of one language, even when researching topics that should generalize to other ecosystems. To alleviate these issues, frameworks and models have been developed to combine analysis tools or automate the analysis of multiple revisions, but little research has gone into actually removing redundancies in multi-revision, multi-language code analysis. We present a novel end-to-end approach that systematically avoids redundancies every step of the way: when reading sources from version control, during parsing, in the internal code representation, and during the actual analysis. We evaluate our open-source implementation, LISA, on the full history of 300 projects, written in 3 different programming languages, computing basic code metrics for over 1.1 million program revisions. When analyzing many revisions, LISA requires less than a second on average to compute basic code metrics for all files in a single revision, even for projects consisting of millions of lines of code.
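While LISA's approach spans version-control reading, parsing, and the internal code representation, the core intuition of avoiding redundant work across revisions can be illustrated with a minimal sketch. This is not LISA's actual implementation; it simply shows how caching analysis results by content hash lets unchanged files across revisions incur no repeated work:

```python
import hashlib

def loc(source: str) -> int:
    """Count non-blank lines of code in a source string."""
    return sum(1 for line in source.splitlines() if line.strip())

def analyze_revisions(revisions):
    """Compute per-revision total LOC, analyzing each unique file
    content only once.  `revisions` is a list of dicts mapping file
    path to source text, one dict per revision."""
    cache = {}     # content hash -> LOC of that file content
    analyzed = 0   # how many actual analyses were performed
    totals = []
    for files in revisions:
        total = 0
        for source in files.values():
            key = hashlib.sha1(source.encode()).hexdigest()
            if key not in cache:          # only analyze new content
                cache[key] = loc(source)
                analyzed += 1
            total += cache[key]
        totals.append(total)
    return totals, analyzed

# Two revisions in which only a.py changes: 4 file snapshots,
# but only 3 distinct contents are ever analyzed.
revs = [
    {"a.py": "x = 1\n", "b.py": "print(x)\n"},
    {"a.py": "x = 2\n", "b.py": "print(x)\n"},
]
totals, analyzed = analyze_revisions(revs)
```

With naive per-revision analysis the work grows linearly with the number of revisions; deduplicating by content means the work grows only with the number of distinct file contents, which is the kind of redundancy the paper targets at every stage.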

Use and extend LISA: https://bitbucket.org/sealuzh/lisa

Or try out LISA using a simple template: https://bitbucket.org/sealuzh/lisa-quickstart


“Analyzing Reviews and Code of Mobile Apps for better Release Planning” @ SANER 2017

We’re happy to announce that the paper “Analyzing Reviews and Code of Mobile Apps for better Release Planning” has been accepted into SANER 2017 as a full paper. The authors of the paper are Adelina Ciurumelea, Andreas Schaufelbühl, Sebastiano Panichella and Harald Gall.

Abstract:

The mobile applications industry is experiencing unprecedented growth, and developers working in this context face fierce competition in acquiring and retaining users. They have to quickly implement new features and fix bugs, or risk losing their users to the competition. To achieve this goal they must closely monitor and analyze the user feedback they receive in the form of reviews. However, successful apps can receive up to several thousand reviews per day, and manually analyzing each of them is a time-consuming task.


To help developers deal with the large amount of available data, we manually analyzed the text of 1566 user reviews and defined a high- and low-level taxonomy containing mobile-specific categories (e.g. performance, resources, battery, memory, etc.) highly relevant for developers during the planning of maintenance and evolution activities. Then we built the User Request Referencer (URR) prototype, using Machine Learning and Information Retrieval techniques, to automatically classify reviews according to our taxonomy and to recommend, for a particular review, the source code artifacts that need to be modified to handle the issue described in it. We evaluated our approach through an empirical study involving the reviews and code of 39 mobile applications. Our results show a high precision and recall of URR in organizing reviews according to the defined taxonomy. Furthermore, we discovered during the evaluation that using information about the specific structure of mobile software projects (e.g. how to find source code implementing the UI) improves the source code localization results.
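As a rough illustration of what classifying reviews against such a taxonomy means, the sketch below matches reviews to categories by keyword. The category names are loosely inspired by the ones listed in the abstract, but the keyword sets are invented for illustration; the actual URR prototype uses Machine Learning and Information Retrieval techniques, not keyword matching:

```python
# Hypothetical keyword sets per taxonomy category (illustrative only;
# URR itself learns these associations rather than using fixed lists).
TAXONOMY = {
    "performance": {"slow", "lag", "freeze", "speed"},
    "battery": {"battery", "drain", "power"},
    "memory": {"memory", "ram", "leak"},
}

def classify_review(text: str) -> list:
    """Return every taxonomy category whose keywords occur in the review."""
    words = set(text.lower().split())
    return sorted(cat for cat, kws in TAXONOMY.items() if words & kws)

# A review complaining about speed and battery maps to two categories.
labels = classify_review("App is slow and the battery dies fast")
```

Given such per-review labels, a developer planning a release could group thousands of incoming reviews by category instead of reading them one by one, which is the triage problem the paper addresses.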