Monthly Archives: June 2015

“The Making of Cloud Applications” accepted at ESEC/FSE!

Our mixed-method study on software development for the cloud has been accepted as a full research paper for ESEC/FSE 2015! In this paper, titled “The Making of Cloud Applications – An Empirical Study on Software Development for the Cloud”, we interviewed 25 developers from startups, SMEs, and large enterprises on the opportunities and challenges of building and deploying software in a cloud-based environment. Additionally, we report on the results of an email survey with close to 300 respondents.

You can already read a pre-print version of this paper online.


Paper accepted at ICSME ’15

We are excited to announce that our paper “How Can I Improve My App? Classifying User Reviews for Software Maintenance and Evolution” has been accepted at the 31st International Conference on Software Maintenance and Evolution (ICSME 2015) in Bremen, Germany.

App Stores, such as Google Play or the Apple Store, allow users to give feedback on apps by posting review comments and giving star ratings. These platforms constitute a useful electronic means through which application developers and users can productively exchange information about apps.
Previous research showed that user feedback contains usage scenarios, bug reports, and feature requests that can help app developers accomplish software maintenance and evolution tasks. However, for the most popular apps, the large volume of feedback, its unstructured nature, and its varying quality can make identifying useful user feedback very challenging.
In this paper, we present a taxonomy for classifying app reviews into categories relevant to software maintenance and evolution, as well as an approach that merges three techniques: (1) Natural Language Processing, (2) Text Analysis, and (3) Sentiment Analysis to automatically classify app reviews into the proposed categories. We show that the combined use of these techniques achieves better results (a precision of 75% and a recall of 74%) than each technique used individually (a precision of 70% and a recall of 67%).
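To give a flavor of the idea, here is a minimal, purely illustrative sketch of combining keyword-based text analysis with a sentiment fallback to route reviews into maintenance-relevant categories. The category names, word lists, and logic are hypothetical simplifications for illustration, not the paper's actual taxonomy, features, or classifier:

```python
# Illustrative sketch: classify a review by keyword overlap (text analysis),
# falling back on a tiny sentiment lexicon when no keywords match.
# All word lists and categories here are made up for demonstration.

KEYWORDS = {
    "problem discovery": {"crash", "crashes", "bug", "error", "freezes"},
    "feature request": {"add", "please", "wish", "should", "feature"},
    "information seeking": {"how", "why", "when", "where", "what"},
}

NEGATIVE = {"hate", "terrible", "awful", "useless", "annoying"}

def classify_review(text: str) -> str:
    """Assign a review to the best-matching category, or 'other'."""
    words = set(text.lower().split())
    # Text analysis: score each category by keyword overlap.
    scores = {cat: len(words & kws) for cat, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    if scores[best] > 0:
        return best
    # Sentiment fallback: strongly negative reviews with no matching
    # keywords are still likely reporting a problem.
    if words & NEGATIVE:
        return "problem discovery"
    return "other"
```

A real pipeline, as the paper argues, would combine such signals as features in a trained classifier rather than hand-written rules, which is what yields the precision and recall gains reported above.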

Approach for Classifying User Reviews for Software Maintenance and Evolution