Sunday, September 27, 2009
New draft: Diagnosis of Mission-Critical Failures

New draft: Finding Robust Solutions in Requirements Models

New draft: Controlling Randomized Unit Testing With Genetic Algorithms

Tuesday, August 11, 2009
Paper accepted to ICSM 2009
Gregory Gay, Sonia Haiduc, Andrian Marcus, Tim Menzies
Concept location is a critical activity during software evolution, as it identifies the location in the source code where a change is to start in response to a modification request, such as a bug report or a new feature request. Lexical concept location techniques rely on matching the text embedded in the source code against queries formulated by developers. The efficiency of such techniques depends strongly on the developer's ability to write good queries. We propose an approach that augments information retrieval (IR) based concept location with an explicit relevance feedback (RF) mechanism. RF is a two-part process in which the developer judges the results returned by a search and the IR system uses these judgments to perform a new search, returning more relevant information to the user. A set of case studies performed on open source software systems reveals the impact of RF on IR-based concept location.
Note: ICSM has a 21.6% acceptance rate.
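
For readers who want to see the basic idea behind explicit relevance feedback, here is a minimal sketch using the classic Rocchio update over tf-idf vectors. It illustrates the general RF loop (search, judge, re-search), not the exact mechanism in the paper; the toy corpus, query, and alpha/beta/gamma weights are all made up.

```python
# Minimal sketch of explicit relevance feedback via the Rocchio update.
# Illustrates the general RF idea only; NOT the mechanism from the paper.
# The toy corpus, query, and alpha/beta/gamma weights are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "open file and read configuration settings",
    "parse bug report and assign severity",
    "render user interface widgets",
    "write configuration settings to file",
]
vectorizer = TfidfVectorizer()
docs = vectorizer.fit_transform(corpus).toarray()

query = vectorizer.transform(["read configuration"]).toarray()[0]

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query toward relevant documents, away from non-relevant ones."""
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0, None)   # negative term weights are usually dropped

# Round 1: rank documents, then the developer judges the top results.
print("before feedback:", cosine_similarity([query], docs)[0].round(2))

# Suppose the developer marks doc 0 relevant and doc 2 non-relevant.
q2 = rocchio(query, docs[[0]], docs[[2]])
print("after feedback: ", cosine_similarity([q2], docs)[0].round(2))
```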
Paper accepted to ISSRE'09
Yue Jiang, Bojan Cukic, Tim Menzies
Prediction of fault-prone software components is one of the most researched problems in software engineering. Many statistical techniques have been proposed, but there is no consensus on how to select the "best model" for a specific project. In this paper, we introduce and discuss the merits of cost curve analysis of fault prediction models. Cost curves allow software quality engineers to introduce project-specific costs of module misclassification into model evaluation. Classifying a software module as fault-prone implies the application of some verification activities, thus adding to the development cost. Misclassifying a module as fault-free carries the risk of system failure, which also has cost implications. Through the analysis of sixteen projects from public repositories, we observe that software quality does not necessarily benefit from the prediction of fault-prone components. The inclusion of misclassification cost in model evaluation may indicate that even the "best" models achieve performance no better than trivial classification. Our results support a recommendation favoring the use of cost curves in practice, in the hope that they will become a standard tool for evaluating software quality models.
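
As a rough illustration of what a cost-curve comparison looks like, here is a sketch following Drummond and Holte's formulation, which I assume matches the one used in the paper. The error rates, fault rate, and cost ratio below are invented; the point is the comparison against the trivial classifiers.

```python
# Sketch of cost-curve evaluation (Drummond & Holte style), assuming that
# formulation matches the paper's. All numbers below are made up.

def normalized_expected_cost(fpr, fnr, pc_plus):
    """Expected cost of a classifier at operating point pc_plus, rescaled
    so the trivial always-negative / always-positive classifiers form the
    diagonals y = x and y = 1 - x."""
    return fnr * pc_plus + fpr * (1.0 - pc_plus)

def probability_cost(p_faulty, cost_fn, cost_fp):
    """x-axis value: combines the class prior with misclassification costs."""
    return (p_faulty * cost_fn) / (p_faulty * cost_fn + (1 - p_faulty) * cost_fp)

# Hypothetical fault predictor: 30% false positives, 25% false negatives.
fpr, fnr = 0.30, 0.25
# Project context: 15% of modules faulty; missing a fault costs 5x a false alarm.
x = probability_cost(p_faulty=0.15, cost_fn=5.0, cost_fp=1.0)
model = normalized_expected_cost(fpr, fnr, x)
trivial = min(x, 1.0 - x)   # best of "predict all faulty" / "predict none faulty"

print(f"operating point PC(+) = {x:.2f}")
print(f"model cost   = {model:.2f}")
print(f"trivial cost = {trivial:.2f}  ->  model helps: {model < trivial}")
```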
(Short) Paper accepted to ASE'09
Bryan Lemon, Aaron Riesbeck, Tim Menzies, Justin Price, Joseph D’Alessandro, Rikard Carlsson, Tomi Prifiti, Fayola Peters, Hiuhua Lu, Dan Port
We implemented Boehm-Turner’s model of agile and plan-based software development. The tool is augmented with an AI search engine that finds the key factors predicting the success of agile or traditional plan-based software development. According to our simulations and AI search engine: (1) in no case did agile methods perform worse than plan-based approaches; (2) in some cases, agile performed best. Hence, we recommend that organizations adopt an agile method as their default development practice. The simplicity of this style of analysis raises the question: why is so much time wasted on evidence-free debates about software process when a simple combination of simulation and automatic search can mature the dialogue much faster?
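
To give a flavor of the simulate-then-compare pattern described above: the toy below is not the Boehm-Turner model, and its cost coefficients are invented (chosen so the toy loosely echoes the paper's finding). It only shows how cheap this style of analysis is once a simulator exists.

```python
# Toy illustration of the "simulate, then compare" pattern. This is NOT the
# Boehm-Turner model; the cost coefficients are invented for illustration.
import random

random.seed(1)

def simulate(method, team_size, volatility, runs=10_000):
    """Return mean simulated project cost for one development method."""
    total = 0.0
    for _ in range(runs):
        base = random.gauss(100, 10)
        if method == "agile":
            # Hypothetical: agile absorbs requirements churn cheaply.
            total += base + 5 * volatility + 0.50 * team_size
        else:
            # Hypothetical: plan-based pays heavily for churn.
            total += base + 20 * volatility + 0.55 * team_size
    return total / runs

for team, vol in [(10, 0.2), (10, 0.9), (80, 0.2), (80, 0.9)]:
    a = simulate("agile", team, vol)
    p = simulate("plan", team, vol)
    best = "agile" if a <= p else "plan-based"
    print(f"team={team:3d} volatility={vol}: agile={a:6.1f} plan={p:6.1f} -> {best}")
```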
Paper accepted to ASE'09
Understanding the Value of Software Engineering Technologies
Phillip Green II, Tim Menzies, Steven Williams, Oussama El-Rawas
SEESAW combines AI search tools, a Monte Carlo simulator, and several software process models. We show here that, when selecting technologies for a software project, SEESAW outperforms a variety of other search engines. SEESAW’s recommendations are greatly affected by the business context of its use. For example, the automatic defect reduction tools explored by the ASE community are relevant only to a subset of software projects, and only according to certain value criteria. Therefore, when arguing for the value of a particular technology, that argument should include a description of the value function of the target user community.
Note: ASE has a 17% acceptance rate.
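
For a flavor of the search-plus-simulation pattern (this is not SEESAW; the options, their effects, and the value function below are all hypothetical), here is a tiny hill climber that picks project technologies under a noisy Monte Carlo value function. Note how the "best" choices shift with the business context, which is the paper's point.

```python
# Generic sketch of search plus simulation: a hill climber picks a subset of
# "technologies", scored by a noisy (Monte Carlo) value function.
# This is NOT SEESAW; options, effects, and the value function are invented.
import random

random.seed(7)

OPTIONS = ["peer_review", "defect_tools", "more_testing", "better_docs"]
# Hypothetical per-option (defect reduction, added cost) effects.
EFFECTS = {"peer_review": (0.30, 5), "defect_tools": (0.25, 8),
           "more_testing": (0.20, 6), "better_docs": (0.05, 2)}

def value(choices, defect_weight, runs=1_000):
    """Monte Carlo estimate of project value under one business context.
    defect_weight encodes how much this project cares about defects."""
    total = 0.0
    for _ in range(runs):
        defects = random.gauss(100, 15)
        cost = random.gauss(50, 5)
        for opt in choices:
            reduction, extra = EFFECTS[opt]
            defects *= (1 - reduction)
            cost += extra
        total += -(defect_weight * defects + cost)   # higher is better
    return total / runs

def climb(defect_weight, steps=100):
    """Single noisy climb: flip one option at a time, keep improvements."""
    best = set(random.sample(OPTIONS, 2))
    best_v = value(best, defect_weight)
    for _ in range(steps):
        cand = set(best) ^ {random.choice(OPTIONS)}   # toggle one option
        v = value(cand, defect_weight)
        if v > best_v:
            best, best_v = cand, v
    return sorted(best)

# The recommended technologies shift with the business context, as argued above.
for w in (0.1, 1.0, 5.0):
    print(f"defect_weight={w}: choose {climb(w)}")
```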