Monday, September 2, 2013

Learning Project Management Decisions: A Case Study with Case-Based Reasoning Versus Data Farming

Accepted to TSE

Tim Menzies, Adam Brady, Jacky Keung
Jairus Hihn, Steven Williams, Oussama El-Rawas
Phillip Green, Barry Boehm

Download 485K pdf

Abstract


  • BACKGROUND: Given information on only a few prior projects, how can we learn the best and fewest changes for current projects?
  • AIM: To conduct a case study comparing two ways to recommend project changes. (1) Data farmers use Monte Carlo sampling to survey and summarize the space of possible outcomes. (2) Case-Based Reasoners (CBR) explore the neighborhood around test instances.
  • METHOD: We applied a state-of-the-art data farmer (SEESAW) and a CBR tool (W2) to software project data.
  • RESULTS: CBR with W2 was more effective than SEESAW’s data farming at learning the best and fewest project changes, and its recommendations reduced runtime, effort, and defects. Further, CBR with W2 was comparatively easier to build, maintain, and apply in novel domains, especially on noisy data sets.
  • CONCLUSION: Use CBR tools like W2 when data is scarce or noisy, or when project data cannot be expressed in the form required by a data farmer.
  • FUTURE WORK: This study applied our own CBR tool to several small data sets. Future work could apply other CBR tools and data farmers to other data sets (perhaps to explore other goals such as, say, minimizing maintenance effort).

Introduction

In the age of Big Data and cloud computing, it is tempting to tackle problems using:

  • A data-intensive Google-style collection of gigabytes of data; or, when that data is missing ...
  • A CPU-intensive data farming analysis; i.e., Monte Carlo sampling [1] to survey and summarize the space of possible outcomes (for details on data farming, see §2; a minimal sketch also follows this list).
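
To make data farming concrete, below is a minimal sketch of the idea (not SEESAW itself): Monte Carlo sampling over an invented two-input effort model, followed by a summary of the sampled outcomes. The model, its attribute names, and all constants are illustrative assumptions.

    # A minimal sketch of "data farming": Monte Carlo sampling over a
    # hypothetical project model, then summarizing the sampled outcomes.
    # The model and its inputs are invented (this is not SEESAW or COCOMO).
    import random
    import statistics

    def toy_effort_model(team_experience, process_maturity):
        """A made-up effort model: more experience and maturity, less effort."""
        noise = random.gauss(0, 2)
        return 100 - 20 * team_experience - 10 * process_maturity + noise

    def data_farm(samples=10000):
        """Survey the space of possible inputs and summarize the outcomes."""
        outcomes = []
        for _ in range(samples):
            x = random.uniform(0, 1)   # team_experience, scaled 0..1
            y = random.uniform(0, 1)   # process_maturity, scaled 0..1
            outcomes.append(toy_effort_model(x, y))
        return statistics.mean(outcomes), statistics.stdev(outcomes)

    if __name__ == "__main__":
        mean, spread = data_farm()
        print(f"mean effort = {mean:.1f}, spread = {spread:.1f}")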

For example, consider a software project manager trying to

  • Reduce project defects in the delivered software; and
  • Reduce project development effort.

How can a manager find and assess different ways to address these goals? It may not be possible to answer this question via data-intensive methods. Such data is inherently hard to access. For example, as discussed in §2.2, we may never have access to large amounts of software process data.

As to the CPU-intensive approaches, we have been exploring data farming for a decade [2] and, more recently, cloud computing. Experience shows that CPU-intensive methods may not be appropriate for all kinds of problems and may introduce spurious correlations in certain situations.
In this paper, we document that experience. The experiments of this paper benchmark our previously proposed SEESAW data farming tool against a lightweight case-based reasoner (CBR) called W2.

If we over-analyze scarce data (such as software process data), then we run the risk of drawing conclusions based on insufficient supporting data. Such conclusions will perform poorly on future examples. For example, given only a handful of training points, we might fit a model that blankets the whole input space; but we would be standing on thin ice once we move away from the densest region of the training data (see the sketch below).
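
Here is a minimal sketch of that risk, using invented data: fit a line to five points drawn from a narrow region, then ask it to predict far outside that region. The data and the "true" function are assumptions for illustration only.

    # A minimal sketch of the extrapolation risk: fit a line to a few points
    # clustered in a narrow region, then test it far from that region.
    import random

    random.seed(1)

    def true_f(x):
        """The (unknown) true relationship; invented for this example."""
        return x * x

    # Scarce training data, clustered near x = 0..1
    train = [(x, true_f(x) + random.gauss(0, 0.05))
             for x in [random.uniform(0, 1) for _ in range(5)]]

    # Ordinary least-squares fit of y = a*x + b
    n = len(train)
    sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n

    for x in [0.5, 2.0, 5.0]:   # near, then far from, the training data
        print(f"x={x}: predicted={a*x+b:6.2f}  actual={true_f(x):6.2f}")
    # Predictions degrade badly as x moves away from the dense training region.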



Our experience shows that the SEESAW data farming tool suffers from many “optimization failures”: if some test set is treated with SEESAW’s recommendations, then some aspect of that treated data actually gets worse. In contrast, W2 has far fewer optimization failures. A sketch of how such failures can be checked follows.
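
Below is a minimal sketch of how one might count such optimization failures. The case format, the treatment representation (a set of attribute constraints), and the example data are all assumptions for illustration, not the paper's actual implementation.

    # A minimal sketch of an "optimization failure" check: apply a recommended
    # treatment to held-out test cases and see whether the median score
    # (e.g., effort or defects; lower is better) actually improved.
    from statistics import median

    def treat(cases, treatment):
        """Keep only the test cases consistent with the recommended changes."""
        return [c for c in cases
                if all(c.get(attr) == val for attr, val in treatment.items())]

    def optimization_failure(test_cases, treatment, score="effort"):
        """True if the treated test set scores no better than the untreated one."""
        before = median(c[score] for c in test_cases)
        treated = treat(test_cases, treatment)
        if not treated:                 # treatment matches nothing: a failure
            return True
        after = median(c[score] for c in treated)
        return after >= before

    # Example with made-up data: recommend raising process maturity to "high".
    cases = [{"pmat": "low", "effort": 90}, {"pmat": "high", "effort": 60},
             {"pmat": "high", "effort": 70}, {"pmat": "low", "effort": 95}]
    print(optimization_failure(cases, {"pmat": "high"}))  # False: effort improved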

So this is like an "anti-MOEA" paper: algorithms are great, but if they over-extrapolate the data, they just produce crap.

Based on those experiments, this paper will conclude that when reasoning about changes to software projects:

  1. Use data farming in data-rich domains (e.g., when reasoning about thousands of inspection reports on millions of lines of code [15]), when the data is not noisy, and when the software project data can be expressed in the same form as the model inputs;
  2. Otherwise, use CBR methods such as our W2 tool.

Back story

This paper took four years to complete. In 2009, I was visiting Jacky Keung in Sydney. At that time I was all excited by SEESAW, a model-based data farming tool built on some software process models from USC (COCOMO, etc.). Jacky was an instance-based reasoning guy and, as a what-if, I speculated about how to do something like SEESAW without COCOMO.

A few months later, I was staying in Naples, Florida for a few days and my fingers strayed to a keyboard to try the CBR thing. It took a few hours, but the result was "W" ("W" was short for "the decider", an old joke about the president at the time).

W0 [13] was an initial quick proof-of-concept prototype that performed no better than a traditional simulated annealing (SA) algorithm. W1 [14] improved W0’s ranking scheme with a Bayesian method. With this improvement, W1 performed at least as well as a state-of-the-art model-based method (the SEESAW algorithm discussed below). W2 improved W1’s method for selecting related examples. With that change, W2 now outperforms state-of-the-art model-based methods.

  • [13] A. Brady, T. Menzies, O. El-Rawas, E. Kocaguneli, and J. Keung, “Case-based reasoning for reducing software development effort,” Journal of Software Engineering and Applications, December 2010.
  • [14] A. Brady and T. Menzies, “Case-based reasoning vs parametric models for software quality optimization,” in PROMISE ’10, 2010, pp. 1–10.
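
To illustrate the instance-based core that W0, W1, and W2 build on, here is a minimal, generic case-based reasoning sketch (this is not the actual W2 algorithm, and the attribute names and data are invented): find the past projects most similar to a new project, then report what the low-effort neighbor did.

    # A minimal, generic CBR sketch (not the actual W0/W1/W2 code): find the
    # nearest past projects to a new project and report the low-effort neighbor.
    from math import sqrt

    def distance(a, b, attrs):
        """Euclidean distance over the named numeric attributes."""
        return sqrt(sum((a[k] - b[k]) ** 2 for k in attrs))

    def nearest(cases, target, attrs, k=3):
        """The k past projects most similar to the target project."""
        return sorted(cases, key=lambda c: distance(c, target, attrs))[:k]

    # Past projects (made-up): inputs plus an observed effort.
    past = [{"experience": 0.9, "maturity": 0.8, "effort": 40},
            {"experience": 0.2, "maturity": 0.3, "effort": 95},
            {"experience": 0.7, "maturity": 0.6, "effort": 55},
            {"experience": 0.4, "maturity": 0.9, "effort": 60}]

    new_project = {"experience": 0.5, "maturity": 0.5}
    neighbors = nearest(past, new_project, ["experience", "maturity"])
    best = min(neighbors, key=lambda c: c["effort"])
    print("most similar low-effort neighbor:", best)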

The paper was initially rejected, based on some incorrect reviewer conclusions that stemmed from some poor writing on my part. That delayed the paper by 15 months. So lesson one is: stay with it, you'll get there in the end.

