Friday, June 13, 2014

Thesis Planning

9/18/14- Possible problems with "defaults only"

Here is the NB+RF result I was waiting on. Looks like we now have a convincing argument that tuning works, right?

Let's confirm by checking RF only. Not as nice a picture:
What about NB only?
Now we can see that the low performance in the "defaults only" case merely reflects that RF is a better choice of learner than NB (not a big surprise). The "defaults only" case works just as well as out-of-set tuning.

8/15/14-- Bayes Test Confirms Combination Suspicions, Proposed Experiments

So here are the old results from the combination of three Bayesian classifiers (stupid combination method):
Rank: pD,pF AUC
1: 0.76 defaults only -- current xval
1: 0.75 best on current -- current xval
1: 0.74 best on prev -- current xval
0: 0.63 defaults only -- prev to current full
0: 0.63 best on current -- prev to current full
0: 0.62 best on current -- prev to current xval
0: 0.61 best on prev -- prev to current full
0: 0.61 defaults only -- prev to current xval
0: 0.60 best on prev -- prev to current xval

And here are the new results from the same three classifiers, same params, new linear score-weighted combination method:
Rank: pD,pF AUC
1: 0.83 best on current -- current xval
1: 0.83 best on prev -- current xval
1: 0.83 defaults only -- current xval
0: 0.67 best on current -- prev to current xval
0: 0.67 defaults only -- prev to current full
0: 0.67 best on prev -- prev to current full
0: 0.67 best on prev -- prev to current xval
0: 0.65 best on current -- prev to current full
0: 0.63 defaults only -- prev to current xval

This clearly shows that my old combination method is inferior, and it hints that the choice of combination method may have a sizable effect on performance.

As per our discussion the other day about using the RQs to guide the experiments, here's my current plan:


8/10/14 -- All Learners Results, Flowcharts, Suggested Changes

To start off, here are the results from the same experiment as before, but with all learners:
(Gaussian NB, Bernoulli Bayes, Multinomial Bayes, Random Forest, Logistic Regression)
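(Quick aside: a sketch of how these five map onto scikit-learn objects, instantiated with defaults. This is just my shorthand, not the actual experiment code.)

from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# The five learner types used in this run, each with scikit-learn defaults.
learners = [GaussianNB(), BernoulliNB(), MultinomialNB(),
            RandomForestClassifier(), LogisticRegression()]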

Scott-Knott Rank: pD,pF AUC 
1: 0.84 best on current -- current xval
1: 0.84 best on prev -- current xval
1: 0.81 defaults only -- current xval
0: 0.71 best on current -- prev to current full
0: 0.70 best on prev -- prev to current full
0: 0.70 best on current -- prev to current xval
0: 0.69 best on prev -- prev to current xval
0: 0.64 defaults only -- prev to current full
0: 0.64 defaults only -- prev to current xval


As you can see, the pD and pF here are inferior to the results that we saw for RF or LR, but slightly better than what we saw for NB. This is contrary to what we would expect to see. After reading Thomas et al. on classifier combination, I don't think my combination method is smart enough: by combining the results from multiple learners post hoc, I'm doing the equivalent of unweighted voting. I think the strategy best suited to this is the score-based weighted voting from Thomas et al.

I've also realized that the process I'm using is going to be very difficult to explain in a way that makes sense, so I've started on a couple of flowcharts. I know these don't entirely conform to the actual definitions of the various symbols (using "document" instead of "data" for datasets, etc.), but are these flowcharts, with a few captions, sufficient to convey what's being done in the experiment? Suggestions?

Experiment Overview: Shows how data flows through the two-step experiment for each combination of tuning method and evaluation method. The tuning step and evaluation step will be shown in more detail in the next chart.

Tuning Step: This step can be in one of three modes: defaults only, best on prev, best on current. It accepts learner objects, a previous and a current dataset, and the tuning method, and passes the datasets and a list of "tuned" learner objects (their default params overridden with optimized params) to the next step.
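A minimal sketch of what this step might do for a single learner, under the simplification of picking one best parameter setting instead of a whole non-dominated set (expand_grid() and score() are hypothetical helpers; the mode names are the three listed above):

def tune_learner(learner_cls, param_grid, defaults, mode, prev, curr):
    """Sketch of the tuning step for one learner.
    mode is 'defaults only', 'best on prev', or 'best on current'."""
    if mode == "defaults only":
        return learner_cls(**defaults)
    # pick which dataset the tuner is allowed to peek at
    X, y = prev if mode == "best on prev" else curr
    best, best_score = learner_cls(**defaults), float("-inf")
    for candidate in expand_grid(param_grid):   # hypothetical grid expander
        clf = learner_cls(**candidate)
        s = score(clf, X, y)                    # hypothetical pd/pf-based objective
        if s > best_score:
            best, best_score = clf, s
    return best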

Note: This is where the proposed change to reflect the score-based method discussed in Thomas et al. would go.
Rather than returning a list of non-dominated learners, this section should return a single ensemble learner object which has all the non-dominated learners as constituents. The ensemble learner will also carry two weights for each constituent learner, based on its precision and negative predictive value from the tuning study. The precision weight will be applied to each positive classification from a learner, and the negative predictive value weight will be applied to its negative classifications. After weighting has been applied, the ensemble learner will determine a consensus through voting and report only the consensus classifications. For the "Defaults Only" case, the ensemble learner will contain one of each type of learner, each with a weight of 1, except for the Bayesian learners, which will each receive a weight of 1/3, because three different Bayesian learners are used as opposed to one of each other scheme.
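To make the proposal concrete, here's a minimal sketch of the score-weighted voting ensemble as I'm imagining it (class and method names are mine, not from Thomas et al.; the precision/NPV weights come from predictions on the tuning data, and base_weights is where the 1 vs 1/3 defaults-only weighting would go):

import numpy as np

class WeightedVoteEnsemble:
    """Sketch: weight each constituent's positive votes by its precision and
    its negative votes by its negative predictive value (both measured on the
    tuning data), then take a weighted vote."""

    def __init__(self, learners, base_weights=None):
        self.learners = learners
        self.base_weights = base_weights or [1.0] * len(learners)
        self.pos_w = [1.0] * len(learners)   # precision weights
        self.neg_w = [1.0] * len(learners)   # negative-predictive-value weights

    def fit(self, X_tune, y_tune):
        for i, clf in enumerate(self.learners):
            clf.fit(X_tune, y_tune)
            pred = clf.predict(X_tune)
            tp = np.sum((pred == 1) & (y_tune == 1))
            fp = np.sum((pred == 1) & (y_tune == 0))
            tn = np.sum((pred == 0) & (y_tune == 0))
            fn = np.sum((pred == 0) & (y_tune == 1))
            self.pos_w[i] = tp / (tp + fp) if tp + fp else 0.0  # precision
            self.neg_w[i] = tn / (tn + fn) if tn + fn else 0.0  # NPV
        return self

    def predict(self, X):
        score = np.zeros(len(X))
        for i, clf in enumerate(self.learners):
            pred = clf.predict(X)
            w = self.base_weights[i]
            # a positive vote adds precision-weighted mass,
            # a negative vote subtracts NPV-weighted mass
            score += np.where(pred == 1, w * self.pos_w[i], -w * self.neg_w[i])
        return (score > 0).astype(int)

For the "Defaults Only" case, base_weights would be 1 for RF and LR and 1/3 for each of the three Bayesian learners.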

Evaluation Step: This step evaluates the "tuned" learners which are passed from the previous step in one of three ways: prev to current xval, current xval, or prev to current full set. This step generates a result which constitutes a single point on each of the pD, pF plots.
(I know this is probably a little small in the blog, but opening the image in a new tab should get you the full resolution.)


Result Structure: The "flow" part of "flowchart" doesn't really apply here, but this is the structure of the results generated after the experiment is finished. For each combination of tuning method and evaluation method, there is a list of individual results, one for each dataset.

8/04/14 -- Logistic Regression

Same story, different learner.

Scott-Knott Rank: pD,pF AUC
1: 0.86 defaults only -- current xval
1: 0.85 best on prev -- current xval
1: 0.85 best on current -- current xval
0: 0.73 best on prev -- prev to current xval
0: 0.72 best on current -- prev to current full
0: 0.72 defaults only -- prev to current full
0: 0.72 best on prev -- prev to current full
0: 0.71 defaults only -- prev to current xval
0: 0.71 best on current -- prev to current xval



8/02/14 -- Random Forests

Doing the same thing as before, but with RF instead of NB.

Step 1: Choose one tuning strategy from:

  • Defaults Only- no tuning occurs, only the default parameters are used
  • Best on Prev- parameters are tuned for best performance on previous version's data
  • Best on Current- parameters are tuned by peeking at this version's data (must know current version's class)
Step 2: Choose one evaluation method from:


  • Current Xval- current version split into test/train groups for 5x5 cross-validation
  • Prev to Current Xval- like above, but training on previous version and testing on the current
  • Prev to Current Full- Entire previous set is used for training, entire current set used for testing
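Putting the two steps together, the whole run is just the cross product of the two lists above. A rough sketch (tune() and evaluate() stand in for the real step implementations):

from itertools import product

TUNING = ["defaults only", "best on prev", "best on current"]
EVALUATION = ["current xval", "prev to current xval", "prev to current full"]

def run_all(prev, curr, learner_factory):
    """Sketch of the 3x3 design: every tuning strategy crossed with every
    evaluation method, for one (previous, current) dataset pair."""
    results = {}
    for tuning, evaluation in product(TUNING, EVALUATION):
        learner = tune(learner_factory(), tuning, prev, curr)              # step 1
        results[(tuning, evaluation)] = evaluate(learner, evaluation,
                                                 prev, curr)               # step 2
    return results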

Scikit-Learn's Random Forest was used as the sole learner, and parameter tuning was conducted within the following parameter space:


params = {
    'n_estimators': ['values', 3, 5, 10],
    'criterion': ['values', "gini", "entropy"],
    'max_features': ['values', 'sqrt', 'log2', None],
    'max_depth': ['values', None, 4, 8],
    'min_samples_split': ['values', 4, 8],
    'min_samples_leaf': ['values', 2, 4],
    'bootstrap': ['values', True, False],
}


default_params = {
    'n_estimators': 10,
    'criterion': 'gini',
    'max_features': "sqrt",
    'max_depth': None,
    'min_samples_split': 2,
    'min_samples_leaf': 1,
    'bootstrap': True,
}
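In case the format above is unclear: my reading is that the 'values' string is just a tag in front of each parameter's candidate settings. A rough sketch of expanding it into concrete configurations (expand_grid is my name, not part of the actual tuner):

from itertools import product

def expand_grid(params):
    """Expand {'name': ['values', v1, v2, ...]} into a list of concrete
    parameter dicts, assuming 'values' is only a tag."""
    names = list(params)
    candidates = [params[n][1:] for n in names]   # drop the 'values' tag
    return [dict(zip(names, combo)) for combo in product(*candidates)]

# For the space above, this gives 3*2*3*3*2*2*2 = 432 candidate configurations.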


All nine combinations of tuning strategy and evaluation method were tried on every non-0th-version dataset from the usual group. (Ant, Camel, Ivy, Jedit, Log4J, Lucene, Synapse, Velocity, Xalan, Xerces)

datasets × usable versions = 26
Runtime ≈ 22 hrs

Scott-Knott Rank: pD,pF AUC
1: 0.99 defaults only -- current xval
1: 0.99 best on current -- current xval
1: 0.98 best on prev -- current xval
0: 0.74 defaults only -- prev to current full
0: 0.73 best on current -- prev to current full
0: 0.73 best on prev -- prev to current full
0: 0.72 best on prev -- prev to current xval
0: 0.72 defaults only -- prev to current xval
0: 0.70 best on current -- prev to current xval

Pd/Pf plots with each dot representing an individual dataset version:
Note: colors above != colors below

Looks like we're seeing a slightly more pronounced version of the same effect we saw with naive Bayes. Stay tuned for logistic regression.

Update 7/28/14

Three styles of parameter tuning and three styles of train->test setup were compared. They are defined as follows:

Defaults Only- no tuning occurs, only the default parameters are used
Best on Prev- parameters are tuned for best performance on previous version's data
Best on Current- parameters are tuned by peeking at this version's data (must know current version's class)

Current Xval- current version split into test/train groups for 5x5 cross-validation
Prev to Current Xval- like above, but training on previous version and testing on the current
Prev to Current Full- Entire previous set is used for training, entire current set used for testing
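For clarity, here is roughly what two of the evaluation methods amount to in code (a sketch using current scikit-learn utilities, not the actual experiment code; the pd/pf bookkeeping is omitted):

import numpy as np
from sklearn.model_selection import StratifiedKFold

def current_xval(clf, X, y, repeats=5, folds=5):
    """'Current Xval': 5x5 stratified cross-validation on the current version."""
    out = []
    for r in range(repeats):
        skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=r)
        for train, test in skf.split(X, y):
            clf.fit(X[train], y[train])
            out.append((y[test], clf.predict(X[test])))
    return out

def prev_to_current_full(clf, X_prev, y_prev, X_curr, y_curr):
    """'Prev to Current Full': train on the whole previous version,
    test on the whole current version."""
    clf.fit(X_prev, y_prev)
    return [(y_curr, clf.predict(X_curr))]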

All nine combinations were tried on every non-0th-version dataset from the usual group. (Ant, Camel, Ivy, Jedit, Log4J, Lucene, Synapse, Velocity, Xalan, Xerces)

datasets × usable versions = 26

Pd/Pf plots with each dot representing an individual dataset version:



and if we rank all 9 treatments by pd/pf AUC using Scott Knott:

1: 0.76 defaults only -- current xval
1: 0.75 best on current -- current xval
1: 0.74 best on prev -- current xval
0: 0.63 defaults only -- prev to current full
0: 0.63 best on current -- prev to current full
0: 0.62 best on current -- prev to current xval
0: 0.61 best on prev -- prev to current full
0: 0.61 defaults only -- prev to current xval
0: 0.60 best on prev -- prev to current xval

We find, unsurprisingly, that results look better when both the train and test data come from the same dataset. Other than that, nothing else appears to matter much.
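For reference, the pd/pf numbers behind these plots and rankings come from the usual confusion-matrix definitions; a minimal sketch:

import numpy as np

def pd_pf(actual, predicted):
    """pd (probability of detection, i.e. recall) and pf (probability of
    false alarm) from binary actual/predicted labels."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    tp = np.sum((predicted == 1) & (actual == 1))
    fn = np.sum((predicted == 0) & (actual == 1))
    fp = np.sum((predicted == 1) & (actual == 0))
    tn = np.sum((predicted == 0) & (actual == 0))
    pd = tp / (tp + fn) if tp + fn else 0.0
    pf = fp / (fp + tn) if fp + tn else 0.0
    return pd, pf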

Update 6/16/14

Outline of how I see these topics being presented in reference to the Shepperd, Bowes, Hall and Hall et al. results (to make sure we're on the same page):

  • Parameter Tuning Style
    • parameter tuning practices fall into the researcher group "basket" of concepts and prior knowledge mentioned by Shepperd, Bowes, and Hall
    • Perhaps parameter tuning can explain some of the variance in literature
  • Train -> Test Style
    • Much consideration and discussion of error in results focuses on sound experimental design. One design element often suspect is the style of segregating training and testing data.
    • This was not examined in Shepperd, Bowes, Hall or Hall et al., but if this truly has a large effect on results, perhaps it could explain some of the variance.
  • History Inclusion
    • This doesn't really fit. Perhaps we should drop it?
  • Learn by Cluster
    • What if some authors are only using a subset of data that performs the best?
    • Comparing the performance of clusters should tell us if this is even worth considering
    • (Spoiler alert: It's probably not)
  • Other Things I think may be appropriate:
    • Since ~70% of the studies in meta-studies used NASA or Eclipse datasets, I probably should pick those up as well and make them the primary focus
    • I should probably also include the Matthews Correlation Coefficient for comparison (a quick sketch follows this list), even if we keep pD, pF and pD/pF-AUC as our primary means of comparison.
    • I should probably also include more learners eventually.
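As promised above, the Matthews Correlation Coefficient is just a function of the confusion-matrix counts; a minimal sketch:

import math

def mcc(tp, fp, tn, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0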

Original Post:

Things with which to experiment:
  • Parameter Tuning Style
    • Default parameters only
    • Global best tuning
    • best parameters from x-val on previous versions
    • best parameters from x-val on current versions
  • Train->Test Style
    • current->current (standard x-val or leave-one-out)
    • prev->current full
    • prev->current with subsampling
  • History Included
    • No historical deltas
    • Historical deltas to previous version
    • Historical deltas to all (or n) previous versions
  • Learn by Cluster
    • No
    • Yes
Questions:
  • Shepperd, Hall, Bowes results
    • I've seen authorship in Shepperd's ppt, but not in the Hall, Bowes paper
    • Is there a paper to go with the embarrassing result?
  • Previous->Current :: Train->Test
    • The same as incremental learning or not quite? 
    • More papers?

Wednesday, June 11, 2014

PLAN C: Prune trees

1a) delete data from any leaf containing things from > 1 cluster.
Did this one. See http://unbox.org/things/var/nave/lpj/out/10_June_2014/dats/pruned_tree.dat for the pruned tree and the actual tree.

To check whether the code is working, see: http://unbox.org/things/var/nave/lpj/out/10_June_2014/dats/short_example.dat

1b) descend the trees generated from CART looking for sub-trees whose
items' cluster IDs have HIGHER entropy than the parent's, then
delete all items in those sub-trees-of-confusion

1c) for all sub-trees built by CART, compute the entropy of the leaf
items in that tree. Sort those entropies to find "too much confusion",
e.g. halfway down that list, then delete all sub-trees with MORE than
"too much confusion"

Then, after step 1, rebuild the trees using the reduced datasets.


Apparently scikit-learn decision trees do not maintain samples in their trees, meaning there is no sample data at the leaves or at the nodes. The tree runs through the sample data and gathers values (counts) but does not store the samples themselves. So at a leaf, I can only find stats of the samples, like
[  7.   8.   3.   0.   0.   0.   0.   0.   3.   0.   5.   2.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.]

indicating there are 7 samples of the first cluster, 8 of the second, and so on.

So I cannot actually go back to the dataset, remove those rows, and rebuild the tree.
Instead, I can calculate entropies from the above array at each leaf/node and prune using 1b and 1c.

PS: One more problem: those arrays of values are not maintained at internal nodes! They are only available at the leaves, so I have to traverse down the entire branch beneath a node and add those arrays together to get the above array at that node.
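Here's a rough sketch of that workaround (helper names are mine; clf.tree_ in scikit-learn exposes children_left, children_right, and value, with -1 marking a leaf's children):

import numpy as np

def node_counts(tree, node=0, cache=None):
    """Sketch: recursively sum the per-cluster value arrays of the leaves
    beneath `node`, since the counts aren't kept at internal nodes.
    `tree` is clf.tree_ from a fitted DecisionTreeClassifier."""
    if cache is None:
        cache = {}
    left, right = tree.children_left[node], tree.children_right[node]
    if left == -1:   # leaf: value already holds the per-cluster counts
        cache[node] = tree.value[node].ravel()
    else:
        cache[node] = node_counts(tree, left, cache) + node_counts(tree, right, cache)
    return cache[node]

def entropy(counts):
    """Shannon entropy of a cluster-count array like the one shown above."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def confused_nodes(tree):
    """1b rule as a sketch: flag a subtree whose cluster entropy is HIGHER
    than its parent's."""
    cache = {}
    node_counts(tree, 0, cache)
    bad = []
    for node in range(tree.node_count):
        for child in (tree.children_left[node], tree.children_right[node]):
            if child != -1 and entropy(cache[child]) > entropy(cache[node]):
                bad.append(child)
    return bad

This only covers the 1b rule; 1c would instead sort the subtree entropies and cut everything past the "too much confusion" threshold.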

Monday, June 2, 2014

Results: contrast set learner

55555 rounding
Before rounding:

 Techniques         -effort         -months        -defects          -risks    #
           T0 m              29              67              14               2    #
  T3 C25 N100 m               2              10               3               0    #
      T9:j/j_ m               0              35               0              21    #
-------------------------------------------------------------------------------------
           T0 q              26              16              20               6    #
  T3 C25 N100 q               3               0               5               1    #
      T9:j/j_ q               3              23               4              38    #
-------------------------------------------------------------------------------------
           T0 w             100             100             100              19    #
  T3 C25 N100 w              32              21              49               4    #
      T9:j/j_ w              32             100              56             100    #
-------------------------------------------------------------------------------------
            100         2687.38            41.4        17612.38             8.6    #
              0          117.21            1.93          381.83            0.05    #

After rounding:
Dataset rounded to integers:
 Techniques         -effort         -months        -defects          -risks    #
           T0 m              29              67              14               0    #
  T3 C25 N100 m               3              10              10               1    #
      T9:j/j_ m               0              35               0              22    #
-------------------------------------------------------------------------------------
           T0 q              28              17              24               2    #
  T3 C25 N100 q               3               0              13               2    #
      T9:j/j_ q               3              23               4              38    #
-------------------------------------------------------------------------------------
           T0 w             100              99             100              15    #
  T3 C25 N100 w              26              19              78               4    #
      T9:j/j_ w              32             100              59             100    #
-------------------------------------------------------------------------------------
            100          2695.0            41.4        16771.11             8.6    #
              0          117.21            1.85          381.83             0.0    # 
 
7777 Trees of all
 
http://unbox.org/things/var/nave/lpj/out/02_June_2014/dats/tree*
 
88888 Results for all models after rounding (makes sense):
 
flight:

     Techniques         -effort         -months        -defects          -risks    #
           T0 m              27              67              12               4    #
  T3 C25 N100 m               0              10               0               0    #
    T9:j/jall m               4              46               1              16    #
-------------------------------------------------------------------------------------
           T0 q              24              16              18              10    #
  T3 C25 N100 q               0               0               2               1    #
    T9:j/jall q              10              29               6              43    #
-------------------------------------------------------------------------------------
           T0 w             100             100             100              35    #
  T3 C25 N100 w              31              21              47               7    #
    T9:j/jall w              32              97              35             100    #
-------------------------------------------------------------------------------------
            100         2687.38           41.35        17612.38             4.8    #
              0          172.58            1.92          892.87            0.04    #
 
ground:
 
Techniques         -effort         -months        -defects          -risks    #
           T0 m              15              55               8              10    #
  T3 C25 N100 m               2              12               6               0    #
 T9:j/jground m               4              42               0               9    #
-------------------------------------------------------------------------------------
           T0 q              12              13              13              24    #
  T3 C25 N100 q               0               0               6               0    #
 T9:j/jground q              15              47               2              15    #
-------------------------------------------------------------------------------------
           T0 w              68              84             100              54    #
  T3 C25 N100 w              26              21              44               4    #
 T9:j/jground w             100             100              98             100    #
-------------------------------------------------------------------------------------
            100          2819.5            43.9        27175.92             4.8    #
              0          155.25            1.55          806.33            0.12    # 
 
 
osp:
 
Techniques         -effort         -months        -defects          -risks    #
           T0 m              30              54              38              37    #
  T3 C25 N100 m               0               8               2               0    #
    T9:j/josp m               4              47               0              11    #
-------------------------------------------------------------------------------------
           T0 q               9               5              21              29    #
  T3 C25 N100 q               0               0               4               0    #
    T9:j/josp q               4              30              10              30    #
-------------------------------------------------------------------------------------
           T0 w              54              67             100              64    #
  T3 C25 N100 w              13              16              55              10    #
    T9:j/josp w             100             100              41             100    #
-------------------------------------------------------------------------------------
            100          3063.6            41.7        14504.17             7.5    #
              0           89.69            1.47          722.41            0.26    #
 
 
 
osp2:
 
      Techniques         -effort         -months        -defects          -risks    #
           T0 m              49              67              11               5    #
  T3 C25 N100 m               1              12               1               0    #
   T9:j/josp2 m               2              44               0               9    #
-------------------------------------------------------------------------------------
           T0 q               9               5               2               7    #
  T3 C25 N100 q               0               0               4               2    #
   T9:j/josp2 q              16              21              18              28    #
-------------------------------------------------------------------------------------
           T0 w              90              82              34              15    #
  T3 C25 N100 w              44              23              93              17    #
   T9:j/josp2 w             100             100             100             100    #
-------------------------------------------------------------------------------------
            100          1198.0            33.4          7610.0             5.1    #
              0          112.45            1.57           637.4            0.21    #
 
all:

Techniques         -effort         -months        -defects          -risks    #
           T0 m              20              54              11              21    #
  T3 C25 N100 m               0              12               5               0    #
    T9:j/jall m               2              42               0              14    #
-------------------------------------------------------------------------------------
           T0 q              20              18              19              35    #
  T3 C25 N100 q               0               0               7               2    #
    T9:j/jall q               8              26               4              40    #
-------------------------------------------------------------------------------------
           T0 w             100             100             100             100    #
  T3 C25 N100 w              42              24              64              10    #
    T9:j/jall w              31              89              27              94    #
-------------------------------------------------------------------------------------
            100          2645.2           44.82        22300.72            5.08    #
              0          225.33            2.24         1028.96            0.12    #
 
99999 Cohen <=> Bootstrap
 
Bootstrap:
 
 Techniques         -effort         -months        -defects          -risks    #
           T0 m              27              67              12               4    #
  T3 C25 N100 m               0              10               0               0    #
    T9:j/jall m               4              46               1              16    #
-------------------------------------------------------------------------------------
           T0 q              24              16              18              10    #
  T3 C25 N100 q               0               0               2               1    #
    T9:j/jall q              10              29               6              43    #
-------------------------------------------------------------------------------------
           T0 w             100             100             100              35    #
  T3 C25 N100 w              31              21              47               7    #
    T9:j/jall w              32              97              35             100    #
-------------------------------------------------------------------------------------
            100         2687.38           41.35        17612.38             4.8    #
              0          172.58            1.92          892.87            0.04    #
 
Cohen:
 
  Techniques         -effort         -months        -defects          -risks    #
           T0 m              27              67              11               4    #
  T3 C25 N100 m               0              10               0               0    #
    T9:j/jall m               3              46               0              16    #
-------------------------------------------------------------------------------------
           T0 q              24              16              18              10    #
  T3 C25 N100 q               1               0               2               1    #
    T9:j/jall q               9              28               5              43    #
-------------------------------------------------------------------------------------
           T0 w             100             100             100              35    #
  T3 C25 N100 w              27              21              51               8    #
    T9:j/jall w              32              97              34             100    #
-------------------------------------------------------------------------------------
            100         2687.38           41.35        17612.38             4.8    #
              0          185.14            2.08          989.83            0.04    #
 
 
Runtimes:
 
Bootstrap:
real 1m51.292s
user 1m44.647s
sys 0m6.395s 
 
Cohen:
real 0m56.032s
user 0m52.087s
sys 0m3.737s
 

Summary

-555-- Rounding the dataset into integers makes more sense, as it would be similar to the dataset we receive. Results indicate minor changes.

-777-- The trees, as indicated, are very big. Working on axe to get them down.

-888-- Results for the other models are similar to flight, if a little better.

-999-- Swapping between Cohen's and bootstrap gave little change in output, but the Cohen version runs in roughly half the time.