- Behavior of different pre-processor & algorithm combinations becomes more similar as the instance size gets smaller (e.g., the cocomo81s and desharnaisL3 figures).
- Some algorithms have considerably lower loss values, but there is no single best algorithm across all datasets.
- The reduced-variance datasets are produced by the GAC tree: only instances in nodes whose variance is at most ten percent of the maximum node variance are kept.
- When reduction is applied, 3 datasets shrink to only 2 instances: kemerer, nasa-center1, and telecom1.
- Since reduction makes the datasets smaller, their results become more similar: the plots look alike, and the cases where all algorithms get zero losses increase.
- The graphs for these experiments can be found at http://unbox.org/wisp/var/ekrem/resultsVariance/Results/NORMAL-DATA RESULTS.zip and http://unbox.org/wisp/var/ekrem/resultsVariance/Results/REDUCED-DATA RESULTS.zip. Related plots are at http://unbox.org/wisp/var/ekrem/resultsVariance/Results/resultsPlotterTexFiles/plots.pdf
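The variance-based reduction rule above can be sketched as follows. This is a minimal illustration, not the actual implementation: the `GACNode` class and its fields are hypothetical stand-ins for whatever the real GAC tree produces.

```python
# Hedged sketch of the reduction rule: keep only instances from GAC-tree
# nodes whose variance is at most 10% of the maximum node variance.
# GACNode and its fields are hypothetical; the real GAC tree is not shown here.
from dataclasses import dataclass, field

@dataclass
class GACNode:
    variance: float                       # variance of effort values in this node
    instances: list = field(default_factory=list)

def reduce_by_variance(nodes, ratio=0.10):
    """Return the instances of all nodes with variance <= ratio * max variance."""
    max_var = max(n.variance for n in nodes)
    kept = []
    for n in nodes:
        if n.variance <= ratio * max_var:
            kept.extend(n.instances)
    return kept
```

With an aggressive threshold like 10%, very few nodes survive, which is consistent with some datasets shrinking to only 2 instances.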
Some more results regarding GAC-simulated datasets:
- Populated datasets attain very high MdMRE and Pred(25) values.
- There is a clearer pattern as to which algorithm performs best.
- A better check of GAC-simulation would be simulation and prediction with leave-one-out cross-validation.
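The leave-one-out check with the two reported performance measures could look like the sketch below. The `predict` callback is a placeholder for any estimation algorithm; only the evaluation loop and the standard definitions of MdMRE (median magnitude of relative error, lower is better) and Pred(25) (fraction of predictions within 25% of the actual, higher is better) are assumed here.

```python
# Hedged sketch: leave-one-out evaluation summarized by MdMRE and Pred(25).
import statistics

def mre(actual, predicted):
    """Magnitude of relative error for a single prediction."""
    return abs(actual - predicted) / actual

def loo_eval(data, predict):
    """data: list of (features, actual_effort) pairs.
    predict(train, features) -> estimate, using any estimation method.
    Returns (MdMRE, Pred(25))."""
    mres = []
    for i, (features, actual) in enumerate(data):
        train = data[:i] + data[i + 1:]          # leave instance i out
        mres.append(mre(actual, predict(train, features)))
    mdmre = statistics.median(mres)
    pred25 = sum(m <= 0.25 for m in mres) / len(mres)
    return mdmre, pred25
```

For a simulated dataset, the same loop would predict each simulated instance from the remaining ones, so a simulation that preserves the original structure should yield MdMRE and Pred(25) values close to those of the source data.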