ai @ wvu
Modeling Intelligence Lab ("the MILL")
Wednesday, June 2, 2010
Random Pre-Processor + Algorithm Results for Normal and Reduced Datasets
The results of the experiments:
The behavior of different pre-processor & algorithm combinations becomes more similar as the instance count gets smaller (see, e.g., the cocomo81s and desharnaisL3 figures).
Some algorithms attain considerably lower loss values, but no single algorithm is best across all datasets.
The reduced-variance datasets are produced by the GAC tree: only instances in nodes whose variance is at most ten percent of the maximum variance are kept.
When reduction is applied, 3 datasets shrink to only 2 instances: kemerer, nasa-center1, and telecom1.
Since reduction makes the datasets smaller, their results become more similar: the plots look alike, and the cases where all algorithms attain zero loss increase.
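The variance-based filter described above can be sketched as follows. This is only an illustration, assuming the GAC tree's leaf nodes have already been collected into per-node instance arrays (the tree-building step itself is omitted, and the helper name `reduce_by_variance` is hypothetical); the effort value is taken to be the last column.

```python
import numpy as np

def reduce_by_variance(clusters, max_var_fraction=0.10):
    """Keep only instances from GAC-tree nodes whose effort variance is
    at most a fraction of the maximum node variance (10% in the post).
    `clusters` is a list of 2-D arrays, one per tree node."""
    # Variance of the effort column (assumed to be the last column) per node.
    variances = [np.var(c[:, -1]) for c in clusters]
    threshold = max_var_fraction * max(variances)
    # Retain whole nodes that fall under the variance threshold.
    kept = [c for c, v in zip(clusters, variances) if v <= threshold]
    return np.vstack(kept) if kept else np.empty((0, clusters[0].shape[1]))
```

On datasets with one dominant high-variance node, this filter can discard almost everything, which is consistent with kemerer, nasa-center1, and telecom1 collapsing to 2 instances.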
The graphs for these experiments can be found at:
Related plots are at
Some more results regarding GAC-simulated datasets:
Populated datasets attain very high MdMRE and Pred(25) values.
The best-performing algorithm follows a clearer pattern.
A better check of GAC simulation would be simulation & prediction with leave-one-out evaluation.
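The leave-one-out check proposed above, together with the MdMRE and Pred(25) measures reported for the simulated datasets, can be sketched like this. The estimator passed in as `predict` is a placeholder for whatever method is being evaluated (e.g. an analogy-based predictor); all function names here are illustrative, not the lab's actual code.

```python
import numpy as np

def mdmre(actual, predicted):
    """Median magnitude of relative error: median(|actual - predicted| / actual).
    Lower is better."""
    mre = np.abs(actual - predicted) / actual
    return np.median(mre)

def pred25(actual, predicted):
    """Pred(25): fraction of predictions within 25% of the actual value.
    Higher is better."""
    mre = np.abs(actual - predicted) / actual
    return np.mean(mre <= 0.25)

def leave_one_out(X, y, predict):
    """Predict each instance from the remaining n-1 instances.
    `predict(X_train, y_train, x_test)` is any single-instance estimator."""
    preds = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        preds.append(predict(X[mask], y[mask], X[i]))
    return np.array(preds)
```

Running `leave_one_out` on a simulated dataset and scoring the predictions with `mdmre` and `pred25` would test whether GAC simulation preserves predictive structure, not just marginal statistics.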