I tried two pruning methods:
1) Based on the ratio of (max allowed rules / current rule count), I randomly decide whether to keep each point.
2) I sort by the first dimension, then take every nth item in order until I'm down to 100 points. (If n isn't an integer, I select each item where the accumulated step crosses the next integer value.) A rough sketch of both methods follows below.
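Here's a minimal Python sketch of what I mean by the two methods. This is not the actual code from my run; it assumes each point is just a tuple of objective values and that `max_rules` is the cap (100 in my case).

```python
import random

def random_prune(points, max_rules):
    """Method 1 (sketch): keep each point independently with probability
    max_rules / current count. The kept count is only correct on average."""
    keep_prob = max_rules / len(points)
    return [p for p in points if random.random() < keep_prob]

def stride_prune(points, max_rules):
    """Method 2 (sketch): sort by the first dimension, then step through at a
    (possibly fractional) stride, keeping the item each time the step
    crosses the next integer index."""
    ordered = sorted(points, key=lambda p: p[0])
    if len(ordered) <= max_rules:
        return ordered
    stride = len(ordered) / max_rules
    return [ordered[int(k * stride)] for k in range(max_rules)]
```

The difference in outcomes makes sense: the random version can drop several neighboring points by chance, which is exactly what opens up gaps in the frontier, while the stride version keeps the survivors roughly evenly spaced along the first objective.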
Using algorithm one, I saw a fair amount of performance loss, as gaps began to appear in the frontier.
However, with the second algorithm I saw essentially the same performance as the run where I didn't crowd-prune the rules at all, and the results were generated in minutes rather than a few hours.
For comparison, here's the Fonseca curve at generation 9 without rule pruning.