Tuesday, August 28, 2012

New TSE Experiment Results

Privacy and Utility results

pom2 & fun in games

pom2 graph: http://i.imgur.com/ffATu.png

Plans for fun games:
 - Prepare cognitive tests: e.g., which dungeon parameters lead to the highest user play time?
 - Collect data: the game dumps user data to a server
   - dungeon parameters used, total play time, explore ratios, speed of movement, items collected, etc.
 - Analyze data: keys2?
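
As a strawman for the "collect data" step, here is a minimal Python sketch of what one per-session dump and its derived metrics might look like. All field names (play_time_s, cells_visited, etc.) are hypothetical placeholders, not the game's real schema.

```python
# Hypothetical per-session record for the dungeon game; the fields and
# derived metrics below are illustrative assumptions, not the real dump format.
from dataclasses import dataclass

@dataclass
class SessionDump:
    dungeon_seed: int        # dungeon parameters used (here, just a seed)
    play_time_s: float       # total play time, seconds
    cells_visited: int       # unique cells the player entered
    cells_total: int         # cells in the generated dungeon
    distance_moved: float    # path length, in cells

    @property
    def explore_ratio(self) -> float:
        """Fraction of the dungeon the player actually visited."""
        return self.cells_visited / self.cells_total

    @property
    def speed(self) -> float:
        """Average movement speed, cells per second."""
        return self.distance_moved / self.play_time_s

s = SessionDump(dungeon_seed=42, play_time_s=120.0,
                cells_visited=300, cells_total=400, distance_moved=600.0)
print(s.explore_ratio, s.speed)  # 0.75 5.0
```

Keeping the raw counts in the dump (rather than only the ratios) lets the analysis step recompute any metric later.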

Tuesday, August 21, 2012

Maze Game & Turing Test of AI

AiMazed2D Project (Java Jar download below):

Tasks this week:
 1. Video record humans playing the game
 2. Video record AI playing the game
 3. Devise/research a way to score the results of a formal Turing test across the sample space gathered from (1) and (2)

Tasks later down road:
 1. Improve the score on the Turing test (study differences between the human and AI videos above)
 * Most prominently: humans have decision delays that must be imitated by the AI
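
One cheap way to imitate those decision delays: before each move, have the AI pause for a reaction time drawn from a right-skewed distribution such as a log-normal. A minimal sketch, with placeholder parameters that would in practice be fitted to the recorded human videos:

```python
# Sketch: human-like pauses via a log-normal distribution.
# mu and sigma are illustrative defaults, NOT values fitted to real data.
import random

def decision_delay(rng: random.Random,
                   mu: float = -1.0, sigma: float = 0.5) -> float:
    """Sample one decision pause in seconds (always positive, right-skewed)."""
    return rng.lognormvariate(mu, sigma)

rng = random.Random(0)
delays = [decision_delay(rng) for _ in range(1000)]
print(min(delays), sum(delays) / len(delays))
```

The AI's main loop would sleep for `decision_delay(...)` seconds before committing each move; the distribution's parameters (and any dependence on decision difficulty) are exactly what the human/AI video comparison should inform.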

Aptamer Predictions

Concentration Prediction Sample for Bromacil

Monday, August 20, 2012

Testing stability of FSS on NEMS dataset

NSF Workshop: Planning Future Directions in Artificial Intelligence and Software Engineering (AISE'12)

Tim Menzies, Rachel Harrison, Sol Greenspan

NSF is sponsoring a one-day workshop to consider how ideas and technologies from Artificial Intelligence (AI) can help achieve the goals of Software Engineering (SE). The purpose of the workshop is to gather researchers from both communities who have a common interest in leveraging AI research to advance SE. The converse -- improving SE for AI applications and systems -- is also in scope. The objective is to assemble a meeting of researchers from both communities to formulate a fruitful research agenda. After the workshop, the organizers will invite some attendees to co-author a report entitled Future Directions in Software Engineering and Artificial Intelligence Research.

Participation is by invitation only. Prospective participants should submit a research vision statement written from one or more of the following perspectives:
  • Improving SE through AI -- including but not limited to knowledge acquisition / representation / reasoning, agents, machine learning, machine-human interaction, planning and search, natural language understanding, problem solving and decision-making, understanding and automation of human cognitive tasks, AI programming languages, reasoning about uncertainty, new logics, statistical reasoning, etc. 
  • Applying AI to SE activities -- including but not limited to requirements, design, specification, traceability, program understanding, model-driven development, testing and quality assurance, domain-specific software engineering, adaptive systems, software evolution, etc. 
  • SE for AI -- including but not limited to AI programming languages, program derivation techniques in AI domains, platforms and programmability, software architectures, rapid prototyping and scripting for AI techniques, software engineering infrastructure for reflective and self-sustaining systems, etc.

On the Value of User Preferences in Search-Based Software Engineering: A Case Study in Software Product Lines

Abdel Salam Sayyad, Tim Menzies, Hany Ammar

Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature maps) using various search-based software engineering methods. As we increase the number of optimization objectives, we find that methods in widespread use (e.g. NSGA-II, SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best since it makes the most use of user preference knowledge. Hence it does better on the standard measures (hypervolume and spread), but it also generates far more products with 0% violations of domain constraints. Our conclusion is that we need to change our methods for search-based software engineering, particularly when studying complex decision spaces.
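
For readers unfamiliar with the hypervolume measure used above, here is a minimal sketch for the two-objective minimization case: the indicator is the area dominated by the front and bounded by a reference point. This is an illustration of the measure only; IBEA, NSGA-II, and SPEA2 themselves are not shown.

```python
# Hypervolume for a 2-objective minimization front: sweep points in
# ascending f1 and accumulate the rectangle each non-dominated point adds.
def hypervolume_2d(front, ref):
    """front: list of (f1, f2) points (smaller is better);
    ref: reference point that bounds the dominated region."""
    pts = sorted(front)                      # ascending in f1
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                     # skip dominated points
            area += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))  # 6.0
```

A larger hypervolume means the front pushes further into the good region of objective space, which is why it serves as a standard comparison measure across optimizers.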


Better Cross Company Defect Prediction

Fayola Peters, Tim Menzies, Andrian Marcus

Abstract— How can we find data for quality prediction? Early in the lifecycle, projects may lack the data needed to build such predictors. Prior work assumed that relevant training data was found nearest to the local project. But is this the best approach? This paper introduces the Peters filter, which is based on the following conjecture: when local project data is scarce, there is more information in other projects than locally. Accordingly, this filter selects training data via the structure of the other projects. We tested the Peters filter on 21 small data sets, looking for training data in 35 larger data sets. In the majority of cases (67%), the Peters filter builds much better defect predictors than the current state-of-the-art methods. Hence, we recommend the Peters filter for cross-company learning.
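
A rough sketch of the conjecture behind that filter, reconstructed only from the abstract (not the paper's exact algorithm): instead of local rows selecting their nearest cross-company rows, each cross-company row "picks" the local row it sits closest to, and per local row only the closest such candidate is kept.

```python
# Illustrative sketch of "selection driven by the other projects' structure";
# an assumption-based reading of the abstract, not the published algorithm.
import math

def peters_filter(cross_rows, local_rows):
    """Each cross-company row attaches to its nearest local row; for each
    local row, keep only the single closest attached cross-company row."""
    best = {}                                   # local index -> (dist, cross row)
    for c in cross_rows:
        i = min(range(len(local_rows)),
                key=lambda j: math.dist(c, local_rows[j]))
        d = math.dist(c, local_rows[i])
        if i not in best or d < best[i][0]:
            best[i] = (d, c)
    return [c for _, c in best.values()]

local = [(0.0, 0.0), (10.0, 10.0)]
cross = [(1.0, 1.0), (2.0, 2.0), (9.0, 9.0), (50.0, 50.0)]
print(peters_filter(cross, local))  # [(1.0, 1.0), (9.0, 9.0)]
```

Note how the outlier (50.0, 50.0) is discarded even though it attaches to a local row, because a closer candidate already claimed that row.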


Thursday, August 9, 2012

Bromacil Cumulative Percentage by round

Atrazine Cumulative Percentage by round

Hmm, can you predict the cluster of the target? This became much simpler once the nearest-neighbor heuristic was applied.
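
The nearest-neighbor heuristic here reduces to 1-NN classification: predict the target's cluster as the cluster of its single nearest labeled point. A minimal sketch with made-up points:

```python
# 1-NN cluster prediction: the target inherits the label of its
# nearest labeled point. Data below is illustrative only.
import math

def predict_cluster(target, points, labels):
    """points: list of coordinate tuples; labels[i] is the cluster of points[i]."""
    i = min(range(len(points)), key=lambda j: math.dist(target, points[j]))
    return labels[i]

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
labs = ["A", "A", "B", "B"]
print(predict_cluster((4.8, 5.1), pts, labs))  # B
```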