Myersna-project

From Earlham CS Department
My project focuses on AI and learning algorithms, particularly game-playing algorithms.
== Project ==
  
My project is inspired by one of my favorite pastimes: PC strategy games.  I am ultimately attempting to research ways in which AIs can change and adapt their strategies based upon the play history of a particular opponent, hopefully a human opponent.  To accomplish this, I am researching and implementing a subset of machine learning known as genetic algorithms.
  
The test game that I'm working with is, unfortunately, not a full-fledged strategy game, but it is nonetheless interesting.  While I do not know the name of the game, the rules are as follows:
  
* Two players each take an identical set of cards, each of which has its own value. (Originally, all of the cards from a given suit of a standard deck.)
* A third identical set of cards is shuffled and then played one card at a time for the players to bid on, using the cards in their hands.
* Each player chooses a bid and submits it face down simultaneously.  The higher bid wins the flop, and its point value is added to that player's score.
* In the event of a tie, nobody gets the flop.
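To make the rules concrete, here is a minimal sketch of the game in Python.  The function names, the policy signature, and the choice of 13 cards (one suit) are my own assumptions for illustration: a "policy" is just a function that looks at its hand and the current flop and returns a card to bid.

```python
import random

def play_game(policy_a, policy_b, n_cards=13):
    """Play one full game between two bidding policies.

    Each policy is a function (hand, flop) -> card bid from that hand.
    Cards are valued 1..n_cards (originally one suit of a standard deck).
    """
    hand_a = list(range(1, n_cards + 1))
    hand_b = list(range(1, n_cards + 1))
    stack = list(range(1, n_cards + 1))
    random.shuffle(stack)

    score_a = score_b = 0
    for flop in stack:
        bid_a = policy_a(list(hand_a), flop)  # pass copies so policies
        bid_b = policy_b(list(hand_b), flop)  # can't mutate the hands
        hand_a.remove(bid_a)
        hand_b.remove(bid_b)
        if bid_a > bid_b:          # higher bid wins the flop
            score_a += flop
        elif bid_b > bid_a:
            score_b += flop
        # on a tie, nobody gets the flop
    return score_a, score_b

# One obvious (and nondescript) policy: bid the card matching the flop.
def match_flop(hand, flop):
    return flop if flop in hand else min(hand)
```

Two match_flop players tie on every flop, so both finish with zero points; that degenerate case makes a handy sanity check for the simulator.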
  
== Species Layout ==
  
The current species layout represents every game state (a combination of the cards remaining in the stack, including the current flop, and the cards left in both my hand and my opponent's) and maps it to a table containing the flop card and which card to play.  This way there are two dimensions along which cross-breeding is possible: the X and Y axes.  I'll post an example once I get more of this down.
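As a rough sketch of what that two-dimensional cross-breeding could look like (the table shape and the single random cut point are my own assumptions, not the final layout): if each genome is an N×N table whose cell gives the card to bid, then a child can inherit whole rows from each parent (Y axis) or whole columns from each parent (X axis).

```python
import random

N = 13  # cards per suit

def random_genome():
    """An N x N table: rows index the flop card, columns a game-state
    bucket; each cell holds the card to bid in that situation."""
    return [[random.randint(1, N) for _ in range(N)] for _ in range(N)]

def crossover_y(mom, dad):
    """Cross along the Y axis: the child takes whole rows from each parent."""
    cut = random.randint(1, N - 1)
    return [row[:] for row in mom[:cut]] + [row[:] for row in dad[cut:]]

def crossover_x(mom, dad):
    """Cross along the X axis: the child takes whole columns from each parent."""
    cut = random.randint(1, N - 1)
    return [m[:cut] + d[cut:] for m, d in zip(mom, dad)]
```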
== Fitness Function ==
This is the crux of the whole algorithm.  In a perfect world, the fitness function would test the policy representation of a player; however, if I could do that, then I suspect I would already be done.  So for now I'll be using a fairly nondescript and simple policy to test the algorithm against.
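A minimal sketch of that idea, assuming some game simulator `play_game(candidate, baseline) -> (candidate_score, baseline_score)` exists elsewhere (the `toy_game` shown here is only a stand-in to show the expected shape): fitness is the average point margin against the simple baseline policy over a batch of games.

```python
def fitness(candidate, baseline, play_game, games=50):
    """Average point margin of `candidate` over `baseline`; higher is fitter."""
    total = 0
    for _ in range(games):
        mine, theirs = play_game(candidate, baseline)
        total += mine - theirs
    return total / games

# Toy stand-in for the real simulator, just to show the interface:
def toy_game(candidate, baseline):
    return (10, 4)
```

Averaging over many games against a shuffled stack would smooth out the noise in a single game's outcome; later the baseline could be swapped for a model of the opponent's recorded play history.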
== Resources ==
* [[myersna-log|Daily Logs]]
* [http://www.cs.earlham.edu/~myersna/bib.html Current Bibliography]
* [http://www.cs.earlham.edu/~myersna/AI.ps AI Survey Paper]
* Coming Soon: Genetics Paper
  
 
That is all for now
 

Latest revision as of 18:52, 2 December 2007
