Myersna-log

From Earlham CS Department
Revision as of 02:40, 14 November 2007

These are my (semi-)daily logs for my project. You can find my project summary page [[myersna-project|here]].


== November 13 ==

I've written up a crude Gantt chart stating what I need to do. First, I need to get this minmax working correctly, with output, on the double; I've probably been spending too much time on other projects. At the same time, I need to look more into data formats that are compatible with genetic algorithms.
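On the data-format question, one representation commonly used with genetic algorithms is a fixed-length genome, e.g. a flat list of floats weighting evaluation features. A minimal sketch; the genome length, operators, and parameters here are all illustrative assumptions, not anything from the project:

```python
import random

random.seed(1)

# Hypothetical format: a genome is a fixed-length list of float
# weights, one per evaluation feature (material, mobility, ...).
GENOME_LENGTH = 4

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def crossover(a, b):
    """Single-point crossover: a prefix of `a` plus a suffix of `b`."""
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

def mutate(genome, rate=0.1):
    """Perturb each weight with probability `rate`."""
    return [w + random.gauss(0, 0.2) if random.random() < rate else w
            for w in genome]

parent_a, parent_b = random_genome(), random_genome()
child = mutate(crossover(parent_a, parent_b))
print(len(child))  # fixed length is what keeps crossover well-defined
```

The appeal of this format is that the standard operators stay trivial: crossover and mutation never have to repair a variable-sized structure.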

== November 7 ==

* Got an implementation of minmax working, except that it's not minmax. Going to change the structure to reflect minmax.
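For reference while restructuring, the core of plain minimax is just an alternating max/min recursion over the game tree. A minimal sketch; the `Node` class and the example tree are made up for illustration and are not the project's actual code:

```python
# Minimal minimax sketch over a generic game tree.
# `Node` is a hypothetical stand-in: leaves carry a static score,
# interior nodes carry a list of child nodes.

class Node:
    def __init__(self, score=None, children=None):
        self.score = score              # static evaluation at a leaf
        self.children = children or []

def minimax(node, maximizing):
    """Best achievable score from `node`, with the two players
    alternating between maximizing and minimizing."""
    if not node.children:               # leaf: return its static score
        return node.score
    scores = [minimax(child, not maximizing) for child in node.children]
    return max(scores) if maximizing else min(scores)

# Tiny two-ply example: the maximizer picks the branch whose
# minimizing reply is least bad.
tree = Node(children=[
    Node(children=[Node(score=3), Node(score=5)]),   # min replies 3
    Node(children=[Node(score=2), Node(score=9)]),   # min replies 2
])
print(minimax(tree, maximizing=True))  # -> 3
```

The usual bug that makes an implementation "not minmax" is applying `max` at every level instead of flipping the `maximizing` flag on each recursive call.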

== November 3 ==

* Continued work on Minmax
* Created my project page

== October 31 ==

Switched from a text file to a wiki format.

== October 30 ==

Continued work on Minmax

== October 29 ==

Began work coding Minmax; basic framework laid out.

Began outline of the paper.

== October 28 ==

Looked a bit at the other students' logs; mine looks really bad in comparison... I need to get this to wrap properly.

== October 27 ==

Continued reading new research; comparatively, I'm finding the chess-based readings a little dry.

== October 26 ==

Read up on some research conducted yesterday. Found an interesting article regarding believable AI in games (vs. actually intelligent AI); it talked about giving AI human-like limitations. It got me thinking about whether it would be possible to train a learning AI on a game merely by giving it large amounts of replay data from games and letting it work out strategies and tactics by matching already-seen environments. This is counter to letting the AI actually play out the game. Perhaps I'm just talking about training data and not realizing it.

== October 25 ==

Did some additional research that is more focused on what I want to do: finding research on AI and machine learning as applied to strategy games. Most of it pertains to chess, in a non-brute-force way.