From Earlham CS Department

These are my (semi-)daily logs for my project. You can find my project summary page here.

December 11

After thinking about it a bit, I've realized that the time-complexity analysis in my paper was way off, but I think I've come up with a way to reduce the complexity from (N!)^3 down to linear time, or N^2 in the case of playing against peers instead of a static policy. This is done by computing the correct index algorithmically instead of searching for it, and by using memory copies (memcpy) instead of element-by-element iteration (which I suspect the optimizer does for me anyway).

December 8

Senior thesis completed; now all that remains is to clean up my code and make it presentable, and prepare for the presentation. I should probably look over my slides again and make sure they're at least somewhat accessible to non-CS students. Or maybe that would be a bad idea; it's not like I have tons of time to explain everything in detail.

December 6

I've decided that a feature freeze on my code will probably be beneficial from here on out. All remaining time will be spent cleaning up the code and making it presentable. Also, as a note to any future CS majors: listen to Charlie when he tells you to write the paper over a long period of time.

December 5

Began serious work on the paper, and continued adding the features listed previously to the code. I also discovered that, through an oversight, I was not applying mutations in my simulations, which may explain why earlier tests saw more complex (higher card count) simulations fail to achieve especially good fitness values.

December 3

Met with Jim again; we decided that I should not pursue threads any further, and should instead focus on adding more functionality to my code:

  • Modifications were made to both the cross-breeding and mutation functions to simplify them and fix bugs.
  • Capabilities for saving and loading species.
  • The ability to play interactively against a species.
  • The second iteration of the fitness function was discussed, shifting it from competing against a static policy to having each species compete against its peers.
    • While this means a species will not evolve against one specific opponent, it should make it an overall much better player.
  • The ability to save a log of the overview for each generation of the evolution.

Hopefully I'll be able to get them all implemented.

December 2

After much debugging, my single-threaded implementation of genetic learning in the card game appears to work. It's still pretty ugly: you have to recompile it to give it new parameters, and the code is a little hard to parse, so I'll need to go back later and spruce it up a bit. While I've started work on a multithreaded implementation, I'm questioning how practical it is to spend time on threading instead of more productive work.

December 1

I've finished coding up my genetic algorithm for playing the card game. It now compiles, but doesn't execute. I'm debating how feasible it is to get it fully functional as the paper due date looms closer, so I have at least partially switched focus to the paper and worked on an outline. I'm also debating whether to include my minmax implementation in the paper; I did spend a good chunk of time working on it, but it's pretty tangential to my actual project. As of right now I'm including it.

November 29

I've spent most of today thinking about the actual implementation of the genetic algorithm, and laid out the stubs and extensions needed to retrofit my minmax code into a genetic one. Hopefully I got all the math right.

November 26

Met with Jim and laid out a fairly exhaustive species representation for the game. I still need to get the dumps from minmax; it should be easy, but I didn't get around to it today. My goal this week is to clear out all of my other work by Friday so that I can spend the remainder of the semester focusing more or less solely on my project.

November 25

Back from break. Over break I thought a lot about ways to express policies for the game genetically; I'm still working on a way to condense the information so that meaningful progress is more likely to happen during cross-over and mutation. For the fitness function, I decided it would be best to use a static policy that plays within random parameters: for example, a policy that always plays a card within 10% of the value of the flop (if possible). Still, the biggest hurdle is creating the blueprints for the species.

November 13

I've written up a crude Gantt chart laying out what I need to do. First, I need to get this minmax working correctly, with output, on the double; I've probably been spending too much time on other projects. At the same time, I need to look more into data formats that are compatible with genetic algorithms.

November 7

  • Got an implementation of minmax working, except that it's not really minmax. Going to change the structure to reflect minmax properly.

November 3

  • Continued work on Minmax
  • Created my project page

October 31

Switched to a Wiki format from text file

October 30

Continued work on Minmax

October 29

Began coding Minmax; the basic framework is laid out. Began the outline of the paper.

October 28

Looked a bit at the other students' logs; mine looks really bad in comparison... I need to get this text to wrap properly.

October 27

Continued reading new research; comparatively, I'm finding the chess-based readings a little dry.

October 26

Read up on some of the research I found yesterday. An interesting article on believable AI in games (vs. actually intelligent AI) talked about giving AI human-like limitations. It got me thinking about whether it would be possible to train a learning AI on a game merely by giving it large amounts of replay data and letting it work out strategies and tactics by matching environments it has already seen, as opposed to letting the AI actually play out the game. Perhaps I'm just describing training data without realizing it.

October 25

Did some additional research more focused on what I want to do, finding work on AI and machine learning as applied to strategy games. Most of it pertains to chess, using non-brute-force methods.