Colin Kern

From Earlham CS Department
Revision as of 23:58, 13 September 2005

Abstract

I am going to create neural net software and design a neural net that can learn to play Tic Tac Toe. The net will receive the game board as its input (probably nine input nodes, one for each square) and output the square it will put its symbol in. This could be represented either as one node outputting a number corresponding to a certain square, or as nine nodes each outputting a number, the largest (or smallest) of which indicates the preferred square. I will have to experiment to see which of these methods (and perhaps others I think of) work and how well they work. The performance of a neural net can be measured by how fast the net reaches its maximum learning capacity and how correct that capacity is.
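As a rough sketch of the two output encodings described above (the names and numeric conventions here are my own assumptions, since the net isn't written yet), the board could be nine inputs, one per square, with +1 for my symbol, -1 for the opponent's, and 0 for empty:

```python
def encode_board(board):
    """Map a list of 9 characters ('X', 'O', ' ') to 9 numeric inputs."""
    return [1.0 if c == 'X' else -1.0 if c == 'O' else 0.0 for c in board]

def pick_square_single(output_value):
    """Single-output encoding: round one output in [0, 8] to a square index."""
    return max(0, min(8, int(round(output_value))))

def pick_square_nine(outputs, board):
    """Nine-output encoding: choose the legal square with the largest output."""
    legal = [i for i, c in enumerate(board) if c == ' ']
    return max(legal, key=lambda i: outputs[i])

board = ['X', ' ', 'O',
         ' ', 'X', ' ',
         ' ', ' ', 'O']
print(encode_board(board))
print(pick_square_nine([0.1, 0.9, 0.0, 0.2, 0.0, 0.8, 0.3, 0.4, 0.0], board))
```

One advantage of the nine-output encoding that this sketch makes visible: illegal (occupied) squares can simply be masked out before taking the maximum, whereas a single rounded output can land on an occupied square.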


Journal

September 13, 2005

I finished my abstract, but I'm having trouble getting excited about simply creating a neural net and teaching it to play Tic Tac Toe. I don't know how complicated that will turn out to be, so I don't know if I have the time to do anything more. I've had two ideas that might make the project more interesting.

First, I could try to make the neural net code I write support as many different kinds of neural nets as possible. I'd write just the basic data structures and algorithms, then write a program that takes a file specifying the shape of the neural net and creates the net from that. It could then take other input, such as the algorithm to use for training and a training script. A user would issue a command similar to "./neuralnet structure.txt algorithm.txt training.txt net.txt". This would create the neural net described in structure.txt and train it using algorithm.txt and training.txt, writing the trained net to net.txt. Another command, "./neuralnet net.txt", would load the trained net; stdin would be the input given to the net, and its output would be written to stdout.
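To make the generic-driver idea concrete, here is a hypothetical sketch of how the program might read a structure file and build a net from it. The file format is invented for illustration (one layer size per line, e.g. "9", "6", "9" for a 9-6-9 net); the real format hasn't been decided:

```python
import random

def load_structure(lines):
    """Parse layer sizes from the lines of a structure file."""
    return [int(line) for line in lines if line.strip()]

def build_net(layer_sizes, seed=0):
    """Create one weight matrix per pair of adjacent layers,
    initialized with small random weights."""
    rng = random.Random(seed)
    return [[[rng.uniform(-1, 1) for _ in range(n_in)]
             for _ in range(n_out)]
            for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

sizes = load_structure(["9", "6", "9"])
net = build_net(sizes)
print(sizes)                                   # [9, 6, 9]
print(len(net), len(net[0]), len(net[0][0]))   # 2 6 9
```

Keeping the structure description in a separate file like this is what would let the same binary build nets of any shape, which is the whole point of the first idea.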

The other idea is to experiment with a neural net's ability to learn grammars. Give it strings such as 'ab', 'aabb', and 'aaabbb' that are in the grammar, and 'a', 'abb', and 'aba' that aren't, then see if it can correctly say whether other strings are also in the grammar. It would be interesting to see which grammars can be learned and how adding more layers to a perceptron increases the complexity of the grammars that can be learned. One complication I see is how to input variable-length strings. I could either use a large set of input nodes, some of which go unused, or give the net the string one character at a time and have it say "yes" or "no" after each character (if it is still saying yes on the last character, it accepts the string).
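The grammar in the examples above looks like a^n b^n, and its training data really can be generated algorithmically. A minimal sketch, assuming that grammar and the fixed-width encoding (one input node per character position, 'a' as +1, 'b' as -1, unused positions padded with 0 — all conventions invented here):

```python
def in_grammar(s):
    """True iff s is a^n b^n for some n >= 1."""
    n = len(s) // 2
    return len(s) >= 2 and len(s) % 2 == 0 and s == 'a' * n + 'b' * n

def encode_fixed(s, width):
    """Fixed-width encoding: one input node per character position."""
    codes = {'a': 1.0, 'b': -1.0}
    return [codes[c] for c in s] + [0.0] * (width - len(s))

positives = ['a' * n + 'b' * n for n in range(1, 4)]
negatives = ['a', 'abb', 'aba', 'ba']
training = [(encode_fixed(s, 8), in_grammar(s)) for s in positives + negatives]
print([s for s in positives + negatives if in_grammar(s)])  # ['ab', 'aabb', 'aaabbb']
```

Note that the fixed-width encoding caps the string length the net can ever see, which is one argument for the one-character-at-a-time alternative.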

I'd probably be more interested in doing the second of these ideas.

Feedback for Colin, 13 Sep 2005

Colin, I would be very interested to see how your NN works with grammars as you have described. Indeed, knowing next to nothing about your project, the second suggestion about grammars grabbed me much more than the first.

Again, I know not a lot about NNs, but I can see the training data for grammars being much easier to create on the fly or algorithmically.

A variation on the TTT idea might be to teach it 3d TTT. I've seen this with both 3x3x3 and 4x4x4 boards. The way Charlie described NNs last Wednesday leads me to think that you might let the NN take care of the individual steps and just teach it what is a good or bad outcome. In this manner you could simply have it play a heck of a lot of games against another program you write that goes through the possible iterations of a game of TTT (2d or 3d) and reports back to your NN whether it has won or lost each game.
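The companion-program idea above can be sketched for the 2d case: a simple simulator plays out whole games (random moves here, standing in for the NN's eventual choices) and reports only the final outcome, which is the win/lose signal the NN would train on. Everything here is illustrative, not the actual project code:

```python
import random

# The eight winning lines of a 3x3 board, as index triples.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_random_game(rng):
    """Both sides move randomly; return 'X', 'O', or 'draw'."""
    board, player = [' '] * 9, 'X'
    while True:
        moves = [i for i, c in enumerate(board) if c == ' ']
        if not moves:
            return 'draw'
        board[rng.choice(moves)] = player
        if winner(board):
            return player
        player = 'O' if player == 'X' else 'X'

rng = random.Random(42)
results = [play_random_game(rng) for _ in range(1000)]
print({r: results.count(r) for r in ('X', 'O', 'draw')})
```

A 3d variant would only change the `LINES` table (the 3x3x3 board has 49 winning lines) and the board size; the outcome-reporting loop stays the same.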

--hunteke 23:57, 13 Sep 2005 (EST)