Colin Kern

== Code ==
'''nnet.h'''
<pre>
#ifndef NNET_H
#define NNET_H

/* Boolean values for the 'done' flag (needed by nnet.c). */
#define TRUE  1
#define FALSE 0

struct link {
    struct neuron *destination;   /* Destination of link */
    double weight;                /* Weight associated with link */
};

struct neuron {
    double value;          /* Weighted sum */
    struct link *links;    /* Input links.  NULL if input neuron */
    int num_links;         /* Number of links */
    int done;              /* 0 if sum hasn't been calculated, 1 if it has */
};

struct net {
    struct neuron *in;     /* Pointers to the input neurons */
    struct neuron *out;    /* Pointers to the output neurons */
    int num_in;            /* Number of input neurons */
    int num_out;           /* Number of output neurons */
};

typedef struct link link;
typedef struct neuron neuron;
typedef struct net net;

/*
  This function should only be used on the input neurons to put the
  initial input values into the input layer.
*/
void give_input(neuron *dest, double value);

/*
  This function asks a neuron to calculate its sum from the neurons it
  takes its input from.  If those neurons haven't calculated their own
  sum yet, it recurses into them.  You can just call this on an output
  neuron and have it recurse back through the entire net.  If you don't
  want the recursion, call calc_input(neuron *n).
*/
void get_input(neuron *n);

/*
  This function takes all the inputs from its parent neurons and uses the
  weights associated with its links to calculate a sum.  If the parent
  neurons have not calculated their sum, an assertion failure will occur.
*/
void calc_input(neuron *n);

#endif /* NNET_H */
</pre>
 
 
 
'''nnet.c'''
 
<pre>
#include "nnet.h"
#include <assert.h>

void give_input(neuron *dest, double value)
{
    dest->value = value;
    dest->done = TRUE;
}

void get_input(neuron *n)
{
    int i;

    n->value = 0.0;    /* Start the weighted sum from zero */

    for (i = 0; i < n->num_links; ++i) {
        if (n->links[i].destination->done == FALSE)
            get_input(n->links[i].destination);

        n->value += n->links[i].destination->value *
            n->links[i].weight;
    }

    n->done = TRUE;
}

void calc_input(neuron *n)
{
    int i;

    n->value = 0.0;    /* Start the weighted sum from zero */

    for (i = 0; i < n->num_links; ++i) {
        assert(n->links[i].destination->done);

        n->value += n->links[i].destination->value *
            n->links[i].weight;
    }

    n->done = TRUE;
}
</pre>
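
The routines that actually build a network aren't written yet, so here is a minimal sketch of how the pieces above could be wired together by hand: two input neurons feeding one output neuron through two links. The names, weights, and input values are made up purely for illustration.

<pre>
#include <stdio.h>
#include "nnet.h"

int main(void)
{
    /* Two input neurons and one output neuron, wired by hand. */
    neuron in[2] = {{0.0, NULL, 0, FALSE}, {0.0, NULL, 0, FALSE}};
    neuron out = {0.0, NULL, 0, FALSE};
    link out_links[2];

    out_links[0].destination = &in[0];
    out_links[0].weight = 0.5;
    out_links[1].destination = &in[1];
    out_links[1].weight = -0.25;

    out.links = out_links;
    out.num_links = 2;

    /* Load the input layer, then pull the value through the net. */
    give_input(&in[0], 1.0);
    give_input(&in[1], 2.0);
    get_input(&out);

    /* 0.5 * 1.0 + (-0.25) * 2.0 = 0.0 */
    printf("output = %f\n", out.value);

    return 0;
}
</pre>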
 


== Abstract ==

I am going to create neural net software and design a neural net that can learn to play Tic Tac Toe. The net will receive the game board as its input (probably nine nodes, one for each square) and output the square in which it will put its symbol. The output could be represented either as one node outputting a number corresponding to a particular square, or as nine nodes each outputting a number, the largest (or smallest) of which indicates the preferred square. I will have to experiment to see which of these methods (and perhaps others I think of) work and how well they work. The performance of a neural net can be measured by how fast the net reaches its maximum learning capacity and how correct that capacity is.
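
As a concrete sketch of the nine-output idea, here is how a move might be chosen with the structures from the Code section above. The choose_move function, the +1/-1/0 board encoding, and the assumption that a 9-input, 9-output net has already been built are all illustrative; resetting each neuron's done flag between moves is glossed over.

<pre>
#include "nnet.h"

/*
 * Illustrative encoding: board[i] is +1.0 for my mark, -1.0 for the
 * opponent's, and 0.0 for an empty square.  With nine output neurons,
 * the move is the square whose output produces the largest value.
 * (Resetting every neuron's 'done' flag between boards is not shown.)
 */
int choose_move(net *ttt, const double board[9])
{
    int i, best = 0;

    for (i = 0; i < ttt->num_in; ++i)
        give_input(&ttt->in[i], board[i]);

    for (i = 0; i < ttt->num_out; ++i) {
        get_input(&ttt->out[i]);
        if (ttt->out[i].value > ttt->out[best].value)
            best = i;
    }

    return best;    /* index (0-8) of the preferred square */
}
</pre>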


== Journal ==

'''September 13, 2005'''

I finished my abstract, but I'm having trouble getting excited about simply creating a neural net and teaching it to play Tic Tac Toe. I don't know how complicated that will turn out to be, so I don't know if I have the time to do anything more. I've had two ideas that might make the project more interesting.

First, I could try to make the neural net code I write able to support as many different kinds of neural nets as possible. I'd just write the basic data structures and algorithms, then write a program that takes a file specifying the shape of the neural net and creates the net from that. It can then take other input, such as the algorithm to use for training and a training script. A user would issue a command similar to "./neuralnet structure.txt algorithm.txt training.txt net.txt". This would create the neural net in structure.txt and train it using algorithm.txt and training.txt. The trained net would be output to net.txt. Another command, "./neuralnet net.txt", would load the trained net and stdin would be the input given to the net, whose output would be written to stdout.
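
A rough sketch of how that command line might dispatch; train_net and run_net are hypothetical stubs standing in for code that doesn't exist yet.

<pre>
#include <stdio.h>

/* Hypothetical stubs for the two modes described above. */
static int train_net(const char *structure, const char *algorithm,
                     const char *training, const char *output)
{
    printf("build the net from %s, train with %s on %s, write to %s\n",
           structure, algorithm, training, output);
    return 0;
}

static int run_net(const char *trained)
{
    printf("load %s, read inputs from stdin, write outputs to stdout\n", trained);
    return 0;
}

int main(int argc, char **argv)
{
    if (argc == 5)   /* ./neuralnet structure.txt algorithm.txt training.txt net.txt */
        return train_net(argv[1], argv[2], argv[3], argv[4]);
    if (argc == 2)   /* ./neuralnet net.txt */
        return run_net(argv[1]);

    fprintf(stderr, "usage: %s structure algorithm training net\n"
                    "       %s net\n", argv[0], argv[0]);
    return 1;
}
</pre>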

The other idea is to experiment with a neural net's ability to learn grammars. Give it strings such as 'ab', 'aabb', 'aaabbb' that are in the grammar and 'a', 'abb', 'aba' that aren't, then see if it can correctly say whether other strings are also in the grammar. It would be interesting to see what grammars can be learned and how adding more layers to a perceptron would increase the complexity of the grammars that can be learned. A complication I see is how to input variable-length strings. I can see either having a large set of input nodes, some of which aren't used, or giving the net the string one character at a time and having it say "yes" or "no" after each character (if it is still saying yes on the last character, it accepts the string).
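
For the first option (a fixed block of input nodes, some left unused), the loading step might look like the sketch below, reusing the structures from the Code section; load_string, MAX_LEN, and the +1/-1/0 character values are arbitrary choices for illustration.

<pre>
#include <string.h>
#include "nnet.h"

#define MAX_LEN 16    /* arbitrary ceiling on string length for this sketch */

/*
 * One input neuron per character position: 'a' -> +1.0, 'b' -> -1.0,
 * and 0.0 for positions past the end of the string.
 */
void load_string(net *grammar_net, const char *s)
{
    int i, len = (int)strlen(s);

    for (i = 0; i < grammar_net->num_in && i < MAX_LEN; ++i) {
        double v = 0.0;
        if (i < len)
            v = (s[i] == 'a') ? 1.0 : -1.0;
        give_input(&grammar_net->in[i], v);
    }
}
</pre>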

I'd probably be more interested in doing the second of these ideas.

'''Feedback for Colin, 13 Sep 2005'''

Colin, I would be very interested to see how your NN works with grammars as you have described. Indeed, knowing next to nothing about your project, the second suggestion about grammars grabbed me much more than the first.

Again, I know not-a-lot about NN, but I can ostensibly see the training data for grammars being much easier to create on the fly or algorithmically.

A variation on the TTT idea might be to teach it 3D TTT. I've seen this with both 3x3x3 and 4x4x4 boards. The way that Charlie described NNs last Wednesday leads me to think that what you might do is let the NN take care of the individual steps and just teach it what is a good or bad outcome. In this manner you could simply have it play a heck of a lot of games against another program that you write, one that goes through all the possible iterations of a game of TTT (2D or 3D) and reports back to your NN whether it has won or lost each game.

--hunteke 23:57, 13 Sep 2005 (EST)

'''September 14, 2005'''

I just wrote some code for the basic data structures that would make up a neural net and some functions to set input and calculate the output. I need to add methods to create the network structure and the training algorithm. Those are going to be the hard parts.

'''September 22, 2005'''

I have installed Octave in my Cygwin environment. Now I have to find the neural net plugin for it.