Quick summaries of research papers around NEAT

I’m starting on a project around neural networks and as preparation I have been reading a lot of research papers and making quick summaries in case I need to go back to anything in them. I present them here, mostly for myself, but they may also be useful to others. Apologies in advance if you were involved in any of these and I have horribly butchered your work…

Efficient Evolution of Neural Networks through Complexification

  • Presents NEAT (NeuroEvolution of Augmenting Topologies), an approach to evolving neural networks.
  • The main website for NEAT is http://nn.cs.utexas.edu/?neat and a C# implementation, SharpNEAT, is at http://sharpneat.sourceforge.net/
  • Weights and topology for the network are learned using an evolutionary algorithm. The three main things that make NEAT special are:
    • Keeps track of historical IDs (innovation numbers) for connections and nodes, so that during crossover genes only compete against their equivalent genes (hard to explain quickly, but if you read the paper it is a good approach).
    • Uses speciation to protect innovation: the evolving agents are divided into subgroups and only need to compete against other agents in their “species”. In this way innovations are given time to improve before being wiped out, allowing them to develop to the point where they may be useful (a rough sketch of how genomes are compared for speciation follows this list).
    • Reduces the dimensionality of the search through complexification. We start with simpler nets with fewer hidden nodes and then evolve more over time; this makes early evolution quicker while still allowing greater complexity later.
  • Had very good results for a range of tasks.
  • The networks that evolve can have many layers and can also be very nonstandard, e.g. connections from the input layer directly to the third layer.
  • Worked with recurrent networks as well.
  • Here is a good blog post on using SharpNEAT: http://www.nashcoding.com/2010/07/17/tutorial-evolving-neural-networks-with-sharpneat-2-part-1/
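
As a rough illustration of the first two points (my own sketch, not code from the paper), here is how the innovation IDs let two genomes be lined up gene-for-gene, and how the resulting compatibility distance (weighted excess genes, disjoint genes and average weight difference) decides which species a genome joins. The genome representation, coefficients and threshold below are made-up example values.

def compatibility_distance(genome1, genome2, c1=1.0, c2=1.0, c3=0.4):
    # Genomes are represented here simply as dicts of innovation id -> connection weight.
    matching = set(genome1) & set(genome2)
    non_matching = set(genome1) ^ set(genome2)
    cutoff = min(max(genome1), max(genome2))
    excess = sum(1 for i in non_matching if i > cutoff)  # genes beyond the other genome's range
    disjoint = len(non_matching) - excess                # non-matching genes within both ranges
    avg_weight_diff = (sum(abs(genome1[i] - genome2[i]) for i in matching) / len(matching)
                       if matching else 0.0)
    n = max(len(genome1), len(genome2))
    return c1 * excess / n + c2 * disjoint / n + c3 * avg_weight_diff

# Two genomes are placed in the same species if their distance is below a threshold.
parent = {1: 0.5, 2: -0.3, 4: 0.8}
child = {1: 0.6, 3: 0.1, 4: 0.7, 5: -0.2}
same_species = compatibility_distance(parent, child) < 3.0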

Compositional Pattern Producing Networks

  • Rather than creating a neural network directly, we create a function which takes the coordinates of 2 nodes and outputs the weight of the connection between those 2 nodes. We then call that function for every pair of node positions in a space to generate a neural net (a sketch of that full loop follows this list). Here is a Python sketch of the function (the particular chain of composed functions is just an example):
import math

def createNodeWeight(node1coordinates, node2coordinates):
    # e.g. a sine composed with a Gaussian of the coordinate differences
    (x1, y1), (x2, y2) = node1coordinates, node2coordinates
    return math.sin(x1 - x2) * math.exp(-((y1 - y2) ** 2))
  • Why do this?
    • In nature we often see that something like x^10 synaptic connections are generated from only x genes, so we should look for similarly compact encoding mechanisms.
    • If our task is related to images, or to learning anything with physical dimensions, we can use the real-world coordinates of the inputs to influence our results, e.g. with images we can place the input for each pixel of an image at that pixel's coordinate.
    • This allows us to very easily scale to images of different resolutions.
    • Reduces the number of dimensions you need to search through to find a solution: you don't need to find the correct weight for every connection, just the smaller number of correct values for the function that generates the connections.
  • The createNodeWeight function can itself be a neural network. This works very well when that network is generated using NEAT; this is called HyperNEAT.
  • Can create really interesting images
  • Here is the GitHub repo for hyperNeatSharp and here is a good tutorial on using it.
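
To make the idea concrete, here is a rough sketch (my own, with made-up layer positions) of calling the createNodeWeight function above for every pair of node coordinates to fill in a full set of connection weights. In HyperNEAT the function would itself be a NEAT-evolved network rather than the hand-written expression used above.

# Lay an input layer and an output layer out on a 2D "substrate" and query
# createNodeWeight (defined above) for the weight of every input -> output connection.
input_coords = [(x / 4.0, 0.0) for x in range(5)]   # 5 input nodes on the line y = 0
output_coords = [(x / 2.0, 1.0) for x in range(3)]  # 3 output nodes on the line y = 1

weights = [[createNodeWeight(i, o) for i in input_coords] for o in output_coords]
# weights[o][i] is the weight from input node i to output node o; connections whose
# magnitude falls below some threshold are typically dropped rather than expressed.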

Autonomous Evolution of Topographic Regularities in Artificial Neural Networks

  • Applies HyperNEAT to learning checkers; the inputs exist across two dimensions (the checkers board).
  • Training is done by checking fitness against a standard minimax search algorithm, with some randomness added to its board evaluation so the games are not too deterministic (see the sketch after this list).
  • Results: HyperNEAT was compared against NEAT and also NEAT-EI (an augmented version of NEAT). HyperNEAT was able to evolve to beat the depth-4 minimax player, and a lot quicker than NEAT-EI could.
  • Generalization also appears better than NEAT-EI's, as HyperNEAT is able to beat it in a direct game.
  • There is quite interesting discussion of why HyperNEAT performs well.
  • Would love to see this applied to chess.
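
Here is a rough, generic sketch of the kind of noisy minimax opponent described above (a toy stand-in, not the paper's checkers engine; the toy game, depth and noise level are made up):

import random

def noisy_minimax(state, depth, maximizing, children, evaluate, noise=0.1):
    # Plain minimax, but with a little random noise added to the leaf evaluation
    # so that repeated games against it do not all play out identically.
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state) + random.uniform(-noise, noise)
    values = [noisy_minimax(m, depth - 1, not maximizing, children, evaluate, noise)
              for m in moves]
    return max(values) if maximizing else min(values)

# Toy usage: states are numbers, a "move" either adds 1 to the value or doubles it.
best = noisy_minimax(1, depth=4, maximizing=True,
                     children=lambda s: [s + 1, s * 2] if s < 50 else [],
                     evaluate=lambda s: float(s))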

Evolving Adaptive Neural Networks with and without Adaptive Synapses

  • Experiment: a standard food-foraging exercise, with the twist that the food may either all be normal, giving +points, or all be poison, giving -points. If the food is poison then eating it stimulates a pain input neuron.
  • If the food is all poison then the optimal strategy is to stop searching for food altogether. No fixed strategy can be evolved, because whether the food is poison or not is randomized, so the agent must be able to adapt.
  • Two sets of agents were evolved. Both used NEAT, including the ability to generate recurrent connections.
  • One set was also able to evolve local learning rules for strengthening or weakening connections between nodes, the idea being that the agent would evolve a rule that stops it foraging for food once it encounters the poison (a sketch of this kind of rule follows this list).
  • Results: interestingly, it turned out that the first set of agents could learn to stop foraging for the poison on their own through recurrent connections, and evolved this behavior faster than the agents with adaptive rules, because of the lower number of dimensions to search through.
  • It would be interesting to run a similar experiment in an environment where both poison and food were present, with some sensor letting the agent distinguish between them, so that once it had found poison it still got extra points for picking out the food. Could a recurrent neural net still perform as well there?
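
For reference, this is a rough sketch of the general Hebbian-style form that local learning rules like the ones above usually take (the exact rule and coefficients in the paper may differ, and the names below are mine):

def local_weight_update(weight, pre_activation, post_activation,
                        rate=0.1, A=1.0, B=0.0, C=0.0, D=0.0, w_max=3.0):
    # One step of a local learning rule: the change in a connection's weight depends
    # only on the activity of the two neurons it connects, and the coefficients
    # themselves can be evolved along with the rest of the genome.
    delta = rate * (A * pre_activation * post_activation
                    + B * pre_activation
                    + C * post_activation
                    + D)
    return max(-w_max, min(w_max, weight + delta))  # keep weights bounded

# E.g. with a suitably evolved negative coefficient, a pain signal co-occurring with
# the "forage" output would weaken the connection that drives foraging.
w = 1.2
w = local_weight_update(w, pre_activation=1.0, post_activation=0.9, A=-0.5)
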
More in part 2
