DOI: http://dx.doi.org/10.7551/978-0-262-31050-5-ch055
Pages 415-422
First published 2 July 2012

Evolved neural network controllers for physically simulated robots that hunt with an artificial visual cortex

Michael E. Palmer, Andrew Chou

Abstract

Using a rule-based system for growing artificial neural networks, we have evolved controllers for physically simulated robotic "spiders". The controllers take their input from an "artificial retina" that senses other spiders and inanimate barrier objects in the environment, and must output control signals for the 18 degrees of freedom of the robot's six legs at every time step. We perform evolutionary runs with two species of spider that interact in simulation with each other and with inanimate barrier objects. One species (the "predator") is selectively rewarded for "eating" (by physically colliding with) the other species; the other (the "prey") is selectively penalized for being caught, and rewarded for "eating" the barriers. The two species evolve complex running gaits, with control inputs coming from their retinas that produce hunting or avoidance behavior. We suggest that predator-prey frequency-dependent selection can provide a relatively long-term genetic memory of previously searched regions of phenotype space, enforcing a form of novelty search that may reduce duplicated evolutionary search effort.
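The per-timestep control loop described above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's rule-grown network architecture: the network here is a plain fixed-weight feedforward map, and the layer sizes, weight ranges, and all function names are assumptions. Only the output dimensionality (18 joint commands, for six legs with three degrees of freedom each) comes from the abstract.

```python
import math
import random

N_RETINA = 16   # assumed number of artificial-retina cells
N_HIDDEN = 12   # assumed hidden-layer size
N_JOINTS = 18   # 6 legs x 3 DOF, as stated in the abstract

# Fixed random weights stand in for an evolved controller genome.
rng = random.Random(0)
w_in = [[rng.uniform(-1, 1) for _ in range(N_RETINA)] for _ in range(N_HIDDEN)]
w_out = [[rng.uniform(-1, 1) for _ in range(N_HIDDEN)] for _ in range(N_JOINTS)]

def controller_step(retina):
    """Map one frame of retina activations to 18 joint commands in [-1, 1]."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, retina)))
              for row in w_in]
    return [math.tanh(sum(w * h for w, h in zip(row, hidden)))
            for row in w_out]

# One simulated frame: retina cells respond to a spider or barrier in view.
frame = [rng.uniform(0, 1) for _ in range(N_RETINA)]
commands = controller_step(frame)
print(len(commands))  # one command per degree of freedom
```

In the paper's setting, the physics simulation would apply these 18 outputs to the leg joints each time step, and evolution (rather than fixed random weights) would shape the mapping from retinal input to gait and hunting or avoidance behavior.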