First published 2 September 2013
Heterogeneous complexification strategies robustly outperform homogeneous strategies for incremental evolution
Adam Stanton, Alastair Channon
The evolution of naturalistic, embodied agents and behaviours has been a long-standing goal of Artificial Life since the initial, impressive work of Karl Sims. Incremental evolution has been used extensively to improve the quality of evolutionary search in many complex, non-linear problem spaces. This work sets out to disambiguate the lexicon around incremental evolution, advocating the term environmental complexification for the complexification of the problem domain. We then analyse various complexification strategies in a structured, complexifiable and yet simple environment: a 3D agent-based obstacle task. We divide the strategies conceptually into homogeneous and heterogeneous: homogeneous strategies expose successive generations of the population to a single objective function or a tightly clustered range of objective functions, while heterogeneous strategies present many, spanning the range of complexity. We found that widely-used homogeneous complexification techniques, for example direct presentation of difficult tasks or linearly-increased difficulty, fail due to either loss-of-gradient or temporally-local over-fitting (analogous to catastrophic forgetting in neural systems). Heterogeneous methods of complexification (including oscillatory strategies) that eliminate these issues are devised and tested. The heterogeneous category outperforms the homogeneous in all metrics, establishing a much more robust approach to the evolution of naturalistic embodied agents.
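The distinction between the strategy categories can be sketched as difficulty schedules. The following is a minimal illustrative sketch, not the paper's implementation: the function names, the triangle-wave form of the oscillatory schedule, and the uniform sampling of the heterogeneous schedule are all assumptions introduced here for illustration.

```python
import random

def homogeneous_linear(generation, max_generation, max_difficulty):
    # Homogeneous: one difficulty per generation, ramped linearly,
    # so the whole population always sees (nearly) the same objective.
    return max_difficulty * generation / max_generation

def heterogeneous_uniform(max_difficulty, rng=random):
    # Heterogeneous: each evaluation draws a difficulty anywhere in
    # [0, max_difficulty], so every generation spans the full range.
    return rng.uniform(0.0, max_difficulty)

def heterogeneous_oscillatory(generation, period, max_difficulty):
    # Oscillatory (hypothetical triangle-wave form): difficulty sweeps
    # up and back down, revisiting easy tasks so gradient is retained
    # and temporally-local over-fitting is countered.
    phase = generation % period
    half = period / 2
    if phase <= half:
        return max_difficulty * phase / half
    return max_difficulty * (period - phase) / half
```

Under a schedule like the first, early generations never encounter hard variants (risking later loss of gradient on them), whereas the latter two keep the complexity range in play throughout the run.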