Evolving Artificial Neural Networks
Tim Kovacs
COMS M0305: Learning in Autonomous Systems


1. Evolving Artificial Neural Networks (title slide)
Outline: Introduction; Adapting Weights; Adapting Architectures; Adapting Learning Rules; Yao's Framework; Conclusions; Bibliography

2. Today
- Artificial Neural Networks
- Adapting: weights, architectures, learning rules
- Yao's framework for evolving NNs

3. Artificial Neural Networks
A typical NN consists of:
- A set of nodes, arranged in layers: input, hidden and output
- A set of directed connections between nodes, each with a weight
Nodes compute by:
- Integrating their inputs using an activation function
- Passing on their activation as output
NNs compute by:
- Accepting external inputs at the input nodes
- Computing the activation of each node in turn
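To make the layered computation concrete, here is a minimal forward-pass sketch in Python (not part of the original slides); the layer sizes, random weights and sigmoid activation are illustrative assumptions:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, W1, b1, W2, b2):
    """Forward pass: input layer -> hidden layer -> output layer."""
    h = sigmoid(W1 @ x + b1)   # hidden-node activations
    y = sigmoid(W2 @ h + b2)   # output-node activations
    return y

# Example: 3 inputs, 4 hidden nodes, 2 outputs (arbitrary random weights)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
print(forward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2))
```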

4. Node Activation
A node integrates its inputs with:
y_i = f_i( Σ_{j=1}^{n} w_{ij} x_{ij} − θ_i )
where:
- y_i is the output of node i
- f_i is the activation function (typically a sigmoid)
- n is the number of inputs to the node
- w_{ij} is the connection weight between nodes i and j
- x_{ij} is the j-th input to node i
- θ_i is a threshold (or bias)
From the universal approximation theorem for neural networks: any continuous function can be approximated arbitrarily well by a NN with one hidden layer (given enough hidden nodes) and a sigmoid activation function.
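A direct translation of the activation formula into code, assuming a sigmoid for f_i; the example inputs, weights and threshold are arbitrary:

```python
import numpy as np

def node_output(x, w, theta):
    """y_i = f_i( sum_j w_ij * x_ij - theta_i ) with a sigmoid f_i."""
    a = np.dot(w, x) - theta          # weighted sum of inputs minus threshold
    return 1.0 / (1.0 + np.exp(-a))   # sigmoid activation

print(node_output(x=np.array([1.0, 0.5]), w=np.array([0.8, -0.3]), theta=0.1))
```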

5. Evolving Neural Networks
Evolution has been applied at three levels:
- Weights
- Architecture:
  - connectivity: which nodes are connected
  - activation functions: how nodes compute outputs
  - plasticity: which nodes can be updated
- Learning rules

6. Representations for Evolving NNs
- Direct encoding [18, 6]: all details (connections and nodes) are specified
- Indirect encoding [18, 6]: only key details (e.g. number of hidden layers and nodes) are specified; a learning process determines the rest
- Developmental encoding [6]: a developmental process is genetically encoded [10, 7, 12, 8, 13, 16]
Uses:
- Indirect and developmental representations are more flexible and tend to be used for evolving architectures
- Direct representations tend to be used for evolving weights alone
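As a rough illustration of a direct encoding (a sketch, not a specific published scheme), the genome below is simply a flat vector holding every weight and bias of a fixed feedforward architecture; decoding reshapes it into the network's parameter matrices:

```python
import numpy as np

# Direct encoding sketch: the genome lists every connection weight and bias of
# a fixed 3-4-2 feedforward architecture (sizes chosen arbitrarily here).
SHAPES = [(4, 3), (4,), (2, 4), (2,)]          # W1, b1, W2, b2
GENOME_LEN = sum(int(np.prod(s)) for s in SHAPES)

def decode(genome):
    """Slice the flat genome into the network's weight matrices and biases."""
    params, i = [], 0
    for shape in SHAPES:
        n = int(np.prod(shape))
        params.append(genome[i:i + n].reshape(shape))
        i += n
    return params

genome = np.random.default_rng(1).normal(size=GENOME_LEN)
W1, b1, W2, b2 = decode(genome)
print(W1.shape, b1.shape, W2.shape, b2.shape)
```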

7. Learning Weights
Repeat:
- present an input to the NN
- compute the output
- compute the error of the output
- update the weights based on the error
Most NN learning algorithms are based on gradient descent, including the best known: backpropagation (BP).
Many successful applications, but they often get trapped in local minima [15, 17] and require a continuous, differentiable error function.
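A minimal sketch of this loop for a single sigmoid node trained by gradient descent on the squared error (the delta rule); the task (OR), learning rate and epoch count are illustrative choices:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy data: learn OR with a single sigmoid node, illustrating the
# present / compute / error / update loop.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
w, theta, lr = rng.normal(size=2), 0.0, 0.5

for epoch in range(2000):
    for x, t in zip(X, T):            # present an input
        y = sigmoid(w @ x - theta)    # compute the output
        err = t - y                   # compute the error
        grad = err * y * (1 - y)      # derivative of squared error w.r.t. net input
        w += lr * grad * x            # update weights based on the error
        theta -= lr * grad            # threshold acts as a weight on a constant -1 input

print(np.round([sigmoid(w @ x - theta) for x in X], 2))
```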

8. Evolving Weights
Evolutionary computation (EC) forms an outer loop around the NN:
- EC generates the weights
- Present many inputs to the NN, compute the outputs and the overall error
- Use the error as the fitness in EC
Figure legend: I_{g,t} = input at generation g and time t; O_{g,t} = output; F_{g,t} = feedback (either NN error or fitness).
EC doesn't rely on gradients and can work with discrete fitness functions.
Much research has been done on the evolution of weights.
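The outer loop can be sketched as follows, assuming a tiny 2-2-1 network and a simple truncation-selection-plus-mutation scheme; the population size, mutation scale and XOR task are illustrative choices, not the specific algorithms surveyed in [18]:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Evolve the weight vector of a 2-2-1 network to approximate XOR.
# Fitness is the negative mean squared error, so no gradients are needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 1, 1, 0], dtype=float)

def decode(g):
    W1, b1 = g[:4].reshape(2, 2), g[4:6]
    w2, b2 = g[6:8], g[8]
    return W1, b1, w2, b2

def fitness(g):
    W1, b1, w2, b2 = decode(g)
    y = sigmoid(sigmoid(X @ W1.T + b1) @ w2 + b2)
    return -np.mean((y - T) ** 2)

rng = np.random.default_rng(0)
pop = rng.normal(size=(30, 9))                            # population of genomes
for gen in range(300):
    fit = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(fit)[-10:]]                  # keep the 10 best
    children = np.repeat(parents, 3, axis=0)
    children += rng.normal(scale=0.3, size=children.shape)  # Gaussian mutation
    pop = children

best = pop[np.argmax([fitness(g) for g in pop])]
print("best fitness:", fitness(best))
```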

9. Fitness Functions for Evolving NNs
Fitness functions typically penalise both NN error and complexity (number of hidden nodes).
The expressive power of a NN depends on the number of hidden nodes:
- Fewer nodes = less expressive = fits the training data less closely
- More nodes = more expressive = fits the data more closely
- Too few nodes: the NN underfits the data
- Too many nodes: the NN overfits the data
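A minimal sketch of such a penalised fitness function; the weighting term alpha is a made-up illustrative value:

```python
def fitness(mse, n_hidden, alpha=0.01):
    """Combined objective: penalise both NN error and complexity.
    alpha trades accuracy against the number of hidden nodes (illustrative)."""
    return -(mse + alpha * n_hidden)   # higher fitness = lower penalised error

print(fitness(mse=0.05, n_hidden=8))   # -> -0.13
```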

10. Evolving weights vs. gradient descent
Evolution has advantages [18]:
- Does not require continuous, differentiable functions
- The same method can be used for different types of network (feedforward, recurrent, higher order)
Which is faster?
- No clear winner overall; it depends on the problem [18]
- Evolving weights AND architecture is better than evolving weights alone (we'll see why later)
- Evolution is better for RL and recurrent networks [18]
- [6] suggests evolution is better for dynamic networks
Happily, we don't have to choose between them...

11. Evolving AND learning weights
Evolution:
- Good at finding a good basin of attraction
- Bad at finding the optimum within it
Gradient descent: the opposite of the above.
To get the best of both [18]:
- Evolve the initial weights, then train with gradient descent
- Two orders of magnitude faster than starting from random initial weights [6]
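A compact sketch of the two-stage idea on a toy single-node task; the evolutionary settings, learning rate and task are illustrative, not the setups reported in [18] or [6]:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Stage 1: evolution picks promising initial weights (a good basin of attraction).
# Stage 2: gradient descent fine-tunes them. Toy task: OR with one sigmoid node.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 1, 1, 1], dtype=float)

def mse(w):
    return np.mean((sigmoid(X @ w[:2] - w[2]) - T) ** 2)

rng = np.random.default_rng(0)

# Stage 1: evolve the genome [w1, w2, theta].
pop = rng.normal(size=(20, 3))
for _ in range(50):
    pop = pop[np.argsort([mse(w) for w in pop])[:5]]        # keep the 5 best
    offspring = np.repeat(pop, 3, axis=0) + rng.normal(scale=0.2, size=(15, 3))
    pop = np.vstack([pop, offspring])
best = min(pop, key=mse).copy()

# Stage 2: batch gradient descent (delta rule) from the evolved starting point.
for _ in range(500):
    y = sigmoid(X @ best[:2] - best[2])
    delta = (T - y) * y * (1 - y)
    best[:2] += 0.5 * X.T @ delta
    best[2] -= 0.5 * delta.sum()

print("final error:", mse(best))
```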

12. Evolving NN Architectures
The architecture has an important impact on results: it can determine whether the NN under- or over-fits.
Designing architectures by hand is a tedious, expert trial-and-error process.
Alternative 1:
- Constructive NNs grow from a minimal network
- Destructive NNs shrink from a maximal network
- Both can get stuck in local optima and can only generate certain architectures [1]
Alternative 2: evolve them!
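For a flavour of what an evolved architecture representation and its variation might look like, here is a sketch using a binary connectivity matrix; the operators and rates are invented for illustration and no feedforward constraint is enforced:

```python
import numpy as np

# Architecture mutation sketch: the architecture is a binary connectivity
# matrix C, where C[i, j] = 1 means node j feeds node i. Mutation flips
# connections or occasionally adds a node (illustrative, not a published operator).
rng = np.random.default_rng(0)

def mutate(C, p_flip=0.05, p_add_node=0.1):
    C = C.copy()
    flips = rng.random(C.shape) < p_flip
    C = np.where(flips, 1 - C, C)                    # add/remove single connections
    if rng.random() < p_add_node:                    # occasionally grow the network
        n = C.shape[0] + 1
        grown = np.zeros((n, n), dtype=int)
        grown[:-1, :-1] = C
        grown[-1, rng.integers(0, n - 1)] = 1        # new node gets one input...
        grown[rng.integers(0, n - 1), -1] = 1        # ...and feeds one existing node
        C = grown
    return C

C0 = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0]])     # tiny 3-node feedforward net
print(mutate(C0))
```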

13. Reasons EC is suitable for the architecture search space
"1. The surface is infinitely large since the number of possible nodes and connections is unbounded;
2. the surface is nondifferentiable since changes in the number of nodes or connections are discrete and can have a discontinuous effect on EANN's [Evolutionary Artificial NN] performance;
3. the surface is complex and noisy since the mapping from an architecture to its performance is indirect, strongly epistatic¹, and dependent on the evaluation method used;
4. the surface is deceptive² since similar architectures may have quite different performance;
5. the surface is multimodal³ since different architectures may have similar performance." [11]
¹ fitness is not a linear function of genes
² the slope of the fitness landscape leads away from the optimum
³ the landscape has multiple basins of attraction

14. Reasons to evolve architectures and weights simultaneously
Learning the weights with gradient descent:
- One-to-many mapping from NN genotypes to phenotypes [20]: random initial weights and stochastic learning lead to different results
- The result is noisy fitness evaluations
- Averaging is needed, which is slow
Evolving architecture and weights simultaneously:
- A 1-to-1 genotype-to-phenotype mapping avoids the above problem
- Result: faster learning
Can co-optimise other parameters of the network [6]:
- [2] found the best networks had a very high learning rate
- It may have been optimal due to many factors: initial weights, training order, amount of training
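One way to picture the 1-to-1 mapping: encode the node count and all weights in a single genome, so decoding is deterministic and needs no further training or random initialisation. The layout and mutation below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Combined genome sketch: the number of hidden nodes AND all weights are
# encoded together, so each genotype decodes to exactly one phenotype.
N_IN, N_OUT = 3, 1

def random_genome(n_hidden):
    n_w = n_hidden * N_IN + n_hidden + N_OUT * n_hidden + N_OUT
    return {"n_hidden": n_hidden, "weights": rng.normal(size=n_w)}

def mutate(g):
    child = {"n_hidden": g["n_hidden"], "weights": g["weights"].copy()}
    child["weights"] += rng.normal(scale=0.1, size=child["weights"].shape)
    if rng.random() < 0.1:                        # structural mutation: add a hidden node
        child = random_genome(g["n_hidden"] + 1)  # weights re-drawn here for simplicity
    return child

g = random_genome(n_hidden=4)
print(mutate(g)["n_hidden"], mutate(g)["weights"].shape)
```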

15. Evolving Learning Rules [18]
- There is no single best learning rule for all architectures or problems
- Selecting rules by hand is difficult: if we evolve the architecture (or even the problem), we don't know what it will be a priori
- Solution: evolve the learning rule
Note: the training architectures and problems must be representative of the test set:
- To get general rules, train on general problems/architectures, not just one kind
- To get a rule for a specific architecture/problem type, train on just that
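A sketch of how a learning rule itself can be made evolvable: express the weight update as a coefficient-weighted sum of local quantities and let evolution search the coefficient space. This particular term set is an assumption for illustration:

```python
import numpy as np

def delta_w(coeffs, x, y, t, w):
    """Parameterised local learning rule: the update is a linear combination of
    local terms (presynaptic input x, postsynaptic output y, target t, current
    weight w and their products). Evolution searches over the coefficients."""
    terms = np.array([1.0, x, y, t, w, x * y, x * t, y * t])
    return float(coeffs @ terms)

# With coefficients selecting the term x*(t - y) we recover a delta-rule-like update.
coeffs = np.zeros(8)
coeffs[6], coeffs[5] = 0.5, -0.5            # 0.5*x*t - 0.5*x*y = 0.5*x*(t - y)
print(delta_w(coeffs, x=1.0, y=0.2, t=1.0, w=0.3))   # -> 0.4
```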

16. Evolving Learning Rule Parameters [18]
- E.g. the learning rate and momentum in backpropagation
- Adapts a standard learning rule to the architecture/problem at hand
- Non-evolutionary methods of adapting them also exist
- [3] found that evolving the architecture, initial weights and rule parameters together was as good as or better than evolving only the first two, or only the third (for multi-layer perceptrons)
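A minimal sketch of exposing backpropagation's parameters to evolution: append them to the genome and map the raw genes into sensible ranges when decoding (the genome layout and ranges below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def squash(a):
    """Map an unconstrained gene into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-a))

def decode(genome):
    """Genome = [initial weights..., gene for learning rate, gene for momentum]."""
    *init_weights, g_lr, g_mom = genome
    return {
        "init_weights": np.array(init_weights),
        "learning_rate": 10 ** (-3 + 3 * squash(g_lr)),  # roughly 1e-3 .. 1
        "momentum": squash(g_mom),                       # 0 .. 1
    }

print(decode(rng.normal(size=9 + 2)))
```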
