

  1. Particle Swarm Optimization Introduction Marco A. Montes de Oca IRIDIA-CoDE, Université Libre de Bruxelles (U.L.B.) May 7, 2007

  2. Presentation overview Origins The idea Continuous optimization The basic algorithm Main variants Parameter selection Research issues Our work at IRIDIA-CoDE

  3. Particle swarm optimization: Origins How can birds or fish exhibit such a coordinated collective behavior?

  4. Particle swarm optimization: Origins Reynolds [12] proposed a behavioral model in which each agent follows three rules: Separation. Each agent tries to move away from its neighbors if they are too close. Alignment. Each agent steers towards the average heading of its neighbors. Cohesion. Each agent tries to go towards the average position of its neighbors.
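The three rules can be read as additive velocity adjustments computed from each agent's neighbors. A minimal sketch of one update step, where the weights, the neighborhood radius, and the function name are illustrative assumptions rather than Reynolds's original parameters:

```python
import numpy as np

def boids_step(positions, velocities, radius=1.0,
               w_sep=1.5, w_ali=1.0, w_coh=1.0):
    """One Reynolds-style update; all weights and the radius are illustrative."""
    new_v = velocities.copy()
    for i, p in enumerate(positions):
        dists = np.linalg.norm(positions - p, axis=1)
        mask = (dists > 0) & (dists < radius)                # neighbors of agent i
        if not mask.any():
            continue
        sep = (p - positions[mask]).mean(axis=0)             # move away from close neighbors
        ali = velocities[mask].mean(axis=0) - velocities[i]  # match average heading
        coh = positions[mask].mean(axis=0) - p               # move toward average position
        new_v[i] += w_sep * sep + w_ali * ali + w_coh * coh
    return new_v
```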

  5. Particle swarm optimization: Origins Kennedy and Eberhart [6] included a ‘roost’ in a simplified Reynolds-like simulation so that: Each agent was attracted towards the location of the roost. Each agent ‘remembered’ where it had been closest to the roost. Each agent shared information with its neighbors (originally, all other agents) about its closest location to the roost.

  6. Particle swarm optimization: The idea Eventually, all agents ‘landed’ on the roost. What if the distance to the roost is replaced by an unknown function? Will the agents ‘land’ at its minimum? [Photos: J. Kennedy and R. Eberhart]

  7. Continuous Optimization The continuous optimization problem can be stated as follows: find $X^* \subseteq \mathcal{X} \subseteq \mathbb{R}^n$ such that $X^* = \operatorname*{argmin}_{x \in \mathcal{X}} f(x) = \{x^* \in \mathcal{X} : f(x^*) \leq f(x) \ \forall x \in \mathcal{X}\}$. [Figure: surface plot of an example objective function.]
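As a concrete instance, the sphere function $f(x) = \sum_i x_i^2$ has the unique minimizer $x^* = 0$. A one-line Python version, used as the running test function in the sketches below (the function choice is an illustration, not taken from the slides):

```python
import numpy as np

def sphere(x):
    """f(x) = sum_i x_i^2; global minimum f(0) = 0."""
    return np.sum(np.asarray(x) ** 2)
```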

  8. Particle swarm optimization: The basic algorithm 1. Create a ‘population’ of agents (called particles) uniformly distributed over $\mathcal{X}$.

  9. Particle swarm optimization: The basic algorithm 2. Evaluate each particle’s position according to the objective function.

  10. Particle swarm optimization: The basic algorithm 3. If a particle’s current position is better than its previous best position, update it.

  11. Particle swarm optimization: The basic algorithm 4. Determine the best particle (according to the particles’ previous best positions).

  12. Particle swarm optimization: The basic algorithm 5. Update particles’ velocities according to $v_i^{t+1} = v_i^t + \varphi_1 U_1^t (pb_i^t - x_i^t) + \varphi_2 U_2^t (gb^t - x_i^t)$.

  13. Particle swarm optimization: The basic algorithm 6. Move particles to their new positions according to $x_i^{t+1} = x_i^t + v_i^{t+1}$.

  14. Particle swarm optimization: The basic algorithm 7. Go to step 2 until stopping criteria are satisfied.
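Steps 1-7 assemble into a compact gbest PSO loop. A minimal Python sketch for minimization; the swarm size, iteration budget, seed, and $\varphi_1 = \varphi_2 = 2.0$ defaults are illustrative choices, not values prescribed by the slides:

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, phi1=2.0, phi2=2.0, seed=0):
    """Basic gbest PSO (minimization). bounds: (dim, 2) array of [low, high]."""
    rng = np.random.default_rng(seed)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = len(low)

    # Step 1: particles uniformly distributed over the search space.
    x = rng.uniform(low, high, size=(n_particles, dim))
    v = np.zeros((n_particles, dim))

    pb = x.copy()                               # previous best positions
    pb_val = np.apply_along_axis(f, 1, x)       # step 2: evaluate
    gb = pb[np.argmin(pb_val)].copy()           # step 4: best particle so far

    for _ in range(n_iter):
        # Step 5: velocity update with U ~ Uniform(0, 1) drawn per dimension.
        u1 = rng.uniform(size=(n_particles, dim))
        u2 = rng.uniform(size=(n_particles, dim))
        v = v + phi1 * u1 * (pb - x) + phi2 * u2 * (gb - x)
        x = x + v                               # step 6: move particles

        # Steps 2-4: re-evaluate, update personal and global bests.
        val = np.apply_along_axis(f, 1, x)
        better = val < pb_val
        pb[better], pb_val[better] = x[better], val[better]
        gb = pb[np.argmin(pb_val)].copy()

    return gb, pb_val.min()
```

For example, `pso(sphere, np.array([[-5.0, 5.0]] * 2))` should return a point near the origin with a value close to 0.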


  22. Particle swarm optimization: Main variants Almost all modifications vary in some way the velocity-update rule: $v_i^{t+1} = v_i^t + \varphi_1 U_1^t (pb_i^t - x_i^t) + \varphi_2 U_2^t (gb^t - x_i^t)$

  23. Particle swarm optimization: Main variants Almost all modifications vary in some way the velocity-update rule: $v_i^{t+1} = \underbrace{v_i^t}_{\text{inertia}} + \varphi_1 U_1^t (pb_i^t - x_i^t) + \varphi_2 U_2^t (gb^t - x_i^t)$

  24. Particle swarm optimization: Main variants Almost all modifications vary in some way the velocity-update rule: $v_i^{t+1} = v_i^t + \underbrace{\varphi_1 U_1^t (pb_i^t - x_i^t)}_{\text{personal influence}} + \varphi_2 U_2^t (gb^t - x_i^t)$

  25. Particle swarm optimization: Main variants Almost all modifications vary in some way the velocity-update rule: $v_i^{t+1} = v_i^t + \varphi_1 U_1^t (pb_i^t - x_i^t) + \underbrace{\varphi_2 U_2^t (gb^t - x_i^t)}_{\text{social influence}}$
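In code, the three terms are easiest to see when kept separate. A sketch of a single particle's update, with variable names of my own choosing:

```python
import numpy as np

def velocity_update(v_i, x_i, pb_i, gb, phi1=2.0, phi2=2.0, rng=None):
    """One particle's velocity update, split into the three named terms."""
    rng = rng or np.random.default_rng()
    inertia = v_i                                                  # keep current motion
    personal = phi1 * rng.uniform(size=x_i.shape) * (pb_i - x_i)   # pull toward own best
    social = phi2 * rng.uniform(size=x_i.shape) * (gb - x_i)       # pull toward swarm best
    return inertia + personal + social
```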

  26. Particle swarm optimization: Different population topologies Every particle $i$ has a neighborhood $\mathcal{N}_i$: $v_i^{t+1} = v_i^t + \varphi_1 U_1^t (pb_i^t - x_i^t) + \varphi_2 U_2^t (lb_i^t - x_i^t)$
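A common concrete topology is the ring, where particle $i$'s neighborhood is $\{i-1, i, i+1\}$ and $lb_i$ is the best previous-best position within it. A sketch (the ring is one illustrative topology among many):

```python
import numpy as np

def local_best_ring(pb, pb_val):
    """lb_i = best previous-best position among {i-1, i, i+1} on a ring."""
    n = len(pb)
    lb = np.empty_like(pb)
    for i in range(n):
        neigh = [(i - 1) % n, i, (i + 1) % n]
        lb[i] = pb[neigh[np.argmin(pb_val[neigh])]]
    return lb
```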

  27. Particle swarm optimization: Inertia weight This variant adds a parameter called the inertia weight, so that the modified rule is $v_i^{t+1} = w\, v_i^t + \varphi_1 U_1^t (pb_i^t - x_i^t) + \varphi_2 U_2^t (lb_i^t - x_i^t)$. It was proposed by Shi and Eberhart [14].

  28. Particle swarm optimization: Time-decreasing inertia weight The value of the inertia weight is decreased during a run. It was proposed by Shi and Eberhart [15].
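A widely used schedule decreases $w$ linearly over the run; the $0.9 \to 0.4$ range below is a common setting in the literature, stated here as an assumption rather than taken from the slides:

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decrease w from w_start to w_end over t_max iterations."""
    return w_start - (w_start - w_end) * t / t_max

# Inside the main loop the velocity update then becomes:
#   w = inertia_weight(t, n_iter)
#   v = w * v + phi1 * u1 * (pb - x) + phi2 * u2 * (lb - x)
```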

  29. Particle swarm optimization: Canonical PSO It is a special case of the inertia weight variant, derived from $v_i^{t+1} = \chi \left( v_i^t + \varphi_1 U_1^t (pb_i^t - x_i^t) + \varphi_2 U_2^t (lb_i^t - x_i^t) \right)$, where $\chi$ is called a “constriction factor” and is fixed. It has been very influential since its proposal by Clerc and Kennedy [3].
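Clerc and Kennedy derive $\chi$ from $\varphi = \varphi_1 + \varphi_2 > 4$; the popular choice $\varphi_1 = \varphi_2 = 2.05$ gives $\chi \approx 0.7298$. A sketch of the computation (the default values are the commonly cited ones, not taken from the slides):

```python
import math

def constriction_factor(phi1=2.05, phi2=2.05):
    """Clerc-Kennedy constriction factor; requires phi = phi1 + phi2 > 4."""
    phi = phi1 + phi2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

# constriction_factor() ~= 0.7298; the canonical update is then
#   v = chi * (v + phi1 * u1 * (pb - x) + phi2 * u2 * (lb - x))
```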

  30. Particle swarm optimization: Fully Informed PSO In the Fully Informed PSO, a particle is attracted by every other particle in its neighborhood: $v_i^{t+1} = \chi \left( v_i^t + \sum_{p_k \in \mathcal{N}_i} \varphi_k U_k^t (pb_k^t - x_i^t) \right)$. It was proposed by Mendes et al. [9].
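A sketch of the fully informed update for one particle, with the total $\varphi$ split equally among the $|\mathcal{N}_i|$ neighbors (a common convention; the specific $\chi$ and $\varphi$ values are assumptions):

```python
import numpy as np

def fips_velocity(v_i, x_i, pb, neighbors, chi=0.7298, phi_total=4.1, rng=None):
    """Fully Informed PSO: every neighbor's best position attracts particle i."""
    rng = rng or np.random.default_rng()
    phi_k = phi_total / len(neighbors)            # split phi among the neighbors
    pull = sum(phi_k * rng.uniform(size=x_i.shape) * (pb[k] - x_i)
               for k in neighbors)
    return chi * (v_i + pull)
```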

  31. Particle swarm optimization: Other variants There are many other variants reported in the literature. Among others: with dynamic neighborhood topologies (e.g., [16], [10]); with enhanced diversity (e.g., [2], [13]); with different velocity-update rules (e.g., [11], [8]); with components from other approaches (e.g., [1], [5]); for discrete optimization problems (e.g., [7], [18]); ...

  32. Particle swarm optimization: Parameter selection Consider a one-particle, one-dimensional particle swarm. This particle’s velocity-update rule is $v^{t+1} = a v^t + b_1 U_1^t (pb^t - x^t) + b_2 U_2^t (gb^t - x^t)$

  33. Particle swarm optimization: Parameter selection Additionally, if we take $E[U^t(0,1)] = \frac{1}{2}$, $b = \frac{b_1 + b_2}{2}$, $pb^{t+1} = pb^t$, $gb^{t+1} = gb^t$, and $r = \frac{b_1}{b_1 + b_2}\, pb^t + \frac{b_2}{b_1 + b_2}\, gb^t$,

  34. Particle swarm optimization: Parameter selection Then, we can say that $v^{t+1} = a v^t + b (r - x^t)$.
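This deterministic recurrence can be iterated directly to observe the convergent, oscillatory, and divergent regimes that the $(a, b)$ analysis predicts; the sample values in the comment are my own illustrative picks:

```python
def simulate(a, b, r=0.0, x0=1.0, v0=0.0, steps=50):
    """Iterate v_{t+1} = a*v_t + b*(r - x_t), x_{t+1} = x_t + v_{t+1}."""
    x, v, traj = x0, v0, []
    for _ in range(steps):
        v = a * v + b * (r - x)
        x = x + v
        traj.append(x)
    return traj

# e.g. simulate(0.6, 0.8) spirals in to r = 0, while simulate(1.1, 2.5) diverges.
```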

  35. Particle swarm optimization: Parameter selection It can be shown that this system behaves in different ways depending on the values of $a$ and $b$. [Graph taken from Trelea [17].]

  36. Particle swarm optimization: Parameter selection Some examples. [Graphs taken from Trelea [17].]

  37. Particle swarm optimization: Parameter selection Factors to consider when choosing a particular variant and/or parameter set: the characteristics of the problem (“modality”, search ranges, dimension, etc.); the available search time (wall-clock time or function evaluations); and the solution-quality threshold that defines a satisfactory solution.
