SLIDE 1

A Cooperative Approach to Particle Swarm Optimization

Authors: Frans van den Bergh, Andries P. Engelbrecht Journal: Transactions on Evolutionary Computation, vol 8, No. 3, June 2004

Presentation: Jose Manuel Lopez Guede

SLIDE 2

Introduction

  • “Curse of dimensionality”
  • PSO
  • CPSO
  • CPSO-Sk
  • CPSO-Hk
  • Comparison with GA
  • Results
SLIDE 3

Particle Swarm Optimizers I

  • PSO:

– Stochastic optimization technique
– Swarm: a population of particles
– During each iteration, each particle accelerates, influenced by:

  • Its own personal best position
  • Global best position
SLIDE 4

Particle Swarm Optimizers II


SLIDE 6

Particle Swarm Optimizers III

– During each iteration, each particle's velocity and position are updated.
– The global best position is updated.
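The update rules just mentioned follow the standard inertia-weight PSO formulation; the sketch below assumes typical parameter values (w, c1, c2) from the PSO literature rather than the exact settings shown on the original slide.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49):
    """One standard PSO update for a single particle (inertia-weight form).

    x, v, pbest are this particle's position, velocity and personal best;
    gbest is the swarm's global best.  w, c1 and c2 are typical values
    from the PSO literature, assumed here for illustration.
    """
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)   # pull toward personal best
             + c2 * random.random() * (gb - xi)   # pull toward global best
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

def update_gbest(pbests, f):
    # The global best is the personal best with the lowest error value.
    return min(pbests, key=f)
```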

SLIDE 7

Cooperative Learning I

  • PSO:

– Each particle represents an n-dim vector that can be used as a potential solution.

  • Position of the particle
  • Best position of the particle
  • Best position of the swarm
SLIDE 8

Cooperative Learning II

– Drawback:

  • The authors show a numerical example in which PSO moves to a worse value in an iteration.
  • Cause: the error function is computed only after all the components of the vector have been updated to their new values.

– Solution:

  • Evaluate the error function more frequently: every time a component of the vector has been updated.

– New problem:

  • The evaluation is only possible with a complete vector.
SLIDE 9

Cooperative Learning III

  • CPSO-S:

– The n-dim vector is partitioned into n swarms of 1-D particles.
– Each swarm represents one dimension of the problem.
– “Context vector”:

  • f requires an n-dim vector to be evaluated.
  • To calculate the context vector for the particles of swarm j, the remaining components are taken from the best particles of the remaining swarms.
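The context-vector construction can be sketched as follows; `swarm_bests` and `build_context` are illustrative names, not the paper's notation.

```python
def build_context(swarm_bests, j, candidate):
    """Assemble the full n-dim vector used to evaluate a particle of swarm j.

    swarm_bests[i] holds the best component found so far by 1-D swarm i;
    candidate is the component of swarm j currently being evaluated.
    """
    vec = list(swarm_bests)   # best components contributed by all swarms
    vec[j] = candidate        # substitute swarm j's candidate component
    return vec
```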

SLIDE 10

Cooperative Learning IV

[Figure: construction of the context vector from the best particles of each swarm.]

SLIDE 11

Cooperative Learning V

  • Advantage:

– The error function f is evaluated after each component in the vector is updated.

  • However:

– Some components of the vector could be correlated.
– These components should be in the same swarm, since the independent changes made by the different swarms have a detrimental effect on correlated variables.
– Hence: instead of n swarms of 1-D, use swarms of c-D, grouped blindly.

SLIDE 12

Cooperative Learning VI

  • CPSO-Sk:

– The vector is split blindly into swarms of c-D, hoping that some correlated variables end up in the same swarm.
– Split factor K: the vector is split into K parts (swarms).
– CPSO-S is the special case where K = n.
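A blind split of the dimension indices into K parts might look like this (a contiguous grouping is assumed for illustration; the paper does not fix a particular scheme):

```python
def split_dimensions(n, k):
    """Partition dimension indices 0..n-1 into k blindly chosen parts,
    one per swarm of CPSO-S_K.  Sizes differ by at most one dimension."""
    base, extra = divmod(n, k)
    parts, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread the remainder
        parts.append(list(range(start, start + size)))
        start += size
    return parts
```

With K = n every part holds a single dimension, recovering plain CPSO-S.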

SLIDE 13

Cooperative Learning VII

[Figure: the CPSO-S algorithm.]

SLIDE 14

Cooperative Learning VIII

– Drawback:

  • It is possible that the algorithm becomes trapped in a state where all the swarms are unable to discover better solutions: stagnation.
  • The authors show an example.
SLIDE 15

Hybrid CPSOs – CPSO-Hk I

  • Motivation:

– CPSO-Sk can become trapped (stagnation).
– PSO has the ability to escape from pseudominimizers.
– CPSO-Sk has faster convergence.

  • Solution:

– Interleave the two algorithms.

  • Execute CPSO-Sk for one iteration, followed by one iteration of PSO.
  • Information interchange is a form of cooperation.
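The interleaving can be sketched as a driver loop; `cpso_step`, `pso_step` and `exchange` are placeholder callables standing in for the two optimizers and the information interchange, not the paper's API.

```python
def cpso_hk(cpso_step, pso_step, exchange, iterations):
    """Interleave CPSO-S_K and PSO, one iteration each (CPSO-H_K sketch).

    cpso_step/pso_step run one iteration of each optimizer and return
    their current best solution; exchange(best, target) injects that
    solution into the other optimizer by overwriting one of its particles.
    """
    best = None
    for _ in range(iterations):
        best = cpso_step()
        exchange(best, "pso")    # CPSO-Sk result overwrites a PSO particle
        best = pso_step()
        exchange(best, "cpso")   # PSO best is fed back to the CPSO swarms
    return best
```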
SLIDE 16

[Figure: information interchange in CPSO-Hk — the CPSO-Sk swarms overwrite one particle of the PSO swarm Q, and the PSO swarm feeds back into the swarms of the CPSO-Sk.]

SLIDE 17

Experimental Setup I

  • Compare the PSO, CPSO-Sk and CPSO-Hk algorithms.
  • Measure: #function evaluations.
  • Several functions that are popular in the PSO community were selected for testing.
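As a representative example of such benchmarks (an assumption — the slide does not list the exact suite), the Rastrigin function is widely used to test PSO variants:

```python
import math

def rastrigin(x):
    """Rastrigin benchmark: highly multimodal, global minimum f(0) = 0.
    Shown as a representative test function, not necessarily one from
    the paper's exact suite."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)
```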

SLIDE 18

Experimental Setup II

“All the functions were tested under coordinate rotation using Salomon’s algorithm.”

SLIDE 19

Experimental Setup III

  • PSO configuration:

– All experiments were run 50 times.
– 10, 15 and 20 particles per swarm.
– Results reported are averages of the best value in the swarm.

Domain: “magnitude to which the initial random particles are scaled”

SLIDE 20

Experimental Setup IV

SLIDE 21

Experimental Setup V

  • GA configuration:
SLIDE 22

Experimental Setup VI

SLIDE 23

Results I

  • Fixed-Iteration Results I

– 2×10^5 function evaluations.

SLIDE 24

Results II

  • Fixed-Iteration Results II

– 2×10^5 function evaluations.

SLIDE 25

Results III

  • Fixed-Iteration Results III

– PSO-based algorithms performed better than the GA-based algorithms in general.
– The cooperative algorithms collectively performed better than the standard PSO in 80% of the cases.

SLIDE 26

Results IV

  • Robustness and speed Results I

– “Robustness”: the algorithm succeeds in reducing the error function f below a specified threshold using fewer than a given number of evaluations.
– “A robust algorithm”: one that manages to reach the threshold consistently (in all runs).
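Under this definition, robustness over a set of runs can be measured with a sketch like the following (names are illustrative, not the paper's):

```python
def robustness(runs, threshold, max_evals):
    """Fraction of independent runs whose error drops below `threshold`
    within `max_evals` function evaluations.

    Each run is a list of error values, one per function evaluation;
    a robust algorithm scores 1.0 (every run reaches the threshold)."""
    ok = sum(1 for errors in runs
             if any(e < threshold for e in errors[:max_evals]))
    return ok / len(runs)
```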

SLIDE 27

Results V

  • Robustness and speed Results II
SLIDE 28

Results VI

  • Robustness and speed Results III

– CPSO-H6 appears to be the winner, because it achieved a perfect score in 7 of 10 cases.
– There is a tradeoff between the convergence speed and the robustness of the algorithm.