

SLIDE 1

CCPSO2 Proposed Approach Experimental Results Conclusion

Automated Iterative Partitioning for Cooperatively Coevolving Particle Swarms in Large Scale Optimization

Peter Frank Perroni¹, Daniel Weingaertner¹, Myriam Regattieri Delgado²

¹ Departamento de Informática, Universidade Federal do Paraná
² Pós-Graduação em Engenharia Elétrica e Informática Industrial, Universidade Tecnológica Federal do Paraná

BRACIS, 2015


SLIDE 2

Summary

1. CCPSO2: Basic Concepts; Limitation Addressed
2. Proposed Approach: Hypothesis; Iterative Partitioning Method
3. Experimental Results
4. Conclusion

SLIDE 3

CCPSO2

- PSO variant developed to solve complex, large-scale optimization problems.
- Relatively low cost and good performance when compared to counterparts.
- Grouping of the swarms' dimensions is similar to the method used in Cooperative (multi-swarm) PSO.

SLIDE 4

Tackles high dimensionality by:

- Permuting all n dimensions at every iteration t.
- Randomly changing the partition size s if no improvement is obtained.
- Assigning each swarm the same number of dimensions.

Given: n = number of dimensions; S = {s1, s2, ...}; s ∈ S = dimensions per swarm, randomly chosen.
Calculated: K × s = n, where K = number of swarms.

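The grouping above can be sketched as follows (a minimal illustration; the function and variable names are not from the paper, and s is restricted to divisors of n for simplicity):

```python
import random

def partition_dimensions(n, S, rng=random.Random(42)):
    """Pick a swarm size s from S that divides n, permute all n
    dimension indices, and split them into K = n // s groups."""
    candidates = [s for s in S if n % s == 0]
    s = rng.choice(candidates)
    idx = list(range(n))
    rng.shuffle(idx)                      # random permutation each iteration
    K = n // s
    return [idx[k * s:(k + 1) * s] for k in range(K)]

groups = partition_dimensions(1000, [2, 5, 10, 50, 100, 250])
print(len(groups), len(groups[0]))
```

Each of the K groups becomes one swarm's set of dimensions, so K × s = n holds by construction.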

SLIDE 5

Convergence speed is controlled by using a lbest (local best) ring topology. Particle updates are performed by using:

- Cauchy (C) or Gaussian (N) distributions.
- Personal best, lbest, and the swarm's best to guide the direction.

x_{i,j}(t+1) =
    y_{i,j}(t) + C(1)·|y_{i,j}(t) − ŷ′_{i,j}(t)|,       if rand ≤ r
    ŷ′_{i,j}(t) + N(0,1)·|y_{i,j}(t) − ŷ′_{i,j}(t)|,    otherwise        (1)

where:
    x_{i,j}: particle's position in dimension j
    y_{i,j}: particle's personal best
    ŷ′_{i,j}: ring local best (lbest)

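The two-branch update in Eq. (1) can be sketched per coordinate as follows (the threshold name r and its default value are illustrative assumptions):

```python
import math
import random

def update_dimension(y, ybest_local, r=0.5, rng=random.Random(0)):
    """One coordinate update in the spirit of Eq. (1): a Cauchy step
    around the personal best y, or a Gaussian step around the
    ring-local best, both scaled by the distance |y - ybest_local|."""
    spread = abs(y - ybest_local)
    if rng.random() <= r:
        # standard Cauchy sample via the inverse-CDF (tangent) method
        cauchy = math.tan(math.pi * (rng.random() - 0.5))
        return y + cauchy * spread
    return ybest_local + rng.gauss(0.0, 1.0) * spread

print(update_dimension(0.0, 2.0))
```

Note that the new position depends only on the personal best and the lbest, not on the current position; the heavy-tailed Cauchy branch favours occasional long exploratory jumps, while the Gaussian branch keeps moves local.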

SLIDE 6

Algorithm 1 Pseudocode of CCPSO2

1:  b(k, z) = (P₁.ŷ, ..., P_{k−1}.ŷ, z, P_{k+1}.ŷ, ..., P_K.ŷ)
2:  Create and initialize K swarms with s dimensions each
3:  repeat
4:      if f(ŷ) has not improved then randomly choose s from S and let K = n/s
5:      Randomly permute all n dimension indices
6:      Construct K swarms, each with s dimensions
7:      for each swarm k ∈ [1..K] do
8:          for each particle i ∈ [1..p] do
9:              if f(b(k, Pk.xi)) < f(b(k, Pk.yi)) then
10:                 Pk.yi ← Pk.xi
11:             if f(b(k, Pk.yi)) < f(b(k, Pk.ŷ)) then
12:                 Pk.ŷ ← Pk.yi
13:         for each particle i ∈ [1..p] do
14:             Pk.ŷ′i ← localBest(Pk.y(i−1), Pk.yi, Pk.y(i+1))
15:         if f(b(k, Pk.ŷ)) < f(ŷ) then the k-th part of ŷ is replaced by Pk.ŷ
16:     for each swarm k ∈ [1..K] do
17:         for each particle i ∈ [1..p] do
18:             Update particle Pk.xi using (1)
19: until the termination criterion is met
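Line 1's context-vector function b(k, z) splices a candidate part z for swarm k in among the best parts contributed by the other swarms, so that a partial solution can be evaluated on the full n-dimensional objective. A minimal sketch (0-indexed, names illustrative):

```python
def b(k, swarm_bests, z):
    """Context vector: every swarm contributes its best part,
    except swarm k, which contributes the candidate part z."""
    return [v
            for j, part in enumerate(swarm_bests)
            for v in (z if j == k else part)]

print(b(1, [[1, 2], [3, 4], [5, 6]], [9, 9]))  # → [1, 2, 9, 9, 5, 6]
```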

SLIDE 7

Limitation Addressed

- Random rearrangement of the swarms' dimensions is one of the strongest characteristics of CCPSO2.
- However, it can also be a weakness if S is not chosen well.
- Manual setup of S is time consuming and will usually not test many possibilities.
- Random selection of s does not consider search-phase characteristics.

SLIDE 8

Proposed Approach

- Search characteristics can greatly affect the results. Well-known behaviours include:
  - Exploratory search reduces the probability of local-minima traps.
  - Intensification increases the chance of finding better local results.
- Improved results can be obtained by:
  - Exploring at the initial stages of the search.
  - Intensifying at the later stages.

SLIDE 9

Well-known behaviours include:

- Each swarm has its own, partially independent state.
- The fewer the swarms, the more dimensions depend on the same swarm state.
- The more the swarms, the fewer dimensions restrict each swarm's movements.

SLIDE 10

Considering that:

- Intensification is usually implemented by restricting the swarm's movement.

Then:

- Hypothetically, since a small number of swarms restricts the swarms' movement, it could also increase the likelihood of intensifying the search.
- Likewise, a higher number of swarms could increase the probability of exploring the search space.

SLIDE 11

CCPSO2-IP

- Replaces S with a boost function that controls the maximum number of swarms, maxK.
- The aggressiveness of the boost function is controlled by the boost rate parameter Br.
- maxK is reduced iteratively, a fixed number of times (maxTries), by a static factor Kr.
- Once maxK reaches its minimum, the boost function is called again to define a new maxK.
- The process is repeated until the end of the search.
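A sketch of the reduction schedule implied by Kr = 1/maxTries (the function name and the integer rounding are assumptions, not from the paper):

```python
def k_schedule(maxK, maxTries):
    """Iterative reduction: starting from maxK swarms, shrink K by a
    fixed step of Kr = 1/maxTries of maxK per reduction, floored at 1."""
    Kr = 1.0 / maxTries
    step = max(int(maxK * Kr), 1)
    K, seq = maxK, []
    while K > 1:
        seq.append(K)
        K = max(K - step, 1)
    seq.append(1)
    return seq

print(k_schedule(8, 4))  # → [8, 6, 4, 2, 1]
```

When K bottoms out, the boost function is consulted again for a fresh maxK, restarting the schedule.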

SLIDE 12

Figure: Iterative Partitioning method for the Exponential boost function.

Boost_E(t) = Br · exp(−12 · Br · t / Tmax)
SLIDE 13

Figure: IP for the Sigmoid boost function (for Br = 1.0).

Boost_S(t) = Br / (1.0 + exp(12 · Br · t / Tmax − 6 · Br))

SLIDE 14

Figure: IP for the Linear boost function (for Br = 1.0).

Boost_L(t) = −Br · t / Tmax + Br
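The three boost functions above, together with the clamp maxK(t) = MIN(MAX(n · Boost(t), 1), n) used by the algorithm, can be sketched as follows (formulas as reconstructed from the slides; treat the exact signs inside the exponentials as assumptions):

```python
import math

def boost_exp(t, Tmax, Br):      # exponential decay from Br toward 0
    return Br * math.exp(-12.0 * Br * t / Tmax)

def boost_sig(t, Tmax, Br):      # sigmoid decay centred near Tmax/2
    return Br / (1.0 + math.exp(12.0 * Br * t / Tmax - 6.0 * Br))

def boost_lin(t, Tmax, Br):      # linear decay from Br to 0
    return -Br * t / Tmax + Br

def max_k(t, Tmax, n, boost, Br=1.0):
    """maxK(t) = MIN(MAX(n * Boost(t), 1), n): many swarms early
    (exploration), few swarms late (intensification)."""
    return min(max(int(n * boost(t, Tmax, Br)), 1), n)

for f in (boost_exp, boost_sig, boost_lin):
    print(f.__name__, max_k(0, 500_000, 1000, f), max_k(500_000, 500_000, 1000, f))
```

All three curves drive maxK from near n at the start of the search down to 1 at the end; Br controls how aggressively the exponential and sigmoid variants decay.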

SLIDE 15

Algorithm 2 CCPSO2-IP

1:  maxK(t) = MIN(MAX(n · Boost(t), 1), n)
2:  K = maxK = maxK(0), Kr = 1/maxTries, fitImprovement = 1
3:  Create K swarms
4:  for t in [1..Tmax] do
5:      if fitImprovement < minImprovement then
6:          if maxTries iterations without improvement then
7:              if K ≤ MAX(maxK · Kr, 1) then
8:                  if maxTries updates on maxK without improvement then
9:                      maxK = maxK(0)                  ▷ force exploration
10:                 else                                ▷ iteratively reduce maxK
11:                     Calculate maxK(t)
12:                     K = maxK
13:             else                                    ▷ iteratively reduce K
14:                 K = MIN(MAX(K − MAX(maxK · Kr, 1), 1), maxK)
15:             if new K is different from previous K then
16:                 Recreate swarms with new K
17:             else
18:                 Permute dimensions and resize swarms
19:             Recalculate PBest's and KBest's fitness values
20:         else                                        ▷ give it a 50% chance of permutation
21:             if rand < 0.5 then
22:                 Permute dimensions and resize swarms
23:                 Recalculate PBest's and KBest's fitness
24:     Execute CCPSO2 search and calculate fitImprovement

SLIDE 16

Results

Benchmark used to validate the method:

- Congress on Evolutionary Computation 2013/2015 (CEC13/15) suite for Large Scale Global Optimization (LSGO).
- 15 benchmark functions, 1000 dimensions.

The Iterative Partitioning method was compared to CCPSO2 and to classic PSO.

SLIDE 17

[Chart: results for PSO, CCPSO2, CCPSO2-IP S, and CCPSO2-IP L across benchmark functions F1–F15, plotted as the averaged difference from CCPSO2-IP E. Y = 0 is the CCPSO2-IP E result; Y > 0 means higher (worse) fitness, Y < 0 means lower (better) fitness. Dimensions = 1000, particles = 15, evaluations = 500K, independent runs = 10.]

Figure: Averages and standard deviations on a logarithmic scale, compared to CCPSO2-IP E [15 benchmark functions, 500K fitness evaluations, 15 particles, 10 independent runs].

SLIDE 18

Figure: Comparison between methods for the F2 benchmark function [1M fitness evaluations, 30 particles, 25 independent runs. PSO: w=0.7, c1=0.8, c2=1.1. CCPSO2: S={2,5,10,50,100,250}. CCPSO2-IP: E [Br=0.5, maxTries=2]; S [Br=0.521, maxTries=3]; L [Br=0.5, maxTries=5]].

SLIDE 19

Figure: Last 200K fitness evaluations for the CCPSO2-IP methods on F2 [1M fitness evaluations, 30 particles, 25 independent runs. CCPSO2-IP: E [Br=0.5, maxTries=2]; S [Br=0.521, maxTries=3]; L [Br=0.5, maxTries=5]].

SLIDE 20

Conclusion

CCPSO2-IP showed:

- Superior results when compared to CCPSO2 and PSO, especially on difficult functions.
- Good capacity to escape local minima, even after long stagnation periods.
