SLIDE 1

Results of the 2017 IEEE CEC Competition on Niching Methods for Multimodal Optimization

M.G. Epitropakis1, X. Li2, and A. Engelbrecht3

1Data Science Institute, Department of Management Science, Lancaster University, UK 2School of Computer Science and Information Technology, RMIT University, Australia 3Department of Computer Science, University of Pretoria, South Africa

IEEE CEC 2017 Competition on Niching Methods

m.epitropakis@lancaster.ac.uk xiaodong.li@rmit.edu.au engel@cs.up.ac.za

SLIDE 2

Table of contents

  • 1. Introduction
  • 2. Participants
  • 3. Results
  • 4. Winners
  • 5. Summary

SLIDE 3

Introduction

SLIDE 4

Introduction

  • Many real-world problems are “multi-modal” by nature, i.e., multiple satisfactory solutions exist.
  • Niching methods promote and maintain the formation of multiple stable subpopulations within a single population.
  • Aim: maintain diversity and locate multiple globally optimal solutions.
  • Challenge: find an efficient optimization algorithm that can locate multiple globally optimal solutions on multi-modal problems with various characteristics.

SLIDE 5

Competition: CEC 2013/2015/2016/2017

Provide a common platform that encourages fair and easy comparisons across different niching algorithms.

  • X. Li, A. Engelbrecht, and M.G. Epitropakis, “Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization”, Technical Report, Evolutionary Computation and Machine Learning Group, RMIT University, Australia, 2013
  • 20 benchmark multi-modal functions with different characteristics
  • 5 accuracy levels: ε ∈ {10^-1, 10^-2, 10^-3, 10^-4, 10^-5}
  • The benchmark suite and the performance measures have been implemented in C/C++, Java, and MATLAB (Python soon)
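To make the accuracy levels concrete: a known global optimum is counted as “found” when some solution matches the optimal fitness to within ε (the official C/C++/Java/MATLAB benchmark code also applies a niche-radius check and defines the exact rules; the sketch below is illustrative only, and its `radius` default is an assumption):

```python
import math

def count_found_optima(population, f, optima, f_star, eps, radius=0.01):
    """Count distinct known global optima located by a population.
    An optimum counts if some solution lies within `radius` of it
    (Euclidean) and its fitness is within `eps` of the optimal value
    f_star. Sketch only -- use the official benchmark code for scoring."""
    found = 0
    for opt in optima:
        for x in population:
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, opt)))
            if dist <= radius and abs(f_star - f(x)) <= eps:
                found += 1
                break  # count each known optimum at most once
    return found
```

For example, a population containing a point 0.001 away from the single optimum of f(x) = -x² is credited with one found optimum at ε = 10^-3.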

SLIDE 6

Benchmark function set

  • X. Li, A. Engelbrecht, and M.G. Epitropakis, “Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization”, Technical Report, Evolutionary Computation and Machine Learning Group, RMIT University, Australia, 2013

  Id   Dim.      # GO    Name                      Characteristics
  F1   1         2       Five-Uneven-Peak Trap     Simple, deceptive
  F2   1         5       Equal Maxima              Simple
  F3   1         1       Uneven Decreasing Maxima  Simple
  F4   2         4       Himmelblau                Simple, non-scalable, non-symmetric
  F5   2         2       Six-Hump Camel Back       Simple, non-scalable, non-symmetric
  F6   2,3       18,81   Shubert                   Scalable, #optima increases with D, unevenly distributed grouped optima
  F7   2,3       36,216  Vincent                   Scalable, #optima increases with D, unevenly distributed optima
  F8   2         12      Modified Rastrigin        Scalable, #optima independent of D, symmetric
  F9   2         6       Composition Function 1    Scalable, separable, non-symmetric
  F10  2         8       Composition Function 2    Scalable, separable, non-symmetric
  F11  2,3,5,10  6       Composition Function 3    Scalable, non-separable, non-symmetric
  F12  2,3,5,10  8       Composition Function 4    Scalable, non-separable, non-symmetric
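As a concrete instance, F2 (Equal Maxima) is the classic one-dimensional function with five equally valued peaks; a common formulation is shown below (check the technical report for the exact definition used in the suite):

```python
import math

def equal_maxima(x):
    """Deb's Equal Maxima: five equal global maxima (value 1.0)
    at x = 0.1, 0.3, 0.5, 0.7, 0.9 on the interval [0, 1]."""
    return math.sin(5 * math.pi * x) ** 6
```

Evaluating it at x = 0.3 returns a value of 1.0 (up to floating-point error), while x = 0.2 sits in a valley near 0.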

SLIDE 7

Performance Measures

Peak Ratio (PR) measures the average fraction of all known global optima found over multiple runs:

  PR = ( Σ_{run=1}^{NR} # of Global Optima found in run ) / ( (# of known Global Optima) × NR )

where NR is the number of runs.

Who is the winner:

  • The participant with the highest average Peak Ratio performance on all benchmarks wins.
  • On all functions, the higher the PR value, the better.
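The PR formula above reduces to a few lines of code; a minimal sketch, where `found_per_run` is the number of global optima located in each run:

```python
def peak_ratio(found_per_run, known_optima):
    """Average fraction of the known global optima found over all runs:
    PR = sum(found per run) / (known optima * number of runs)."""
    nr = len(found_per_run)
    return sum(found_per_run) / (known_optima * nr)

# e.g. 3 runs on a function with 4 known global optima,
# finding 4, 3, and 2 of them respectively:
print(peak_ratio([4, 3, 2], known_optima=4))  # 0.75
```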

SLIDE 8

Participants

SLIDE 9

Participants

Submissions to the competition:

  • (SSGA-DMRTS-DDC: SSGA-1) Steady State Genetic Algorithm with the Dynamic Modified Restricted Tournament Selection Method and the Dynamic Distance Criterion [1]
  • (SSGA-DMRTS-DDC-F: SSGA-2) Steady State Genetic Algorithm with a static (dimensionality-dependent) Modified Restricted Tournament Selection Method and the Dynamic Distance Criterion, by Camila Silva de Magalhães, Lincon Onório Vidal, Matheus Muller Pereira da Silva, Raquel Gomes Gonçalves Farias, Helio José Correa Barbosa, and Laurent Emmanuel Dardenne, from UFRJ and LNCC, Brazil

SLIDE 10

Participants (2)

Implemented algorithms for comparisons:

  • (CrowdingDE) Crowding Differential Evolution [3]
  • (DE/nrand/1) Niching Differential Evolution algorithm with neighborhood mutation strategies [4]
  • (dADE/nrand/1) A Dynamic Archive Niching Differential Evolution algorithm for Multimodal Optimization [5]
  • (NEA2) Niching the CMA-ES via Nearest-Better Clustering [6]
  • (NMMSO) Niching Migratory Multi-Swarm Optimiser of Fieldsend [2]

Also in the repository: CMA-ES, IPOP-CMA-ES, DE/nrand/1,2, DECG, DELG, DELS-aj, CrowdingDE, dADE/nrand/1,2, NEA1, NEA2, N-VMO, PNA-NSGAII, A-NSGAII, rlsis, rs-cmsa-es, ascga, nea2+, ...

SLIDE 11

Results

SLIDE 12

Results

Summary:

  • 2 new search algorithms
  • 5 comparators based on the previous competitions at CEC2013 and CEC2015
  • 20 multi-modal benchmark functions
  • 5 accuracy levels: ε ∈ {10^-1, 10^-2, 10^-3, 10^-4, 10^-5}
  • Results: per accuracy level & over all accuracy levels
  • In total (CEC2013, CEC2015, CEC2016), 25 algorithms in the repository: https://github.com/mikeagn/CEC2013

SLIDE 13

Accuracy level ε = 10^-1

[Figure: Peak Ratio per benchmark function (1-20) and Peak Ratio over all benchmark functions, for SSGA-1, SSGA-2, CrowdingDE, dADE/nrand/1, DE/nrand/1, NEA2, and NMMSO at accuracy level 1.0e-1.]

SLIDE 14

Accuracy level ε = 10^-2

[Figure: Peak Ratio per benchmark function (1-20) and Peak Ratio over all benchmark functions, for SSGA-1, SSGA-2, CrowdingDE, dADE/nrand/1, DE/nrand/1, NEA2, and NMMSO at accuracy level 1.0e-2.]

SLIDE 15

Accuracy level ε = 10^-3

[Figure: Peak Ratio per benchmark function (1-20) and Peak Ratio over all benchmark functions, for SSGA-1, SSGA-2, CrowdingDE, dADE/nrand/1, DE/nrand/1, NEA2, and NMMSO at accuracy level 1.0e-3.]

SLIDE 16

Accuracy level ε = 10^-4

[Figure: Peak Ratio per benchmark function (1-20) and Peak Ratio over all benchmark functions, for SSGA-1, SSGA-2, CrowdingDE, dADE/nrand/1, DE/nrand/1, NEA2, and NMMSO at accuracy level 1.0e-4.]

SLIDE 17

Accuracy level ε = 10^-5

[Figure: Peak Ratio per benchmark function (1-20) and Peak Ratio over all benchmark functions, for SSGA-1, SSGA-2, CrowdingDE, dADE/nrand/1, DE/nrand/1, NEA2, and NMMSO at accuracy level 1.0e-5.]

SLIDE 18

Performance per benchmark across all accuracy levels

[Figure: Peak Ratio of each algorithm (SSGA-1, SSGA-2, CrowdingDE, dADE/nrand/1, DE/nrand/1, NEA2, NMMSO) on each benchmark function (1-20), across all accuracy levels.]

SLIDE 19

Performance per algorithm

[Figure: Peak Ratio of each algorithm at each accuracy level (acc1-acc5): SSGA-1, SSGA-2, CrowdingDE, dADE/nrand/1, DE/nrand/1, NEA2, NMMSO.]

SLIDE 20

Statistical Analysis

Pairwise comparisons (each cell: p / pb):

                SSGA-1  SSGA-2  CrowdingDE  dADE/nrand/1  DE/nrand/1  NEA2
  SSGA-2        +/=     N/A     N/A         N/A           N/A         N/A
  CrowdingDE    +/=     –/–     N/A         N/A           N/A         N/A
  dADE/nrand/1  +/=     =/=     +/+         N/A           N/A         N/A
  DE/nrand/1    –/–     –/–     =/=         –/–           N/A         N/A
  NEA2          +/+     –/–     +/+         +/+           +/+         N/A
  NMMSO         +/+     +/+     +/+         +/+           +/+         =/=

  • p: Wilcoxon rank-sum test
  • pb: with Bonferroni correction
  • +: row wins against column
  • –: row loses to column
  • =: non-significant difference
  • N/A: not applicable
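The test behind each cell can be sketched in a few lines. The normal-approximation version below is illustrative only (no tie or continuity correction); a statistics library should be preferred in practice:

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.
    Sketch only: no tie correction beyond average ranks, no continuity
    correction."""
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)

    def rank(v):
        # average rank of a value in the pooled sample (ties share a rank)
        positions = [i + 1 for i, p in enumerate(pooled) if p == v]
        return sum(positions) / len(positions)

    r1 = sum(rank(v) for v in x)                     # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2                      # mean of r1 under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # std of r1 under H0
    z = (r1 - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Bonferroni correction: with k pairwise comparisons,
# reject only when p < alpha / k.
```

Clearly separated samples (e.g. [1..5] vs [10..14]) give a small p-value, while interleaved samples do not.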

SLIDE 21

Overall performance (1)

[Figure: Peak Ratio over all benchmark functions and all accuracy levels for SSGA-1, SSGA-2, CrowdingDE, dADE/nrand/1, DE/nrand/1, NEA2, and NMMSO.]

SLIDE 22

Overall performance (2)

  Year  Algorithm     Median  Mean    St.D.   Rank (Median)  Rank (Mean)
  2017  SSGA-1        0.6667  0.6794  0.3283  5              5
  2017  SSGA-2        0.8794  0.7381  0.3013  2              4
  2015  NMMSO         0.9885  0.8221  0.2538  1              1
  2013  CrowdingDE    0.6667  0.5731  0.3612  5              7
  2013  DE/nrand/1    0.6396  0.5809  0.3338  7              6
  2013  dADE/nrand/1  0.7488  0.7383  0.3010  4              3
  2013  NEA2          0.8513  0.7940  0.2332  3              2
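The rank columns follow directly from sorting the statistics (higher Peak Ratio is better); for the mean PR values reported above:

```python
# Mean Peak Ratio per algorithm, taken from the overall-performance table
means = {
    "SSGA-1": 0.6794, "SSGA-2": 0.7381, "NMMSO": 0.8221,
    "CrowdingDE": 0.5731, "DE/nrand/1": 0.5809,
    "dADE/nrand/1": 0.7383, "NEA2": 0.7940,
}
# Sort algorithms by mean Peak Ratio, best first
ranking = sorted(means, key=means.get, reverse=True)
print(ranking[0])  # NMMSO
```

This reproduces the "Rank (Mean)" column: NMMSO first, NEA2 second, CrowdingDE last.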

SLIDE 23

Winners

SLIDE 24

Winners

Ranking based on average PR values (CEC2017 entries only):

  • 1. (SSGA-DMRTS-DDC-F: SSGA-2) Steady State Genetic Algorithm with a static (dimensionality-dependent) Modified Restricted Tournament Selection Method and the Dynamic Distance Criterion
  • 2. (SSGA-DMRTS-DDC: SSGA-1) Steady State Genetic Algorithm with the Dynamic Modified Restricted Tournament Selection Method and the Dynamic Distance Criterion [1]

Note: the algorithms have not been fine-tuned for this specific benchmark suite!
Note: the new algorithms have not outperformed the state of the art.

SLIDE 25

Summary

SLIDE 26

Conclusions

Summary

  • Two new search algorithms (27 algorithms in total!)
  • Winner: (SSGA-DMRTS-DDC-F) Steady State Genetic Algorithm with a static (dimensionality-dependent) Modified Restricted Tournament Selection Method and the Dynamic Distance Criterion
  • Competitive against the state of the art (steady-state GA, RTS, dynamic distance criterion, adaptive parameters)

SLIDE 27

Conclusions (2)

  • State-of-the-art algorithms perform very well on the benchmark set
  • New algorithms produce competitive and promising performance

Key characteristics of the new algorithms:

  • Modified classic niching techniques: Modified RTS, dynamic distance measures
  • Usage of adaptive parameter techniques
  • Algorithms: Steady State Genetic Algorithms

SLIDE 28

Future Work

Possible objectives:

  • Re-organize the competitions from scratch
  • Enhance the benchmark function set
  • Introduce new performance measures


SLIDE 29

Acknowledgment

We would like to thank:

  • The participants :-)

SLIDE 30

Results of the 2017 IEEE CEC Competition on Niching Methods for Multimodal Optimization

M.G. Epitropakis1, X. Li2, and A. Engelbrecht3

1Data Science Institute, Department of Management Science, Lancaster University, UK 2School of Computer Science and Information Technology, RMIT University, Australia 3Department of Computer Science, University of Pretoria, South Africa

IEEE CEC 2017 Competition on Niching Methods

m.epitropakis@lancaster.ac.uk xiaodong.li@rmit.edu.au engel@cs.up.ac.za

SLIDE 31

References

[1] C.S. de Magalhães, D.M. Almeida, H.J.C. Barbosa, and L.E. Dardenne, “A dynamic niching genetic algorithm strategy for docking highly flexible ligands”, Information Sciences, vol. 289, pp. 206-224, 2014.
[2] J.E. Fieldsend, “Running Up Those Hills: Multi-Modal Search with the Niching Migratory Multi-Swarm Optimiser”, IEEE Congress on Evolutionary Computation (CEC 2014), pp. 2593-2600, 2014.
[3] R. Thomsen, “Multimodal optimization using crowding-based differential evolution”, IEEE Congress on Evolutionary Computation (CEC 2004), vol. 2, pp. 1382-1389, 2004.
[4] M.G. Epitropakis, V.P. Plagianakos, and M.N. Vrahatis, “Finding multiple global optima exploiting differential evolution’s niching capability”, IEEE Symposium on Differential Evolution (SDE 2011), pp. 1-8, 2011.
[5] M.G. Epitropakis, X. Li, and E.K. Burke, “A Dynamic Archive Niching Differential Evolution Algorithm for Multimodal Optimization”, IEEE Congress on Evolutionary Computation (CEC 2013), Cancun, Mexico, pp. 79-86, 2013.
[6] M. Preuss, “Niching the CMA-ES via nearest-better clustering”, GECCO ’10 Companion, ACM, pp. 1711-1718, 2010.
