

SLIDE 1

Benchmarking the SMS-EMOA with Self-adaptation on the bbob-biobj Test Suite

Simon Wessing

Chair of Algorithm Engineering Computer Science Department Technische Universität Dortmund

16 July 2017

SLIDE 2

Introduction

◮ Evolutionary multiobjective optimization
◮ Continuous decision variables
◮ (1 + 1)-SMS-EMOA is algorithmically equivalent to the single-objective (1 + 1)-EA
⇒ Theory about optimal step size from single-objective optimization applies

Benchmarking the SMS-EMOA with Self-adaptation 2 / 18

SLIDE 3

Introduction

◮ Evolutionary multiobjective optimization
◮ Continuous decision variables
◮ (1 + 1)-SMS-EMOA is algorithmically equivalent to the single-objective (1 + 1)-EA
⇒ Theory about optimal step size from single-objective optimization applies
◮ Situation for (µ + 1), (µ + λ) unknown
◮ How to define step size optimality?
◮ How to adapt the step size, if not with the very sophisticated MO-CMA-ES?

SLIDE 4

Development of Control Mechanism

◮ Idea: use self-adaptation from single-objective optimization


SLIDE 5

Development of Control Mechanism

◮ Idea: use self-adaptation from single-objective optimization
◮ Mutation of genome: y = x + σ · N(0, I)
◮ Mutation of step size: σ̃ = σ · exp(τ · N(0, 1))
◮ Learning parameter τ ∝ 1/√n

SLIDE 6

Development of Control Mechanism

◮ Idea: use self-adaptation from single-objective optimization
◮ Mutation of genome: y = x + σ · N(0, I)
◮ Mutation of step size: σ̃ = σ · exp(τ · N(0, 1))
◮ Learning parameter τ ∝ 1/√n
◮ Not state of the art any more
◮ Behavior is emergent
◮ Theoretical analysis is difficult
◮ Application to multiobjective optimization is scarce
⇒ Experiment to find good parameter configurations
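The two mutation steps above can be sketched in a few lines. This is a minimal illustration using NumPy; the function name and the default c = 1 are illustrative choices, not prescribed by the slides:

```python
import numpy as np

def self_adaptive_mutation(x, sigma, c=1.0, rng=None):
    """One self-adaptive mutation: perturb the step size log-normally,
    then mutate the genome with an isotropic Gaussian."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(x)
    tau = c / np.sqrt(n)                          # learning parameter tau = c / sqrt(n)
    sigma_new = sigma * np.exp(tau * rng.standard_normal())
    y = x + sigma_new * rng.standard_normal(n)    # y = x + sigma * N(0, I)
    return y, sigma_new
```

Because the step size is multiplied by a log-normal factor, it can both grow and shrink over the generations, which is exactly the emergent behavior examined in the experiments below.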


SLIDE 7

Experimental Setup

Factor                    Type        Symbol  Levels
Number of variables       observable  n       {2, 3, 5, 10, 20}
Learning param. constant  control     c       {2⁻², 2⁻¹, 2⁰, 2¹, 2², 2³}
Population size           control     µ       {10, 50}
Number of offspring       control     λ       {1, µ, 5µ}
Recombination             control     —       {discrete, intermediate, arithmetic, none}

◮ Full factorial design
◮ 15 unimodal problems of bbob-biobj 2016 (only first instance)
◮ Budget: 10⁴ · n function evaluations
◮ Assessment: rank-transformed HV values of whole EA runs

SLIDE 8

Other Factors Held Constant

◮ Initial mutation strength σ_init = 0.025
◮ Repair method for bound violations: Lamarckian reflection (search space [−100, 100]ⁿ, scaled to the unit hypercube)
◮ Selection: iteratively removes the worst individual until µ is reached (backward elimination)
⇒ Might have to reconsider in the future
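Reflection repair can be sketched as folding each coordinate back into the box; the triangle-wave formulation below is one common way to implement it (an illustrative sketch, not the authors' code — "Lamarckian" means the repaired point replaces the infeasible one in the population):

```python
import numpy as np

def reflect_into_bounds(x, lo=0.0, hi=1.0):
    """Mirror out-of-bounds coordinates back into [lo, hi].
    Repeated reflection at the bounds is equivalent to folding the
    real line onto the interval with a triangle wave of period
    2 * (hi - lo)."""
    x = np.asarray(x, dtype=float)
    width = hi - lo
    folded = (x - lo) % (2.0 * width)                      # position within one period
    folded = np.where(folded > width, 2.0 * width - folded, folded)
    return lo + folded
```

For example, with the unit hypercube an offspring coordinate of 1.2 is repaired to 0.8, and −0.3 to 0.3.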


SLIDE 9

Pseudocode

Input: population size µ, initial population P0, number of offspring λ

 1: t ← 0
 2: while stopping criterion not fulfilled do
 3:     Ot ← createOffspring(Pt)                 // create λ offspring
 4:     evaluate(Ot)                             // calculate objective values
 5:     Qt ← Pt ∪ Ot
 6:     r ← createReferencePoint(Qt)
 7:     while |Qt| > µ do
 8:         {F1, …, Fw} ← nondominatedSort(Qt)   // sort into fronts
 9:         x∗ ← argmin x∈Fw ∆s(x, Fw, r)        // x∗ with smallest contribution
10:         Qt ← Qt \ {x∗}                       // remove worst individual
11:     end while
12:     Pt+1 ← Qt
13:     t ← t + 1
14: end while
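For the bi-objective case, the survivor selection in lines 7–11 can be sketched as follows. The helper names are illustrative (the slides do not prescribe an implementation), ∆s is taken to be the exclusive hypervolume contribution, and objectives are minimized:

```python
import numpy as np

def nondominated_sort(F):
    """Split objective vectors (rows; minimization) into fronts F1..Fw.
    Returns lists of row indices, best front first."""
    remaining = list(range(len(F)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def hv_contributions_2d(points, r):
    """Exclusive hypervolume contribution of each point of a mutually
    nondominated 2-D front w.r.t. reference point r (minimization)."""
    order = np.argsort(points[:, 0])
    P = points[order]                            # sorted by first objective
    contrib = np.empty(len(P))
    for k in range(len(P)):
        right = P[k + 1, 0] if k + 1 < len(P) else r[0]
        upper = P[k - 1, 1] if k > 0 else r[1]
        contrib[order[k]] = (right - P[k, 0]) * (upper - P[k, 1])
    return contrib

def reduce_to_mu(Q_obj, mu, r):
    """Lines 7-11: repeatedly drop the member of the worst front with
    the smallest hypervolume contribution until mu individuals remain.
    Returns the indices of the survivors."""
    idx = list(range(len(Q_obj)))
    while len(idx) > mu:
        fronts = nondominated_sort(Q_obj[idx])
        worst = fronts[-1]                       # Fw
        contrib = hv_contributions_2d(Q_obj[idx][worst], r)
        del idx[worst[int(np.argmin(contrib))]]  # remove worst individual
    return idx
```

The fast-sort and sweep-line variants used in practice are more efficient; this quadratic version only mirrors the pseudocode's structure.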


SLIDE 10

Main Effect: Learning Parameters τ = c/√n

[Figure: average rank for c ∈ {2⁻², 2⁻¹, 2⁰, 2¹, 2², 2³}]

◮ c = 2⁻² is always the worst choice
⇒ Exclude c = 2⁻² from further analysis

SLIDE 11

Mutation Strength vs. Generation

[Figure: average step size σ̄ vs. generation (both axes logarithmic; generations 1–10³, σ̄ between 10⁻⁵ and 10⁰) for (a) τ = 2⁻²/√n, (b) τ = 2⁰/√n, (c) τ = 2²/√n, (d) τ = 2³/√n]

SLIDE 12

Main Effect: Selection Variants

[Figure: average rank of the selection variants (10 + 1), (10 + 10), (10 + 50), (50 + 1), (50 + 50), (50 + 250)]

SLIDE 13

Main and Interaction Effects: Recombination & Selection

Average rank   arithmetic  discrete  intermediate   none
(10 + 1)            46.97     85.43         82.53  78.95
(10 + 10)           51.29     72.55         83.48  68.34
(10 + 50)           47.69     62.90         82.25  42.50
(50 + 1)            61.93     63.21         84.93  40.95
(50 + 50)           58.23     55.88         84.06  30.43
(50 + 250)          53.77     51.34         78.82  27.14

SLIDE 14

Interaction Effect: Learning Parameter vs. Recombination

Average rank   arithmetic  discrete  intermediate   none
τ = 2⁻¹/√n          49.96     66.60         79.90  40.82
τ = 2⁰/√n           57.01     53.97         83.87  44.49
τ = 2¹/√n           55.65     65.43         82.33  52.42
τ = 2²/√n           48.70     66.57         80.38  50.98
τ = 2³/√n           55.25     73.53         86.90  51.54

SLIDE 15

Comparison with (50 + 250) SBX on bbob-biobj 2016

[Figure: empirical runtime distributions (proportion of function+target pairs vs. log10 of # f-evals / dimension) for SBX and ES on bbob-biobj f1–f55 in 2-D, 5-D, 10-D, and 20-D; 5 instances each]

SLIDE 16

Comparison with (50 + 250) SBX on bbob-biobj 2016

[Figure: empirical runtime distributions for ES and SBX on f11 (sep. Ellipsoid/sep. Ellipsoid, 5-D) and f18 (sep. Ellipsoid/Schwefel, 3-D); 5 instances each]

◮ SBX is better/competitive on separable problems

SLIDE 17

Discussion

◮ Self-adaptive step size adaptation works in both directions (increasing/decreasing)
◮ Best configuration for budget of 10⁴ · n:
    ◮ No recombination
    ◮ τ = 2⁰/√n
    ◮ (50 + 250)-selection
◮ Surprisingly similar to the single-objective case
◮ Only arithmetic and no recombination seem to be worth investigating further

SLIDE 18

Application to bbob-biobj 2017

Modifications to previous experiments:

◮ Initialization in [0.475, 0.525]ⁿ (normalized), corresponding to [−5, 5]ⁿ in the original problem space
◮ Budget of 10⁵ · n
◮ Comparison to the (µ + 1)-SMS-EMOA from bbob-biobj 2016
    ◮ DE variation
    ◮ SBX/PM variation
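The correspondence between the two boxes is the plain affine map onto the unit hypercube (slide 8: search space [−100, 100]ⁿ). A quick check, with an illustrative helper name:

```python
def to_unit(x, lo=-100.0, hi=100.0):
    """Affine map from the original box [lo, hi]^n to the unit hypercube."""
    return [(xi - lo) / (hi - lo) for xi in x]

# [-5, 5]^n in problem space corresponds to [0.475, 0.525]^n normalized:
corners = to_unit([-5.0, 5.0])
```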

SLIDE 19

Some Results 5-D

[Figure: empirical runtime distributions for SMS-DE, SMS-PM, SMS-ES, and best 2016 in 5-D (58 targets, 10 instances) on four function groups: separable–separable (f1, f2, f11), separable–moderate (f3, f4, f12, f13), multimodal–multimodal (f46, f47, f50), multimodal–weakstructure (f48, f49, f51, f52)]

SLIDE 20

All 55 Functions

[Figure: empirical runtime distributions for SMS-PM, SMS-DE, SMS-ES, and best 2016 on f1–f55 in 2-D, 5-D, 10-D, and 20-D; 58 targets, 10 instances]

SLIDE 21

Conclusions and Outlook

Conclusions:

◮ Self-adaptive variation better than SBX in all tested dimensions, also on multimodal problems
◮ But not better than DE on multimodal problems
◮ Not a good anytime algorithm
◮ Restarts?

Outlook:

◮ Separate step size for each decision variable?
◮ Exploit the knowledge that dominated solutions need higher mutation strength?
◮ More sophisticated recombination variants?
◮ Does variation interact with backward/forward greedy selection?