Research Issues in Many-Objective Optimization with Evolutionary Algorithms - PowerPoint PPT Presentation



SLIDE 1

Research Issues in Many-Objective Optimization with Evolutionary Algorithms

Frederico Gadelha Guimarães

fredericoguimaraes@ufmg.br +55 31-3409-3419 Faculty of Engineering Department of Electrical Engineering Universidade Federal de Minas Gerais

SLIDE 2

Presentation plan

  • Introduction and terminology
  • Motivation
  • Issues in many-objective optimization
  • Approaches and techniques
  • Directions
SLIDE 3

Introduction and terminology

Multi-objective optimization problems:
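The formal statement on this slide did not survive extraction; as a sketch, the standard formulation it would show is minimization of a vector of m ≥ 2 conflicting objectives:

```latex
\min_{x \in \mathcal{X}} \; F(x) = \big( f_1(x), f_2(x), \dots, f_m(x) \big), \qquad m \ge 2
```

A point x* is Pareto optimal if no feasible x satisfies fᵢ(x) ≤ fᵢ(x*) for all i with strict inequality for at least one i.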

SLIDE 4

Introduction and terminology

Multi-objective optimization problems:

[Figure: two-objective plot showing, relative to a candidate point, the regions of objective space that are better and worse along Objective 1 and Objective 2]

SLIDE 5

Introduction and terminology

  • From a multi-objective problem to a single-objective problem: preferences and aggregation methods;
  • The optimization process returns a single solution;
SLIDE 6

Introduction and terminology

  • Since evolutionary algorithms work with a population of points, they can search for a representative set of estimates of Pareto optimal solutions;

  • Multi-objective Evolutionary Algorithms (MOEAs): P(t+1) ← Sv{ V{ Sr{P(t)} }, P(t) }
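This generational template can be sketched in a few lines, reading Sr as mating selection, V as variation and Sv as survival selection (the operator names follow the slide; the toy instantiation below is purely illustrative):

```python
import random

def evolve_one_generation(population, select_parents, vary, select_survivors):
    """One MOEA generation: P(t+1) = Sv( V( Sr( P(t) ) ), P(t) )."""
    parents = select_parents(population)            # Sr{P(t)}: mating selection
    offspring = vary(parents)                       # V{...}: crossover/mutation
    return select_survivors(offspring, population)  # Sv{..., P(t)}: survival

# Toy instantiation on 1-D individuals, minimizing |x| (illustrative only):
random.seed(0)
pop = [[random.uniform(-1.0, 1.0)] for _ in range(10)]
new_pop = evolve_one_generation(
    pop,
    select_parents=lambda p: random.sample(p, 4),
    vary=lambda parents: [[x[0] + random.gauss(0.0, 0.1)] for x in parents],
    select_survivors=lambda off, p: sorted(off + p,
                                           key=lambda ind: abs(ind[0]))[:len(p)],
)
```

Any concrete MOEA (NSGA-II, SPEA2, ...) is obtained by plugging specific operators into the three slots.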

SLIDE 7

Advantages of searching for the trade-off front

  • Preferences do not need to be specified a priori – choose after seeing the alternatives;

  • Offer different alternatives to different clients;
  • Reveal common properties among trade-off solutions;
  • Introduce more flexibility into the design process;
SLIDE 8

A brief history

  • 1984: first EMO approaches;
  • 1990: dominance-based ranking;
  • 1990: dominance-based ranking with diversity preservation techniques;
  • 1995: elitist algorithms; convergence proofs; preference incorporation;
  • 2000: comparison and performance; test functions; quality measures;
  • 2000: EMO+MCDM; indicator-based algorithms
  • 2010: statistical performance evaluation;
  • 2010: scalability; many-objective optimization;
SLIDE 9

What about scalability?

First of all: how many is too many? Only in recent years have researchers investigated the scalability of MOEAs, and the results were not favourable:

  • Khare et al. (2003): poor scalability of NSGA-II, PESA and SPEA2 on scalable test functions;
  • Hughes (2005): aggregation methods with multistart perform better than MOEAs;
  • Knowles & Corne (2007): MOEAs do not perform better than random search on problems with more than 10 objectives;
  • Purshouse & Fleming (2007): the ability of variation operators to produce solutions that dominate their parents decreases as the number of objectives increases;
SLIDE 10

What about scalability?

Understanding the problem (Garza-Fabre et al., 2011):

SLIDE 11

What about scalability?

Difficulties with many-objective optimization:

  • Loss of selective pressure (proximity and diversity);
  • Dimensionality and computational cost;
  • Visualization of solutions;
  • Decision-making under a huge set of alternatives;
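The loss of selective pressure can be illustrated numerically: with uniformly random objective vectors, the fraction of mutually nondominated points approaches 1 as the number of objectives m grows, so Pareto ranking alone stops discriminating. A small sketch (the set size of 100 and the chosen values of m are arbitrary):

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse everywhere and
    strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(points):
    """Share of points that no other point in the set dominates."""
    return sum(
        1 for p in points
        if not any(dominates(q, p) for q in points if q is not p)
    ) / len(points)

random.seed(1)
fractions = {
    m: nondominated_fraction(
        [tuple(random.random() for _ in range(m)) for _ in range(100)]
    )
    for m in (2, 5, 10, 20)
}
# With 100 random points the nondominated share climbs towards 1 as m grows,
# which is exactly the loss of selective pressure listed above.
```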
SLIDE 12

Why many-objective optimization?

  • Multiobjectivization: supplementary objectives or decomposition of the original objective;
  • Constraint-handling;
  • Multidisciplinary optimization (MDO), e.g. aircraft design;
SLIDE 13

Why many-objective optimization?

  • Multiobjectivization: supplementary objectives or decomposition of the original objective;
  • Constraint-handling;
  • Multidisciplinary optimization (MDO), e.g. aircraft design;
  • Musselman & Talavage (1980): water resource engineering problem with 5 objectives and 7 constraints;
  • Fleming et al. (2005): a flight control system with 8 objectives;
  • Hughes (2007): radar waveform optimization with 9 objectives;
  • Sulflow et al. (2007): nurse scheduling problem with 25 objectives;
  • Knowles & Corne (2007): travelling salesman problems and job shop scheduling problems with 5 to 20 objectives;

SLIDE 14

Different notions of dominance

  • Pareto dominance;
  • Epsilon dominance;
  • Cone dominance;
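These relations can be stated compactly in code. The sketch below assumes minimization; the additive epsilon in `epsilon_dominates`, and the cone-widening parameter `k` in `cone_dominates` (a simple linear blend of each objective with the sum of the others, loosely in the spirit of dominance-area control rather than any paper's exact transformation), are illustrative choices:

```python
def pareto_dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def epsilon_dominates(a, b, eps):
    """Additive epsilon dominance: a, shifted by eps, is no worse than b
    in every objective."""
    return all(x - eps <= y for x, y in zip(a, b))

def cone_dominates(a, b, k):
    """Dominance in a widened linear cone: blend each objective with the
    sum of the others (k = 0 recovers plain Pareto dominance; k > 0 lets
    a point dominate some Pareto-incomparable neighbours)."""
    def widen(v):
        s = sum(v)
        return [x + k * (s - x) for x in v]
    return pareto_dominates(widen(a), widen(b))
```

For example, (0.4, 0.5) and (0.5, 0.45) are Pareto-incomparable, but with k = 0.5 the first cone-dominates the second.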
SLIDE 15

Different notions of dominance

  • Batista et al. (EMO 2011): Pareto cone epsilon dominance: a relaxation of dominance that enables the approximation of nondominated points in some adjacent boxes that would otherwise be epsilon-dominated;

SLIDE 16

Different notions of dominance

SLIDE 17

Different notions of dominance

  • Batista et al. (IEEE CEC 2011): order induced by different dominance criteria in some quadratic test problems and DTLZ problems;

(1) Rate of nondominated solutions (RNS): proportion of points within a given finite set that are not dominated by any other point in that set;
(2) Normalized dominance depth (NDD): number of successive fronts that can be obtained from a given finite set of points, divided by the size of the test set;
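Both indicators can be computed directly from their definitions. A sketch assuming minimization and distinct points (the function names are mine, not the paper's):

```python
def pareto_dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def rns(points):
    """Rate of nondominated solutions: share of points in the set that no
    other point in the set dominates."""
    nd = [p for p in points
          if not any(pareto_dominates(q, p) for q in points if q != p)]
    return len(nd) / len(points)

def ndd(points):
    """Normalized dominance depth: number of successive nondominated fronts
    peeled off the set, divided by the set size (assumes distinct points)."""
    remaining, fronts = list(points), 0
    while remaining:
        front = [p for p in remaining
                 if not any(pareto_dominates(q, p) for q in remaining if q != p)]
        remaining = [p for p in remaining if p not in front]
        fronts += 1
    return fronts / len(points)
```

A fully nondominated set has RNS = 1 and minimal NDD; a totally ordered chain has NDD = 1.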

SLIDE 18

Different notions of dominance

  • Batista et al. (IEEE CEC 2011): order induced by different dominance criteria in some quadratic test problems and DTLZ problems;

SLIDE 19

Approaches – modifying Pareto dominance

  • Sato et al. (2007): use cone dominance to improve convergence:
  • Increases selective pressure, but decreases diversity;
SLIDE 20

Approaches – modifying Pareto dominance

  • Saxena et al. (2009): use epsilon dominance to improve convergence, together with a PCA-based approach for dimensionality reduction;
  • Argue that epsilon dominance offers a good balance between convergence and diversity;

SLIDE 21

Approaches – modifying ranking

  • Drechsler et al. (2001): propose the relation "favour", based on the number of objectives for which one solution is better than the other;
  • Zou et al. (2008): introduce L-dominance. X1 L-dominates X2 if:
  • B(X1, X2) − W(X1, X2) > 0, where B and W count the objectives in which X1 is better and worse than X2, respectively;
  • The p-norm of F(X1) is smaller than the p-norm of F(X2);
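Reading B(X1, X2) and W(X1, X2) as the counts of objectives in which X1 is respectively better and worse than X2, the relation can be sketched as follows (p = 2 is an arbitrary default):

```python
def l_dominates(f1, f2, p=2):
    """X1 L-dominates X2 (minimization, objective vectors f1 and f2) if:
    (1) B(X1, X2) - W(X1, X2) > 0, where B/W count the objectives in which
        X1 is better/worse than X2; and
    (2) the p-norm of f1 is smaller than the p-norm of f2."""
    better = sum(1 for a, b in zip(f1, f2) if a < b)
    worse = sum(1 for a, b in zip(f1, f2) if a > b)
    norm1 = sum(abs(a) ** p for a in f1) ** (1.0 / p)
    norm2 = sum(abs(b) ** p for b in f2) ** (1.0 / p)
    return better - worse > 0 and norm1 < norm2
```

Note how it orders Pareto-incomparable pairs: (1, 1, 2) and (2, 2, 1) are mutually nondominated under Pareto dominance, but the first L-dominates the second.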
SLIDE 22

Approaches – modifying ranking

  • Sato et al. (2009): introduce Pareto partial dominance:
  • Select r < m objectives to check Pareto dominance and rank the population;
  • Every fixed number of generations, switch the r objective functions used for ranking;
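A sketch of the dominance test restricted to r of the m objectives, with a simple rotation schedule over all C(m, r) subsets (the schedule and names are illustrative, not Sato et al.'s exact procedure):

```python
from itertools import combinations, cycle

def partial_dominates(a, b, subset):
    """Pareto dominance (minimization) checked only on the objective
    indices in `subset` -- the idea behind Pareto partial dominance."""
    av = [a[j] for j in subset]
    bv = [b[j] for j in subset]
    return all(x <= y for x, y in zip(av, bv)) and any(x < y for x, y in zip(av, bv))

# A rotation through all C(m, r) subsets; in the algorithm the active
# subset would be switched every fixed number of generations.
m, r = 4, 2
subset_schedule = cycle(combinations(range(m), r))
active_subset = next(subset_schedule)
```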

SLIDE 23

Approaches – modifying ranking

  • Wang & Wu (2007): introduce fuzzy Pareto-dominance: X1 fuzzy-dominates X2 with degree:

μ(X1, X2) = (1/N) Σᵢ μb(fᵢ(X1) − fᵢ(X2)) + (1/(2N)) Σᵢ μe(fᵢ(X1) − fᵢ(X2))

where N is the number of objectives and μb, μe are membership functions.

SLIDE 24

Approaches – modifying ranking

  • Knowles & Corne (2007): simple average ranking performs better than more complicated ranking schemes;
  • Simple average ranking: each solution is ranked according to each objective, then the average rank is computed for each solution;
  • However, the gain in selective pressure towards proximity to the Pareto front comes at the expense of diversity: few solutions are found;
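Average ranking as described above can be sketched in a few lines, assuming minimization (ties are broken by sort order here rather than averaged, a simplification):

```python
def average_ranking(points):
    """Average ranking: rank every solution under each objective separately
    (1 = best, minimization), then average the ranks per solution.
    A lower average rank is better."""
    n, m = len(points), len(points[0])
    totals = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: points[i][j])
        for rank, i in enumerate(order, start=1):
            totals[i] += rank
    return [t / m for t in totals]
```

On the two-point set {(1, 3), (3, 1)} both solutions get the same average rank, which illustrates why the induced order sharpens proximity pressure without preserving diversity information.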

SLIDE 25

Approaches – performance indicators

  • Indicator-based Evolutionary Algorithms (IBEA): Zitzler et al. (2004);
  • Hypervolume-based MOEAs: Beume et al. (2007);
  • The search ability of IBEAs scales well with the number of objectives;
  • However, the time to compute the hypervolume grows exponentially with the number of objectives – impractical for more than six objectives;
  • Brockhoff & Zitzler (2006, 2007): dimensionality reduction to extend the applicability of hypervolume-based MOEAs;
  • Tagawa et al. (2011): multi-core processing to reduce the cost of hypervolume computation;
  • Many papers on computing the hypervolume...
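In two objectives the hypervolume reduces to a sort-and-sweep over the front, a useful contrast with the exponential cost in general dimension. A minimal sketch assuming minimization and a reference point dominated by the front:

```python
def hypervolume_2d(front, ref):
    """Hypervolume (dominated area) of a 2-D front w.r.t. a reference point,
    for minimization. In two dimensions a sort-and-sweep suffices; exact
    computation in m dimensions grows exponentially with m."""
    # keep only points strictly better than the reference in both objectives
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    area, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:  # points with y >= prev_y are dominated, add nothing
            area += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return area
```

For example, the front {(1, 2), (2, 1)} with reference (3, 3) covers two unit-overlapping 2x1 boxes, an area of 3.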
SLIDE 26

Approaches – dimensionality reduction

Saxena & Deb (2007): dimensionality reduction using PCA methods:

  • Sources of redundancy of objectives:
    – either non-conflicting objectives, or
    – objectives whose removal from the problem makes no significant difference in the front obtained;
  • Run a MOEA for a large number of generations, then reduce the number of objectives using the correlation matrix of the objective values, while maintaining the shape of the Pareto front;

SLIDE 27

Approaches – dimensionality reduction

Brockhoff & Zitzler (2006, 2007): dimensionality reduction based on dominance, but at high computational cost; Singh et al. (2011): similar ideas, but using heuristics instead:

  • Relevant or critical objectives are the ones that most affect the number of nondominated solutions in the population;
  • Run a MOEA for a large number of generations, then reduce the number of objectives using the following heuristic: compute the change in the number of nondominated solutions upon the removal of a given objective, and eliminate the objective that causes a negligible change in the number of nondominated solutions;
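The heuristic can be sketched directly: drop each objective in turn, recount the nondominated solutions, and flag the objective whose removal changes the count least (a simplified reading of the idea; names and tie-breaking are mine, not Singh et al.'s):

```python
def pareto_dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def count_nondominated(points):
    return sum(1 for p in points
               if not any(pareto_dominates(q, p) for q in points if q != p))

def least_relevant_objective(points):
    """Drop each objective in turn and recount the nondominated solutions;
    return the index whose removal changes the count least (ties broken by
    the lower index)."""
    m = len(points[0])
    base = count_nondominated(points)
    changes = []
    for j in range(m):
        reduced = [tuple(p[k] for k in range(m) if k != j) for p in points]
        changes.append((abs(count_nondominated(reduced) - base), j))
    return min(changes)[1]
```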

SLIDE 28

Approaches – scalar functions

Scalar functions provide a way to aggregate objectives without the computational cost of hypervolume indicators;

  • Hughes (2007): several scalar functions are defined and each solution is ranked according to each scalar function; an overall rank is calculated from the multiple ranks;
  • Ishibuchi et al. (2006, 2007): different scalar functions are defined, but each solution is evaluated with a single scalar function;
  • Wickramasinghe et al. (2009): distance to reference points is used to guide PSO in many-objective optimization;
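The rank-under-many-scalar-functions idea can be sketched with weighted Tchebycheff scalarizations as the scalar functions (the choice of Tchebycheff, the weight vectors, and the rank-sum aggregation are illustrative assumptions, not Hughes' exact setup):

```python
def tchebycheff(f, w, z):
    """Weighted Tchebycheff scalarization of objective vector f, with
    weight vector w and ideal point z (minimization)."""
    return max(wi * (fi - zi) for wi, fi, zi in zip(w, f, z))

def many_scalar_rank(points, weight_vectors):
    """Rank the population under each scalar function, then sum the ranks
    per solution; a lower total means a better overall rank."""
    z = [min(p[j] for p in points) for j in range(len(points[0]))]
    totals = [0] * len(points)
    for w in weight_vectors:
        order = sorted(range(len(points)),
                       key=lambda i: tchebycheff(points[i], w, z))
        for rank, i in enumerate(order, start=1):
            totals[i] += rank
    return totals
```

Each weight vector costs one sort of the population, which is the point of the slide: aggregation-based ranking avoids the exponential cost of hypervolume.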

SLIDE 29

Some directions

  • Reducing the computational cost, especially in indicator-based MOEAs – are there alternatives to the hypervolume?
  • Relaxation of the concept of dominance – cone epsilon dominance?
  • Surrogate-assisted many-objective optimization for expensive problems;
  • Co-evolutionary approaches: evolving parameters of scalar functions together with the solutions in the search space;
  • Visualization and decision-making tools;
SLIDE 30

Summary

  • Evolutionary algorithms are very useful in multi-objective optimization:
    – Search for a set of solutions;
    – General problem solvers;
  • Regarding many-objective problems, some interesting approaches have been proposed: relaxation of dominance, different ranking methods, indicator-based evolution, dimensionality reduction and scalar functions;

SLIDE 31

Summary

  • Current investigations have shown that scalarization and average ranking suffer from loss of diversity, while indicator-based approaches are computationally expensive;
  • Diversity promotion is important in MOEAs, yet it remains hard to implement in many-objective optimization; see Adra & Fleming (2011);
  • The studies so far are promising, but further improvements are still required before MOEAs can be deemed efficient for many-objective optimization problems;

SLIDE 32

References

1. V. Khare, X. Yao, and K. Deb, "Performance scaling of multi-objective evolutionary algorithms," in Proc. 2nd Int. Conf. Evol. Multi-Criterion Optim. (EMO 2003), C. M. Fonseca, P. J. Fleming, E. Zitzler, K. Deb, and L. Thiele, Eds., Berlin, Germany, 2003, pp. 376–390.
2. E. J. Hughes, "Evolutionary many-objective optimisation: Many once or one many?," in Proc. 2005 Congr. Evol. Comput. (CEC 2005), Edinburgh, Scotland, 2005, vol. 1, pp. 222–227.
3. J. Knowles and D. Corne, "Quantifying the effects of objective space dimension in evolutionary multiobjective optimization," in Proc. 4th Int. Conf. Evol. Multi-Criterion Optim. (EMO 2007), S. Obayashi, K. Deb, C. Poloni, T. Hiroyasu, and T. Murata, Eds., Berlin, Germany, 2007, pp. 757–771.
4. R. C. Purshouse and P. J. Fleming, "On the evolutionary optimization of many conflicting objectives," IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 770–784, 2007.
5. X. Zou, Y. Chen, M. Liu, and L. Kang, "A new evolutionary algorithm for solving many-objective optimization problems," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 5, 2008.
6. E. Zitzler and S. Künzli, "Indicator-based selection in multiobjective search," in Proc. 8th Int. Conf. Parallel Problem Solving from Nature, X. Yao, E. Burke, J. A. Lozano, J. Smith, J. J. Merelo-Guervós, J. A. Bullinaria, J. Rowe, P. Tino, A. Kabán, and H.-P. Schwefel, Eds. Berlin, Germany: Springer-Verlag, Sep. 2004, vol. 3242, pp. 832–842.
7. N. Beume, B. Naujoks, and M. Emmerich, "SMS-EMOA: Multiobjective selection based on dominated hypervolume," Eur. J. Oper. Res., vol. 181, no. 3, pp. 1653–1669, 2007.

SLIDE 33

References

1. H. Sato, H. E. Aguirre, and K. Tanaka, "Controlling dominance area of solutions and its impact on the performance of MOEAs," in Proc. 4th Int. Conf. Evol. Multi-Criterion Optimization, S. Obayashi, K. Deb, C. Poloni, T. Hiroyasu, and T. Murata, Eds. New York: Springer-Verlag, Mar. 2007, vol. 4403, pp. 5–20.
2. D. Corne and J. D. Knowles, "Techniques for highly multiobjective optimisation: Some nondominated points are better than others," in Proc. 9th Annu. GECCO, H. Lipson, Ed., Jul. 2007, pp. 773–780.
3. D. Brockhoff and E. Zitzler, "Improving hypervolume-based multiobjective evolutionary algorithms by using objective reduction methods," in Proc. CEC, Sep. 2007, pp. 2086–2093.
4. D. Saxena and K. Deb, "Non-linear dimensionality reduction procedures for certain large-dimensional multi-objective optimization problems: Employing correntropy and a novel maximum variance unfolding," in Proc. 4th Int. Conf. Evol. Multi-Criterion Optimization, S. Obayashi, K. Deb, C. Poloni, T. Hiroyasu, and T. Murata, Eds. New York: Springer-Verlag, Mar. 2007, vol. 4403, pp. 772–787.
5. F. di Pierro, "Many-objective evolutionary algorithms and applications to water resources engineering," Ph.D. thesis, University of Exeter, UK, Aug. 2006.
6. K. Musselman and J. Talavage, "A tradeoff cut approach to multiple objective optimization," Oper. Res., vol. 28, no. 6, pp. 1424–1435, 1980.
7. E. J. Hughes, "Radar waveform optimization as a many-objective application benchmark," in Proc. EMO, vol. 4403, 2007, pp. 700–714.
8. M. Garza-Fabre, G. Toscano-Pulido, C. A. C. Coello, and E. Rodriguez-Tello, "Effective ranking + speciation = many-objective optimization," in Proc. CEC, 2011, pp. 2115–2122.

SLIDE 34

Acknowledgements

  • National Council for Research and Development (CNPq)
  • Marie Curie Actions (FP7 Program)