  1. Research Issues in Many-Objective Optimization with Evolutionary Algorithms Frederico Gadelha Guimarães fredericoguimaraes@ufmg.br +55 31-3409-3419 Faculty of Engineering Department of Electrical Engineering Universidade Federal de Minas Gerais

  2. Presentation plan • Introduction and terminology • Motivation • Issues in many-objective optimization • Approaches and techniques • Directions

  3. Introduction and terminology Multi-objective optimization problems:
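The standard formulation, assuming minimization, can be written as:

```latex
\min_{x \in \mathcal{X}} \; F(x) = \bigl( f_1(x),\, f_2(x),\, \dots,\, f_m(x) \bigr), \qquad m \ge 2
```

A solution x1 (Pareto-)dominates x2 when f_i(x1) ≤ f_i(x2) for every i and f_j(x1) < f_j(x2) for at least one j; problems with roughly four or more objectives are commonly called many-objective.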

  4. Introduction and terminology Multi-objective optimization problems: [Figure: Pareto front in objective space, axes Objective 1 and Objective 2, with better/worse directions marked]

  5. Introduction and terminology • From a multi-objective problem to a single objective problem: preferences and aggregation methods; • The optimization process returns a single solution;

  6. Introduction and terminology • Since evolutionary algorithms work with a population of points, they can search for a representative set of estimates of Pareto optimal solutions; • Multi-objective Evolutionary Algorithms (MOEAs): P(t+1) ← Sv{ V{ Sr{ P(t) } }, P(t) }
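The update rule P(t+1) ← Sv{ V{ Sr{ P(t) } }, P(t) } (selection for reproduction Sr, variation V, then selection for survival Sv over offspring and parents) can be sketched as follows; the bi-objective test problem, the binary tournament, and the Gaussian mutation are illustrative assumptions, not part of the slide:

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization: no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evaluate(x):
    # Assumed bi-objective test problem (Schaffer-like); Pareto set is [0, 2].
    return (x ** 2, (x - 2) ** 2)

def moea(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-5.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        def select_for_reproduction():            # Sr: binary tournament
            a, b = rng.sample(pop, 2)
            if dominates(evaluate(a), evaluate(b)):
                return a
            if dominates(evaluate(b), evaluate(a)):
                return b
            return rng.choice((a, b))
        # V: variation (Gaussian mutation of the selected parents).
        offspring = [select_for_reproduction() + rng.gauss(0.0, 0.1)
                     for _ in range(pop_size)]
        # Sv: survival selection over offspring + parents, nondominated first.
        union = pop + offspring
        scored = [(x, evaluate(x)) for x in union]
        nondom = [x for x, fx in scored
                  if not any(dominates(fy, fx) for _, fy in scored)]
        filler = [rng.choice(union) for _ in range(pop_size)]
        pop = (nondom + filler)[:pop_size]
    return pop

final_pop = moea()
```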

  7. Advantages of searching for the trade-off front • Preferences do not need to be specified a priori – choose after seeing the alternatives; • Offer different alternatives to different clients; • Reveal common properties among trade-off solutions; • Introduce more flexibility into the design process;

  8. A brief history • 1984: first EMO approaches; • 1990: dominance-based ranking; • 1990: dominance-based ranking with diversity preservation techniques; • 1995: elitist algorithms; convergence proofs; preference incorporation; • 2000: comparison and performance; test functions; quality measures; • 2000: EMO+MCDM; indicator-based algorithms • 2010: statistical performance evaluation; • 2010: scalability; many-objective optimization;

  9. What about scalability? First of all: how many is too many? It was only in recent years that researchers investigated the scalability of MOEAs – and the results were not favourable: • Khare et al. (2003): showed the poor scalability of NSGA-II, PESA and SPEA2 on scalable test functions; • Hughes (2005): aggregation methods with multistart perform better than MOEAs; • Knowles & Corne (2007): MOEAs do not perform better than random search on problems with more than 10 objectives; • Purshouse & Fleming (2007): the ability of variation operators to produce solutions that dominate their parents decreases as the number of objectives increases;

  10. What about scalability? Understanding the problem Garza-Fabre et al. (2011):

  11. What about scalability? Difficulties with many-objective optimization: • Loss of selective pressure (proximity and diversity); • Dimensionality and computational cost; • Visualization of solutions; • Decision-making under a huge set of alternatives;

  12. Why many-objective optimization? • Multiobjectivization: supplementary objectives or decomposition of the original objective; • Constraint-handling; • Multidisciplinary optimization (MDO), e.g. aircraft design;

  13. Why many-objective optimization? • Multiobjectivization: supplementary objectives or decomposition of the original objective; • Constraint-handling; • Multidisciplinary optimization (MDO), e.g. aircraft design; • Musselman & Talavage (1980): Water resource engineering problem with 5 objectives and 7 constraints; • Fleming et al. (2005): a flight control system with 8 objectives; • Hughes (2007): Radar waveform optimization with 9 objectives; • Sülflow et al. (2007): Nurse scheduling problem with 25 objectives; • Knowles & Corne (2007): Travelling salesman problems and job shop scheduling problems with 5 to 20 objectives;

  14. Different notions of dominance • Pareto dominance; • Epsilon dominance; • Cone dominance;
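As a sketch of the first two notions for minimization problems (the additive form of epsilon-dominance is assumed here; a multiplicative form also exists):

```python
def pareto_dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def epsilon_dominates(a, b, eps):
    """Additive epsilon-dominance (minimization): a, improved by eps in every
    objective, weakly dominates b. A relaxation of Pareto dominance:
    a may epsilon-dominate points it does not Pareto-dominate."""
    return all(x - eps <= y for x, y in zip(a, b))
```

For example, (1, 3) does not Pareto-dominate (2, 2), but it epsilon-dominates it for eps = 1.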

  15. Different notions of dominance • Batista et al. (EMO 2011): Pareto cone epsilon-dominance: a relaxation of dominance that enables the approximation of nondominated points in some adjacent boxes that would otherwise be epsilon-dominated;

  16. Different notions of dominance

  17. Different notions of dominance • Batista et al. (IEEE CEC 2011) • Order induced by different dominance criteria in some quadratic test problems and DTLZ problems; (1) Rate of nondominated solutions (RNS): proportion of points within a given finite set that are not dominated by any other point in that set; (2) Normalized dominance depth (NDD): number of successive fronts that can be obtained from a given finite set of points, divided by the size of the test set;
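The two measures can be sketched directly from their definitions (minimization assumed; the example point set is hypothetical):

```python
def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def rate_of_nondominated(points):
    """RNS: fraction of points not dominated by any other point in the set."""
    nondom = [p for p in points if not any(dominates(q, p) for q in points)]
    return len(nondom) / len(points)

def normalized_dominance_depth(points):
    """NDD: number of successive nondominated fronts obtainable from the set,
    divided by the set size (as in nondominated sorting)."""
    remaining = list(points)
    fronts = 0
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        remaining = [p for p in remaining if p not in front]
        fronts += 1
    return fronts / len(points)
```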

  18. Different notions of dominance • Batista et al. (IEEE CEC 2011) • Order induced by different dominance criteria in some quadratic test problems and DTLZ problems;

  19. Approaches – modifying Pareto dominance • Sato et al. (2007): use cone dominance to improve convergence; • This increases selective pressure, but decreases diversity;

  20. Approaches – modifying Pareto dominance • Saxena et al. (2009): use epsilon dominance to improve convergence, together with a PCA-based approach for dimensionality reduction; • They argue that epsilon dominance offers a good balance between convergence and diversity;

  21. Approaches – modifying ranking • Drechsler et al. (2001): propose the relation “favour”, based on the number of objectives for which one solution is better than the other; • Zou et al. (2008): introduce L-dominance: X1 L-dominates X2 if: • B(X1,X2) – W(X1,X2) > 0; • the p-norm of F(X1) is smaller than the p-norm of F(X2);
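A sketch of the L-dominance test as stated on the slide, assuming minimization (B counts objectives where X1 is strictly better, W where it is strictly worse):

```python
def l_dominates(f1, f2, p=2):
    """L-dominance (after Zou et al., 2008; sketch for minimization):
    f1 is better in more objectives than it is worse in, AND the p-norm
    of its objective vector is smaller."""
    better = sum(1 for a, b in zip(f1, f2) if a < b)   # B(X1, X2)
    worse = sum(1 for a, b in zip(f1, f2) if a > b)    # W(X1, X2)
    norm1 = sum(abs(a) ** p for a in f1) ** (1 / p)
    norm2 = sum(abs(b) ** p for b in f2) ** (1 / p)
    return better - worse > 0 and norm1 < norm2
```

Unlike Pareto dominance, L-dominance can order mutually nondominated points, which raises selective pressure when the number of objectives is large.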

  22. Approaches – modifying ranking • Sato et al. (2009): introduce Pareto partial dominance: • Select r<m objectives to check Pareto dominance and rank the population; • At every fixed number of generations, switch the r objective functions used for ranking;
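A sketch of ranking by Pareto partial dominance, assuming minimization; the subset of r objective indices would be switched every fixed number of generations:

```python
def dominates_on_subset(a, b, idx):
    """Pareto dominance restricted to the objective indices in idx."""
    return (all(a[i] <= b[i] for i in idx)
            and any(a[i] < b[i] for i in idx))

def partial_dominance_ranks(objs, idx):
    """Rank each point by how many others dominate it on the chosen
    r-objective subset (lower is better)."""
    return [sum(1 for g in objs if dominates_on_subset(g, f, idx))
            for f in objs]
```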

  23. Approaches – modifying ranking • Wang & Wu (2007): introduce fuzzy Pareto-dominance: X1 fuzzy-dominates X2 with degree: μ(X1, X2) = (1/N) Σ_i μ_b(f_i(X1) − f_i(X2)) + (1/(2N)) Σ_i μ_e(f_i(X1) − f_i(X2))
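The fuzzy-dominance degree averages a "better" membership μ_b and an "equal" membership μ_e over the N objectives. A sketch, assuming minimization and crisp indicator memberships, since μ_b and μ_e are not specified here:

```python
def fuzzy_dominance_degree(f1, f2):
    """Degree to which f1 fuzzy-dominates f2 (minimization):
    (1/N) * sum mu_b(d_i) + (1/(2N)) * sum mu_e(d_i), d_i = f_i(X1) - f_i(X2).
    mu_b and mu_e are assumed crisp indicators here, for illustration only."""
    def mu_b(d):      # assumed membership for 'better': smaller objective value
        return 1.0 if d < 0 else 0.0
    def mu_e(d):      # assumed membership for 'equal'
        return 1.0 if d == 0 else 0.0
    n = len(f1)
    diffs = [a - b for a, b in zip(f1, f2)]
    return (sum(mu_b(d) for d in diffs) / n
            + sum(mu_e(d) for d in diffs) / (2 * n))
```

With these memberships, a vector strictly better in every objective gets degree 1, and identical vectors get degree 0.5.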

  24. Approaches – modifying ranking • Knowles & Corne (2007): simple average ranking performs better than more complicated ranking schemes; Simple average ranking: each solution is ranked according to each objective, then the average rank is computed for each solution; However, the gain in selective pressure towards proximity to the Pareto front comes at the expense of diversity: few solutions are found;
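Simple average ranking can be sketched as follows (minimization assumed; ties are broken arbitrarily):

```python
def average_ranks(objs):
    """Simple average ranking: rank each solution on each objective
    separately (0 = best), then average the ranks per solution."""
    n, m = len(objs), len(objs[0])
    total = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: objs[i][j])
        for rank, i in enumerate(order):
            total[i] += rank
    return [t / m for t in total]
```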

  25. Approaches – performance indicators • Indicator-based Evolutionary Algorithms (IBEA): Zitzler et al. (2004); • Hypervolume-based MOEAs: Beume et al. (2007); • The search ability of IBEAs scales well with the number of objectives; • However, the time to compute the hypervolume grows exponentially with the number of objectives – impractical for more than six objectives; • Brockhoff & Zitzler (2006, 2007): dimensionality reduction to extend the applicability of hypervolume-based MOEA; • Tagawa et al. (2011): multi-core processing to reduce the cost of hypervolume computation; • Many papers on computing hypervolume...
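In two objectives the hypervolume of a nondominated set is cheap to compute with a sorting sweep; it is the exact algorithms in higher dimensions whose cost explodes. A minimization sketch with an assumed reference point:

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a 2-D minimization front w.r.t. a reference point:
    sweep the points sorted by the first objective and sum the rectangles.
    Assumes the points are mutually nondominated and dominate ref.
    O(n log n) here; exact m-dimensional algorithms grow far faster,
    which is what makes hypervolume costly for many objectives."""
    pts = sorted(points)          # ascending f1 implies descending f2 on a front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```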

  26. Approaches – dimensionality reduction Saxena & Deb (2007): dimensionality reduction using PCA methods: • Sources of redundancy of objectives: – either non-conflicting objectives or – the removal of a given objective from the problem makes no significant difference in the front obtained; • Run a MOEA for a large number of generations then reduce the number of objectives using the correlation matrix of the objective values, while maintaining the shape of the Pareto front;

  27. Approaches – dimensionality reduction Brockhoff & Zitzler (2006, 2007): dimensionality reduction based on dominance relations, but at high computational cost; Singh et al. (2011): similar ideas, but using heuristics instead: • Relevant or critical objectives are the ones that most affect the number of nondominated solutions in the population; • Run a MOEA for a large number of generations, then reduce the number of objectives using the following heuristic: compute the change in the number of nondominated solutions when a given objective is removed, and eliminate the objective that causes negligible change;
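The objective-elimination heuristic can be sketched as follows (minimization assumed; the selection rule is a simplified reading of the idea, not the paper's exact procedure):

```python
def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_count(objs, drop=None):
    """Count nondominated points, optionally ignoring objective index `drop`."""
    if drop is not None:
        objs = [tuple(v for j, v in enumerate(f) if j != drop) for f in objs]
    return sum(1 for f in objs if not any(dominates(g, f) for g in objs))

def least_relevant_objective(objs):
    """Heuristic in the spirit of Singh et al. (2011): the objective whose
    removal changes the nondominated count the least is a candidate for
    elimination."""
    base = nondominated_count(objs)
    changes = {j: abs(base - nondominated_count(objs, drop=j))
               for j in range(len(objs[0]))}
    return min(changes, key=changes.get)
```

In the test below, objective 2 duplicates objective 0, so dropping either leaves the nondominated set unchanged, while dropping objective 1 collapses it.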

  28. Approaches – scalar functions Scalar functions provide a way to aggregate objectives without the computational cost of hypervolume indicators; • Hughes (2007): different scalar functions are defined and each solution is ranked according to each scalar function. An overall rank is calculated based on the multiple ranks; • Ishibuchi et al. (2006, 2007): different scalar functions are defined, but each solution is evaluated with a single scalar function; • Wickramasinghe et al. (2009): distance to reference points to guide PSO in many-objective optimization;
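A sketch of the multiple-scalar-function ranking idea, assuming a weighted Chebyshev scalar function (the cited papers use various scalarizations) and hypothetical weight vectors:

```python
def chebyshev(f, w, ideal):
    """Weighted Chebyshev scalarization (assumed form):
    max_i w_i * |f_i - ideal_i|, smaller is better."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, ideal))

def multi_scalar_ranks(objs, weight_vectors, ideal):
    """Hughes-style sketch: score every solution under every scalar function,
    rank per function (0 = best), then sum the ranks into an overall rank."""
    n = len(objs)
    total = [0] * n
    for w in weight_vectors:
        scores = [chebyshev(f, w, ideal) for f in objs]
        order = sorted(range(n), key=scores.__getitem__)
        for rank, i in enumerate(order):
            total[i] += rank
    return total
```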

  29. Some directions • Reducing the computational cost, especially in indicator-based MOEAs – are there alternatives to hypervolume? • Relaxation of the concept of dominance – cone epsilon dominance? • Surrogate-assisted many-objective optimization for expensive problems; • Co-evolutionary approaches: evolving parameters of scalar functions together with the solutions in the search space; • Visualization and decision-making tools;
