SWARM INTELLIGENCE - Milad Abolhassani - PowerPoint PPT Presentation


SLIDE 1

SWARM INTELLIGENCE

Milad Abolhassani

Supervisor: Hamid Mir Vaziri
SLIDE 2

WHY?
SLIDES 3-6

WHAT KIND OF PROBLEMS?

  • Optimization
  • Modeling
  • Simulation
SLIDE 7

OPTIMIZATION

SLIDE 8

MODELING

SLIDE 9

SIMULATION
SLIDE 10

OPTIMIZATION
SLIDE 11

EXHAUSTIVE SEARCH
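To make the idea concrete, here is a minimal Python sketch of exhaustive (brute-force) search over a discretized one-dimensional domain; the objective and the grid resolution are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def objective(x):
    return x * np.sin(x)                 # illustrative multimodal objective (assumption)

# Exhaustive search: evaluate every candidate on a fixed grid and keep the
# best one. It is guaranteed to find the best grid point, but the number of
# evaluations explodes as resolution or dimensionality grows.
grid = np.linspace(0.0, 10.0, 10_001)
values = objective(grid)
best = grid[np.argmin(values)]
print(f"best x = {best:.4f}, f(x) = {values.min():.4f}")
```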
SLIDES 12-15

OTHER METHODS

  • Analytical
  • Uninformed
  • Informed
SLIDE 16

METAHEURISTIC
SLIDE 17

LOCAL & GLOBAL OPTIMUM
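A brief formal note (standard definitions, not on the slide): for minimization of f over a search space S,

```latex
% x^* is a global optimum: no point in S is better.
f(x^{*}) \le f(x) \quad \forall x \in S

% \hat{x} is a local optimum: best only within some neighborhood.
\exists\, \varepsilon > 0 :\; f(\hat{x}) \le f(x)
\quad \forall x \in S \text{ with } \lVert x - \hat{x} \rVert < \varepsilon
```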
SLIDE 18

COMPLEX SPACES

SLIDE 19

EXPLORATION
SLIDES 20-21

EXPLOITATION
SLIDE 22

CATEGORIES

SLIDE 23

SWARM
SLIDE 24

GOAL

The goal is to model their simple behaviors to find out about more complex behaviors.
SLIDE 25

SIGN-BASED ALGORITHMS
SLIDES 26-32

STEPS

  • 1. Init memory
  • 2. Generate a solution
  • 3. Calculate the fitness of the generated solution
  • 4. Continue this for the whole population
  • 5. Update the signs memory
  • 6. Repeat until the stop condition is met (see the sketch below)
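A minimal Python sketch of this generic sign-based (stigmergy) loop. The problem, the fitness function, and how the shared signs bias solution construction are deliberately left as placeholder callables; none of these names come from the slides.

```python
def run_sign_based(n_agents, n_iters, build_solution, fitness, update_signs):
    """Generic sign-based (stigmergy) loop: agents build candidate solutions
    guided by a shared 'signs' memory (e.g., pheromone levels), which is
    then updated from the population's results."""
    signs = {}                                      # 1. init the signs memory
    best, best_fit = None, float("inf")
    for _ in range(n_iters):                        # 6. repeat until the stop condition is met
        population = []
        for _ in range(n_agents):                   # 4. continue for the whole population
            sol = build_solution(signs)             # 2. generate a solution guided by the signs
            population.append((sol, fitness(sol)))  # 3. calculate its fitness
        update_signs(signs, population)             # 5. update the signs memory
        for sol, fit in population:
            if fit < best_fit:
                best, best_fit = sol, fit
    return best, best_fit
```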
SLIDES 33-35

ACO

Marco Dorigo (1992)

Finding good paths through graphs
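For reference, the classic Ant System rules behind "finding good paths through graphs" (standard in the ACO literature, not spelled out in the surviving slide text): an ant chooses its next node with probability proportional to pheromone and heuristic desirability, and pheromone both evaporates and is reinforced each iteration.

```latex
% Transition probability of ant k moving from node i to node j
% (N_i^k = nodes still allowed for ant k):
p_{ij}^{k} = \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}
                  {\sum_{l \in N_i^{k}} \tau_{il}^{\alpha}\,\eta_{il}^{\beta}}

% Pheromone update with evaporation rate \rho and tour length L_k:
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k},
\qquad
\Delta\tau_{ij}^{k} =
\begin{cases}
Q / L_k & \text{if ant } k \text{ used edge } (i,j)\\
0 & \text{otherwise}
\end{cases}
```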
SLIDE 36

HOW DOES IT WORK?
SLIDES 37-45

(figure-only slides; no slide text survived extraction)
SLIDES 46-47

ACO ADVANTAGES

  • Searches among a population in parallel
  • Can give rapid discovery of good solutions
  • Can adapt to changes in the graph
SLIDE 48

ACO DISADVANTAGES

  • Prone to stagnation
  • Premature convergence
  • Uncertain convergence time
  • Long calculation time
  • Solutions might be far from the optimum
SLIDE 49

IMITATION-BASED ALGORITHMS
SLIDES 50-56

STEPS

  • 1. Init parameters
  • 2. Init population
  • 3. Move particles
  • 4. Calculate the fitness
  • 5. Update particles' memories
  • 6. Repeat until the stop condition is met (see the PSO sketch below)
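A minimal PSO sketch in Python following these six steps. The inertia and acceleration coefficients (w, c1, c2), the search bounds, and the sphere objective are illustrative assumptions, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x**2))          # illustrative objective (assumption)

n, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5               # 1. init parameters (typical values)
x = rng.uniform(-5, 5, (n, dim))        # 2. init population
v = np.zeros((n, dim))
pbest = x.copy()                        # per-particle memory of best position
pbest_f = np.array([sphere(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):                  # 6. repeat until the stop condition is met
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v                           # 3. move particles
    f = np.array([sphere(p) for p in x])         # 4. calculate the fitness
    improved = f < pbest_f              # 5. update particles' memories
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best fitness:", pbest_f.min())
```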
SLIDE 57

PSO
SLIDES 58-64

(figure-only slides; no slide text survived extraction)
SLIDES 65-69

PSO ADVANTAGES

  • Fast
  • Easy to implement
  • No complex calculations
  • Few parameters to tune
SLIDES 70-71

PSO DISADVANTAGES

  • Prone to premature convergence
SLIDE 72

LET'S HAVE A LOOK AT OTHER ALGORITHMS
SLIDES 73-79

HARMONY SEARCH

  • Init Harmony Memory (RANDOM)
  • Improvise NEW harmony
  • If NEW is better than min(HM): Replace(min(HM), NEW)
  • Loop until the end condition is met (see the sketch below)
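A compact Python sketch of this Harmony Search loop. The HMCR/PAR values, bandwidth, bounds, and objective are illustrative assumptions; note that the slide's min(HM) denotes the worst harmony in memory, which for a minimization objective is the one with the largest value.

```python
import random

def objective(x):
    return sum(v * v for v in x)        # illustrative objective (assumption)

dim, hm_size, iters = 5, 20, 2000
hmcr, par, bw = 0.9, 0.3, 0.05          # typical HS parameters (assumed)
lo_b, hi_b = -5.0, 5.0

# Init Harmony Memory (RANDOM)
hm = [[random.uniform(lo_b, hi_b) for _ in range(dim)] for _ in range(hm_size)]

for _ in range(iters):                  # loop until the end condition is met
    new = []                            # improvise a NEW harmony
    for d in range(dim):
        if random.random() < hmcr:      # memory consideration
            v = random.choice(hm)[d]
            if random.random() < par:   # pitch adjustment
                v += random.uniform(-bw, bw)
        else:                           # random selection
            v = random.uniform(lo_b, hi_b)
        new.append(min(hi_b, max(lo_b, v)))
    worst = max(hm, key=objective)      # the slide's min(HM): worst harmony
    if objective(new) < objective(worst):        # if NEW is better than min(HM)
        hm[hm.index(worst)] = new       # Replace(min(HM), NEW)

print("best fitness:", objective(min(hm, key=objective)))
```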
SLIDE 80

HARMONY SEARCH
SLIDE 81

HS ADVANTAGES

  • Quick convergence
  • Easy implementation
  • Fewer adjustable parameters
  • Fewer mathematical requirements
  • Generates a new solution after considering all of the existing solutions
SLIDE 82

HS DISADVANTAGES

  • Premature convergence
SLIDES 83-85

ICA

(figure-only slides; no slide text survived extraction)
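For context, the assimilation step of the standard Imperialist Competitive Algorithm (Atashpaz-Gargari and Lucas, 2007), which these slides most likely illustrated, moves each colony toward its imperialist:

```latex
% A colony moves a random distance x towards its imperialist
% (d = colony-imperialist distance), with the direction deviated
% by a random angle \theta to widen the search:
x \sim U(0, \beta d), \qquad \beta > 1, \qquad \theta \sim U(-\gamma, \gamma)
```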
SLIDE 86

ICA PROS & CONS
SLIDES 87-88

PROS

  • Good speed
  • Solutions comparable to or better than those of other metaheuristic algorithms

CONS

  • Complex implementation
SLIDE 89

GWO

Mimics the leadership hierarchy of grey wolves
SLIDE 90

GWO HIERARCHY
SLIDE 91

SOCIAL BEHAVIOR OF GREY WOLVES

  • Tracking, chasing, and approaching the prey
  • Pursuing, encircling, and harassing the prey until it stops moving
  • Attacking the prey
SLIDE 92

(figure-only slide; no slide text survived extraction)
SLIDES 93-95

GWO: ENCIRCLING PREY
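The formulas on these slides did not survive extraction; for reference, the encircling equations from the GWO paper (Mirjalili et al., 2014, listed in the references) are:

```latex
% Encircling behavior: X_p is the prey's position.
\vec{D} = \lvert \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \rvert,
\qquad
\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}

% Coefficient vectors, with the components of \vec{a} decreasing
% linearly from 2 to 0 and r_1, r_2 random in [0, 1]:
\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a},
\qquad
\vec{C} = 2\,\vec{r}_2
```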
SLIDE 96

GWO: ATTACK
SLIDES 97-103

TO SUM UP:

  • Create a random population of grey wolves
  • Alpha, beta, and delta wolves estimate the probable position of the prey
  • Each candidate solution updates its distance from the prey (see the update equations below)
  • The parameter a is decreased from 2 to 0 to shift the search from exploration to exploitation
  • Candidate solutions tend to diverge from the prey when |A| > 1 and converge towards the prey when |A| < 1
  • GWO terminates when an end criterion is satisfied
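For reference, the hunting (position update) equations from the same paper, where alpha, beta, and delta are the three best wolves:

```latex
\vec{D}_{\alpha} = \lvert \vec{C}_1 \cdot \vec{X}_{\alpha} - \vec{X} \rvert, \quad
\vec{D}_{\beta}  = \lvert \vec{C}_2 \cdot \vec{X}_{\beta}  - \vec{X} \rvert, \quad
\vec{D}_{\delta} = \lvert \vec{C}_3 \cdot \vec{X}_{\delta} - \vec{X} \rvert

\vec{X}_1 = \vec{X}_{\alpha} - \vec{A}_1 \cdot \vec{D}_{\alpha}, \quad
\vec{X}_2 = \vec{X}_{\beta}  - \vec{A}_2 \cdot \vec{D}_{\beta}, \quad
\vec{X}_3 = \vec{X}_{\delta} - \vec{A}_3 \cdot \vec{D}_{\delta}

\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}
```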
SLIDE 104

GWO ADVANTAGES

  • Free from the initialization of input parameters
  • Free from computational complexity
  • Easy to understand and implement
SLIDES 105-106

GSA
SLIDE 107

HOW DOES IT WORK?
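The slide's formulas did not survive extraction; the core update rules from Rashedi et al. (2009), which is listed in the references, are:

```latex
% Force on agent i from agent j, with a decaying gravitational constant G(t):
F_{ij}(t) = G(t)\,\frac{M_i(t)\,M_j(t)}{R_{ij}(t) + \varepsilon}\,\bigl(x_j(t) - x_i(t)\bigr),
\qquad G(t) = G_0\, e^{-\alpha t / T}

% Masses are derived from fitness (minimization):
m_i(t) = \frac{\mathrm{fit}_i(t) - \mathrm{worst}(t)}{\mathrm{best}(t) - \mathrm{worst}(t)},
\qquad M_i(t) = \frac{m_i(t)}{\sum_{j} m_j(t)}

% Motion update (rand ~ U(0,1) adds stochasticity):
a_i(t) = \frac{\sum_{j \ne i} \mathrm{rand}_j\, F_{ij}(t)}{M_i(t)},
\quad v_i(t+1) = \mathrm{rand}_i\, v_i(t) + a_i(t),
\quad x_i(t+1) = x_i(t) + v_i(t+1)
```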
SLIDE 108

ADVANTAGES

  • Easy implementation
  • Fast convergence
  • Low computational cost
SLIDE 109

DISADVANTAGES

  • Premature convergence
  • Complexity in calculation
  • Easily falls into local optima
SLIDE 110

FIREFLY

SLIDE 111

HUNTING
SLIDES 112-113

(figure-only slides; no slide text survived extraction)
SLIDES 114-115

HYPOTHESES
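The hypotheses themselves did not survive extraction. In Yang's formulation (Nature-Inspired Metaheuristic Algorithms, 2008, listed in the references), which these slides presumably follow, fireflies are unisex, attractiveness is proportional to brightness and decreases with distance, and brightness is determined by the objective function. The resulting movement rule is:

```latex
% Attractiveness decays with distance r:
\beta(r) = \beta_0\, e^{-\gamma r^{2}}

% Firefly i moves towards any brighter firefly j, plus a random step:
x_i \leftarrow x_i + \beta_0\, e^{-\gamma r_{ij}^{2}}\,(x_j - x_i) + \alpha\,\epsilon_i
```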
SLIDE 116

(figure-only slide; no slide text survived extraction)
SLIDE 117

ADVANTAGES

  • Automatic subdivision
  • Ability to deal with multimodality
SLIDE 118

DISADVANTAGES

  • Can get trapped in several local optima
  • Does not memorize any history of better positions for each firefly, which can cause fireflies to move regardless of previously better positions
SLIDE 119

IDEA
SLIDE 120

REFERENCES

  • Geem ZW, Kim JH, Loganathan GV. A new heuristic optimization algorithm: harmony search. Simulation [2001]
  • S. Mirjalili, Grey Wolf Optimizer [2014]
  • E. Rashedi, GSA: A Gravitational Search Algorithm [2009]
  • X. S. Yang, Nature-Inspired Metaheuristic Algorithms [2008]
  • Yu Zhang, Immunity-Based Gravitational Search Algorithm [2012]
  • J. Yang, An improved ant colony optimization (I-ACO) method for the quasi-travelling salesman problem [2015]
  • M. Mahdavi, An improved harmony search algorithm for solving optimization problems [2007]
  • W. Sun, An Improved Harmony Search Algorithm for Power Distribution Network Planning [2015]
  • Y. Zhang, Improved Imperialist Competitive Algorithm for Constrained Optimization [2015]
  • D. Guha, Load frequency control of large scale power system using quasi-oppositional grey wolf optimization algorithm [2016]
  • N. Naji, A Review of the Metaheuristic Algorithms and their Capabilities [2017]
  • Saibal K. Pal, Comparative Study of Firefly Algorithm and Particle Swarm Optimization for Noisy Non-Linear Optimization Problems [2012]