A Futurist approach to dynamic environments
Jano van Hemert, Leiden University, The Netherlands & Clarissa Van Hoyweghen, University of Antwerp, Belgium & Eduard Lukschandl, Ericsson & Hewlett-Packard & Katja
Introduction
How it all started
Coil Summer School 2000, Limerick, Ireland
☞ People were assigned to groups to solve different problems
☞ Conor Ryan provided our group with two dynamic problems
☞ He had attempted to solve those problems using diploid chromosomes
☞ Our objective was to try to solve them using one of the techniques presented at the summer school
gecco-2001 Workshop on Dynamic Optimization
Introduction
The next half hour
① Problem descriptions
② General idea
③ Two tested implementations
④ Experiments & Results
⑤ Conclusions & Future Work
⑥ Questions & Discussion
Problem descriptions
0 − 1 Knapsack – Definition
✔ Goal is to fill a knapsack with objects
✔ Each object has a weight and a value assigned
✔ Every 15 generations the maximum allowed weight is changed
✔ Maximum weight is switched between 50% and 80% of the total weight of all the objects
✔ A total of 400 generations (time steps) is used
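The dynamic capacity rule above can be sketched as follows. This is a minimal illustration, not the talk's implementation: the item set is randomly generated for demonstration, and the zero-fitness penalty for overweight solutions is one common choice that the slides do not specify.

```python
import random

def make_knapsack(n_items=30, seed=0):
    """Hypothetical item set: weights and values are illustrative,
    not the instance used in the talk."""
    rng = random.Random(seed)
    weights = [rng.randint(1, 20) for _ in range(n_items)]
    values = [rng.randint(1, 20) for _ in range(n_items)]
    return weights, values

def capacity(weights, generation, period=15):
    """Every `period` generations the allowed weight switches
    between 50% and 80% of the total weight of all objects."""
    total = sum(weights)
    factor = 0.5 if (generation // period) % 2 == 0 else 0.8
    return factor * total

def fitness(bits, weights, values, cap):
    """Sum the values of the packed objects; overweight solutions
    get zero fitness (an assumed penalty scheme)."""
    w = sum(wi for wi, b in zip(weights, bits) if b)
    v = sum(vi for vi, b in zip(values, bits) if b)
    return v if w <= cap else 0

weights, values = make_knapsack()
print(capacity(weights, 0) < capacity(weights, 15))  # -> True: capacity jumps at generation 15
```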
Problem descriptions
0 − 1 Knapsack – Behaviour
[Figure: optimal value plotted against generations]
☞ Optimum changes over time
Problem descriptions
Ošmera’s function — Definition
g₁(x, t) = 1 − e^(−200 (x − c(t))²), with c(t) = 0.04 ⌊t/20⌋, x ∈ [0.000, 2.000], and each time step t ∈ {0, . . . , 1000} equal to one generation
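The definition above translates directly into code. A small sketch, assuming the negative exponent (so that g₁ has a moving minimum of 0 at x = c(t)):

```python
import math

def c(t):
    """Moving target position: steps by 0.04 every 20 generations."""
    return 0.04 * (t // 20)

def g1(x, t):
    """Osmera's dynamic function; equals 0 at the moving point x = c(t)
    and approaches 1 away from it."""
    return 1.0 - math.exp(-200.0 * (x - c(t)) ** 2)

print(g1(c(100), 100))  # -> 0.0 at the moving optimum
```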
Problem descriptions
Ošmera’s function — Behaviour
[Figure: surface plot of g(x, t) for x ∈ [0, 2] and t ∈ [0, 1000], and plot of c(t) against t]
General idea
Predicting the future
[Figure: fitness landscape f(x, t) at times t and t + ∆ over a run of maxgen generations]
① acquire fitness values
② regress fitness predictor
③ use predictor for future population
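The three steps above can be sketched as follows. This assumes a simple least-squares linear trend as the regressed predictor; the slides do not fix the regression model, and the observed quantity (here the optimum position c(t)) is chosen for illustration.

```python
def fit_linear(ts, ys):
    """Ordinary least squares for y = a + b*t."""
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    b = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
        sum((t - mt) ** 2 for t in ts)
    a = my - b * mt
    return a, b

# 1. acquire observations of the landscape (here: optimum position c(t))
history_t = [0, 20, 40, 60, 80]
history_c = [0.00, 0.04, 0.08, 0.12, 0.16]

# 2. regress a predictor from the history
a, b = fit_linear(history_t, history_c)

# 3. use the predictor to evaluate the future population at t + delta
delta = 20
predicted = a + b * (history_t[-1] + delta)
print(round(predicted, 2))  # -> 0.2
```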
General idea
Learning from the future
[Figure: migration between the current population and the future population]
General idea
Parameters
✔ m determines how many individuals are copied to the current population: the best m from the future population are selected and overwrite the worst m in the current population
✔ ∆ determines how many generations ahead the future population lives
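The migration step controlled by m can be sketched as below. A minimal illustration, assuming individuals carry their own fitness via a `key` function (higher is better); the talk does not specify the representation.

```python
def migrate(current, future, m, key):
    """Replace the m worst of `current` by the m best of `future`.
    `key` maps an individual to its fitness (higher is better)."""
    cur = sorted(current, key=key)                     # worst first
    best_future = sorted(future, key=key, reverse=True)[:m]
    return cur[m:] + best_future                       # drop worst m, add best m

# Toy populations where each individual IS its fitness value
current = [3, 9, 1, 7]
future = [8, 2, 10, 4]
print(sorted(migrate(current, future, m=2, key=lambda x: x)))  # -> [7, 8, 9, 10]
```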
Experimental setup
Two experiments
Perfect prediction
☞ Idea is that the best that could happen is a perfect prediction of the future
☞ With these problems this is very easy to implement, as we know the optimum for t + ∆ exactly
☞ If this is not successful, we could ask ourselves whether it is useful to continue with the idea of predicting the future

Noisy prediction
☞ Could the use of a predictor be harmful?
☞ We give the algorithm noisy and deceptive predictions of the future
☞ The knapsack problem gets the wrong optimum (deceptive) and Ošmera's function gets a random value
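The predictor variants compared here can be sketched as follows. The concrete capacity levels (500 and 800) are illustrative stand-ins for the two capacity settings of the knapsack problem; the perfect predictor returns the true future value, the deceptive one deliberately returns the other level, and the noisy one (for Ošmera's function) returns a random position.

```python
import random

def true_capacity(t, period=15, lo=500.0, hi=800.0):
    """True capacity at time t: alternates every `period` generations."""
    return lo if (t // period) % 2 == 0 else hi

def perfect_predictor(t, delta):
    """Perfect prediction: the exact capacity at time t + delta."""
    return true_capacity(t + delta)

def deceptive_predictor(t, delta):
    """Deceptive prediction: deliberately report the other capacity level."""
    future = true_capacity(t + delta)
    return 800.0 if future == 500.0 else 500.0

def noisy_predictor(t, delta, rng=random.Random(0), lo=0.0, hi=2.0):
    """Noisy prediction for Osmera's function: a uniformly random
    optimum position in [lo, hi]."""
    return rng.uniform(lo, hi)
```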
Experiments & Results
Experimental setup
For both problems we do
✔ a test without any future population
✔ tests with four parameter settings (two pairs) for perfect prediction
✔ tests with four parameter settings (two pairs) for the noisy / deceptive predictor
✔ 50 runs for each test with unique random seeds
Experiments & Results
Knapsack results
predictor   ∆    m    error    stdev   best run
none        ×    ×    16.6%    3.52    8.96%
perfect     5    10   11.9%    3.77    4.85%
perfect     15   10   20.3%    4.26    11.7%
perfect     5    50   11.8%    3.70    5.97%
perfect     15   50   21.4%    6.06    12.0%
deceptive   5    10   12.6%    4.12    6.77%
deceptive   15   10   13.0%    3.74    7.50%
deceptive   5    50   12.7%    4.02    6.47%
deceptive   15   50   12.8%    4.07    4.99%
Experiments & Results
Knapsack results
[Figure: error (%) against generations for three runs: "knapsack_clean", "knapsack_perfect_d5_m10", "knapsack_noisy_d15_m50"]
Experiments & Results
Ošmera results
predictor   ∆    m    error     stdev   best run
none        ×    ×    63.8%     10.3    41.2%
perfect     5    10   0.261%    0.153   0.0751%
perfect     5    50   0.168%    0.148   0.0266%
perfect     10   10   0.241%    0.220   0.0680%
perfect     10   50   0.203%    0.099   0.0698%
noisy       5    10   0.241%    0.186   0.0488%
noisy       5    50   0.144%    0.122   0.0358%
noisy       10   10   0.241%    0.186   0.0488%
noisy       10   50   0.168%    0.148   0.0266%
Experiments & Results
Ošmera results
[Figure: error (%) against generations for three runs: "osmera_clean", "osmera_perfect_d5_m50", "osmera_noisy_d10_m50"]
Conclusions & Future Research
Conclusions
Pros and cons
✘ The knapsack problem is better solved with a look-ahead time of 5 generations than with 15, which is the length of the cycle
✔ Adding future predictions when solving the knapsack problem slightly improves performance when using a deceptive function or small values for m
✔ Adding future predictions when solving Ošmera's function seems to help
✘ There is little difference in performance between using a perfect or a noisy predictor...
✔ There is little difference in performance between using a perfect or a noisy predictor...
Conclusions & Future Research
Future Research
☞ How sensitive are the parameters m and ∆?
☞ Why does this work well for a real-valued problem and not for a problem from a discrete domain?
☞ Could we replace this whole complicated process by adding more disturbance, for instance with a high mutation rate?