
SLIDE 1

Applications of Machine Learning in Engineering
(and Parameter Tuning)

Lars Kotthoff, University of Wyoming
larsko@uwyo.edu
RMACC, 23 May 2019

slides available at https://www.cs.uwyo.edu/~larsko/slides/rmacc19.pdf

SLIDE 2

Optimizing Graphene Oxide Reduction

▷ reduce graphene oxide to graphene through laser irradiation
▷ allows creating electrically conductive lines in insulating material
▷ laser parameters need to be tuned carefully to achieve good results

SLIDE 3

From Graphite/Coal to Carbon Electronics

Overview of the Process

SLIDE 4

Evaluation of Irradiated Material

SLIDE 5

ML-Optimized Laser Parameters

[Plot: ratio vs. iteration]

SLIDE 6

ML-Optimized Laser Parameters

[Plot: ratio vs. iteration, during training and after 1st prediction; predicted vs. actual; micrographs with 50 µm scale bars]

• Predictions work even with small training dataset (19 points)
• AI model achieved IG/ID ratio (>6) after 1st prediction

SLIDE 7

Explored Parameter Space

[Plot: ratio over the explored parameter space; evaluated points labeled 1–48]

SLIDE 8

Design of New Materials

▷ optimize parameters of pattern generator for energy absorption of material

SLIDE 9

Big Picture

▷ advance the state of the art through meta-algorithmic techniques
▷ rather than inventing new things, use existing things more intelligently – automatically
▷ invent new things through combinations of existing things

SLIDE 10

What to Tune – Parameters

▷ anything you can change that makes sense to change
▷ e.g. heuristic, power of a laser, kernel for a machine learning method
▷ not random seed, whether to enable debugging, etc.
▷ some will affect performance, others will have no effect at all

(an example parameter space is sketched below)
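
For illustration, such a parameter space can be written down as a simple mapping. All names and ranges here are made up for the sketch, not taken from the talk:

# hypothetical parameter space: a categorical choice, a continuous
# physical setting, and an integer-valued setting; no random seed,
# since changing it makes no principled difference
param_space = {
    'kernel':      ['linear', 'rbf', 'poly'],  # categorical
    'laser_power': (0.5, 10.0),                # continuous range (made-up watts)
    'passes':      [1, 2, 3, 4],               # integer-valued
}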

SLIDE 11

Automated Parameter Tuning

Frank Hutter and Marius Lindauer, “Algorithm Configuration: A Hands on Tutorial”, AAAI 2016

SLIDE 12

General Approach

▷ evaluate algorithm as black-box function
▷ observe effect of parameters without knowing the inner workings
▷ decide where to evaluate next
▷ balance diversification/exploration and intensification/exploitation
▷ repeat until satisfied (a minimal loop sketch follows)
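
The loop above fits in a few lines of Python. This is a minimal sketch: the one-dimensional objective and the proposal strategy are stand-ins, not the talk's actual method.

import random

def evaluate(config):
    # black-box objective (stand-in): we only observe the output
    return (config - 0.3) ** 2

def propose_next(history):
    # stand-in strategy balancing exploration (a random point) and
    # intensification (a small step around the incumbent)
    if random.random() < 0.5:
        return random.uniform(0, 1)
    best_config, _ = min(history, key=lambda h: h[1])
    return min(max(best_config + random.gauss(0, 0.05), 0), 1)

random.seed(0)
history = [(c, evaluate(c)) for c in [random.uniform(0, 1)]]
while len(history) < 30:  # repeat until the evaluation budget is exhausted
    c = propose_next(history)
    history.append((c, evaluate(c)))
print(min(history, key=lambda h: h[1]))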

SLIDE 13

When are we done?

▷ most approaches incomplete
▷ cannot prove optimality, not guaranteed to find optimal solution (with finite time)
▷ performance highly dependent on configuration space

How do we know when to stop?

SLIDE 14

Resource Budget

How much time / how many function evaluations?
▷ too much: wasted resources
▷ too little: suboptimal result
▷ use statistical tests
▷ evaluate on parts of the instance set
▷ for runtime: adaptive capping (see the sketch after this list)
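
Adaptive capping is easiest to see when minimizing runtime: once an incumbent configuration is known, no later run needs to be watched for longer than the incumbent's runtime. A minimal sketch, with a sleep-based stand-in for a real solver:

import time

def solve(param):
    # stand-in solver whose runtime depends on its parameter
    time.sleep(0.01 * param)

def run_capped(param, cap):
    # run the solver, reporting the cap if the run reaches it; a real
    # implementation would abort the run at the cap instead of waiting
    start = time.time()
    solve(param)
    return min(time.time() - start, cap)

incumbent_time, best = float('inf'), None
for param in [8, 5, 9, 3, 7]:       # candidate configurations
    runtime = run_capped(param, cap=incumbent_time)
    if runtime < incumbent_time:    # a new incumbent tightens the cap
        incumbent_time, best = runtime, param
print("best configuration {} with runtime {:.3f}s".format(best, incumbent_time))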

SLIDE 15

Grid and Random Search

▷ evaluate certain points in parameter space (a sketch of both follows below)

Bergstra, James, and Yoshua Bengio. “Random Search for Hyper-Parameter Optimization.” J. Mach. Learn. Res. 13, no. 1 (February 2012): 281–305.
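
The difference between the two is only how the points are chosen. A minimal sketch over a made-up two-parameter space:

import itertools
import random

# grid search: evaluate every combination of predefined values
grid_C = [0.01, 0.1, 1, 10]
grid_gamma = [1e-3, 1e-2, 1e-1]
grid_points = list(itertools.product(grid_C, grid_gamma))

# random search: sample the same number of points from the ranges;
# Bergstra and Bengio show this often covers the important parameters better
random.seed(0)
random_points = [(10 ** random.uniform(-2, 1), 10 ** random.uniform(-3, -1))
                 for _ in range(len(grid_points))]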

SLIDE 16

Local Search

▷ start with random configuration
▷ change a single parameter (local search step)
▷ if better, keep the change, else revert
▷ repeat, stop when resources exhausted or desired solution quality achieved
▷ restart occasionally with new random configurations

(a minimal sketch of this loop follows)
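
The bullet points translate almost directly into code; the parameter space and objective below are made up for illustration (restarts are omitted to keep the sketch short):

import random

param_values = { 'power': [1, 2, 3, 4, 5], 'speed': [10, 20, 30, 40] }

def evaluate(config):
    # stand-in black-box objective to be maximized
    return -(config['power'] - 3) ** 2 - (config['speed'] - 20) ** 2 / 100

random.seed(0)
config = { k: random.choice(v) for k, v in param_values.items() }  # random start
best = evaluate(config)
for _ in range(100):  # stop when the evaluation budget is exhausted
    k = random.choice(list(param_values))      # change a single parameter
    old = config[k]
    config[k] = random.choice(param_values[k])
    score = evaluate(config)
    if score > best:   # if better, keep the change...
        best = score
    else:              # ...else revert
        config[k] = old
print(config, best)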

SLIDE 17

Local Search Example

(Initialisation)

graphics by Holger Hoos

SLIDE 18

Local Search Example

(Initialisation)

graphics by Holger Hoos

SLIDE 19

Local Search Example

(Local Search)

graphics by Holger Hoos

SLIDE 20

Local Search Example

(Local Search)

graphics by Holger Hoos

SLIDE 21

Local Search Example

(Perturbation)

graphics by Holger Hoos

SLIDE 22

Local Search Example

(Local Search)

graphics by Holger Hoos

SLIDE 23

Local Search Example

(Local Search)

graphics by Holger Hoos

SLIDE 24

Local Search Example

(Local Search)

graphics by Holger Hoos

SLIDE 25

Local Search Example

Selection (using Acceptance Criterion)

graphics by Holger Hoos

SLIDE 26

Surrogate-Model-Based Search

▷ evaluate small number of initial (random) configurations
▷ build surrogate model of parameter-performance surface based on this
▷ use model to predict where to evaluate next
▷ repeat, stop when resources exhausted or desired solution quality achieved
▷ allows targeted exploration of promising configurations

(a sketch of the expected-improvement criterion used in the following plots appears below)
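
The "ei" curves in the plots on the next slides are the expected improvement, a standard criterion for deciding where to evaluate next (cf. Jones et al., cited on slide 34). A minimal sketch for minimization, assuming the surrogate model predicts a mean mu and standard deviation sigma at a candidate point:

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y):
    # expected amount by which a candidate improves on the incumbent best_y,
    # under a Gaussian predictive distribution N(mu, sigma^2)
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# a candidate predicted slightly better than the incumbent, with uncertainty
print(expected_improvement(mu=0.4, sigma=0.1, best_y=0.5))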

SLIDE 27

Surrogate-Model-Based Search Example

[Plot: observations y, surrogate prediction yhat, and expected improvement ei over x; initial design points marked init, proposed points marked prop; Iter = 1, Gap = 1.5281e−01]

Bischl, Bernd, Jakob Richter, Jakob Bossek, Daniel Horn, Janek Thomas, and Michel Lang. “MlrMBO: A Modular Framework for Model-Based Optimization of Expensive Black-Box Functions,” March 9, 2017. http://arxiv.org/abs/1703.03373.

SLIDE 28

Surrogate-Model-Based Search Example

[Plot: y, yhat, and ei over x; Iter = 2, Gap = 1.5281e−01]

SLIDE 29

Surrogate-Model-Based Search Example

[Plot: y, yhat, and ei over x; Iter = 3, Gap = 1.5281e−01]

SLIDE 30

Surrogate-Model-Based Search Example

[Plot: y, yhat, and ei over x; Iter = 4, Gap = 1.3494e−02]

SLIDE 31

Surrogate-Model-Based Search Example

[Plot: y, yhat, and ei over x; Iter = 5, Gap = 1.3494e−02]

SLIDE 32

Surrogate-Model-Based Search Example

[Plot: y, yhat, and ei over x; Iter = 6, Gap = 2.1938e−06]

SLIDE 33

Surrogate-Model-Based Search Example

[Plot: y, yhat, and ei over x; Iter = 7, Gap = 2.1938e−06]

SLIDE 34

Tools and Resources

iRace http://iridia.ulb.ac.be/irace/
TPOT https://github.com/EpistasisLab/tpot
mlrMBO https://github.com/mlr-org/mlrMBO
SMAC http://www.cs.ubc.ca/labs/beta/Projects/SMAC/
Spearmint https://github.com/HIPS/Spearmint
TPE https://jaberg.github.io/hyperopt/

COSEAL group for COnfiguration and SElection of ALgorithms: https://www.coseal.net/

Further reading: Jones, Donald R., Matthias Schonlau, and William J. Welch. “Efficient Global Optimization of Expensive Black-Box Functions.” J. of Global Optimization 13, no. 4 (December 1998): 455–92.

(a minimal usage sketch for one of these tools follows)
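
As a usage example for one of the tools above, here is a minimal sketch of TPE via hyperopt, applied to the same SVM tuning problem as the "Two-Slide MBO" example at the end; the space bounds mirror that example, and the evaluation budget is arbitrary:

import numpy as np
from hyperopt import fmin, tpe, hp
from sklearn import svm, datasets
from sklearn.model_selection import cross_val_score

iris = datasets.load_iris()

def objective(pars):
    clf = svm.SVC(**pars)
    # hyperopt minimizes, so negate the cross-validated accuracy
    return -np.median(cross_val_score(clf, iris.data, iris.target, cv=10))

space = { 'C':     hp.loguniform('C', np.log(1e-2), np.log(1e10)),
          'gamma': hp.loguniform('gamma', np.log(1e-9), np.log(1e3)) }
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=25)
print(best)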

SLIDE 35

https://www.automl.org/book/

SLIDE 36

I’m hiring!

Several positions available.

SLIDE 37

Exercises

▷ (install python and/or scikit-learn)
▷ download https://www.cs.uwyo.edu/~larsko/mbo/mbo.py and run it
▷ try a different data set
▷ try tuning different/more parameters
▷ …

SLIDE 38

Two-Slide MBO

# https://www.cs.uwyo.edu/~larsko/mbo/mbo.py
import random
import numpy as np
from sklearn import svm, datasets
from sklearn.model_selection import cross_val_score

iris = datasets.load_iris()

params = { 'C': np.logspace(-2, 10, 13), 'gamma': np.logspace(-9, 3, 13) }
param_grid = [ { 'C': x, 'gamma': y }
               for x in params['C'] for y in params['gamma'] ]
# [{'C': 0.01, 'gamma': 1e-09}, {'C': 0.01, 'gamma': 1e-08}...]

initial_samples = 3
evals = 10
random.seed(1)

# estimate accuracy of an SVM with the given parameters via cross-validation
def est_acc(pars):
    clf = svm.SVC(**pars)
    return np.median(cross_val_score(clf, iris.data, iris.target, cv = 10))

# evaluate the initial random configurations
data = []
for pars in random.sample(param_grid, initial_samples):
    acc = est_acc(pars)
    data += [ list(pars.values()) + [ acc ] ]
# [[1.0, 0.1, 1.0],
#  [1000000000.0, 1e-07, 1.0],
#  [0.1, 1e-06, 0.9333333333333333]]

SLIDE 39

Two-Slide MBO

from sklearn.ensemble import RandomForestRegressor

# surrogate model: random forest mapping (C, gamma) to accuracy
regr = RandomForestRegressor(random_state = 0)
for evals in range(0, evals):
    df = np.array(data)
    regr.fit(df[:,0:2], df[:,2])
    # predict accuracy for every grid point, then evaluate the most promising
    preds = regr.predict([ list(pars.values()) for pars in param_grid ])
    i = preds.argmax()
    acc = est_acc(param_grid[i])
    data += [ list(param_grid[i].values()) + [ acc ] ]
    print("{}: best predicted {} for {}, actual {}"
          .format(evals, round(preds[i], 2), param_grid[i], round(acc, 2)))

i = np.array(data)[:,2].argmax()
print("Best accuracy ({}) for parameters {}".format(data[i][2], data[i][0:2]))

SLIDE 40

Two-Slide MBO (slide 3)

0: best predicted 0.99 for {'C': 1.0, 'gamma': 1e-09}, actual 0.93
1: best predicted 0.99 for {'C': 1000000000.0, 'gamma': 1e-09}, actual 0.93
2: best predicted 0.99 for {'C': 1000000000.0, 'gamma': 0.1}, actual 0.93
3: best predicted 0.97 for {'C': 1.0, 'gamma': 0.1}, actual 1.0
4: best predicted 0.99 for {'C': 1.0, 'gamma': 0.1}, actual 1.0
5: best predicted 1.0 for {'C': 1.0, 'gamma': 0.1}, actual 1.0
6: best predicted 1.0 for {'C': 1.0, 'gamma': 0.1}, actual 1.0
7: best predicted 1.0 for {'C': 1.0, 'gamma': 0.1}, actual 1.0
8: best predicted 1.0 for {'C': 0.01, 'gamma': 0.1}, actual 0.93
9: best predicted 1.0 for {'C': 1.0, 'gamma': 0.1}, actual 1.0
Best accuracy (1.0) for parameters [1.0, 0.1]