Enhancing Experimental Design and Understanding with Deep Learning/AI (PowerPoint PPT Presentation)


SLIDE 1

LLNL-PRES-748201

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344. Lawrence Livermore National Security, LLC

Enhancing Experimental Design and Understanding with Deep Learning/AI

Vic Castillo, Ph.D., Computational Engineering, Lawrence Livermore National Laboratory, March 28, 2018

SLIDE 2

Enhancing Experimental Design

  • Complex systems such as manufacturing processes, energy systems, and fusion reactors have large design spaces.
  • Intelligent sampling / experimental design can help.
  • Simulation, like experiments, can be expensive.
  • DNNs can make good, fast-running surrogate models.

Main Idea: Leverage ML and GPUs to help the designer navigate a complex design space at a rapid cadence by giving quicker feedback. This leads to more agile development and enhances creativity.

SLIDE 3

Example Application: Glass production

SLIDE 4

Generating the fast-running flow visualizer

  • A deep convolutional autoencoder is used to reduce the simulation state space to a reduced latent space.
  • A fully-connected neural network correlates control and design parameters to the latent space.
  • Projection back to the full state space comes from the decoder.
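The three pieces above can be sketched in Keras. The field resolution (32x32, single channel), latent size (8), and all layer widths below are illustrative assumptions, not values from the talk:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Encoder: convolutional reduction of the simulated "flow field" to a latent vector.
field = layers.Input((32, 32, 1))
x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(field)
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
latent = layers.Dense(8)(layers.Flatten()(x))
encoder = Model(field, latent)

# Decoder: projection from the latent space back to the full state space.
z = layers.Input((8,))
y = layers.Dense(8 * 8 * 32, activation="relu")(z)
y = layers.Reshape((8, 8, 32))(y)
y = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(y)
out = layers.Conv2DTranspose(1, 3, strides=2, padding="same")(y)
decoder = Model(z, out)

# Fully-connected network mapping control/design parameters (6 here) to the latent space.
params = layers.Input((6,))
p = layers.Dense(64, activation="relu")(params)
param_to_latent = Model(params, layers.Dense(8)(p))

# Fast surrogate: parameters -> latent -> reconstructed field via the decoder.
surrogate = Model(params, decoder(param_to_latent(params)))
print(surrogate.output_shape)  # (None, 32, 32, 1)
```

Once the autoencoder and parameter network are trained, only the `surrogate` path runs at prediction time, which is what makes the visualizer fast.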

SLIDE 5

Parameter Space Sampling

This is a review of a framework for developing and evaluating speculative sampling methods. Classical statistical sampling is used as a baseline and incorporated into the framework. A simple coupled-oscillator model with a six-dimensional design space is used as a prototype. A recent development from Google DeepMind is discussed, and example runs are used for discussion.

SLIDE 6

Simple Example: Coupled Oscillator

Design Parameters:

  • Two masses {m1, m2}
  • Connected with a damping spring {k12, c12}
  • Tethered to the origin with damping springs {k1, c1, k2, c2}
  • Subject to a time-dependent body force {F1, F2, t1, t2, t3}

Coupled Oscillator State Space: {x1, x2, ẋ1, ẋ2}(t)

A simple, explainable system with a reasonably complex design space: {m1, m2, k1, c1, k2, c2, k12, c12, F1, F2, t1, t2, t3}
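The system above can be integrated with a few lines of NumPy. The body force in the talk is a pulse shaped by {F1, F2, t1, t2, t3}; it is simplified here to constant forces, and all parameter values are illustrative:

```python
import numpy as np

def step(state, dt, m1, m2, k1, c1, k2, c2, k12, c12, F1, F2):
    """One explicit-Euler step of the coupled, damped oscillator.

    Each mass feels its tether spring/damper, the coupling spring/damper,
    and a body force (held constant here for simplicity).
    """
    x1, x2, v1, v2 = state
    a1 = (-k1 * x1 - c1 * v1 + k12 * (x2 - x1) + c12 * (v2 - v1) + F1) / m1
    a2 = (-k2 * x2 - c2 * v2 - k12 * (x2 - x1) - c12 * (v2 - v1) + F2) / m2
    return np.array([x1 + dt * v1, x2 + dt * v2, v1 + dt * a1, v2 + dt * a2])

# Illustrative parameter values in the design space {m1, m2, k1, c1, k2, c2, k12, c12, ...}.
params = dict(m1=1.0, m2=1.0, k1=1.0, c1=0.1, k2=1.0, c2=0.1,
              k12=0.5, c12=0.05, F1=0.0, F2=0.0)
state = np.array([1.0, -1.0, 0.0, 0.0])  # state space {x1, x2, x1', x2'}
for _ in range(1000):
    state = step(state, 0.01, **params)
```

With damping and no forcing, the trajectory decays toward the origin, which makes the system a convenient, explainable test bed for the sampling experiments that follow.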

SLIDE 7

Parameter Space

Let's consider 6 parameters to vary in our design space: {c1, c12, c2, k1, k12, k2}. In a "coded" space, each parameter can be varied from -1 to +1. A scale can be assigned to describe regions; scaled regions sit at the centers and extremes: pi ~ {-1, 0, +1}. In the figure,

pi ∈ 0 ± 0.05, ∀ i
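The coded-space convention maps each parameter linearly from [-1, +1] onto its physical range. A minimal sketch, with an assumed physical range of [0.5, 2.0]:

```python
def decode(p_coded, lo, hi):
    """Map a coded value in [-1, +1] to the physical range [lo, hi]."""
    return lo + (p_coded + 1.0) * (hi - lo) / 2.0

print(decode(-1.0, 0.5, 2.0))  # 0.5  (lower extreme)
print(decode(0.0, 0.5, 2.0))   # 1.25 (center)
print(decode(+1.0, 0.5, 2.0))  # 2.0  (upper extreme)
```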

SLIDE 8

Mapping parameter space

For convenience, we map test regions that span the 6D hypercube to a 2D grid (3³ x 3³ = 27 x 27, i.e. 729 regions). Each region has a scale. The regions span the space but may not completely cover it.

[Animation: Mapping Parameter Space]
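One way to realize this flattening is to treat the three levels {-1, 0, +1} of the first three parameters as a base-3 row index and those of the last three as the column index. This particular convention is an assumption; the slide's figure may order the axes differently:

```python
LEVELS = (-1, 0, 1)  # coded levels per parameter

def region_to_cell(region):
    """Map a 6-tuple of coded levels to a (row, col) cell on the 27 x 27 grid."""
    row = sum(LEVELS.index(p) * 3 ** i for i, p in enumerate(region[:3]))
    col = sum(LEVELS.index(p) * 3 ** i for i, p in enumerate(region[3:]))
    return row, col

print(region_to_cell((0, 0, 0, 0, 0, 0)))  # (13, 13): the center cell
```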

SLIDE 9

Box-Behnken (1960)

Classic statistical sampling provides a baseline. It guarantees a smooth quadratic fit in high-dimensional space, and the designs are rotatable. 6D -> 720 permutations.

    D   samples
    3   12 + center
    4   24 + center
    5   40 + center
    6   48 + center
    7   56 + center
    8   112 + center
    9   96 + center
    10  160 + center
    11  176 + center
    12  192 + center
    16  384 + center

[Animation: Box-Behnken]
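The classic Box-Behnken construction places runs at the midpoints of the hypercube's edges: each pair of factors is varied over its four corner combinations while all other factors sit at 0, plus center points. A minimal sketch (this pairwise form reproduces the published run counts for 3-5 factors; the designs for larger factor counts use incomplete-block constructions with fewer runs than this generator yields):

```python
from itertools import combinations, product

def box_behnken(k, center_points=1):
    """Generate a pairwise Box-Behnken design for k coded factors in [-1, +1]."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = a, b   # two factors at their extremes, rest at center
            runs.append(run)
    runs.extend([[0] * k] * center_points)  # center-point replicates
    return runs

design = box_behnken(3)
print(len(design))  # 13: the 12 edge-midpoint runs + 1 center run from the table
```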

SLIDE 10

Learning the Transition Function

A neural network is used to learn the transition function:

Design parameters {c1, c12, c2, k1, k12, k2} + body force F(F1, F2, t) + state {x1, x2, ẋ1, ẋ2}(t) → new state {x1, x2, ẋ1, ẋ2}(t + Δt)

[Animation: Learning Transition Function]

Learned Dynamics
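The transition network's input/output structure can be sketched with a one-hidden-layer MLP in NumPy. The hidden width (64) and the choice of two force inputs are assumptions; the talk does not specify the architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# 6 design parameters + 2 body-force values + 4 state components -> 4 new state components.
IN_DIM, HIDDEN, OUT_DIM = 6 + 2 + 4, 64, 4

# Untrained stand-in weights; in practice these are fit to stored transitions.
W1 = rng.normal(0.0, 0.1, (IN_DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, OUT_DIM)); b2 = np.zeros(OUT_DIM)

def predict(design, force, state):
    """Map (design parameters, body force, state at t) -> state at t + dt."""
    x = np.concatenate([design, force, state])
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

new_state = predict(np.zeros(6), np.zeros(2), np.array([1.0, -1.0, 0.0, 0.0]))
print(new_state.shape)  # (4,)
```

Rolling `predict` forward repeatedly, feeding each output back in as the next state, produces the learned dynamics shown on the slide.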

SLIDE 11

Prediction Error

The system was trained from region {0,0,0,0,0,0} with a scale of 0.10. Prediction is done in region {-1,0,0,0,0,0} with a scale of 0.01. The predictor has not seen this dynamic, but tries! Error is calculated as the integrated L1 norm:

Error = ∫_{t0}^{tf} |Pred(t) - Calc(t)| dt

Extrapolated Dynamics
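On a sampled trajectory, the integrated L1 norm reduces to numerical quadrature of the absolute error; a minimal sketch using the trapezoidal rule:

```python
import numpy as np

def integrated_l1_error(t, pred, calc):
    """Integrated L1 prediction error: the integral of |pred - calc| dt,
    approximated with the trapezoidal rule over the sampled times."""
    return np.trapz(np.abs(pred - calc), t)

# Constant unit error over [0, 1] integrates to exactly 1.
t = np.linspace(0.0, 1.0, 101)
err = integrated_l1_error(t, np.ones_like(t), np.zeros_like(t))
print(round(err, 6))  # 1.0
```

In practice one such error is accumulated per state component and per region, which is what gets mapped onto the grid in the next slide.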

SLIDE 12

Mapping prediction error

Error for each region is mapped to the grid for each state component. Patterns can reveal parameter interactions. Error is calculated as the integrated L1 norm:

Error = ∫_{t0}^{tf} |Pred(t) - Calc(t)| dt

SLIDE 13

Population-based Training

A new method from Google DeepMind (28 November 2017). Used to search out optimal hyperparameters for DNNs. Can be used as a sampling method. Leverages explore vs. exploit.
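The explore/exploit loop at the heart of population-based training can be sketched on a toy objective. The objective, optimum, population size, and perturbation factors below are all made up for illustration; they are not from DeepMind's method or the talk:

```python
import random

random.seed(0)

def score(lr):
    # Toy stand-in for DNN validation performance; best "hyperparameter" is 0.3.
    return -(lr - 0.3) ** 2

population = [{"lr": random.uniform(0.0, 1.0)} for _ in range(8)]
init_best = max(score(m["lr"]) for m in population)

for _ in range(30):
    ranked = sorted(population, key=lambda m: score(m["lr"]), reverse=True)
    for weak in ranked[-2:]:                    # bottom quantile of the population
        strong = random.choice(ranked[:2])      # top quantile
        # Exploit: copy a strong member; explore: perturb its hyperparameter.
        weak["lr"] = strong["lr"] * random.choice([0.8, 1.2])

best = max(population, key=lambda m: score(m["lr"]))
print(score(best["lr"]) >= init_best)  # True: the best member is never overwritten
```

Because only the weakest members are replaced, the population's best score never regresses, while the perturbations keep exploring around the current leaders.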

SLIDE 14

Experiment: Agent-based Sampling

1. Six agents explore the center region {0,0,0,0,0,0} with scale = 0.10.
2. State transitions are stored in a DB.
3. Random transitions are used to train Prediction0.
4. Local error in all regions is calculated.
5. Agents move to a random Box-Behnken region (scale = 0.10) and explore.
6. Initial local error is calculated (with the current prediction).
7. Agents with low error move to help others.
8. Simulations continue.
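The "agents with low error move to help others" step can be sketched as a reassignment rule: agents sitting in well-predicted regions are sent to the worst-predicted regions. The region count, error values, and agent names below are illustrative:

```python
import random

random.seed(1)

# Hypothetical per-region prediction error (729 coded regions in the talk; 10 here).
region_error = {r: random.random() for r in range(10)}
agents = {f"agent{i}": random.randrange(10) for i in range(6)}

def reassign(agents, region_error, n_movers=2):
    """Send the agents in the lowest-error regions to the highest-error regions."""
    by_agent_err = sorted(agents, key=lambda a: region_error[agents[a]])
    worst_regions = sorted(region_error, key=region_error.get, reverse=True)
    for agent, region in zip(by_agent_err[:n_movers], worst_regions):
        agents[agent] = region   # low-error agent -> high-error region
    return agents

agents = reassign(agents, region_error)
```

After each reassignment the agents run more simulations, the transition network is retrained, and the per-region errors are recomputed, closing the loop.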

SLIDE 15

Experiment: Agent-based Sampling

SLIDE 16

Experiment: Agent-based Sampling

SLIDE 17

Speculative Sampling

SLIDE 18

Simple Sampling vs. Speculative Sampling. Round 0: sampling the center region.

SLIDE 19

Simple Sampling vs. Speculative Sampling. Round 1: sampling 7/729 regions.

SLIDE 20

Simple Sampling vs. Speculative Sampling. Round 2: sampling 13/729 regions.

SLIDE 21

Simple Sampling vs. Speculative Sampling. Round 3: sampling 19/729 regions.

SLIDE 22

Simple Sampling vs. Speculative Sampling. Round 4: sampling 25/729 regions.

SLIDE 23

Simple Sampling vs. Speculative Sampling. Round 5: sampling 31/729 regions.

SLIDE 24

Simple Sampling vs. Speculative Sampling. Round 6: sampling 37/729 regions.

SLIDE 25

Simple Sampling vs. Speculative Sampling. Round 7: sampling 43/729 regions.

SLIDE 26

Simple Sampling vs. Speculative Sampling. Round 8: sampling 49/729 regions.

SLIDE 27

TensorFlow implementation

TensorFlow can be used for the solver. It allows hardware control:

  • Solver/Simulator -> CPU
  • Learning -> GPU
  • Inference -> TPU/neuromorphic

It also allows algorithm instrumentation.
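The CPU/GPU split can be expressed with TensorFlow's device scopes. A minimal sketch; the ops here are placeholders for the real solver and training steps, and soft placement lets the GPU scope fall back to CPU on machines without one:

```python
import tensorflow as tf

tf.config.set_soft_device_placement(True)  # fall back when a device is missing

with tf.device("/CPU:0"):          # solver / simulator step
    state = tf.constant([[1.0, -1.0, 0.0, 0.0]])

with tf.device("/GPU:0"):          # learning step (runs on CPU if no GPU exists)
    w = tf.ones((4, 4))
    new_state = tf.matmul(state, w)

print(new_state.shape)  # (1, 4)
```

Because every op lives in TensorFlow's graph, the same mechanism that controls placement also exposes hooks for instrumenting the algorithm.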

SLIDE 28

SLIDE 29

Discussion

  • Simple system for prototyping – Coupled Oscillator
  • Small parameter space – {c1, c12, c2, k1, k12, k2}
  • Mapping parameter space to 2D grid
  • Classical sampling baseline – Box-Behnken
  • Learning the transition function
  • Mapping prediction error
  • Population-based search
  • Experiments
  • TensorFlow implementation