DNN-Assisted Parameter Space Exploration and Visualization for Large-Scale Simulations
Han-Wei Shen, Department of Computer Science and Engineering, The Ohio State University
[Diagram: a simulator runs on a supercomputer; raw data move through memory and disk I/O to post-analysis.]
Goals:
- Identify important simulation parameters
- Identify simulation parameter sensitivity
- Quantify the uncertainty of the simulation models

Approaches:
- InSituNet (IEEE SciVis'19 Best Paper)
- NNVA (IEEE VAST'19 Best Paper Honorable Mention)
I/O bottleneck and storage overhead
[Diagram: ensemble simulations with different simulation parameters write raw data through disk I/O for post-hoc analysis.]
[Diagram: in situ visualization — ensemble simulations render images under chosen view and visual mapping parameters; only the image data pass through disk I/O to post-hoc analysis.]
Limitations of image-based in situ visualization:
- Limits the flexibility of post-hoc exploration and analysis
- Raw data are no longer available
- Incapable of exploring the simulation parameters
- Expensive simulations need to be conducted for new parameter settings

Goals:
- Study how the parameters influence the visualization results
- Predict visualization results for new parameter settings
[Diagram: ensemble simulations (simulation parameters) + in situ visualization (view parameters, visual mapping parameters) -> visualization images]

Each visualization image I is a function of the parameters:

ℱ(p_sim, p_vis, p_view) = I

where p_sim, p_vis, and p_view are the simulation, visual mapping, and view parameters, and I is the visualization image.

InSituNet: a deep neural network that models the function ℱ
InSituNet has three components:
- Regressor: mapping parameters to prediction images
- Feature comparator: computing the feature reconstruction loss
- Discriminator: computing the adversarial loss

[Diagram: input parameters -> Regressor -> prediction; prediction vs. ground truth -> Feature Comparator -> feature reconstruction loss; -> Discriminator -> adversarial loss]
Regressor:
- Input: parameters p_sim, p_vis, and p_view
- Output: prediction image Î
- Weights: θ, updated during training
[Figure: Architecture of the regressor. The simulation parameters (l), view parameters (n), and visual mapping parameters (m) each pass through fully connected layers (l, 512) and (512, 512); the three 512-dim embeddings are concatenated (1536), projected by a (1536, 4×4×16×k) fully connected layer, and reshaped to (4, 4, 16×k). Residual blocks with upsampling (16×k -> 8×k -> 4×k -> 2×k -> k channels; spatial size 4 -> 8 -> 16 -> 32 -> 64 -> 128 -> 256), each built from batch normalization, ReLU, and 2D convolutions, lead to a final (k, 3, 3, 3) convolution with tanh producing the (256, 256, 3) image.]
The regressor is a convolutional neural network (CNN) mapping the simulation, visual mapping, and view parameters to an image.
Fully connected layer (used to embed the parameters):

y = Wx

followed by a ReLU activation:

y' = max(0, y)
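The two operations above can be sketched in numpy. The 512-unit width matches the regressor's parameter-embedding branches; the input length l, the weight values, and the added bias term are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def fully_connected(x, W, b):
    """Fully connected layer: y = Wx + b (the slide's y = Wx, plus a bias)."""
    return W @ x + b

def relu(y):
    """ReLU activation: y' = max(0, y), applied element-wise."""
    return np.maximum(0.0, y)

# Embed an l-dimensional simulation-parameter vector into 512 features,
# mirroring the regressor's (l, 512) fully connected layer.
rng = np.random.default_rng(0)
l = 3                               # e.g. three simulation parameters (assumed)
x = rng.standard_normal(l)
W = rng.standard_normal((512, l)) * 0.1
b = np.zeros(512)

h = relu(fully_connected(x, W, b))
assert h.shape == (512,) and (h >= 0).all()
```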
2D convolution (used throughout the upsampling path):

y = K ∗ x, with kernels K ∈ ℝ^{in × out × 3 × 3}

Residual blocks follow the design of He et al.¹

¹ K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
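A minimal single-channel sketch of y = K ∗ x (implemented as cross-correlation, as deep learning frameworks do). The 3×3 kernel size matches the diagram; the image contents and the "valid" padding are illustrative assumptions.

```python
import numpy as np

def conv2d(x, K):
    """Direct 2D cross-correlation of a single-channel image x with one
    3x3 kernel K -- a minimal sketch of the slide's y = K * x.
    'valid' padding: the output shrinks by kernel_size - 1."""
    kh, kw = K.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * K)
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
K = np.zeros((3, 3)); K[1, 1] = 1.0      # identity kernel
y = conv2d(x, K)
assert y.shape == (3, 3)
assert np.allclose(y, x[1:4, 1:4])       # identity kernel copies the center
```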
Training: the regressor's prediction is compared against the ground truth, and the loss is backpropagated to update the weights.
- Example: mean squared error loss (MSE loss ℒ_mse)
- Issue: blurry prediction images
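A minimal numpy sketch of the MSE loss; the image shapes and values are illustrative. Because MSE averages pixel-wise errors, a regressor trained with it tends to hedge toward a blurry average of plausible images — the issue noted above.

```python
import numpy as np

def mse_loss(pred, gt):
    """Mean squared error between a prediction image and the ground truth."""
    return np.mean((pred - gt) ** 2)

gt = np.zeros((4, 4, 3))             # toy "ground truth" image
pred = np.full((4, 4, 3), 0.5)       # toy prediction, off by 0.5 everywhere
assert np.isclose(mse_loss(pred, gt), 0.25)   # 0.5^2 averaged over pixels
assert mse_loss(gt, gt) == 0.0
```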
[Figure: a blurry image generated by a regressor trained with the MSE loss.]

Two additional components address this:
- Feature comparator -> feature reconstruction loss ℒ_feat
- Discriminator -> adversarial loss ℒ_adv
Feature comparator:
- Input: image I
- Output: feature map φ_l(I) ∈ ℝ^{h×w×c} of an intermediate layer l
[Figure: feature extraction — the input image passes through the 2D convolution, ReLU, and max pooling layers of a pretrained network (conv1_1, relu1_1, pool1, conv1_2, relu1_2, conv2_1, relu2_1, pool2, conv2_2, relu2_2, ...), producing a feature map with h×w spatial extent and c channels.¹]

¹ K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations, 2015.
Feature reconstruction loss:
- MSE loss between the feature maps of the prediction and the ground truth
- Given a batch of ground truth images I_{1:bs} and predictions Î_{1:bs}:

ℒ_feat = (1 / (h·w·c·bs)) Σ_{i=1}^{bs} ‖φ_l(I_i) − φ_l(Î_i)‖²₂

- Makes the regressor generate images sharing similar features with the ground truth, which leads to images with sharper features

[Image comparison: ground truth vs. results trained with ℒ_feat vs. ℒ_mse]
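The loss can be computed directly in numpy once the feature maps are available; the feature-map shapes here are illustrative, and in practice φ_l would come from the pretrained feature comparator.

```python
import numpy as np

def feature_reconstruction_loss(phi_gt, phi_pred):
    """L_feat: MSE between feature maps phi_l(I_i) and phi_l(I_hat_i),
    normalized by the h*w*c feature elements and the batch size bs.
    phi_gt, phi_pred: arrays of shape (bs, h, w, c)."""
    bs, h, w, c = phi_gt.shape
    diff = phi_gt - phi_pred
    return np.sum(diff ** 2) / (h * w * c * bs)

rng = np.random.default_rng(1)
phi_gt = rng.standard_normal((2, 4, 4, 8))    # toy batch of feature maps
assert np.isclose(feature_reconstruction_loss(phi_gt, phi_gt), 0.0)
# A constant offset of 1 everywhere gives a loss of exactly 1:
assert np.isclose(feature_reconstruction_loss(phi_gt, phi_gt + 1.0), 1.0)
```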
[Figure: Architecture of the discriminator. The (256, 256, 3) image is downsampled through residual blocks with average pooling (3 -> k -> 2×k -> 4×k -> 8×k -> 8×k -> 16×k channels; spatial size 256 -> 128 -> 64 -> 32 -> 16 -> 8 -> 4), followed by global sum pooling (16×k) and a (16×k, 1) fully connected layer. The simulation (l), view (n), and visual mapping (m) parameters are each embedded with fully connected layers, concatenated (1536), projected to (16×k), and combined with the image features via a dot product; the summed score passes through a sigmoid to yield a real/fake decision.]

Discriminator:
- Input: prediction (fake) and ground truth (real) images, conditioned on the input parameters p
- Output: D(I, p) ∈ [0, 1], the probability that the image is real
- Weights: updated during training
!":$%&,'and'parameter'settings'(":$%&
9 Regressor'is'trying'to'fool'discriminator'by'minimizing 9 Discriminator'is'trying'to'differentiate'real'and'fake'images'by'minimizing
24
1I.'J.'Goodfellow,'J.'Pouget9Abadie,'M.'Mirza,'B.'Xu,'D.'Warde9Farley,'S.'Ozair,'A.'Courville,'and'Y.'Bengio.'Generative'
adversarial'nets.'In'Proceedings'of'Advances'in'Neural'Information'Processing'Systems,'pp.'2672–2680,'2014.
ℒ*+,_. = − 1 2 3
45" $%&
log 9: ' !4, (4 ℒ*+,_< = − 1 2 3
45" $%&
log 9: !4, (4 + >?@ 1 − 9: ' !4, (4
ground'truth ℒAB*C ℒDEB ℒ*+,
5 ℒ"#$%:"overall"feature"level"difference"between"image"pairs 5 ℒ$&':"local"details"that"the"real"and"fake"images"differ"the"most ℒ = ℒ"#$% + λℒ$&'
25 ground"truth ℒ"#$% ℒ+,# ℒ$&' ℒ"#$% + λℒ$&'
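Both adversarial terms can be sketched in numpy, assuming hypothetical discriminator scores in (0, 1); in actual training these scores come from D(·, p).

```python
import numpy as np

def adv_loss_regressor(d_fake):
    """L_adv_R = -(1/bs) * sum_i log D(I_hat_i, p_i): the regressor wants
    its fakes scored as real (close to 1), which drives this loss to 0."""
    return -np.mean(np.log(d_fake))

def adv_loss_discriminator(d_real, d_fake):
    """L_adv_D = -(1/bs) * sum_i [log D(I_i, p_i) + log(1 - D(I_hat_i, p_i))]."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

# Hypothetical discriminator outputs for a batch of 3 image/parameter pairs:
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.1, 0.2, 0.05])
lR = adv_loss_regressor(d_fake)
lD = adv_loss_discriminator(d_real, d_fake)
assert lR > 0 and lD > 0
# A regressor whose fakes are scored near 1 drives L_adv_R toward 0:
assert adv_loss_regressor(np.array([0.999])) < lR
```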
Applications of the trained InSituNet:
- Predicting images for new parameter settings (forward prediction)
- Computing the sensitivity of the parameters (backward sensitivity analysis)
[Figure: InSituNet interactive interface — users steer the simulation parameters (e.g., BwsA ∈ [1, 3.8]), visual mapping parameters (isovalue of temperature: 15, 20, 25), and view parameters (theta: 60–360, phi) for forward prediction, and compute overall and subregion sensitivity curves (e.g., for BwsA, or salinity in [30, 40]) for backward sensitivity analysis.]
[Figure: qualitative comparison on Nyx (log density: 9–12.5), SmallPoolFire (temperature: 300–1850), and MPAS-Ocean (salinity: 30–40) — predictions trained with ℒ_mse, ℒ_feat, ℒ_adv_R, and ℒ_feat + 10⁻²ℒ_adv_R vs. the ground truth.]
[Table: datasets and timings — t_sim, t_vis, and t_tr are timings for running ensemble simulations, visualizing data in situ, and training InSituNet, respectively; t_fp and t_bp are timings for a forward and backward propagation of the trained InSituNet, respectively.]
NNVA: Yeast Cell Polarization Simulation

[Diagram: an experimental biologist provides microscopic images; a computational biologist builds a mathematical simulation model over the simulation domain. The target phenomenon is high Cdc42 polarization: high concentration in a small region of the cell membrane.]
Simulation challenges:
- 35 uncalibrated simulation input parameters
- 400-dimensional output
- ~2.3 hrs/execution, making exhaustive analysis of the parameter space impractical

[Diagram: rather than running the expensive simulation for every configuration, a data scientist trains a surrogate model; the computational biologist then explores configurations through fast surrogate simulations over the simulation domain.]
[Diagram: NNVA visual analysis system — a neural network surrogate model is trained on data from the yeast cell polarization simulation. The system takes new parameters and supports uncertainty visualization (dropout), parameter sensitivity, parameter optimization (activation maximization/minimization), and model diagnosis (weight matrices), feeding visual queries and newly found input parameter configurations back to the simulation.]
[Figure: surrogate architecture — input layer (35 parameters p1, p2, …, p35) -> hidden layer H0 (1024, ReLU, Dropout 0.3) -> H1 (800, ReLU) -> H2 (500, ReLU, Dropout 0.3) -> output layer (400). Visualization: the predicted protein concentration over the simulation domain is color-mapped and laid out radially (Cdc42 concentration, 200–400).]
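The surrogate's forward pass can be sketched in numpy with the slide's layer sizes. The weight initialization and the assumption that dropout follows H0 and H2 are illustrative (the slide's ordering suggests that placement but does not state it explicitly).

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(0.0, z)

# Layer sizes from the slide: 35 -> 1024 (H0) -> 800 (H1) -> 500 (H2) -> 400
sizes = [35, 1024, 800, 500, 400]
weights = [rng.standard_normal((m, n)) * np.sqrt(2.0 / n)   # He init (assumed)
           for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def surrogate_forward(p, drop_rate=0.0, rng=rng):
    """Forward pass: ReLU on the hidden layers, optional dropout after
    H0 and H2 (rate 0.3 in the slide), linear 400-dim output."""
    h = p
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = W @ h + b
        if i < len(weights) - 1:                 # hidden layers only
            h = relu(h)
            if drop_rate > 0 and i in (0, 2):    # dropout after H0 and H2
                mask = rng.random(h.shape) >= drop_rate
                h = h * mask / (1.0 - drop_rate) # inverted dropout scaling
    return h

p = rng.random(35)                 # one 35-parameter configuration
out = surrogate_forward(p)
assert out.shape == (400,)         # 400-dimensional concentration output
```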
Dropout¹:
- Acts as a regularizer to avoid overfitting
- Enables uncertainty quantification of the predicted result: keeping dropout active at inference and running multiple stochastic forward passes yields uncertainty bands

¹ Y. Gal and Z. Ghahramani. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. ICML 2016.
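MC dropout can be sketched with a toy two-layer stand-in for the surrogate: keep dropout active at inference, run many stochastic passes, and read the per-output standard deviation as an uncertainty band. All sizes besides the 35 inputs and 400 outputs are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 35)) * 0.1   # toy hidden layer (stand-in for H0..H2)
W2 = rng.standard_normal((400, 64)) * 0.1

def predict_with_dropout(p, drop_rate=0.3, rng=rng):
    """One stochastic forward pass with dropout kept ON at inference."""
    h = np.maximum(0.0, W1 @ p)
    mask = rng.random(h.shape) >= drop_rate    # randomly drop hidden units
    h = h * mask / (1.0 - drop_rate)           # inverted dropout scaling
    return W2 @ h

p = rng.random(35)
samples = np.stack([predict_with_dropout(p) for _ in range(100)])
mean = samples.mean(axis=0)     # predicted concentration (400 regions)
std = samples.std(axis=0)       # per-output uncertainty band
assert mean.shape == (400,) and std.shape == (400,)
assert std.max() > 0            # stochastic passes disagree, giving a band
```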
Parameter sensitivity:
- A measure of how much the output changes for a small change in the input: the partial derivatives ∂o_j/∂p_i of the 400 outputs (concentration values at different regions) with respect to the 35 input parameters, forming a 400×35 sensitivity matrix
- Useful for parameter tuning
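A finite-difference sketch of the 400×35 sensitivity matrix; the trained network would supply the function f (and analytic gradients via backpropagation), so the linear toy model here is purely illustrative.

```python
import numpy as np

def sensitivity_matrix(f, p, eps=1e-4):
    """Finite-difference approximation of the Jacobian d f_j / d p_i:
    how much each of the 400 outputs changes for a small change in each
    of the 35 inputs. Rows index outputs, columns index parameters."""
    base = f(p)
    J = np.zeros((base.size, p.size))
    for i in range(p.size):
        dp = p.copy()
        dp[i] += eps
        J[:, i] = (f(dp) - base) / eps
    return J

# Toy surrogate standing in for the trained network (35 -> 400):
rng = np.random.default_rng(0)
A = rng.standard_normal((400, 35))
f = lambda p: A @ p                 # linear, so the Jacobian is exactly A

J = sensitivity_matrix(f, rng.random(35))
assert J.shape == (400, 35)
assert np.allclose(J, A, atol=1e-3)
```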
Parameter optimization via activation maximization:
- Keeping the weights fixed, update the input via gradient ascent such that it maximizes the model output Θ(x) (concentration values at different regions), i.e.,

max_x Θ(x) − λ‖x − x₀‖²

where the L2 regularizer keeps x near a reference x₀; minimization is analogous:

min_x Θ(x) + λ‖x − x₀‖²
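A numpy sketch of the gradient-ascent loop for the maximization objective above (minimization is the same loop with descent); the objective Θ, the step size, and λ are illustrative assumptions.

```python
import numpy as np

def activation_maximization(obj_grad, p0, lam=0.1, lr=0.5, steps=100):
    """Gradient ascent on Theta(p) - lam * ||p - p0||^2 with the network
    weights fixed: find inputs that maximize a chosen output, while the
    L2 term keeps p near a reference configuration p0."""
    p = p0.copy()
    for _ in range(steps):
        g = obj_grad(p) - 2.0 * lam * (p - p0)   # gradient of the objective
        p = p + lr * g
    return p

# Toy objective standing in for the network output: Theta(p) = w . p
w = np.ones(5)
obj_grad = lambda p: w                 # d Theta / d p is constant here
p0 = np.zeros(5)
p_star = activation_maximization(obj_grad, p0, lam=0.1)
# At the optimum the gradient vanishes: w = 2*lam*(p - p0)  =>  p = w/(2*lam) = 5
assert np.allclose(p_star, 5.0, atol=1e-2)
```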
NNVA visual analysis interface:
- Detailed analysis of the predicted result for a specific parameter configuration of interest
- Interactively calibrate the 35 simulation parameters
- Visualize the predicted protein concentration
- Progressively store newly discovered sets of parameters
- Analyze the weight matrices of the trained network
Parameter optimization results (goal: simulate high Cdc42 polarization, PF > 0.8):
- Highest PF discovered = 0.82

[Figure: Cdc42 concentration vs. angle in degrees — raw image data (PF = 0.87), NNVA-discovered parameters (PF = 0.82), MCMC (PF = 0.57), and Simulated Annealing (PF = 0.64) (polynomial surrogate models).]
[Figure: weight matrix analysis — the 1024×35 matrix between the input layer (35) and H0 (w1…w1024), row-wise sorted, reveals correlated parameters (e.g., k_42a, k_42d, q, h, C42_t) and insufficient parameter ranges; the 400×500 matrix between H2 and the output layer (w1…w400, input neuron indices in H2, sorted by average sensitivity) highlights center neurons and patterns that relate to domain knowledge.]
Limitations:
- Weight matrix analyses require ML knowledge
- It is difficult to interpret abstract domain-level scientific concepts from the model