

slide-1
SLIDE 1

DNN-Assisted Parameter Space Exploration and Visualization for Large-Scale Simulations

Han-Wei Shen
Department of Computer Science and Engineering
The Ohio State University

slide-2
SLIDE 2

Traditional Visualization Pipeline

[Pipeline diagram: Supercomputer → Simulation → (memory I/O) Raw data → (disk I/O) Post-analysis]

  • Data are often very large
  • Mainly batch mode processing; interactive exploration is not possible
  • Limited parameters to explore
slide-3
SLIDE 3

Parameter Space Exploration

  • Running large-scale simulations is very time- and storage-consuming
  • A physical simulation typically has a huge parameter space
  • Ensemble simulations are needed for analyzing the model quality and also identifying the uncertainty of the simulation
  • It is not possible to exhaust all possible input parameters – time and space prohibitive

slide-4
SLIDE 4

DNN-Assisted Parameter Space Exploration

  • Can we create visualizations of the simulation outputs without saving the data?

Or

  • Can we predict the simulation results without running the simulation?
  • Why?
    – Identify important simulation parameters
    – Identify simulation parameter sensitivity
    – Quantify the uncertainty of the simulation models
  • Methods:
    – InSituNet (IEEE SciVis'19 Best Paper)
    – NNVA (IEEE VAST'19 Best Paper Honorable Mention)

slide-5
SLIDE 5

InSituNet: Deep Image Synthesis for Parameter Space Exploration of Ensemble Simulations

slide-6
SLIDE 6

Introduction

  • Ensemble data analysis workflow
  • Issues
    – I/O bottleneck and storage overhead

[Diagram: ensemble simulations (simulation parameters) → disk I/O → raw data → post-hoc analysis]

slide-7
SLIDE 7

Introduction

  • In situ visualization
  • Generating visualizations at simulation time
  • Storing images for post-hoc analysis

[Diagram: ensemble simulations (simulation parameters) → in situ vis (view parameters, visual mapping parameters) → visualization images → disk I/O (image data) → post-hoc analysis]

slide-8
SLIDE 8
Introduction

  • Challenges
    – Limiting the flexibility of post-hoc exploration and analysis
    – Raw data are no longer available
    – Incapable of exploring the simulation parameters
    – Expensive simulations need to be conducted for new parameter settings
  • Our Approach
    – Studying how the parameters influence the visualization results
    – Predicting visualization results for new parameter settings

slide-9
SLIDE 9

Approach Overview

[Diagram: ensemble simulations (simulation parameters) → in situ vis (view parameters, visual mapping parameters) → visualization images]

p_sim, p_vis, p_view → I

ℱ(p_sim, p_vis, p_view) = I

InSituNet

A deep neural network that models the function ℱ

slide-10
SLIDE 10

Design of InSituNet

  • Three subnetworks and two losses
    – Regressor (mapping parameters to prediction images)
    – Feature comparator (computing feature reconstruction loss)
    – Discriminator (computing adversarial loss)

[Diagram: input parameters → Regressor → prediction; prediction and ground truth → Feature Comparator → feature reconstruction loss; prediction and ground truth → Discriminator → adversarial loss]

slide-11
SLIDE 11

Design of InSituNet

  • Three subnetworks and two losses
    – Regressor (mapping parameters to prediction images)
    – Feature comparator (computing feature reconstruction loss)
    – Discriminator (computing adversarial loss)

[Diagram: input parameters → Regressor → prediction; prediction and ground truth → Feature Comparator → feature reconstruction loss; prediction and ground truth → Discriminator → adversarial loss]

slide-12
SLIDE 12
Regressor R_θ

  • A convolutional neural network (CNN)
    – Input: parameters p_sim, p_vis, and p_view
    – Output: prediction image Î
    – Weights: θ, updated during training

[Architecture of the regressor: simulation, view, and visual mapping parameters each pass through fully connected layers (512 units) and are concatenated (1536) and reshaped to 4×4×16k; a stack of residual blocks with upsampling (4×4×16k → 8×8×16k → … → 256×256×k) and a final convolution with tanh produce the 256×256×3 image. Legend: ReLU, batch normalization, 2D convolution, fully connected, residual block.]

slide-13
SLIDE 13

Regressor R_θ

[Architecture of the regressor, as on the previous slide]

  • A convolutional neural network (CNN)
  • Input: simulation, visual mapping, view parameters
  • Output: prediction image
  • Weights θ: weights collected from all layers

slide-14
SLIDE 14

Regressor R_θ

[Architecture of the regressor, as on the previous slide]

  • Fully connected layer
    – Input: 1D vector x ∈ ℝ^a
    – Output: 1D vector y ∈ ℝ^b
    – Weights: matrix W ∈ ℝ^{b×a}
  • Activation function
    – Rectified Linear Units (ReLU)

y = Wx
y′ = max(0, y)
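The two operations above can be sketched directly; a minimal illustration with toy sizes a = 8 and b = 4 (my own choices, not the network's actual layer widths):

```python
import numpy as np

# Fully connected layer y = Wx followed by the ReLU activation
# y' = max(0, y), exactly as defined on the slide.
def fully_connected(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    """y = Wx for x in R^a and W in R^(b x a)."""
    return W @ x

def relu(y: np.ndarray) -> np.ndarray:
    """Element-wise rectified linear unit."""
    return np.maximum(0.0, y)

rng = np.random.default_rng(0)
a, b = 8, 4                      # toy input/output sizes
x = rng.normal(size=a)           # 1D input vector
W = rng.normal(size=(b, a))      # weight matrix
y = relu(fully_connected(x, W))  # activated output in R^b
```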

slide-15
SLIDE 15

Regressor R_θ

[Architecture of the regressor, as on the previous slide]

  • 2D Convolutional Layer
    – Input: tensor X ∈ ℝ^{h×w×c}
    – Output: tensor Y ∈ ℝ^{h×w×c′}
    – Weights: kernel K ∈ ℝ^{k_h×k_w×c×c′}
  • Residual block¹
    – Adding input to the output of convolutional layers

Y = K ∗ X

¹K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
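An illustrative sketch (not the authors' implementation) of the two building blocks on this slide: a 2D convolution Y = K ∗ X (implemented as cross-correlation, the deep-learning convention) and a residual block that adds the input to the output of its convolutional layers. Sizes are toy values:

```python
import numpy as np

def conv2d(X, K):
    """'Same'-padded 2D convolution: X is (h, w, c), K is (kh, kw, c, c_out)."""
    h, w, _ = X.shape
    kh, kw, _, c_out = K.shape
    Xp = np.pad(X, ((kh // 2, kh // 2), (kw // 2, kw // 2), (0, 0)))
    Y = np.zeros((h, w, c_out))
    for i in range(h):
        for j in range(w):
            # contract the (kh, kw, c) patch against the kernel
            Y[i, j] = np.tensordot(Xp[i:i + kh, j:j + kw, :], K, axes=3)
    return Y

def residual_block(X, K1, K2):
    """conv -> ReLU -> conv, plus the identity shortcut: X + F(X)."""
    return X + conv2d(np.maximum(0.0, conv2d(X, K1)), K2)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 8, 4))            # toy (h, w, c) feature map
K1 = 0.1 * rng.normal(size=(3, 3, 4, 4))
K2 = 0.1 * rng.normal(size=(3, 3, 4, 4))
Y = residual_block(X, K1, K2)             # same shape as X
```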

slide-16
SLIDE 16

Loss Function

  • Difference between the prediction and the ground truth
  • Used to update the weights θ

[Diagram: input parameters → Regressor → prediction; prediction and ground truth → Loss → backpropagation]

slide-17
SLIDE 17
Loss Function – Straightforward Approach

  • Pixel-wise loss functions
    – Example: mean squared error loss (MSE loss ℒ_mse)
    – Issue: blurry prediction images

[Diagram: input parameters → Regressor → prediction vs. ground truth → MSE loss; example of a blurry image generated by a regressor trained with the MSE loss]

slide-18
SLIDE 18
Loss Function – Our Approach

  • Combining two loss functions defined by two subnetworks
    – Feature comparator → feature reconstruction loss ℒ_feat
    – Discriminator → adversarial loss ℒ_adv

[Diagram: input parameters → Regressor → prediction; prediction and ground truth → Feature Comparator → feature reconstruction loss; prediction and ground truth → Discriminator → adversarial loss]

slide-19
SLIDE 19

Design of InSituNet

  • Three subnetworks and two losses
    – Regressor (mapping parameters to prediction images)
    – Feature comparator (computing feature reconstruction loss)
    – Discriminator (computing adversarial loss)

[Diagram: input parameters → Regressor → prediction; prediction and ground truth → Feature Comparator → feature reconstruction loss; prediction and ground truth → Discriminator → adversarial loss]

slide-20
SLIDE 20

Feature Comparator F

  • A pretrained CNN (e.g., VGG-19¹)
    – Input: image I
    – Output: feature map F_l(I) ∈ ℝ^{h×w×c} of an intermediate layer l

[Diagram: input → conv1_1 → relu1_1 → conv1_2 → relu1_2 → pool1 → conv2_1 → relu2_1 → conv2_2 → relu2_2 → pool2 → … → feature map (h × w, c channels). Legend: 2D convolution, ReLU, max pooling.]

¹K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Proceedings of International Conference on Learning Representations, 2015.

slide-21
SLIDE 21
Feature Reconstruction Loss ℒ_feat

  • Definition
    – MSE loss between the feature maps of the prediction and the ground truth
    – Given a batch of ground truth images I_{1:b} and predictions Î_{1:b}:

ℒ_feat_l = (1 / (h·w·c·b)) Σ_{i=1}^{b} ‖F_l(I_i) − F_l(Î_i)‖₂²

  • Benefits
    – Making the regressor generate images that share similar features with the ground truth, which leads to images with sharper features

[Comparison images: ground truth, ℒ_feat, ℒ_mse]
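The loss itself can be sketched in a few lines. A real implementation would obtain F_l from an intermediate layer of a pretrained VGG-19; here the feature maps are simply given as arrays, and only the loss computation is shown:

```python
import numpy as np

# Feature reconstruction loss for a batch of b feature maps of shape
# (h, w, c): L_feat = 1/(h*w*c*b) * sum_i || F_l(I_i) - F_l(I_hat_i) ||_2^2
def feature_reconstruction_loss(feats_real, feats_fake):
    """feats_*: (b, h, w, c) feature maps of ground truths and predictions."""
    b, h, w, c = feats_real.shape
    return float(np.sum((feats_real - feats_fake) ** 2) / (h * w * c * b))

rng = np.random.default_rng(0)
feats_real = rng.normal(size=(4, 8, 8, 16))                    # F_l(I_{1:b})
feats_fake = feats_real + 0.1 * rng.normal(size=(4, 8, 8, 16)) # F_l(I_hat_{1:b})
loss = feature_reconstruction_loss(feats_real, feats_fake)
```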

slide-22
SLIDE 22

Design of InSituNet

  • Three subnetworks and two losses
    – Regressor (mapping parameters to prediction images)
    – Feature comparator (computing feature reconstruction loss)
    – Discriminator (computing adversarial loss)

[Diagram: input parameters → Regressor → prediction; prediction and ground truth → Feature Comparator → feature reconstruction loss; prediction and ground truth → Discriminator → adversarial loss]

slide-23
SLIDE 23

Discriminator D_φ

[Architecture of the discriminator: the 256×256×3 image passes through residual blocks with average pooling (k → 2k → 4k → 8k → 8k → 16k channels) down to 4×4×16k, followed by global sum pooling; the simulation, view, and visual mapping parameters are embedded by fully connected layers (512 each), concatenated, and combined with the pooled image features via a dot product; a final fully connected layer and sigmoid output real/fake. Legend: ReLU, 2D convolution, fully connected, residual block.]

  • A binary classifier (CNN) with weights φ
  • Trained to differentiate prediction (fake) and ground truth (real) images
  • Input: real/fake image I and the input parameters p
  • Output: likelihood value D_φ(I, p) ∈ [0, 1]
    – 1 means real
    – 0 means fake
slide-24
SLIDE 24
Adversarial Loss ℒ_adv

  • Regressor and discriminator are trained in an adversarial manner¹
  • Given a batch of real images I_{1:b}, fake images Î_{1:b}, and parameter settings p_{1:b}
    – The regressor tries to fool the discriminator by minimizing

ℒ_adv_R = −(1/b) Σ_{i=1}^{b} log D_φ(Î_i, p_i)

    – The discriminator tries to differentiate real and fake images by minimizing

ℒ_adv_D = −(1/b) Σ_{i=1}^{b} [log D_φ(I_i, p_i) + log(1 − D_φ(Î_i, p_i))]

¹I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Proceedings of Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.

[Comparison images: ground truth, ℒ_feat, ℒ_mse, ℒ_adv]
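Given discriminator outputs D_φ(·, p) ∈ (0, 1) for a batch, the two losses above reduce to a few lines (a direct transcription, with the example output values being arbitrary):

```python
import numpy as np

def adv_loss_regressor(d_fake):
    """L_adv_R = -1/b * sum_i log D_phi(I_hat_i, p_i): regressor fools D."""
    return float(-np.mean(np.log(d_fake)))

def adv_loss_discriminator(d_real, d_fake):
    """L_adv_D = -1/b * sum_i [log D_phi(I_i, p_i) + log(1 - D_phi(I_hat_i, p_i))]."""
    return float(-np.mean(np.log(d_real) + np.log(1.0 - d_fake)))

d_real = np.array([0.9, 0.8, 0.95])  # D is confident the real images are real
d_fake = np.array([0.2, 0.1, 0.3])   # ...and confident the fakes are fake
```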

slide-25
SLIDE 25

Total Loss

  • A weighted combination of ℒ_feat and ℒ_adv
  • ℒ_feat and ℒ_adv are complementary with each other
    – ℒ_feat: overall feature-level difference between image pairs
    – ℒ_adv: local details where the real and fake images differ the most

ℒ = ℒ_feat + λℒ_adv

[Comparison images: ground truth, ℒ_feat, ℒ_mse, ℒ_adv, ℒ_feat + λℒ_adv]

slide-26
SLIDE 26

Training

  • Updating R_θ and D_φ w.r.t. the loss functions iteratively
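The alternating update scheme can be sketched on a 1-D toy (my own stand-in, not InSituNet itself): the regressor is R_θ(p) = θ·p, the discriminator is D_φ(x) = sigmoid(φ·x), the regressor step uses only the adversarial term, and gradients are estimated by finite differences purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def num_grad(f, x, eps=1e-5):
    # finite-difference gradient, standing in for backpropagation
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

rng = np.random.default_rng(0)
p = rng.uniform(0.5, 1.5, size=32)   # batch of parameter settings p_{1:b}
I = 2.0 * p                          # "ground truth" from the true process

theta, phi, lr = 0.5, 0.1, 0.02      # regressor / discriminator weights
for step in range(50):
    # 1) discriminator step: minimize L_adv_D with the regressor fixed
    fake = theta * p                 # current predictions I_hat_{1:b}
    L_D = lambda ph: -np.mean(np.log(sigmoid(ph * I))
                              + np.log(1.0 - sigmoid(ph * fake)))
    phi -= lr * num_grad(L_D, phi)
    # 2) regressor step: minimize L_adv_R with the discriminator fixed
    L_R = lambda th: -np.mean(np.log(sigmoid(phi * (th * p))))
    theta -= lr * num_grad(L_R, theta)
```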

slide-27
SLIDE 27

Parameter Space Exploration with InSituNet

  • Forward prediction
    – Predicting images for new parameter settings
  • Backward sensitivity analysis
    – Computing the sensitivity of the parameters

[Interface screenshots: forward prediction (simulation parameter BwsA in [1, 3.8]; visual mapping parameter: isovalue of temperature 15/20/25; view parameters theta and phi) and backward sensitivity analysis (subregion sensitivity of BwsA in the visualization view; overall sensitivity curves)]

slide-28
SLIDE 28
Results

  • Three simulations: SmallPoolFire, Nyx, and MPAS-Ocean
  • Comparing the prediction with the ground truth

[Comparison images for Nyx (log density 9–12.5), SmallPoolFire (temperature 300–1850), and MPAS-Ocean (salinity 30–40), trained with ℒ_mse, ℒ_feat, ℒ_adv_R, and ℒ_feat + 10⁻²ℒ_adv_R, against the ground truth]

slide-29
SLIDE 29

Results

  • Nyx (cosmological simulation)
  • MPAS-Ocean (ocean simulation)

slide-30
SLIDE 30

Results

Datasets and timings: t_sim, t_vis, and t_tr are timings for running ensemble simulations, visualizing data in situ, and training InSituNet, respectively; t_fp and t_bp are timings for a forward and a backward propagation of the trained InSituNet, respectively.

slide-31
SLIDE 31
Conclusion

  • We propose InSituNet, a deep learning-based image synthesis model for parameter space exploration of large-scale ensemble simulations
  • We evaluate the effectiveness of InSituNet in analyzing ensemble simulations that model different physical phenomena

slide-32
SLIDE 32

Neural Network Assisted Visual Analysis of Yeast Cell Polarization Simulation

slide-33
SLIDE 33

[Diagram: an Experimental Biologist and a Computational Biologist collaborate around a Mathematical Simulation Model and its Simulation Domain]

slide-34
SLIDE 34

Simulation Background

  • Protein species of interest: Cdc42
  • Identify parameter configurations that can simulate high Cdc42 polarization
  • Polarization: asymmetric localization of protein concentration in a small region of the cell membrane

[Images: microscopic image and simulation domain; mathematical simulation model (Computational Biologist)]

slide-35
SLIDE 35

Challenges

  • High-dimensional input/output spaces
    – 35 uncalibrated simulation input parameters
    – 400-dimensional output
  • Computationally expensive
    – ~2.3 hrs/execution
  • Challenging to perform interactive exploratory analysis of the parameter space

slide-36
SLIDE 36

Proposed Approach

  • Neural network-based surrogate model
  • Mimics the expensive simulation during analysis
  • Facilitates interactive visual analytics
  • Quick preview of predicted results for new parameter configurations

[Diagram: the Data Scientist builds a Surrogate Model of the Computational Biologist's Mathematical Simulation Model and its Simulation Domain]

slide-37
SLIDE 37
VA System Requirements

  • Quickly preview results for new parameters
  • Perform parameter sensitivity analysis
  • Discover interesting parameter configurations
  • Validate the surrogate model
  • Extract insights from the trained surrogate

[Diagram: the Computational Biologist interacts with the Surrogate in place of the Simulation]

slide-38
SLIDE 38

Visual Analysis Workflow

slide-39
SLIDE 39

[System overview: the Yeast Cell Polarization Simulation provides training data for the Neural Network Surrogate Model; the NNVA visual analysis system sends visual queries and new input parameter configurations to the surrogate, which supports uncertainty visualization (dropout), parameter sensitivity, parameter optimization (Act. Max-Min), and model diagnosis (weight matrix)]

slide-40
SLIDE 40

Network Structure and Training

[Architecture: input layer (35) → hidden layers H0 (1024), H1 (800), H2 (500) → output layer (400); ReLU activations and dropout (0.3) in the hidden layers]

( p1, p2, … p35 )

[Image: simulation domain]

slide-41
SLIDE 41

Network Structure and Training

[Architecture as on the previous slide]

( p1, p2, … p35 )

Visualization: predicted protein concentration is color-mapped and laid out radially

[Image: Cdc42 concentration, 200–400]

slide-42
SLIDE 42

Network Structure and Training

[Architecture as on the previous slide]

  • Loss function: MSE
  • Training data size: 3000
    – Uniformly sampled parameter space
  • Validation data size: 500
  • Final accuracy: 87.6%
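A concrete sketch of the forward pass under the layer sizes above, with random weights standing in for the values that NNVA learns from the 3000 training samples:

```python
import numpy as np

# Forward pass through the 35 -> 1024 -> 800 -> 500 -> 400 network from the
# slide, with ReLU on the hidden layers. Weights are random stand-ins here.
rng = np.random.default_rng(0)
sizes = [35, 1024, 800, 500, 400]
weights = [rng.normal(scale=1.0 / np.sqrt(a), size=(a, b))
           for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def surrogate(p):
    """Map a 35-d parameter configuration to a 400-d concentration profile."""
    h = p
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = h @ W + b
        if i < len(weights) - 1:   # ReLU on hidden layers only
            h = np.maximum(0.0, h)
    return h

params = rng.uniform(size=35)      # one candidate parameter configuration
profile = surrogate(params)        # predicted concentration profile
```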

slide-43
SLIDE 43

Uncertainty in Neural Networks using Dropout

  • Important to visualize the prediction uncertainty in the exploration process
  • Dropout: randomly ignoring the output of neurons in a layer
  • Training phase:
    – Acts as a regularizer to avoid overfitting
  • Prediction phase[1]:
    – Uncertainty quantification of the predicted result

[1] Gal et al.: Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. ICML 2016

slide-44
SLIDE 44

Uncertainty in Neural Networks using Dropout

  • Important to visualize the prediction uncertainty in the exploration process
  • Dropout: randomly ignoring the output of neurons in a layer
  • Training phase:
    – Acts as a regularizer to avoid overfitting
  • Prediction phase[1]:
    – Uncertainty quantification of the predicted result
  • Visualized using standard deviation bands

[1] Gal et al.: Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. ICML 2016

[Image: uncertainty bands]
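A minimal sketch of the dropout-based uncertainty estimate: keep dropout active at prediction time, run the network T times, and use the per-output standard deviation as the band width. The tiny two-layer network with random weights is a hypothetical stand-in for the NNVA surrogate:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(35, 64))
W2 = rng.normal(scale=0.1, size=(64, 400))

def predict_with_dropout(p, rate=0.3):
    h = np.maximum(0.0, p @ W1)
    mask = rng.random(h.shape) >= rate   # randomly ignore neurons
    h = h * mask / (1.0 - rate)          # inverted dropout scaling
    return h @ W2

p = rng.uniform(size=35)
samples = np.stack([predict_with_dropout(p) for _ in range(100)])  # (T, 400)
mean = samples.mean(axis=0)              # predicted concentration profile
band = samples.std(axis=0)               # per-output uncertainty band width
```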

slide-45
SLIDE 45

Parameter Sensitivity Analysis

  • Influence of individual parameters on different areas of the output domain
    – Measure of how much the output changes for a small change in the input
    – Useful for parameter tuning
  • Partial derivative of the output w.r.t. the input: ∂y_j / ∂x_i
  • Computed using backpropagation

slide-46
SLIDE 46

Parameter Sensitivity Analysis

  • Influence of individual parameters on different areas of the output domain
    – Measure of how much the output changes for a small change in the input
    – Useful for parameter tuning
  • Visualize detailed parameter sensitivity across the cell membrane

[Sensitivity matrix visualization: 400 outputs × 35 parameters]
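The 400 × 35 sensitivity matrix is just the Jacobian ∂y_j/∂x_i. A sketch on a small ReLU network (a stand-in for the surrogate): the chain rule assembles the Jacobian, mirroring what backpropagation computes, and a finite difference spot-checks one entry:

```python
import numpy as np

# Sensitivity as the Jacobian dy_j/dx_i of a toy network
# y = W2 @ relu(W1 @ x), standing in for the 35 -> ... -> 400 surrogate.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 35))
W2 = rng.normal(size=(400, 16))

def forward(x):
    return W2 @ np.maximum(0.0, W1 @ x)

def jacobian(x):
    """Chain rule: J = W2 . diag(1[h > 0]) . W1, shape (400, 35)."""
    h = W1 @ x
    return W2 @ (np.diag((h > 0).astype(float)) @ W1)

x = rng.uniform(size=35)    # a parameter configuration
J = jacobian(x)             # sensitivity of all 400 outputs to all 35 inputs

# spot-check one entry, dy_0/dx_3, with a central finite difference
eps = 1e-6
e = np.zeros(35)
e[3] = eps
fd = (forward(x + e)[0] - forward(x - e)[0]) / (2.0 * eps)
```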

slide-47
SLIDE 47

Parameter Sensitivity Analysis

  • Influence of individual parameters on different areas of the output domain
    – Measure of how much the output changes for a small change in the input
    – Useful for parameter tuning
  • Visualize detailed parameter sensitivity across the cell membrane
  • Interactive sensitivity brush to select an area of interest
slide-48
SLIDE 48

Parameter Optimization

  • Recommend parameter configurations which maximize/minimize the predicted protein concentration values at different regions
  • Activation Maximization:
    – Keeping the weights fixed, update the input (via gradient ascent) such that it maximizes the model output, i.e.

max_x ( f(x) − λ‖x − x̄‖₂ )    (with an L2 regularizer)

  • Activation Minimization:

min_x ( f(x) + λ‖x − x̄‖₂ )
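Activation maximization can be sketched on a toy stand-in: f(x) = −‖x − t‖² replaces the surrogate (its peak t and the reference configuration x̄ are hypothetical), a squared L2 regularizer pulls toward x̄, and gradient ascent with the weights held fixed recovers the closed-form optimum x* = (t + λ·x̄)/(1 + λ):

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(size=35)     # where the stand-in surrogate output peaks
x_bar = np.full(35, 0.5)     # hypothetical reference parameter configuration
lam = 0.1                    # regularization strength

def objective_grad(x):
    # d/dx [ f(x) - lam * ||x - x_bar||^2 ] with f(x) = -||x - t||^2
    return -2.0 * (x - t) - 2.0 * lam * (x - x_bar)

x = np.zeros(35)             # initial parameter configuration
for _ in range(500):
    x = x + 0.05 * objective_grad(x)   # gradient ascent; weights stay fixed

x_star = (t + lam * x_bar) / (1.0 + lam)   # closed-form optimum for this toy
```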

slide-49
SLIDE 49

Parameter Optimization

  • Recommend parameter configurations which maximize/minimize the predicted protein concentration values at different regions
  • Interactive brushes to select specific areas of the cell membrane
    – Act. Max
    – Act. Min
    – Act. Max-Min
slide-50
SLIDE 50

Visual Analysis System

slide-51
SLIDE 51
  • Instance View:
    – Detailed analysis of the predicted result for a specific parameter configuration of interest

slide-52
SLIDE 52
  • Instance View:
    – Detailed analysis of the predicted result for a specific parameter configuration of interest
  • Parameter Control Board:
    – Interactively calibrate the 35 simulation parameters

slide-53
SLIDE 53
  • Instance View:
    – Detailed analysis of the predicted result for a specific parameter configuration of interest
  • Parameter Control Board:
    – Interactively calibrate the 35 simulation parameters
  • Quick View:
    – Visualize predicted protein concentration

slide-54
SLIDE 54
  • Instance View:
    – Detailed analysis of the predicted result for a specific parameter configuration of interest
  • Parameter Control Board:
    – Interactively calibrate the 35 simulation parameters
  • Quick View:
    – Visualize predicted protein concentration
  • Parameter List View:
    – Progressively store newly discovered sets of parameters

slide-55
SLIDE 55
  • Instance View:
    – Detailed analysis of the predicted result for a specific parameter configuration of interest
  • Parameter Control Board:
    – Interactively calibrate the 35 simulation parameters
  • Quick View:
    – Visualize predicted protein concentration
  • Parameter List View:
    – Progressively store newly discovered sets of parameters
  • Model Analysis View:
    – Analyze the weight matrices of the trained network

slide-56
SLIDE 56

Use Case 1: Discover New Parameter Configurations

  • Identify parameter configurations that can simulate high Cdc42 polarization
  • Polarization Factor (PF): measure of the extent of polarization, in [0.0, 1.0]
  • Found several parameter configurations that simulated high PF (>0.8)
    – Highest PF = 0.82
  • Compared with previous analysis efforts (using polynomial surrogate models)

[Images (angles in degrees, Cdc42 conc.): raw image data (PF = 0.87), NNVA discovered (PF = 0.82), MCMC (PF = 0.57), Simulated Annealing (PF = 0.64)]

slide-57
SLIDE 57

Use Case 2: Analyzing the Trained Surrogate

  • To extract and validate the knowledge learned by the trained network
  • Visualize and analyze the weight matrices

[Network: input layer (35), H0 (1024), H1 (800), H2 (500), output layer (400)]

slide-58
SLIDE 58

Use Case 2: Analyzing the Trained Surrogate

  • First weight matrix (columns w1 … w1024, 1024 × 35, row-wise sorted, H0 vs. input)
  • Correlated parameters:
    – k_24cm0, k_24cm1
    – k_42a, k_42d
    – q, h
  • Insufficient parameter ranges:
    – k_24cm0, k_24cm1, k_42a, k_42d, q, h, C42_t

slide-59
SLIDE 59

Case Study 2: Analyzing the Trained Surrogate

  • Final weight matrix (columns w1 … w400, 400 × 500, H2 vs. output)
  • Patterns with high weights to the center neurons
  • Avg. parameter sensitivity to such patterns
  • The top 15 align with the domain knowledge

[Plot: sorted avg. sensitivity over the 400 outputs; input neuron indices in the H2 layer]

slide-60
SLIDE 60

Conclusion and Future Work

  • Neural network-assisted analysis backend for interactive visual analytics
  • Utilized post-hoc analysis operations on the trained model to facilitate scientific inquiries
  • [Future] Investigate more effective model interpretation techniques; currently, weight matrix analyses require ML knowledge
  • [Future] Explore additional post-hoc analysis techniques for neural networks to interpret abstract domain-level scientific concepts from the model