Representation Learning and Super-Resolution Generation for Scientific Visualization (PowerPoint PPT Presentation)



SLIDE 1

Representation Learning and Super-Resolution Generation for Scientific Visualization

Chaoli Wang, University of Notre Dame

SLIDE 2

Outline of talk

  • Scientific visualization
  • FlowNet for representation learning
  • TSR-TVD for super-resolution generation
  • Improvement and expansion
  • Emerging directions for AI+VIS research

SLIDE 3

Scientific visualization

SLIDE 4

Scalar fields

SLIDE 5

Direct volume rendering and isosurface rendering

Transfer function

SLIDE 6

Vector fields

SLIDE 7

Streamlines and stream surfaces

  • Streamlines are a family of curves that are instantaneously tangent to the velocity vector of the flow
  • Show the trajectory a seed will travel along at any point in time
  • Replacing a seeding point with a seeding curve traces a stream surface

FlowVisual: https://sites.nd.edu/chaoli-wang/flowvisual/

SLIDE 8

Examples of flow lines and surfaces

SLIDE 9

FlowNet

Jun Han, Jun Tao, and Chaoli Wang. FlowNet: A Deep Learning Framework for Clustering and Selection of Streamlines and Stream Surfaces. IEEE Transactions on Visualization and Computer Graphics, 26(4):1732-1744, 2020.

SLIDE 10

Outline of approach

  • Goal

– A single deep learning approach for identifying representative flow lines or flow surfaces

  • Key ideas

– Leverage an autoencoder to automatically learn line or surface feature descriptors
– Apply dimensionality reduction and interactive clustering for exploration and selection

SLIDE 11

FlowNet user interface

SLIDE 12

Video demo

SLIDE 13

FlowNet architecture

  • Encoder-decoder framework
  • 3D voxel-based binary representation as input
  • Feature descriptor learning in the latent space
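The 3D voxel-based binary input can be illustrated with a small sketch. The 32³ resolution matches the architecture's input size; the `voxelize` helper and the assumption that points are pre-normalized to [0, 1]³ are illustrative, not the paper's code:

```python
import numpy as np

def voxelize(points, res=32):
    # Rasterize a polyline, given as an (n, 3) array of points in [0, 1]^3,
    # into a binary volume -- the voxel-based representation fed to FlowNet.
    # (Illustrative sketch; the paper's preprocessing may differ.)
    vol = np.zeros((res, res, res), dtype=np.float32)
    idx = np.clip((points * res).astype(int), 0, res - 1)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol

# A toy diagonal "streamline" sampled at 100 points.
pts = np.stack([np.linspace(0.0, 0.99, 100)] * 3, axis=1)
binary_volume = voxelize(pts)
print(int(binary_volume.sum()))  # 32 occupied voxels along the diagonal
```

The same rasterization works for lines, surfaces, or volumes, which is the flexibility the voxel-based choice buys on the next slide.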

SLIDE 14

Why voxel-based approach?

  • Manifold-based

– Suitable for 3D mesh manifolds (genus-zero or higher-genus surfaces)
– Does not work for flow lines or surfaces (non-closed)

  • Multiview-based

– Represents a 3D shape with images rendered from different views
– Flow surfaces could be severely self-occluded

  • Voxel-based

– No precise line or surface is required for loss function computation and reconstruction quality evaluation
– Currently limited to a low resolution (e.g., 128³)
– Encodes any 3D volumetric information (line, surface, volume)

SLIDE 15

FlowNet details

  • The encoder consists of four convolutional (CONV) layers with batch normalization (BN) added in between, plus one CONV layer w/o BN, followed by two fully-connected layers
  • The decoder consists of five CONV layers and four BN layers
  • Apply the rectified linear unit (ReLU) at the hidden layers and the sigmoid function at the output layer
  • Consider three loss functions: binary cross entropy (BCE), mean squared error (MSE), and Dice loss
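The three candidate loss functions can be written down compactly. A minimal NumPy sketch of the definitions (not the paper's training code); `eps` is an assumed numerical stabilizer:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    # Binary cross entropy, averaged over the voxels of the 0/1 target volume.
    p = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def mse_loss(pred, target):
    # Mean squared error between predicted and target volumes.
    return np.mean((pred - target) ** 2)

def dice_loss(pred, target, eps=1e-7):
    # 1 - Dice coefficient; ignores true-negative background voxels,
    # which dominate sparse line/surface volumes.
    inter = np.sum(pred * target)
    return 1.0 - 2.0 * inter / (np.sum(pred) + np.sum(target) + eps)

target = np.zeros((32, 32, 32)); target[10:20, 10:20, 10:20] = 1.0
perfect = target.copy()
print(dice_loss(perfect, target), mse_loss(perfect, target))  # both near 0
```

A perfect reconstruction drives all three losses to (near) zero; they differ in how they weight the mostly-empty background.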

SLIDE 16

FlowNet details

[Figure: layer-by-layer tensor shapes in B,C,L,H,W format (B,C×L×H×W when flattened), from the 1,1,32,32,32 input through the 1,1024 latent vector back to the 1,1,32,32,32 output]

SLIDE 17

Dimensionality reduction and object clustering

  • Consider three dimensionality reduction methods: t-SNE (neighborhood-preserving), MDS and Isomap (distance-preserving)
  • Consider three clustering methods: DBSCAN (density-based), k-means (partition-based), and agglomerative clustering (hierarchy-based)
  • Finally choose t-SNE + DBSCAN
  • Compare three distance measures: FlowNet feature Euclidean distance, streamline MCP distance, and streamline Hausdorff distance
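Of the compared distance measures, the streamline Hausdorff distance is easy to sketch as a brute-force NumPy computation over sampled polyline points (an illustrative version, not the paper's implementation):

```python
import numpy as np

def hausdorff(a, b):
    # Symmetric Hausdorff distance between two streamlines, each given
    # as an (n, 3) array of sampled points.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise
    # Largest nearest-neighbor distance, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

line_a = np.stack([np.linspace(0, 1, 50), np.zeros(50), np.zeros(50)], axis=1)
line_b = line_a + np.array([0.0, 0.5, 0.0])  # the same line, shifted in y
print(hausdorff(line_a, line_b))  # 0.5
```

The FlowNet feature Euclidean distance, by contrast, is just the distance between two 1024-D latent vectors, which is what makes clustering in the learned space cheap.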

SLIDE 18

Parameter setting and performance

SLIDE 19

Qualitative evaluation

[Figure panels: training set only | test set only | training set + test set]

SLIDE 20

Quantitative evaluation

  • Use representative streamlines to reconstruct the vector field using gradient vector flow (GVF)

SLIDE 21

FlowNet results

SLIDE 22

FlowNet results

SLIDE 23

FlowNet results

SLIDE 24

TSR-TVD

Jun Han and Chaoli Wang. TSR-TVD: Temporal Super-Resolution for Time-Varying Data Analysis and Visualization. IEEE Transactions on Visualization and Computer Graphics, 26(1):205-215, 2020.

[Figure: training uses the full sequence V1 … Vn; testing interpolates the intermediate volumes Vm+1 … Vm+s-1 between two sampled time steps Vm and Vm+s]
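The linear interpolation baseline that TSR-TVD is compared against (recovering Vm+1 … Vm+s-1 between two sampled time steps) is simple per-voxel interpolation; `lerp_sequence` is a hypothetical helper name:

```python
import numpy as np

def lerp_sequence(v_m, v_ms, s):
    # Linearly interpolate the s-1 intermediate volumes between the two
    # sampled time steps V_m and V_{m+s} (the LERP baseline).
    return [(1 - i / s) * v_m + (i / s) * v_ms for i in range(1, s)]

v_m = np.zeros((4, 4, 4))
v_ms = np.full((4, 4, 4), 4.0)
mids = lerp_sequence(v_m, v_ms, 4)  # V_{m+1}, V_{m+2}, V_{m+3}
print([float(v[0, 0, 0]) for v in mids])  # [1.0, 2.0, 3.0]
```

This baseline blurs any feature that moves between the two end volumes, which is exactly what the learned approach is meant to avoid.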

SLIDE 25

Outline of approach

  • Goal

– Generation of temporal super-resolution (TSR) of time-varying data (TVD)

  • Key idea

– Leverage a recurrent generative network, a combination of a recurrent neural network (RNN) and a generative adversarial network (GAN), to generate temporal high-resolution volume sequences

SLIDE 26

TSR-TVD architecture

SLIDE 27

Generator and discriminator

  • Generator G consists of the predicting and blending modules

– The predicting module produces a forward prediction VF through Vi and a backward prediction VB through Vi+k
– The blending module takes Vi, Vi+k, VF, and VB that share the same time step as input and outputs the synthesized volume

  • Discriminator D distinguishes the synthesized volume from the ground-truth volume

SLIDE 28

Architecture details

[Figure: predicting module in G | network architecture of D | residual block | skip connection]

SLIDE 29

Loss function

  • Adversarial loss that trains G with the goal of fooling D
  • Volumetric loss that mixes the adversarial loss with a more traditional loss, such as L2 distance
  • Feature loss that constrains G to produce natural statistics at multiple scales
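A minimal sketch of how the volumetric (L2) term might be mixed with the adversarial term. The non-saturating adversarial form and the weight `lam` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def generator_loss(d_score_fake, synthesized, ground_truth, lam=100.0):
    # Adversarial term: G is rewarded when D scores the synthesized
    # volume close to 1 (i.e., D is fooled). Non-saturating form assumed.
    adv = -np.log(np.clip(d_score_fake, 1e-7, 1.0))
    # Volumetric term: a traditional L2 distance to the ground truth.
    vol = np.mean((synthesized - ground_truth) ** 2)
    # lam is an illustrative weight, not the paper's setting.
    return adv + lam * vol

rng = np.random.default_rng(0)
gt = rng.random((8, 8, 8))
loss_perfect = generator_loss(0.999, gt, gt)    # D nearly fooled, exact volume
loss_bad = generator_loss(0.999, gt + 1.0, gt)  # same D score, wrong volume
print(loss_perfect < loss_bad)  # True
```

The weighting keeps G from "winning" against D with volumes that fool the discriminator but drift from the data.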

SLIDE 30

Quantitative evaluation

  • PSNR at the data level, SSIM at the image level, and IS at the feature level
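Of these, the data-level measure PSNR can be sketched directly (assuming volumes normalized to [0, 1], so the peak value is 1):

```python
import numpy as np

def psnr(ground_truth, prediction, peak=1.0):
    # Peak signal-to-noise ratio in dB; higher means the synthesized
    # volume is closer to the ground truth at the data level.
    mse = np.mean((ground_truth - prediction) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

gt = np.zeros((16, 16, 16))
noisy = gt + 0.1  # constant 0.1 error at every voxel
print(psnr(gt, noisy))  # 20.0 dB
```

SSIM and IS need rendered images and a feature extractor respectively, so they are not reproduced here.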

SLIDE 31

Qualitative analysis (solar plume)

[Figure panels: linear interpolation | TSR-TVD]

SLIDE 32

Qualitative analysis (solar plume)

[Figure panels: RNN | TSR-TVD]

SLIDE 33

Qualitative analysis (solar plume)

[Figure panels: CNN | TSR-TVD]

SLIDE 34

Qualitative analysis (combustion, MF)

[Figure panels: linear interpolation | ground truth | TSR-TVD]

SLIDE 35

Qualitative analysis (combustion, MF)

[Figure panels: linear interpolation | ground truth | TSR-TVD]

SLIDE 36

Qualitative analysis (combustion, MF)

[Figure panels: linear interpolation | ground truth | TSR-TVD]

SLIDE 37

Qualitative analysis (combustion, MF)

[Figure panels: linear interpolation | ground truth | TSR-TVD]

SLIDE 38

Qualitative analysis (combustion, MF)

[Figure panels: linear interpolation | ground truth | TSR-TVD]

SLIDE 39

Qualitative analysis (combustion, MF → HR)

[Figure panels: linear interpolation | ground truth | TSR-TVD]

SLIDE 40

SLIDE 41

Qualitative analysis (supernova, entropy, v=0.176)

[Figure panels: linear interpolation | ground truth | TSR-TVD]

SLIDE 42

Qualitative analysis (combustion, HR, v=0.569)

[Figure panels: linear interpolation | ground truth | TSR-TVD]

SLIDE 43

SLIDE 44

Future research directions

SLIDE 45

Representation learning for volumes

William P. Porter, Yunhao Xing, Blaise R. von Ohlen, Jun Han, and Chaoli Wang. A Deep Learning Approach to Selecting Representative Time Steps for Time-Varying Multivariate Data. In Proceedings of IEEE VIS Conference (Short Papers), pages 131-135, 2019.

SLIDE 46

From voxel to graph representation

[Figure panels: FlowNet | SurfNet]

SLIDE 47

Other super-resolution works

SSR-TVD, V2V, SSR-VFD, TSR-VFD

SLIDE 48

Key concerns

  • Training time

– May take hours to a few days on a single GPU

  • Synthesized details

– Largely avoid fake details by using an observation-driven instead of a noise-driven GAN

  • Ground truth

– Possible to generate super-resolution w/o the presence of the original high-resolution data

  • Model generalization

– Could apply the trained model to different sequences or ensemble runs of the same or similar simulations

SLIDE 49

Emerging directions in AI+VIS

  • VIS for AI

– Interpreting or explaining the inner workings of neural nets
– Network model debugging, improvement, comparison, and selection
– Teaching and learning deep learning concepts

  • AI for VIS

– Representation learning for clustering and selection
– Data generation and augmentation
– Replacing the traditional visualization pipeline
– Simulation parameter space exploration
– Parallel and in situ workflow optimization
– Physics-informed deep learning

Fred Hohman, Minsuk Kahng, Robert Pienta, and Duen Horng Chau. Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers. IEEE Transactions on Visualization and Computer Graphics, 25(8):2674-2693, 2019.

SLIDE 50

Acknowledgements

  • Team members

– Graduate students: Jun Han, Hao Zheng, Martin Imre
– Postdoc: Jun Tao (Sun Yat-sen Univ.)
– Undergraduate students: William Porter, Blaise von Ohlen
– Exchange students: Yunhao Xing (Columbia), Yihong Ma (Notre Dame)
– iSURE students: Li Guo (CMU), Shaojie Ye (UW-Madison)

  • Collaborators

– Danny Chen (Notre Dame), Jian-Xun Wang (Notre Dame), Hanqi Guo (ANL), Tom Peterka (ANL), Choong-Seock Chang (PPPL)

  • Funding

– NSF IIS-1455886, CNS-1629914, DUE-1833129, IIS-1955395
– NVIDIA GPU Grant Program

SLIDE 51

chaoli.wang@nd.edu

Thank you!