Learning on manifolds and graphs with intrinsic CNNs

Michael Bronstein

University of Lugano (Switzerland) · Tel Aviv University (Israel) · Intel Corporation (Israel)

3DDL NIPS Workshop, Barcelona, 9 December 2016


2005: $100K → 2010: $100 → 2014: $20


(Acquired by Intel in 2012)


Applications

Markerless motion capture · Gesture control


Basic problems: shape similarity and correspondence

Isometric · Non-isometric · Partial · Different representation


Task-specific features

Correspondence · Similarity · ...


Deep learning in computer vision

(plot: error % on the ImageNet ILSVRC Challenge, 2010–2016, marking the "deep learning era" in vision; by 2016 the error is down to 2.9%)


Deep learning in computer graphics

Volumetric¹ · Single view based² · Multiple view based³

¹Wu et al. 2015; ²Wei et al. 2016; ³Su et al. 2015


Extrinsic vs Intrinsic CNNs

Extrinsic Intrinsic


What is convolution on manifolds?


Convolution

Euclidean, spatial domain: $(f \star g)(x) = \int_{-\pi}^{\pi} f(\xi)\, g(x - \xi)\, d\xi$

Euclidean, spectral domain: $\widehat{(f \star g)}(\omega) = \hat{f}(\omega) \cdot \hat{g}(\omega)$ ('Convolution Theorem')

Non-Euclidean, spatial domain: ?

Non-Euclidean, spectral domain: ?


Fourier analysis (Euclidean spaces)

A function $f : [-\pi, \pi] \to \mathbb{R}$ can be written as a Fourier series

$f(x) = \sum_{k} \hat{f}_k \, e^{ikx}, \qquad \hat{f}_k = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(\xi)\, e^{-ik\xi}\, d\xi = \langle f, e^{ikx} \rangle_{L^2([-\pi,\pi])}$

(figure: $f$ decomposed into harmonics weighted by $\hat f_1, \hat f_2, \hat f_3, \dots$)

Fourier basis = Laplacian eigenfunctions: $\Delta e^{ikx} = k^2 e^{ikx}$

We define the Laplacian as the positive semi-definite operator $\Delta = -\frac{d^2}{dx^2}$


Fourier analysis (non-Euclidean spaces)

A function $f : X \to \mathbb{R}$ can be written as a Fourier series

$f(x) = \sum_{k \ge 0} \hat{f}_k\, \phi_k(x), \qquad \hat{f}_k = \int_X f(\xi)\, \phi_k(\xi)\, d\xi = \langle f, \phi_k \rangle_{L^2(X)}$

(figure: $f = \hat f_1 \phi_1 + \hat f_2 \phi_2 + \hat f_3 \phi_3 + \dots$)

Fourier basis = Laplacian eigenfunctions: $\Delta \phi_k(x) = \lambda_k \phi_k(x)$
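As a concrete discrete analogue, the construction can be sketched in a few lines of NumPy: on a graph, the eigenvectors of the Laplacian matrix play the role of the Fourier basis φ_k. The 5-node graph and the signal below are arbitrary illustrations, not from the slides.

```python
import numpy as np

# Discrete analogue of the non-Euclidean Fourier series: on a graph,
# the Laplacian eigenvectors phi_k serve as the Fourier basis.
W = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)   # adjacency of a 5-cycle
L = np.diag(W.sum(axis=1)) - W                 # combinatorial Laplacian

lam, Phi = np.linalg.eigh(L)                   # eigenvalues / eigenvectors

f = np.random.default_rng(0).normal(size=5)    # a signal on the graph
f_hat = Phi.T @ f                              # analysis: f_hat_k = <f, phi_k>
f_rec = Phi @ f_hat                            # synthesis: sum_k f_hat_k phi_k

assert np.allclose(f_rec, f)                   # the series reconstructs f exactly
```

Because the eigenbasis is orthonormal, analysis followed by synthesis is the identity, exactly as the continuous expansion above suggests.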


Laplacian operator

Laplacian $\Delta : L^2(X) \to L^2(X)$, $\Delta f = -\mathrm{div}(\nabla f)$: "difference between $f(x)$ and the average value of $f$ around $x$"

  • Intrinsic (expressed solely in terms of the Riemannian metric)
  • Isometry-invariant
  • Self-adjoint: $\langle \Delta f, g \rangle_{L^2(X)} = \langle f, \Delta g \rangle_{L^2(X)}$ ⇒ orthogonal eigenfunctions
  • Positive semidefinite ⇒ non-negative eigenvalues
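These properties carry over to the discrete (graph) Laplacian L = D − W and can be checked numerically; the random 6-node weighted graph below is purely illustrative.

```python
import numpy as np

# Sanity-check the graph Laplacian L = D - W for the properties listed
# above: self-adjointness, orthogonal eigenfunctions, non-negative
# eigenvalues. The random 6-node graph is illustrative only.
rng = np.random.default_rng(1)
W = rng.random((6, 6))
W = np.triu(W, 1)
W = W + W.T                          # symmetric weighted adjacency
L = np.diag(W.sum(axis=1)) - W      # combinatorial Laplacian

assert np.allclose(L, L.T)          # self-adjoint: <Lf, g> = <f, Lg>

lam, Phi = np.linalg.eigh(L)
assert np.allclose(Phi.T @ Phi, np.eye(6))   # orthogonal eigenfunctions
assert lam.min() > -1e-10                    # positive semidefinite
assert np.allclose(L @ np.ones(6), 0)        # constant vector is in the kernel
```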


Convolution

Euclidean, spatial domain: $(f \star g)(x) = \int_{-\pi}^{\pi} f(\xi)\, g(x - \xi)\, d\xi$

Euclidean, spectral domain: $\widehat{(f \star g)}(\omega) = \hat{f}(\omega) \cdot \hat{g}(\omega)$ ('Convolution Theorem')

Non-Euclidean, spatial domain: ?

Non-Euclidean, spectral domain: $\widehat{(f \star g)}_k = \langle f, \phi_k \rangle_{L^2(X)} \, \langle g, \phi_k \rangle_{L^2(X)}$
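This spectral definition is straightforward to sketch on a graph: transform the signal into the Laplacian eigenbasis, multiply its spectrum elementwise by the filter's spectral coefficients, and transform back. The small adjacency matrix and signals below are illustrative assumptions.

```python
import numpy as np

# Spectral convolution on a graph via the Convolution Theorem:
# filter in the Laplacian eigenbasis, then synthesize back.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)       # illustrative graph
L = np.diag(W.sum(axis=1)) - W
lam, Phi = np.linalg.eigh(L)

def spectral_conv(f, g_hat):
    """Filter f with spectral coefficients g_hat: (f*g)_k = f_hat_k * g_hat_k."""
    return Phi @ (g_hat * (Phi.T @ f))

f = np.array([1.0, -2.0, 0.5, 3.0])
# An all-pass filter (g_hat = 1) returns f unchanged ...
assert np.allclose(spectral_conv(f, np.ones(4)), f)
# ... and g_hat = lam reproduces the Laplacian itself: Phi diag(lam) Phi^T = L
assert np.allclose(spectral_conv(f, lam), L @ f)
```

The second assertion illustrates why spectral filters are often parametrized as functions of the eigenvalues: any such filter commutes with the Laplacian.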

Spectral convolution

(figure: function $f$ → filtered function $\tilde f$; the same function and the same filter on another shape give a different result)

Filter is basis-dependent ⇒ does not generalize across domains!

Henaff, Bruna, LeCun 2015; Defferrard, Bresson, Vandergheynst 2016


Convolution in the spatial domain

(figure: a sliding template on a Euclidean grid vs. on a surface)

Euclidean vs. non-Euclidean:

  • No canonical global system of coordinates
  • No grid structure (no regular memory access)
  • No shift-invariance (patch operator is position-dependent)


Convolution

Euclidean, spatial domain: $(f \star g)(x) = \int_{-\pi}^{\pi} f(\xi)\, g(x - \xi)\, d\xi$

Euclidean, spectral domain: $\widehat{(f \star g)}(\omega) = \hat{f}(\omega) \cdot \hat{g}(\omega)$ ('Convolution Theorem')

Non-Euclidean, spatial domain: $(f \star g)(x) = \int (D(x)f)(u)\, g(u)\, du$

Non-Euclidean, spectral domain: $\widehat{(f \star g)}_k = \langle f, \phi_k \rangle_{L^2(X)} \, \langle g, \phi_k \rangle_{L^2(X)}$

Patch operator

$(f \star g)(x) = \int (D(x)f)(u)\, g(u)\, du$

Masci, Boscaini, B, Vandergheynst 2015; Boscaini, Masci, Rodolà, B 2016


Heat diffusion on manifolds

$f_t = -c\,\Delta f$

Newton's law of cooling: the rate of change of the temperature of an object is proportional to the difference between its own temperature and the temperature of its surroundings; $c$ [m²/sec] = thermal diffusivity constant


Heat diffusion on manifolds

$f_t(x, t) = -\Delta f(x, t), \qquad f(x, 0) = f_0(x)$

$f(x,t)$ = amount of heat at point $x$ at time $t$; $f_0(x)$ = initial heat distribution

Solution of the heat equation, expressed through the heat operator:

$f(x,t) = e^{-t\Delta} f_0(x) = \sum_{k \ge 0} \langle f_0, \phi_k \rangle_{L^2(X)}\, e^{-t\lambda_k} \phi_k(x) = \int_X f_0(\xi) \underbrace{\sum_{k \ge 0} e^{-t\lambda_k} \phi_k(x)\, \phi_k(\xi)}_{\text{heat kernel } h_t(x,\xi)} \, d\xi$
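The spectral expression for the heat kernel can be checked numerically in the discrete setting: the kernel matrix assembled from the eigendecomposition coincides with the matrix exponential $e^{-tL}$. The toy path graph and time value below are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Discrete heat kernel: h_t = sum_k exp(-t lam_k) phi_k phi_k^T,
# which equals the heat operator exp(-t L) as a matrix.
n, t = 5, 0.7
W = np.diag(np.ones(n - 1), 1)
W = W + W.T                               # path-graph adjacency
L = np.diag(W.sum(axis=1)) - W
lam, Phi = np.linalg.eigh(L)

H = Phi @ np.diag(np.exp(-t * lam)) @ Phi.T   # heat kernel matrix h_t(x, xi)

assert np.allclose(H, expm(-t * L))           # matches the heat operator e^{-tL}
# Heat is conserved: a constant distribution stays constant (lambda_0 = 0)
assert np.allclose(H @ np.ones(n), np.ones(n))
```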

Heat kernels


Homogeneous diffusion

$f_t(x) = -\mathrm{div}(c\,\nabla f(x))$

$c$ = thermal diffusivity constant, describing the heat-conduction properties of the material (diffusion speed is equal everywhere)


Anisotropic diffusion

$f_t(x) = -\mathrm{div}(A(x)\,\nabla f(x))$

$A(x)$ = heat-conductivity tensor, describing the heat-conduction properties of the material (diffusion speed is position- and direction-dependent)


Anisotropic diffusion

(figure: isotropic vs. anisotropic diffusion)


Anisotropic diffusion on manifolds

$f_t(x) = -\mathrm{div}\Big( \underbrace{R_\theta \begin{pmatrix} \alpha & 0 \\ 0 & 1 \end{pmatrix} R_\theta^\top}_{D_{\alpha\theta}(x)} \, \nabla f(x) \Big)$

Anisotropic Laplacian: $\Delta_{\alpha\theta} f(x) = -\mathrm{div}\,(D_{\alpha\theta}(x)\, \nabla f(x))$

$\theta$ = orientation w.r.t. the maximum-curvature direction ($u_{\max}$, $u_{\min}$); $\alpha$ = 'elongation'

Andreux et al. 2014; Boscaini, Masci, Rodolà, B, Cremers 2015


Anisotropic heat kernels

$h_{\alpha\theta t}(x, \xi) = \sum_{k \ge 0} e^{-t\lambda_{\alpha\theta k}}\, \phi_{\alpha\theta k}(x)\, \phi_{\alpha\theta k}(\xi)$

(figure: kernels varying in scale $t$, orientation $\theta$, elongation $\alpha$)

Boscaini, Masci, Rodolà, B, Cremers 2015


Intrinsic patch operator

Given a function $f \in L^2(X)$, the patch operator

$(D(x)f)(\theta, t) = \int_X f(\xi)\, h_{\alpha\theta t}(x, \xi)\, d\xi$

produces a local representation of $f$ around the point $x$; $\theta$ = 'angular coordinate', $t$ = 'radial coordinate'.

Intrinsic convolution: $(f \star g)(x) = \sum_{\theta, t} (D(x)f)(\theta, t)\, g(\theta, t)$

Masci, Boscaini, B, Vandergheynst 2015; Boscaini, Masci, Rodolà, B 2016
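A schematic of the resulting convolution, assuming the patch-operator weights have already been precomputed; here random normalized weights stand in for the anisotropic heat kernels, and all sizes are illustrative, not from the papers.

```python
import numpy as np

# Schematic intrinsic convolution: D[x, theta, t, xi] holds precomputed
# patch-operator weights (random stand-ins for h_{alpha,theta,t}(x, xi)).
rng = np.random.default_rng(0)
n, n_theta, n_t = 20, 8, 4           # illustrative sizes

D = rng.random((n, n_theta, n_t, n))
D /= D.sum(axis=-1, keepdims=True)   # each bin averages f over the surface

def intrinsic_conv(f, g):
    """(f * g)(x) = sum_{theta,t} (D(x)f)(theta, t) g(theta, t)."""
    patches = np.einsum('xtri,i->xtr', D, f)   # (D(x)f)(theta, t)
    return np.einsum('xtr,tr->x', patches, g)  # correlate with the filter

f = rng.normal(size=n)               # input function on the shape
g = rng.normal(size=(n_theta, n_t))  # learnable filter in patch coordinates
out = intrinsic_conv(f, g)
assert out.shape == (n,)
```

The operation is linear in both the function and the filter, which is what makes it usable as a convolutional layer.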


Toy Anisotropic CNN architecture

(diagram: $M$-dimensional input layer $f^{\mathrm{in}}_1, \dots, f^{\mathrm{in}}_M$ → linear + ReLU layer → intrinsic convolutional layer with $Q$ filter banks of $P$ filters each → $Q$-dimensional output layer $f^{\mathrm{out}}_1, \dots, f^{\mathrm{out}}_Q$)

Masci, Boscaini, B, Vandergheynst 2015; Boscaini, Masci, Rodolà, B 2016


Learning shape correspondence

(figure: point $x$ on the query shape $X$ mapped to $y^*(x)$ on the reference shape $Y$)

Correspondence = labeling problem: the ACNN output $f_\Theta(x)$ is a probability distribution on the reference $Y$. Minimize the logistic regression cost w.r.t. the ACNN parameters $\Theta$:

$\ell(\Theta) = -\sum_{(x,\, y^*(x)) \in T} \langle \delta_{y^*(x)}, \log f_\Theta(x) \rangle_{L^2(Y)}$

Rodolà et al. 2014; Masci, Boscaini, B, Vandergheynst 2015; Boscaini, Masci, Rodolà, B 2016
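The cost is ordinary multinomial logistic regression (softmax cross-entropy against a delta at the ground-truth point). A minimal sketch, with random stand-ins for the network outputs and ground-truth labels:

```python
import numpy as np

# The correspondence loss: the network outputs a distribution over
# reference-shape points; the cost is the cross-entropy against the
# delta at the ground-truth point y*(x). Sizes are illustrative.
rng = np.random.default_rng(0)
n_query, n_ref = 10, 50

logits = rng.normal(size=(n_query, n_ref))           # raw network outputs
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)            # softmax: f_Theta(x)

y_star = rng.integers(0, n_ref, size=n_query)        # ground-truth labels
# loss = -sum_x <delta_{y*(x)}, log f_Theta(x)> = -sum_x log probs[x, y*(x)]
loss = -np.log(probs[np.arange(n_query), y_star]).sum()

assert loss > 0
assert np.allclose(probs.sum(axis=1), 1.0)
```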


Correspondence performance

(plot: % correspondences vs. % geodesic error, comparing BIM, RF, ADD, GCNN, ACNN)

Correspondence evaluated using the asymmetric Princeton benchmark (training and testing: disjoint subsets of FAUST). Methods: Kim et al. 2011 (BIM); Boscaini, Masci, Melzi, B, Castellani, Vandergheynst 2015 (LSCNN); Rodolà et al. 2014 (RF); Boscaini, Masci, Rodolà, B, Cremers 2015 (ADD); Masci, Boscaini, B, Vandergheynst 2015 (GCNN); Boscaini, Masci, Rodolà, B 2016 (ACNN); data: Bogo et al. 2014 (FAUST); benchmark: Kim et al. 2011


Correspondence error: Blended Intrinsic Map

(error map: pointwise geodesic error, in % of geodesic diameter; scale up to 7.5%) Kim, Lipman, Funkhouser 2011


Correspondence error: GCNN

(error map: pointwise geodesic error, in % of geodesic diameter; scale up to 7.5%) Masci, Boscaini, B, Vandergheynst 2015


Correspondence error: ACNN

(error map: pointwise geodesic error, in % of geodesic diameter; scale up to 7.5%) Boscaini, Masci, Rodolà, Bronstein 2016


Partial correspondence with ACNN

(figures: correspondence and correspondence error, error scale 0.0–0.1)

Boscaini, Masci, Rodolà, B 2016



Partial correspondence performance

(plots: % correspondences vs. % geodesic error for the cuts and holes settings, comparing RF, PFM, ACNN)

Methods: Rodolà et al. 2014 (RF); Rodolà et al. 2015 (PFM); Boscaini, Masci, Rodolà, B 2016 (ACNN); data: Cosmo et al. 2016 (SHREC); benchmark: Kim et al. 2011


Mixture Model Network (MoNet)

Local geodesic coordinates: $u(x, y) = (\rho(x, y), \theta(x, y))$

Gaussian weight functions with learnable covariance $\Sigma_k$ and mean $\mu_k$:

$w_k(u) = \exp\big( -\tfrac{1}{2} (u - \mu_k)^\top \Sigma_k^{-1} (u - \mu_k) \big)$

Patch operator: $(D(x)f)_k = \int_X w_k(u(x, y))\, f(y)\, dy$

Spatial convolution: $(f \star g)(x) = \sum_k (D(x)f)_k \cdot g_k$

Monti, Boscaini, Masci, Rodolà, Svoboda, B 2016
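A sketch of the MoNet patch operator on a toy point set. The pseudo-coordinates (plain coordinate offsets rather than geodesic polar coordinates), the Gaussian means, and the diagonal covariances below are illustrative stand-ins; in the actual model the means and covariances are learned.

```python
import numpy as np

# MoNet-style patch operator: Gaussian weights w_k on pseudo-coordinates
# u(x, y), then (D(x)f)_k = sum_y w_k(u(x,y)) f(y) and
# (f * g)(x) = sum_k (D(x)f)_k g_k. All parameters are illustrative.
rng = np.random.default_rng(0)
n, K = 30, 4                            # points, Gaussian kernels

pts = rng.random((n, 2))
u = pts[None, :, :] - pts[:, None, :]   # pseudo-coordinates u(x, y), (n, n, 2)

mu = rng.normal(scale=0.3, size=(K, 2))  # means (learnable in the model)
sigma2 = np.full((K, 2), 0.1)            # diagonal covariances (learnable)

def patch_operator(f):
    """(D(x)f)_k for all x: weighted sums of f under each Gaussian."""
    d = u[None, :, :, :] - mu[:, None, None, :]                   # (K, n, n, 2)
    w = np.exp(-0.5 * (d**2 / sigma2[:, None, None, :]).sum(-1))  # w_k(u(x,y))
    return np.einsum('kxy,y->xk', w, f)                           # (n, K)

f = rng.normal(size=n)
g = rng.normal(size=K)                  # filter coefficients g_k
out = patch_operator(f) @ g             # (f * g)(x) = sum_k (D(x)f)_k g_k
assert out.shape == (n,)
```

With fixed hand-designed weights this construction recovers GCNN- or ACNN-style patch operators; letting the Gaussians move and stretch is what MoNet learns.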


Patch operator weight functions

(figure: weight functions of GCNN, ACNN, and MoNet in local polar coordinates)

Masci, Boscaini, B, Vandergheynst 2015 (GCNN); Boscaini, Masci, Rodolà, B 2016 (ACNN); Monti, Boscaini, Masci, Rodolà, Svoboda, B 2016 (MoNet)


Correspondence performance

(plot: % correspondences vs. geodesic error (in cm and % of diameter), comparing BIM, RF, ADD, GCNN, ACNN, MoNet)

Correspondence evaluated using the asymmetric Princeton benchmark (training and testing: disjoint subsets of FAUST). Methods: Kim et al. 2011 (BIM); Rodolà et al. 2014 (RF); Boscaini, Masci, Rodolà, B, Cremers 2015 (ADD); Masci, Boscaini, B, Vandergheynst 2015 (GCNN); Boscaini, Masci, Rodolà, B 2016 (ACNN); Monti, Boscaini, Masci, Rodolà, Svoboda, B 2016 (MoNet); data: Bogo et al. 2014 (FAUST); benchmark: Kim et al. 2011


Correspondence error: MoNet

(error map: pointwise geodesic error, in % of geodesic diameter; scale up to 7.5%) Monti, Boscaini, Masci, Rodolà, Svoboda, B 2016


MoNet correspondence visualization

(figure: texture transferred from the reference to query shapes) Monti, Boscaini, Masci, Rodolà, Svoboda, B 2016


Correspondence with MoNet: range images

(error map: pointwise geodesic error, in % of geodesic diameter; scale up to 7.5%) Monti, Boscaini, Masci, Rodolà, Svoboda, B 2016


Correspondence with MoNet: range images

(figure: reference and correspondence visualization; similar colors encode corresponding points) Training: FAUST / Testing: FAUST. Monti, Boscaini, Masci, Rodolà, Svoboda, B 2016


Correspondence with MoNet: range images

(figure: reference and correspondence visualization; similar colors encode corresponding points) Training: FAUST / Testing: SCAPE+TOSCA. Monti, Boscaini, Masci, Rodolà, Svoboda, B 2016


Summary

  • Construction of generalizable intrinsic convolutional neural networks
  • Learnable, task-specific, intrinsic features
  • State-of-the-art performance in a variety of 3D shape analysis applications
  • Beyond shapes: graphs, social networks, etc.


Learning on graphs: MNIST classification

(figures: MNIST digits represented on a regular grid and on superpixel graphs)

Monti, Boscaini, Masci, Rodolà, Svoboda, B 2016



Learning on graphs: MNIST classification

Dataset            LeNet5¹    Spectral CNN²    MoNet³
Full grid*         99.33%     99.14%           99.19%
1/4 grid*          98.59%     97.51%           98.16%
300 superpixels    —          88.05%           97.30%
150 superpixels    —          80.94%           96.75%
75 superpixels     —          75.62%           91.11%

Classification accuracy of different methods on the MNIST dataset. *All images have the same graph. ¹LeCun et al. 1998; ²Defferrard, Bresson, Vandergheynst 2016; ³Monti, Boscaini, Masci, Rodolà, Svoboda, B 2016


Learning on graphs: citation networks

Figure: Monti, Boscaini, Masci, Rodolà, Svoboda, B 2016; data: Sen et al. 2008


Learning on graphs: citation networks

Method Cora1 PubMed2 Manifold Regularization3 59.5% 70.7% Semidefinite Embedding4 59.0% 71.1% Label Propagation5 68.0% 63.0% DeepWalk6 67.2% 65.3% Planetoid7 75.7% 77.2% Graph Convolutional Net8 81.59±0.42% 78.72±0.25% MoNet9 81.69±0.48% 78.81±0.44%

Classification accuracy of different methods on citation network datasets

Data: 1,2Sen et al. 2008; methods: 3Belkin et al. 2006; 4Weston et al. 2012; 5Zhu et

  • al. 2003; 6Perozzi et al. 2014; 7Yang et al. 2016; 8Kipf, Welling 2016; 9Monti,

Boscaini, Masci, Rodol` a, Svoboda, B 2016


D. Boscaini · J. Masci · E. Rodolà · J. Svoboda · F. Monti

Supported by (sponsor logos)

Thank you!