SLIDE 1

Flat Metric Minimization with Applications in Generative Modeling

Thomas Möllenhoff

Daniel Cremers

SLIDES 2–7

Motivation

Latent concepts often induce an orientation of the data. Tangent vectors to the “data manifold”:

▸ Stroke thickness or shear of an MNIST digit (X ⊂ R28⋅28).
▸ Camera position, lighting/material in a 3D scene.
▸ Arrow of time (videos, time-series data, ...).

Contributions:

▸ We propose the novel perspective of representing oriented data with k-currents from geometric measure theory.
▸ Using this viewpoint within the context of GANs, we learn a generative model which behaves equivariantly to specified tangent vectors.

1/12 Simard et al. 1992, 1998; Rifai et al. 2011

SLIDES 8–11

An invitation to geometric measure theory (GMT)

▸ Differential geometry, generalized through measure theory to deal with surfaces that are not necessarily smooth.
▸ k-currents ≈ generalized (possibly quite irregular) oriented k-dimensional surfaces in d-dimensional space.
▸ The class of currents we consider forms a linear space; it includes oriented k-dimensional surfaces as elements.

2/12 Morgan 2016, Krantz & Parks 2008, Federer 1969

SLIDES 12–15

Generalizing Wasserstein GANs to k-currents

[Diagram: the generator gθ ∶ Z → X pushes the latent current S forward to gθ♯S, which is compared with the data current T via Fλ(gθ♯S, T).]

▸ T and S are 1-currents representing the data and the (partially oriented) latents.
▸ The pushforward operator gθ♯ yields the transformed current gθ♯S.
▸ We propose to use the flat metric Fλ as a distance between gθ♯S and T.
▸ For k = 0, the flat metric is closely related to the Wasserstein-1 distance, and positive 0-currents with unit mass are probability distributions.

3/12 Inspired by the optimal transport perspective on GANs: Bottou et al. 2017, Genevay et al. 2017

SLIDES 16–18

k-dimensional orientation in d-dimensional space

▸ Simple k-vectors v = v1 ∧ ⋯ ∧ vk ∈ ΛkRd describe oriented k-dimensional subspaces together with an area in Rd.

[Figure: parallelograms illustrating v1 ∧ v2, −v1 ∧ −v2, (1/2)v1 ∧ 2v2, and v2 ∧ v1.]

▸ The set of simple k-vectors forms a nonconvex cone in the vector space ΛkRd.
▸ For v = v1 ∧ ⋯ ∧ vk and w = w1 ∧ ⋯ ∧ wk, with the factors collected as the columns of V and W: ⟨v, w⟩ = det(V⊺W) and ∣v∣ = √⟨v, v⟩ (a small sketch follows this slide).

4/12 Graßmann 1844
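The pairing of simple k-vectors is just a k × k determinant, which is also how the implementation later pairs forms with tangents. A minimal NumPy sketch (an illustrative example, not the paper's code):

```python
# Pairing of simple k-vectors: <v1 ∧ ... ∧ vk, w1 ∧ ... ∧ wk> = det(V^T W).
import numpy as np

def inner(V: np.ndarray, W: np.ndarray) -> float:
    """V, W are d x k matrices whose columns are the factors v_i, w_i."""
    return float(np.linalg.det(V.T @ W))

def norm(V: np.ndarray) -> float:
    """|v| = sqrt(<v, v>): k-volume of the parallelotope spanned by the v_i."""
    return float(np.sqrt(inner(V, V)))

# e1 ∧ e2 in R^3: unit area; swapping factors reverses the orientation.
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
print(norm(V))               # 1.0
print(inner(V, V[:, ::-1]))  # -1.0, since v2 ∧ v1 = -(v1 ∧ v2)
```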

SLIDES 19–23

Oriented manifolds, differential forms and currents

▸ Orientation of a k-dimensional manifold M: a continuous simple k-vector map τM ∶ M → ΛkRd with ∣τM(z)∣ = 1 and TzM “spanned” by τM(z).
▸ Differential form: a k-covector field ω ∶ Rd → ΛkRd.
▸ Integration of a k-form over an oriented k-dimensional manifold: ∫M ω ∶= ∫M ⟨ω(z), τM(z)⟩ dHk(z) = M(ω) (a numerical check follows this slide).
▸ Here M acts as a k-current. In general, k-currents are continuous linear functionals acting on compactly supported smooth k-forms.

[Figure: examples of a 2-current, a discrete 2-current, and a discrete 0-current.]

5/12
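To make the integration formula concrete, here is a small numerical check (an illustrative example, not from the talk): the positively oriented unit circle, viewed as a 1-current, applied to the 1-form ω(x, y) = (−y, x) gives 2π.

```python
# Numerically evaluate M(ω) = ∫_M <ω(z), τ_M(z)> dH^1(z) for M the
# positively oriented unit circle and ω(x, y) = (-y, x). Exact value: 2π.
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 10000, endpoint=False)
z = np.stack([np.cos(t), np.sin(t)], axis=1)     # points z on M
tau = np.stack([-np.sin(t), np.cos(t)], axis=1)  # unit tangent τ_M(z)
omega = np.stack([-z[:, 1], z[:, 0]], axis=1)    # covector field ω(z)

ds = 2.0 * np.pi / len(t)                        # arc-length element dH^1
print(np.einsum("ij,ij->i", omega, tau).sum() * ds)  # ≈ 6.2832
```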

SLIDES 24–28

Towards a distance between k-currents

▸ Mass of a k-current: M(T) = sup_{∥ω∥∗≤1} T(ω).
▸ The boundary operator ∂ maps a k-current to a (k − 1)-current: ∂T(ω) = T(dω).
▸ This mirrors Stokes’ theorem: ∫∂M ω = ∫M dω.
▸ Normal currents T ∈ Nk,X(Rd): finite mass and boundary mass, M(T) + M(∂T) < ∞.
▸ A geometric view on the Wasserstein-1 distance: W1(S, T) = min_{∂B=S−T} M(B). Example for S = δx, T = δy: B is the oriented segment from y to x, so that ∂B = δx − δy (worked out after this slide).

6/12
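The segment example can be checked directly from the definition ∂B(ω) = B(dω). A one-line derivation along the straight path γ(t) = y + t(x − y), for any smooth 0-form ω:

```latex
\[
  \partial B(\omega) \;=\; B(\mathrm{d}\omega)
  \;=\; \int_0^1 \big\langle \mathrm{d}\omega(\gamma(t)),\, \gamma'(t) \big\rangle \,\mathrm{d}t
  \;=\; \omega(x) - \omega(y)
  \;=\; (\delta_x - \delta_y)(\omega).
\]
```

So the oriented segment B from y to x indeed satisfies ∂B = δx − δy, which is exactly the feasible decomposition used in the W1 example above.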

SLIDES 29–30

The flat metric

Given two normal k-currents S, T ∈ Nk,X(Rd), the flat metric is defined as

Fλ(S, T) = min_{S−T=∂B+A} M(B) + λM(A) = sup_{∥ω∥∗≤λ, ∥dω∥∗≤1} (S − T)(ω).

[Figure: decomposition of S − T into a boundary part ∂B and a remainder A = S − T − ∂B.]

Federer & Fleming 1960: The flat metric metrizes the weak∗ convergence on normal currents with uniformly bounded mass and boundary mass. (A worked k = 0 example follows this slide.)

7/12 Whitney 1957, Federer & Fleming 1960
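For k = 0 the minimization can be read off by comparing the two natural decompositions of δx − δy (cf. the worked example on the poster below); the exact saturation constant depends on the mass convention, but the mechanism is:

```latex
\[
\begin{aligned}
  &\text{transport:}\quad B = \text{segment from } y \text{ to } x,\; A = 0
    &&\Rightarrow\; \mathbf{M}(B) + \lambda\,\mathbf{M}(A) = \lVert x - y \rVert,\\
  &\text{cancel:}\quad B = 0,\; A = \delta_x - \delta_y
    &&\Rightarrow\; \mathbf{M}(B) + \lambda\,\mathbf{M}(A) = \lambda\,\mathbf{M}(\delta_x - \delta_y).
\end{aligned}
\]
```

Taking the cheaper option, Fλ matches W1 for nearby points but saturates once they are far apart; the poster states Fλ(δx, δy) = min{λ, ∥x − y∥}.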

SLIDES 31–33

Flat metric minimization: our theoretical result

[Diagram: gθ ∶ Z → X, with Fλ(gθ♯S, T) comparing gθ♯S and T.]

min_{θ∈Θ} Fλ(gθ♯S, T)

Assumptions:
▸ Normal currents S ∈ Nk,Z(Rl), T ∈ Nk,X(Rd).
▸ g ∶ Z × Θ → X smooth in z with uniformly bounded derivative, locally Lipschitz in θ.
▸ Parameter space Θ is compact.

Proposition. The map θ ↦ Fλ(gθ♯S, T) is Lipschitz continuous.

8/12

SLIDES 34–43

FlatGAN formulation and implementation

[Diagram: gθ ∶ Z → X, with Fλ(gθ♯S, T) comparing gθ♯S and T.]

min_{θ∈Θ} Fλ(gθ♯S, T)
  = min_{θ∈Θ} sup_{∥ω∥∗≤λ, ∥dω∥∗≤1} (gθ♯S − T)(ω)
  = min_{θ∈Θ} sup_{∥ω∥∗≤λ, ∥dω∥∗≤1} gθ♯S(ω) − T(ω)
  = min_{θ∈Θ} sup_{∥ω∥∗≤λ, ∥dω∥∗≤1} S(gθ♯ω) − T(ω),  where gθ♯ω denotes the pullback of ω.

With the discretizations T = (1/N) Σ_{i=1}^{N} δxi ∧ (Ti,1 ∧ ⋯ ∧ Ti,k) and S = µ ∧ (e1 ∧ ⋯ ∧ ek), this becomes

min_{θ∈Θ} sup_{∥ω∥∗≤λ, ∥dω∥∗≤1} E_{z∼µ}[⟨ω ○ gθ, (∇zgθ ⋅ e1) ∧ ⋯ ∧ (∇zgθ ⋅ ek)⟩] − (1/N) Σ_{i=1}^{N} ⟨ω(xi), Ti,1 ∧ ⋯ ∧ Ti,k⟩.

▸ Implement ω ∶ Rd → ΛkRd and gθ ∶ Z → X with deep nets.
▸ Soft penalty for ∥ω(x)∥∗ ≤ λ, ∥dω(x)∥∗ ≤ 1 (similar to WGAN-GP).
▸ Compute ∇zgθ ⋅ ei with two calls to autograd (rop) and ⟨⋅, ⋅⟩ by a k × k determinant (see the sketch below).
▸ Train the model by alternating stochastic gradient ascent/descent.

9/12
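A condensed PyTorch sketch of the E_{z∼µ} term, with toy stand-in networks for gθ and ω (names and architectures here are illustrative assumptions; the authors' actual implementation is in the repository linked at the end). The k-form is represented by k vector fields, so the pairing with (∇zgθ⋅e1) ∧ ⋯ ∧ (∇zgθ⋅ek) is a batched k × k determinant, and each directional derivative is a Jacobian-vector product:

```python
# Illustrative sketch of the E_{z~mu} term of the FlatGAN objective;
# toy architectures, not the authors' (see github.com/moellenh/flatgan).
import torch

d, l, k = 8, 4, 2  # data dim, latent dim, number of oriented latent directions

g = torch.nn.Sequential(  # toy generator g_theta : R^l -> R^d
    torch.nn.Linear(l, 32), torch.nn.Tanh(), torch.nn.Linear(32, d))
omega = torch.nn.Sequential(  # k-form as k vector fields: R^d -> R^{d*k}
    torch.nn.Linear(d, 32), torch.nn.Tanh(), torch.nn.Linear(32, d * k))

def generator_term(z: torch.Tensor) -> torch.Tensor:
    """Monte-Carlo estimate of E_z <ω∘gθ, (∇zgθ·e1) ∧ ... ∧ (∇zgθ·ek)>."""
    n = z.shape[0]
    cols = []
    for i in range(k):  # ∇zgθ·e_i via a Jacobian-vector product ("rop")
        e = torch.zeros_like(z)
        e[:, i] = 1.0
        _, jv = torch.autograd.functional.jvp(g, z, e, create_graph=True)
        cols.append(jv)
    V = torch.stack(cols, dim=-1)      # (n, d, k): tangent k-vector factors
    W = omega(g(z)).reshape(n, d, k)   # (n, d, k): k-form at gθ(z)
    # <w1 ∧ ... ∧ wk, v1 ∧ ... ∧ vk> = det(W^T V), batched over n samples.
    return torch.linalg.det(W.transpose(1, 2) @ V).mean()

loss = generator_term(torch.randn(16, l))
loss.backward()  # gradients flow to both g and omega
```

The data term is then just the empirical average of ⟨ω(xi), Ti,1 ∧ ⋯ ∧ Ti,k⟩ computed with the same batched determinant, and the norm constraints on ω and dω are relaxed into soft penalties as in WGAN-GP.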

SLIDES 44–45

Illustration on a 2D toy data set (5 points on a circle)

[Figure: the data current T and generated samples at iterations 250, 500, 1000, and 2000, for k = 0 (points only) and k = 1 (oriented); a data-generation sketch follows below.]

10/12
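For reference, a hedged sketch of how such a toy dataset could be set up (the talk does not spell out the exact tangent choice; unit-circle tangents are an assumption here):

```python
# Five points on the unit circle; for k = 1 each x_i also carries a
# tangent vector T_i (assumed here to be the circle's unit tangent).
import numpy as np

angles = 2.0 * np.pi * np.arange(5) / 5
X = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # data points x_i
T = np.stack([-np.sin(angles), np.cos(angles)], axis=1)  # tangents T_i
```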

SLIDES 46–48

Learning equivariant latent representations

[Figures of generated samples:]
▸ MNIST, k = 2: varying z1 (rotation), varying z2 (stroke width).
▸ smallNORB, k = 3: varying z1 (lighting), varying z2 (elevation), varying z3 (azimuth).
▸ tinyvideos, k = 1: varying z1 (time).

11/12

SLIDE 49

See you at our poster, Pacific Ballroom #16, 6:30 tonight!

Flat Metric Minimization with Applications in Generative Modeling

Thomas Möllenhoff, Daniel Cremers
Technical University of Munich

REPRESENTING DATA WITH NORMAL CURRENTS

Contribution: We propose to view (partially) oriented data as a k-current.

[Diagram: gθ : Z → X pushes the latent current S forward to gθ♯S, compared with the data current T via Fλ(gθ♯S, T).]

Intuitively, k-currents form a linear space that includes k-dimensional oriented manifolds as elements. The vector space of normal currents Nk,X(Rd) consists of currents T with finite volume and finite volume of their boundary: M(T) + M(∂T) < ∞.

THE FLAT METRIC

Fλ(S, T) = min_{S−T=∂B+A} M(B) + λM(A) = sup_{∥ω∥∗≤λ, ∥dω∥∗≤1} S(ω) − T(ω).

For 0-currents it is related to the Wasserstein-1 distance: Fλ(δx, δy) = min{λ, ∥x − y∥}, with B the oriented segment from y to x and ∂B = δx − δy.

[Figures: plots contrasting M, W1, and Fλ as x varies; the intuition for 1-currents: S − T decomposed into ∂B and A = S − T − ∂B.]

THEORETICAL RESULTS

Federer & Fleming 1960. The flat metric metrizes the weak∗ convergence on normal currents with uniformly bounded mass and boundary mass: Fλ(T, Ti) → 0 if and only if Ti ⇀ T, i.e., Ti(ω) → T(ω), for all ω ∈ C∞_c(Rd; ΛkRd).

Proposition. Let S ∈ Nk,Z(Rl), T ∈ Nk,X(Rd) be normal currents. Assume gθ : Z → X is smooth in z with uniformly bounded derivative and locally Lipschitz in θ. Then the map θ ↦ Fλ(gθ♯S, T) is Lipschitz continuous on any compact parameter set Θ.

Presented at the International Conference on Machine Learning (ICML), Long Beach, 2019.

FLATGAN: LEARNING EQUIVARIANT REPRESENTATIONS

S = µ ∧ (e1 ∧ ⋯ ∧ ek),   T = (1/N) Σ_{i=1}^{N} δxi ∧ Ti

min_{θ∈Θ} Fλ(gθ♯S, T) = min_{θ∈Θ} sup_{∥ω∥∗≤λ, ∥dω∥∗≤1} E_{z∼µ}[⟨ω ◦ gθ, (∇zgθ · e1) ∧ ⋯ ∧ (∇zgθ · ek)⟩] − (1/N) Σ_{i=1}^{N} ⟨ω(xi), Ti⟩.

Solving this optimization problem yields a generator gθ which behaves equivariantly to the specified tangent vectors. Illustration on a simple dataset in 2D:

[Figure: generated results for T ∈ N0,X(Rd) and T ∈ N1,X(Rd) at iterations 500, 1000, and 2000.]

[Figures of generated samples:]
▸ tinyvideos, k = 1: varying z1 (time).
▸ MNIST, k = 2: varying z1 (rotation), varying z2 (stroke width).
▸ smallNORB, k = 3: varying lighting (z1), varying elevation (z2), varying azimuth (z3).

GEOMETRIC MEASURE THEORY CHEAT SHEET & REFERENCES

  • k-vectors and k-covectors. ΛkRd is a vector space in which some of the elements describe oriented, k-dimensional planes in Rd. These are called simple k-vectors: v1 ∧ ⋯ ∧ vk. The dual space (k-covectors) is ΛᵏRd.
  • If v and w are simple, then ⟨v1 ∧ ⋯ ∧ vk, w1 ∧ ⋯ ∧ wk⟩ = det(V⊺W).
  • A differential form is a k-covector field ω : Rd → ΛᵏRd. k-currents are the dual space of smooth, compactly supported k-forms.
  • ∣v∣ = sup_{∥w∥∗≤1} ⟨v, w⟩; this is the area of the k-dimensional parallelotope spanned by the {vi} if v = v1 ∧ ⋯ ∧ vk.
  • The mass M(T) = sup_{∥ω∥∗≤1} T(ω) is the k-dimensional volume of the k-current T.
  • Boundary: ∂T(ω) = T(dω), where d is the exterior derivative (in R3: grad → curl → div).
  • Orientation: a continuous k-vector map τM : M → ΛkRd, where τM(z) is simple with unit norm, spanning TzM for all z ∈ M.
  • Stokes’ theorem: ∫M ⟨dω, τM⟩ dHk = ∫∂M ⟨ω, τ∂M⟩ dHk−1; hence the current boundary operator ∂ is consistent with the manifold boundary of M.
  • Pullback: ⟨g♯ω, v1 ∧ ⋯ ∧ vk⟩ = ⟨ω ◦ g, (∇g · v1) ∧ ⋯ ∧ (∇g · vk)⟩; pushforward: g♯T(ω) = T(g♯ω).

[1] H. Federer and W. H. Fleming. Normal and integral currents. Annals of Mathematics, pages 458–520, 1960.
[2] H. Federer. Geometric Measure Theory. Springer, 1969.
[3] F. Morgan. Geometric Measure Theory: A Beginner’s Guide. Academic Press, 5th edition, 2016.

PyTorch implementation: https://github.com/moellenh/flatgan

12/12