Stochastic Composition Optimization: Algorithms and Sample Complexities

SLIDE 1

Stochastic Composition Optimization

Algorithms and Sample Complexities

Mengdi Wang

Joint work with Ethan X. Fang, Han Liu, and Ji Liu

ORFE@Princeton

ICCOPT, Tokyo, August 8-11, 2016

SLIDE 2

Collaborators

  • M. Wang, X. Fang, and H. Liu. Stochastic Compositional Gradient Descent: Algorithms for Minimizing Compositions of Expected-Value Functions. Mathematical Programming, submitted in 2014, to appear in 2016.

  • M. Wang and J. Liu. Accelerating Stochastic Composition Optimization. 2016.
  • M. Wang and J. Liu. A Stochastic Compositional Subgradient Method Using Markov Samples. 2016.

SLIDE 3

Outline

1. Background: Why is SGD a good method?
2. A New Problem: Stochastic Composition Optimization
3. Stochastic Composition Algorithms: Convergence and Sample Complexity
4. Acceleration via Smoothing-Extrapolation

SLIDE 4

Background: Why is SGD a good method?

Outline

1. Background: Why is SGD a good method?
2. A New Problem: Stochastic Composition Optimization
3. Stochastic Composition Algorithms: Convergence and Sample Complexity
4. Acceleration via Smoothing-Extrapolation

SLIDE 5

Background: Why is SGD a good method?

Background

  • Machine learning is optimization

Learning from batch data:  min_{x∈ℜ^d} (1/n) Σ_{i=1}^n ℓ(x; A_i, b_i) + ρ(x)

Learning from online data:  min_{x∈ℜ^d} E_{A,b}[ℓ(x; A, b)] + ρ(x)

  • Both problems can be formulated as Stochastic Convex Optimization:

min_x E[f(x, ξ)]

  • The expectation is over a batch data set or an unknown distribution
  • This general framework encompasses likelihood estimation, online learning, empirical risk minimization, multi-armed bandits, and online MDPs

  • Stochastic gradient descent (SGD) updates by taking sample gradients:

x_{k+1} = x_k − α∇f(x_k, ξ_k)

A special case of stochastic approximation, with a long history (Robbins and Monro, Kushner and Yin, Polyak and Juditsky, Benveniste et al., Ruszczyński, Borkar, Bertsekas and Tsitsiklis, and many more).
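As a concrete illustration of this update rule, here is a minimal SGD sketch in Python, assuming a synthetic least-squares loss f(x, (a, b)) = ½(aᵀx − b)² and Gaussian data; all names and the stepsize choice are illustrative, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
x_true = rng.normal(size=d)

def sample():
    """Draw one data point xi = (a, b) from the (unknown) distribution."""
    a = rng.normal(size=d)
    b = a @ x_true + 0.1 * rng.normal()
    return a, b

x = np.zeros(d)
for k in range(1, 10_001):
    a, b = sample()
    grad = (a @ x - b) * a         # sample gradient ∇f(x_k, ξ_k)
    x -= grad / np.sqrt(k)         # diminishing stepsize α_k = 1/√k
print(np.linalg.norm(x - x_true))  # estimation error shrinks as k grows
```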

SLIDE 6

Background: Why is SGD a good method?

Background: Stochastic first-order methods

  • Stochastic gradient descent (SGD) updates by taking sample gradients:

x_{k+1} = x_k − α∇f(x_k, ξ_k)

(1,410,000 results on Google Scholar and 24,400 since 2016!)

Why is SGD a good method in practice?

  • When processing either batch or online data, a scalable algorithm needs to update using partial information (a small subset of all data)
  • Answer: we have no other choice

Why is SGD a good method beyond practical reasons?

  • SGD achieves optimal convergence after processing k samples:
      - E[F(x_k) − F*] = O(1/√k) for convex minimization
      - E[F(x_k) − F*] = O(1/k) for strongly convex minimization
    (Nemirovski and Yudin 1983, Agarwal et al. 2012, Rakhlin et al. 2012, Ghadimi and Lan 2012, 2013, Shamir and Zhang 2013, and many more)
  • Beyond convexity: nearly optimal online PCA (Li, Wang, Liu, Zhang 2015)
  • Answer: strong theoretical guarantees for data-driven problems

SLIDE 7

A New Problem: Stochastic Composition Optimization

Outline

1. Background: Why is SGD a good method?
2. A New Problem: Stochastic Composition Optimization
3. Stochastic Composition Algorithms: Convergence and Sample Complexity
4. Acceleration via Smoothing-Extrapolation

SLIDE 8

A New Problem: Stochastic Composition Optimization

Stochastic Composition Optimization

Consider the problem

min_{x∈X} F(x) := (f ◦ g)(x) = f(g(x)),

where the outer and inner functions f: ℜ^m → ℜ and g: ℜ^n → ℜ^m are given by

f(y) = E[f_v(y)],   g(x) = E[g_w(x)],

and X is a closed and convex set in ℜ^n.

  • We focus on the case where the overall problem is convex (for now)
  • No structural assumptions on f, g (nonconvex/nonmonotone/nondifferentiable)
  • We may not know the distribution of v, w.

SLIDE 9

A New Problem: Stochastic Composition Optimization

Expectation Minimization vs. Stochastic Composition Optimization

Recall the classical problem:

min_{x∈X} E[f(x, ξ)]   (linear w.r.t. the distribution of ξ)

In stochastic composition optimization, the objective is no longer a linear functional of the (v, w) distribution:

min_{x∈X} E[f_v(E[g_w(x)])]   (nonlinear w.r.t. the distribution of (v, w))

  • In the classical problem, nice properties come from linearity w.r.t. the data distribution
  • In stochastic composition optimization, they are all lost

A little nonlinearity goes a long way.

SLIDE 10

A New Problem: Stochastic Composition Optimization

Motivating Example: High-Dimensional Nonparametric Estimation

  • Sparse Additive Model (SpAM):

y_i = Σ_{j=1}^d h_j(x_{ij}) + ε_i

  • High-dimensional feature space with relatively few data samples (d ≫ n)

[Figure: data matrices contrasting sample size n with feature dimension d]

  • Optimization model for SpAM¹:

min_x E[f_v(E[g_w(x)])]  ↔  min_{h_j∈H_j} E[(Y − Σ_{j=1}^d h_j(X_j))²] + λ Σ_{j=1}^d √(E[h_j²(X_j)])

  • The term λ Σ_{j=1}^d √(E[h_j²(X_j)]) induces sparsity in the feature space (a sketch follows the footnote below)
¹ P. Ravikumar, J. Lafferty, H. Liu, and L. Wasserman. Sparse additive models. Journal of the Royal Statistical Society: Series B, 71(5):1009–1030, 2009.
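To make the composition explicit: the inner expectation E[h_j²(X_j)] is estimated from data, and the nonlinear outer square root is applied to that estimate. Below is a sketch of the empirical objective, assuming each h_j is a cubic polynomial h_j(x) = β_jᵀ(x, x², x³); the basis and all names are illustrative assumptions.

```python
import numpy as np

def phi(x):
    """Cubic polynomial basis (an illustrative choice of H_j)."""
    return np.stack([x, x**2, x**3], axis=-1)          # shape (n, d, 3)

def spam_objective(beta, X, y, lam):
    """beta: (d, 3) coefficients; X: (n, d) features; y: (n,) responses."""
    H = np.einsum('ndk,dk->nd', phi(X), beta)          # h_j(x_ij) for all i, j
    fit = np.mean((y - H.sum(axis=1)) ** 2)            # E[(Y - Σ_j h_j(X_j))²]
    # Composition: outer sqrt applied to the inner expectation E[h_j²(X_j)]
    penalty = lam * np.sum(np.sqrt(np.mean(H**2, axis=0)))
    return fit + penalty

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 10)), rng.normal(size=50)
print(spam_objective(rng.normal(size=(10, 3)), X, y, lam=0.1))
```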

SLIDE 11

A New Problem: Stochastic Composition Optimization

Motivating Example: Risk-Averse Learning

Consider the mean-variance minimization problem

min_x E_{a,b}[ℓ(x; a, b)] + λ Var_{a,b}[ℓ(x; a, b)].

Its batch version is

min_x (1/N) Σ_{i=1}^N ℓ(x; a_i, b_i) + (λ/N) Σ_{i=1}^N ( ℓ(x; a_i, b_i) − (1/N) Σ_{i'=1}^N ℓ(x; a_{i'}, b_{i'}) )².

  • The variance Var[Z] = E[(Z − E[Z])²] is a composition of two expected-value functions (see the sketch after this list)
  • Many other risk functions are equivalent to compositions of multiple expected-value functions (Shapiro, Dentcheva, Ruszczyński 2014)
  • A central limit theorem for compositions of multiple smooth functions has been established for risk metrics (Dentcheva, Penev, Ruszczyński 2016)
  • There is no good way to optimize a risk-averse objective while learning from online data
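One way to write the mean-variance objective as f(g(x)): let the inner function collect the first two moments, g(x) = (E[ℓ], E[ℓ²]), and let the outer function be f(y₁, y₂) = y₁ + λ(y₂ − y₁²). A sketch, with an illustrative loss and data model:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(x, a, b):
    return 0.5 * (a @ x - b) ** 2      # illustrative loss ℓ(x; a, b)

def g(x, samples):
    """Inner expectation g(x) = (E[ℓ], E[ℓ²]), estimated from samples."""
    ell = np.array([loss(x, a, b) for a, b in samples])
    return np.array([ell.mean(), (ell**2).mean()])

def f(y, lam=1.0):
    """Outer function: mean + λ·variance, using Var[ℓ] = E[ℓ²] − (E[ℓ])²."""
    return y[0] + lam * (y[1] - y[0] ** 2)

samples = [(rng.normal(size=3), rng.normal()) for _ in range(1000)]
print(f(g(np.zeros(3), samples)))      # mean-variance objective F(x) = f(g(x))
```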

SLIDE 12

A New Problem: Stochastic Composition Optimization

Motivating Example: Reinforcement Learning

On-policy reinforcement learning learns the value per state of a stochastic system.

  • We want to solve a (huge) Bellman equation

γP^π V^π + r^π = V^π,

where P^π is the transition probability matrix and r^π is the reward vector, both unknown.

  • On-policy learning aims to solve the Bellman equation via black-box simulation. It becomes a special stochastic composition optimization problem:

min_x E[f_v(E[g_w(x)])]  ↔  min_{x∈ℜ^S} ‖E[A]x − E[b]‖²,

where E[A] = I − γP^π and E[b] = r^π.
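A sketch of this composition on a small synthetic MDP: the inner function g(x) = E[A]x − E[b] is estimated by averaging sampled transitions, and only then is the outer function f(y) = ‖y‖² applied; averaging before squaring is exactly what single-sample SGD cannot do. All problem data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
S, gamma = 4, 0.9
P = rng.dirichlet(np.ones(S), size=S)        # illustrative transition matrix P^π
r = rng.uniform(size=S)                      # illustrative reward vector r^π

def sample_Ab():
    """One simulated observation (A_w, b_w) with E[A] = I − γP^π, E[b] = r^π."""
    next_state = np.array([rng.choice(S, p=P[s]) for s in range(S)])
    P_hat = np.zeros((S, S))
    P_hat[np.arange(S), next_state] = 1.0    # one-hot sample of each row of P^π
    return np.eye(S) - gamma * P_hat, r + 0.1 * rng.normal(size=S)

x = np.zeros(S)
A_bar = np.mean([sample_Ab()[0] for _ in range(2000)], axis=0)
b_bar = np.mean([sample_Ab()[1] for _ in range(2000)], axis=0)
print(np.linalg.norm(A_bar @ x - b_bar) ** 2)   # estimate of ‖E[A]x − E[b]‖²
```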

SLIDE 13

Stochastic Composition Algorithms: Convergence and Sample Complexity

Outline

1. Background: Why is SGD a good method?
2. A New Problem: Stochastic Composition Optimization
3. Stochastic Composition Algorithms: Convergence and Sample Complexity
4. Acceleration via Smoothing-Extrapolation

SLIDE 14

Stochastic Composition Algorithms: Convergence and Sample Complexity

Problem Formulation

min_{x∈X} F(x) := E[f_v(E[g_w(x)])]   (nonlinear w.r.t. the distribution of (v, w))

Sampling Oracle (SO)

Upon query (x, y), the oracle returns:

  • a noisy inner sample g_w(x) and its noisy subgradient ∇̃g_w(x);
  • a noisy outer gradient ∇f_v(y)

Challenges

  • The stochastic gradient descent (SGD) method does not work, since an unbiased sample of the gradient ∇̃g(x_k)∇f(g(x_k)) is not available (a numerical illustration follows this list)
  • Fenchel duality does not work except under rare conditions
  • Sample average approximation (SAA) is subject to the curse of dimensionality
  • The sample complexity is unclear
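A quick numerical check of the first challenge: plugging a single sample into the chain rule gives E[∇g_w(x)∇f(g_w(x))] ≠ ∇g(x)∇f(g(x)) whenever f is nonlinear. The scalar example below, with g_w(x) = x + w and f(y) = y³, is an illustrative assumption.

```python
import numpy as np

# Take g_w(x) = x + w with w ~ N(0, 1) and a deterministic outer f(y) = y^3.
# Then g(x) = E[g_w(x)] = x, so F(x) = x^3 and the true gradient is 3x^2.
rng = np.random.default_rng(0)
x = 1.0
w = rng.normal(size=200_000)

plug_in = np.mean(3.0 * (x + w) ** 2)   # E[g_w'(x) · f'(g_w(x))] = 3x² + 3σ²
print(plug_in, 3.0 * x ** 2)            # ≈ 6.0 vs 3.0: the naive estimator is biased
```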

SLIDE 15

Stochastic Composition Algorithms: Convergence and Sample Complexity

Basic Idea

Approximate the ideal update

x_{k+1} = Π_X [ x_k − α_k ∇̃g(x_k)∇f(g(x_k)) ]

by a quasi-gradient iteration that uses estimates of g(x_k).

Algorithm 1: Stochastic Compositional Gradient Descent (SCGD)

Require: x_0, z_0 ∈ ℜ^n, y_0 ∈ ℜ^m, SO, K, stepsizes {α_k}_{k=1}^K and {β_k}_{k=1}^K.
Ensure: {x_k}_{k=1}^K
for k = 1, …, K do
    Query the SO and obtain ∇̃g_{w_k}(x_k), g_{w_k}(x_k), and ∇f_{v_k}(y_{k+1}).
    Update by
        y_{k+1} = (1 − β_k) y_k + β_k g_{w_k}(x_k),
        x_{k+1} = Π_X [ x_k − α_k ∇̃g_{w_k}(x_k) ∇f_{v_k}(y_{k+1}) ].
end for
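A minimal sketch of SCGD in Python, on the Bellman-residual instance from Slide 12 where g_w(x) = A_w x − b_w and f(y) = ‖y‖² (so ∇f(y) = 2y is deterministic). The stepsize schedule α_k = k^{−3/4}, β_k = k^{−1/2} anticipates the next slide; the data model is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A_mean = np.eye(n) + 0.1 * rng.normal(size=(n, n))
b_mean = rng.normal(size=n)
x_star = np.linalg.solve(A_mean, b_mean)     # minimizer of ‖E[A]x − E[b]‖²

def oracle(x):
    """SO query: sample g_w(x) = A_w x − b_w and its gradient (transposed) A_w^T."""
    A_w = A_mean + 0.1 * rng.normal(size=(n, n))
    b_w = b_mean + 0.1 * rng.normal(size=n)
    return A_w @ x - b_w, A_w.T

x, y = np.zeros(n), np.zeros(n)
for k in range(1, 100_001):
    alpha, beta = k ** -0.75, k ** -0.5
    g_sample, grad_g = oracle(x)
    y = (1 - beta) * y + beta * g_sample     # running estimate of g(x_k)
    x = x - alpha * grad_g @ (2 * y)         # quasi-gradient step (X = ℜ^n, no projection)
print(np.linalg.norm(x - x_star))            # error should be small
```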

Remarks

  • Each iteration makes simple updates by interacting with the SO
  • Scalable to large-scale batch data; can also process streaming data points online
  • First considered by Ermoliev (1976) as a stochastic approximation method, without rate analysis

SLIDE 16

Stochastic Composition Algorithms: Convergence and Sample Complexity

Sample Complexity (Wang et al., 2016)

Under suitable conditions (inner function nonsmooth, outer function smooth) and with X bounded, let the stepsizes be α_k = k^{−3/4}, β_k = k^{−1/2}. Then, for k large enough,

E[ F( (2/k) Σ_{t=k/2+1}^k x_t ) − F* ] = O(1/k^{1/4}).

(An optimal rate, matching the lower bound for stochastic programming.)

Sample Complexity in the Strongly Convex Case (Wang et al., 2016)

Under suitable conditions (inner function nonsmooth, outer function smooth), suppose the compositional function F(·) is strongly convex, and let the stepsizes be α_k = 1/k and β_k = 1/k^{2/3}. Then, for k sufficiently large,

E[ ‖x_k − x*‖² ] = O(1/k^{2/3}).

SLIDE 17

Stochastic Composition Algorithms: Convergence and Sample Complexity

Outline of Analysis

  • The auxiliary variable y_k keeps a running estimate of g(x_k) at the biased query points x_0, …, x_k:

y_{k+1} = Σ_{t=0}^k (Π_{t'=t}^k β_{t'}) g_{w_t}(x_t)

  • Two entangled stochastic sequences:

ǫ_k = ‖y_{k+1} − g(x_k)‖²,   ξ_k = ‖x_k − x*‖²

  • Coupled supermartingale analysis:

E[ǫ_{k+1} | F_k] ≤ (1 − β_k)ǫ_k + O(β_k² + (1/β_k)‖x_{k+1} − x_k‖²)
E[ξ_{k+1} | F_k] ≤ (1 + α_k²)ξ_k − α_k(F(x_k) − F*) + O(β_k)ǫ_k

  • Almost sure convergence follows from a Coupled Supermartingale Convergence Theorem (Wang and Bertsekas, 2013)
  • Convergence rate analysis proceeds by optimizing over stepsizes and balancing the noise-bias tradeoff

SLIDE 18

Acceleration via Smoothing-Extrapolation

Outline

1. Background: Why is SGD a good method?
2. A New Problem: Stochastic Composition Optimization
3. Stochastic Composition Algorithms: Convergence and Sample Complexity
4. Acceleration via Smoothing-Extrapolation

SLIDE 19

Acceleration via Smoothing-Extrapolation

Acceleration

When the function g(·) is smooth, can the algorithms be accelerated? Yes!

Algorithm 2: Accelerated SCGD

Require: x_0, z_0 ∈ ℜ^n, y_0 ∈ ℜ^m, SO, K, stepsizes {α_k}_{k=1}^K and {β_k}_{k=1}^K.
Ensure: {x_k}_{k=1}^K
for k = 1, …, K do
    Query the SO and obtain ∇f_{v_k}(y_k), ∇g_{w_k}(z_k).
    Update the main iterate by
        x_{k+1} = x_k − α_k ∇g_{w_k}(x_k)^⊤ ∇f_{v_k}(y_k).
    Update the auxiliary variables via extrapolation-smoothing:
        z_{k+1} = (1 − 1/β_k) x_k + (1/β_k) x_{k+1},
        y_{k+1} = (1 − β_k) y_k + β_k g_{w_{k+1}}(z_{k+1}),
    where the sample g_{w_{k+1}}(z_{k+1}) is obtained by querying the SO.
end for
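A sketch of the accelerated variant on the same illustrative linear-quadratic instance as before; the distinctive pieces are the extrapolation step z_{k+1} = (1 − 1/β_k)x_k + (1/β_k)x_{k+1} and querying the inner sample at z_{k+1} rather than at x_k. The stepsizes follow the schedule from the next slide.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A_mean = np.eye(n) + 0.1 * rng.normal(size=(n, n))
b_mean = rng.normal(size=n)

def sample_A():
    return A_mean + 0.1 * rng.normal(size=(n, n))

def sample_g(z):
    """Inner sample g_w(z) = A_w z − b_w at the extrapolated point z."""
    return sample_A() @ z - (b_mean + 0.1 * rng.normal(size=n))

x, y = np.zeros(n), np.zeros(n)
for k in range(1, 100_001):
    alpha, beta = k ** (-5 / 7), k ** (-4 / 7)
    x_new = x - alpha * sample_A().T @ (2 * y)    # main iterate: ∇g^⊤ ∇f(y_k)
    z = (1 - 1 / beta) * x + (1 / beta) * x_new   # extrapolation
    y = (1 - beta) * y + beta * sample_g(z)       # smoothing at the extrapolated point
    x = x_new
print(np.linalg.norm(A_mean @ x - b_mean) ** 2)   # residual F(x_k) becomes small
```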

Key to the Acceleration

Bias reduction by averaging over extrapolated points (extrapolation-smoothing):

y_k = Σ_{t=0}^k (Π_{t'=t}^k β_{t'}) g_{w_t}(z_t) ≈ g(x_k) = g( Σ_{t=0}^k (Π_{t'=t}^k β_{t'}) z_t ).

SLIDE 20

Acceleration via Smoothing-Extrapolation

Accelerated Sample Complexity (Wang et al. 2016)

Under suitable conditions (inner function smooth, outer function smooth), if the stepsizes are chosen as α_k = k^{−5/7} and β_k = k^{−4/7}, then

E[ F(x̂_k) − F* ] = O(1/k^{2/7}),

where x̂_k = (2/k) Σ_{t=k/2+1}^k x_t.

Strongly Convex Case (Wang et al. 2016)

Under suitable conditions (inner function smooth, outer function smooth), assume that F is strongly convex. If the stepsizes are chosen as α_k = 1/k and β_k = 1/k^{4/5}, then

E[ ‖x_k − x*‖² ] = O(1/k^{4/5}).

SLIDE 21

Acceleration via Smoothing-Extrapolation

Regularized Stochastic Composition Optimization (Wang and Liu 2016)

min_{x∈ℜ^n} (E_v f_v ◦ E_w g_w)(x) + R(x)

The penalty term R(x) is convex and nonsmooth.

Algorithm 3: Accelerated Stochastic Compositional Proximal Gradient (ASC-PG)

Require: x_0, z_0 ∈ ℜ^n, y_0 ∈ ℜ^m, SO, K, stepsizes {α_k}_{k=1}^K and {β_k}_{k=1}^K.
Ensure: {x_k}_{k=1}^K
for k = 1, …, K do
    Query the SO and obtain ∇f_{v_k}(y_k), ∇g_{w_k}(z_k).
    Update the main iterate by
        x_{k+1} = prox_{α_k R(·)} ( x_k − α_k ∇g_{w_k}(x_k)^⊤ ∇f_{v_k}(y_k) ).
    Update the auxiliary iterates by the extrapolation-smoothing scheme:
        z_{k+1} = (1 − 1/β_k) x_k + (1/β_k) x_{k+1},
        y_{k+1} = (1 − β_k) y_k + β_k g_{w_{k+1}}(z_{k+1}),
    where the sample g_{w_{k+1}}(z_{k+1}) is obtained by querying the SO.
end for
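Relative to Algorithm 2, only the main-iterate step changes: the gradient update is passed through the proximal operator of α_k R(·). A sketch for the common choice R(x) = λ‖x‖₁, whose prox is soft-thresholding; the choice of R here is an illustrative assumption.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t·‖·‖₁ (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ascpg_main_step(x, chain_grad, alpha, lam):
    """x_{k+1} = prox_{α_k R}(x_k − α_k ∇g^⊤∇f), with R(x) = lam·‖x‖₁.

    chain_grad stands for the sampled product ∇g_{w_k}(x_k)^⊤ ∇f_{v_k}(y_k).
    """
    return prox_l1(x - alpha * chain_grad, alpha * lam)
```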

SLIDE 22

Acceleration via Smoothing-Extrapolation

Sample Complexity for Smooth Optimization (Wang and Liu 2016)

Under suitable conditions (inner and outer functions smooth) and with X bounded, let the stepsizes be chosen properly. Then, for k large enough,

E[ F( (2/k) Σ_{t=k/2+1}^k x_t ) − F* ] = O(1/k^{4/9}).

If either the outer or the inner function is linear, the rate improves to the optimal

E[ F( (2/k) Σ_{t=k/2+1}^k x_t ) − F* ] = O(1/k^{1/2}).

Sample Complexity in the Strongly Convex Case (Wang and Liu 2016)

Suppose the compositional function F(·) is strongly convex and let the stepsizes be chosen properly. Then, for k sufficiently large,

E[ ‖x_k − x*‖² ] = O(1/k^{4/5}).

If either the outer or the inner function is linear, the rate improves to the optimal

E[ ‖x_k − x*‖² ] = O(1/k).

SLIDE 23

Acceleration via Smoothing-Extrapolation

When there are two nested uncertainties:

  • A class of two-timescale algorithms that update using first-order samples
  • Analyzing convergence becomes harder: two coupled stochastic processes and a smoothness-noise interplay
  • Convergence rates of stochastic algorithms establish sample complexity upper bounds for the new problem class

                                              General Convex   Strongly Convex
  Outer Nonsmooth, Inner Smooth               O(k^{−1/4})      O(k^{−2/3})?
  Outer and Inner Smooth                      O(k^{−4/9})?     O(k^{−4/5})?
  Special Case: min_x E[f(x; ξ)]              O(k^{−1/2})      O(k^{−1})
  Special Case: min_x E[f(E[A]x − E[b]; ξ)]   O(k^{−1/2})      O(k^{−1})

Table: Summary of best known sample complexities

Applications and computations

  • First scalable algorithm for sparse nonparametric estimation
  • Optimal algorithm for on-policy reinforcement learning

SLIDE 24

Acceleration via Smoothing-Extrapolation

Summary

  • Stochastic composition optimization: a new and rich problem class

min_x E[f_v(E[g_w(x)])]   (nonlinear w.r.t. the distribution of (v, w))

  • Applications in risk management, data analysis, machine learning, and real-time intelligent systems
  • A class of stochastic compositional gradient methods with convergence guarantees; basic sample complexities have been developed:

                                              General Convex   Strongly Convex
  Outer Nonsmooth, Inner Smooth               O(k^{−1/4})      O(k^{−2/3})?
  Outer and Inner Smooth                      O(k^{−4/9})?     O(k^{−4/5})?
  Special Case: min_x E[f(x; ξ)]              O(k^{−1/2})      O(k^{−1})
  Special Case: min_x E[f(E[A]x − E[b]; ξ)]   O(k^{−1/2})      O(k^{−1})

  • Many open questions remain, and more work is needed!

Thank you very much!
