

SLIDE 1

New Algorithms for Sparse Representation of Discrete Signals Based on ℓp-ℓ2 Optimization

Jie Yan and Wu-Sheng Lu

Department of Electrical and Computer Engineering University of Victoria, Victoria, BC, Canada

August 25, 2011

SLIDE 2

OUTLINE

1. INTRODUCTION
2. PRELIMINARIES
3. ALGORITHMS FOR ℓp-ℓ2 OPTIMIZATION
4. SIMULATIONS
5. CONCLUSIONS

SLIDE 3

Motivation

A central problem in sparse signal processing is to seek an approximate solution of an ill-posed linear system while requiring that the solution have the fewest nonzero entries. Many of the applications lead to minimizing the following ℓ1-ℓ2 function:

F(s) = ||x − Ψs||₂² + λ||s||₁.

F(s) is convex, and its global minimizer can be identified.

SLIDE 4

Motivation Cont’d

For the ℓ1-ℓ2 problem, iterative-shrinkage algorithms have emerged as a family of highly effective numerical methods. Of particular interest is the state-of-the-art FISTA and its monotone variant MFISTA, developed by A. Beck and M. Teboulle. Chartrand and Yin have proposed algorithms for ℓp-ℓ2 minimization with 0 < p < 1, and demonstrated improved performance relative to that obtained by ℓ1-ℓ2 minimization.

SLIDE 5

Contribution

New algorithms for sparse representation based on ℓp-ℓ2 optimization are proposed.

Our algorithms are built on MFISTA with several major changes: the soft-shrinkage step in MFISTA is replaced by a global solver for the minimization of a 1-D nonconvex ℓp-ℓ2 problem, and two efficient techniques for solving this 1-D problem are proposed.

SLIDE 6

OUTLINE

1. INTRODUCTION
2. PRELIMINARIES
3. ALGORITHMS FOR ℓp-ℓ2 OPTIMIZATION
4. SIMULATIONS
5. CONCLUSIONS

SLIDE 7

Sparse representations in overcomplete bases

A typical sparse representation problem can be stated as finding the sparsest representation of a discrete signal x under a (possibly overcomplete) dictionary Ψ. The problem can be described as minimizing ||s||₀ subject to x = Ψs or ||x − Ψs||₂ ≤ ε. Unfortunately, this problem is NP-hard. A popular approach is to consider the relaxed ℓ1-ℓ2 unconstrained convex problem

min_s  F(s) = ||x − Ψs||₂² + λ||s||₁.

SLIDE 8

Iterative shrinkage-thresholding algorithm (ISTA)

ISTA can be viewed as an extension of the classical gradient algorithm. Owing to its simplicity, it is well suited to large-scale problems. A key step in its kth iteration is to approximate F(s) by an easy-to-handle convex upper bound (up to a constant)

F̂(s) = (L/2)||s − cₖ||₂² + λ||s||₁.

The minimizer of F̂(s) is a soft shrinkage of the vector cₖ with the constant threshold λ/L, namely sₖ = T_{λ/L}(cₖ). ISTA provides a convergence rate of O(1/k).
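To make the shrinkage step concrete, here is a minimal Python sketch of the operator T_{λ/L} and one ISTA iteration for F(s) = ||x − Ψs||₂² + λ||s||₁. The slides give no code, so the function names soft_shrink and ista_step are assumptions of this write-up:

```python
import numpy as np

def soft_shrink(c, t):
    """Soft-threshold operator: T_t(c) = sign(c) * max(|c| - t, 0), elementwise."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def ista_step(s, x, Psi, lam, L):
    """One ISTA iteration: a gradient step on the quadratic term, then shrinkage.
    c_k = s - (2/L) Psi^T (Psi s - x), with L >= 2 * lambda_max(Psi^T Psi);
    the minimizer of the upper bound is then s_{k+1} = T_{lam/L}(c_k)."""
    c = s - (2.0 / L) * (Psi.T @ (Psi @ s - x))
    return soft_shrink(c, lam / L)
```

When Ψ is orthonormal, a single step from s = 0 with L = 2 already yields the exact minimizer T_{λ/2}(Ψᵀx), which is why the 1-D view of the shrinkage operator is central in the slides that follow.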

SLIDE 9

FISTA and MFISTA

FISTA is built on ISTA with an extra step in each iteration: with the help of a sequence of scaling factors tₖ, it creates an auxiliary iterate bₖ₊₁ by moving the current iterate sₖ along the direction sₖ − sₖ₋₁ so as to improve the subsequent iterate sₖ₊₁. Furthermore, the monotone FISTA (MFISTA) adds a step to FISTA to guarantee the desirable monotone convergence. FISTA and MFISTA possess a much improved convergence rate of O(1/k²).
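The extra FISTA machinery can be sketched as follows; an illustrative Python fragment (the function names are not from the paper) for the standard scaling-factor update and the extrapolation that forms the auxiliary iterate:

```python
import numpy as np

def fista_momentum(t_k):
    """Scaling-factor update: t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2, with t_1 = 1."""
    return (1.0 + np.sqrt(1.0 + 4.0 * t_k ** 2)) / 2.0

def fista_extrapolate(s_k, s_prev, t_k, t_next):
    """Auxiliary iterate: b_{k+1} = s_k + ((t_k - 1) / t_{k+1}) * (s_k - s_{k-1}).
    MFISTA additionally keeps whichever of the new candidate and s_k has the
    smaller objective value, which is what makes the iteration monotone."""
    return s_k + ((t_k - 1.0) / t_next) * (s_k - s_prev)
```

Note that with t₁ = 1 the first extrapolation coefficient (t₁ − 1)/t₂ is zero, so FISTA's first iterate coincides with ISTA's.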

SLIDE 10

OUTLINE

1. INTRODUCTION
2. PRELIMINARIES
3. ALGORITHMS FOR ℓp-ℓ2 OPTIMIZATION
4. SIMULATIONS
5. CONCLUSIONS

SLIDE 11

An interesting development in sparse representation and compressive sensing is to investigate a nonconvex variant of the basis pursuit by replacing the ℓ1-norm term with an ℓp norm with 0 < p < 1. Naturally, an ℓp-ℓ2 counterpart can be formulated as

min_s  F(s) = ||x − Ψs||₂² + λ||s||ₚᵖ.

SLIDE 12

The algorithms we propose in this paper will be developed within the framework of FISTA/MFISTA, in which

sₖ = argmin_s  (L/2)||s − cₖ||₂² + λ||s||ₚᵖ.   (1)

With 0 < p < 1, the setting is closer to the ℓ0-norm problem, hence an improved sparse representation is expected. However, soft shrinkage fails to work as (1) is nonconvex.

SLIDE 13

Because problem (1) is separable across the entries of s, the computation of sₖ reduces to solving M one-dimensional (1-D) minimization problems, each of the form

s* = argmin_s { u(s) = (L/2)(s − c)² + λ|s|ᵖ }.   (2)

We propose two techniques to find the global solution of (2) with 0 < p < 1.

SLIDE 14

Method 1: When p is rational

Suppose p = a/b with a, b positive integers and a < b. Let us first consider s ≥ 0; the substitution s = zᵇ then converts the problem to minimizing

v(z) = u(s)|_{s = zᵇ} = (L/2)(zᵇ − c)² + λzᵃ,

whose gradient is

∇v(z) = Lbz²ᵇ⁻¹ − Lcbzᵇ⁻¹ + λazᵃ⁻¹.

The global minimizer z*₊ must either be 0 or one of the stationary points where ∇v(z) = 0. The MATLAB function roots was applied to find all the roots of the polynomial ∇v(z). After identifying z*₊, we have s*₊ = (z*₊)ᵇ as the solution that minimizes u(s) for s ≥ 0.

SLIDE 15

Method 1: When p is rational Cont’d

In a similar way, the global minimizer s*₋ that minimizes u(s) for s ≤ 0 can be computed, and the global minimizer s* is obtained as s* = argmin { u(s) : s ∈ {s*₊, s*₋} }.

The above ℓp solver is incorporated into a FISTA/MFISTA-type algorithm. In each iteration, the computational complexity is O(M(2b − 1)³). The method proposed above works well whenever p is rational with a small denominator, such as p ∈ {1/4, 1/3, 1/2, 2/3, 3/4}.

SLIDE 16

Method 2: When p is an arbitrary real in (0, 1)

[Figure: plots of a(s) = (L/2)(s − c)², b(s) = λ|s|ᵖ, and u(s) = a(s) + b(s).]

Let us examine the function to minimize, i.e., u(s) = (L/2)(s − c)² + λ|s|ᵖ.

If c = 0, then s* = 0. Next, we consider the case of c > 0.

SLIDE 17

Method 2: When p is an arbitrary real in (0, 1) Cont’d

It can be observed that the global minimizer s* lies in [0, c], where the function of interest becomes u(s) = (L/2)(s − c)² + λsᵖ for s ∈ [0, c]. The convexity of u(s) can be analyzed by examining the 2nd-order derivative, i.e., u″(s) = L + λp(p − 1)sᵖ⁻².

SLIDE 18

Method 2: When p is an arbitrary real in (0, 1) Cont’d

The inflection point at which u″(s) = 0 is s_c = [λp(1 − p)/L]^{1/(2−p)}. For 0 ≤ s < s_c, u(s) is concave as u″(s) < 0; for s > s_c, u(s) is convex as u″(s) > 0.

SLIDE 19

Case (a): s_c ≥ c

[Figure: u(s) concave throughout [0, c], with s_c lying to the right of c.]

In this case u(s) is concave in [0, c]. As a result, s* must be either 0 or c; namely, s* = argmin { u(s) : s = 0, c }.

SLIDE 20

Case (b): s_c < c

[Figure: u(s) concave on [0, s_c] and convex on [s_c, c].]

In this case u(s) is concave in [0, s_c] and convex in [s_c, c]. We argue that s* must be either the point s_t that minimizes the convex piece of u(s) over [s_c, c], or the boundary point 0.

SLIDE 21

At this point, we have proposed two techniques for the global minimization of the 1-D nonconvex ℓp subproblem. Based on these, an MFISTA-type algorithm for the proposed ℓp-ℓ2 problem can be developed by replacing the shrinkage step of the conventional MFISTA with the above 1-D ℓp solver. The algorithm so developed will be referred to as the modified MFISTA.

SLIDE 22

OUTLINE

1. INTRODUCTION
2. PRELIMINARIES
3. ALGORITHMS FOR ℓp-ℓ2 OPTIMIZATION
4. SIMULATIONS
5. CONCLUSIONS

SLIDE 23

Test signal x: Bumps signal of length N = 256.

[Figure: the length-256 "bumps" test signal.]

Our objective is to find a representation vector s ∈ R^{3N×1} for the signal x such that x ≈ Ψs with s as sparse as possible.

SLIDE 24

The dictionary adopted is a combination of three orthonormal bases, Ψ = [Ψ1 Ψ2 Ψ3] ∈ R^{N×3N}, where Ψ1 is the Dirac basis, Ψ2 is the 1-D DCT basis, and Ψ3 is the wavelet basis generated by the orthogonal Daubechies wavelet D8. We solve the ℓp-ℓ2 problem with p = 1, 0.95, 0.9, 0.85, 0.8, and 0.75, respectively.

SLIDE 25

[Figure: Z (percentage of zeros) versus R (relative equation error) for p = 1, 0.95, 0.9, 0.85, 0.8, 0.75.]

Comparison of ℓp-ℓ2 sparse representation of “bumps” signal for p = 1, 0.95, 0.9, 0.85, 0.8, 0.75 in terms of relative equation error and signal sparsity in the dictionary domain.

SLIDE 26

Several observations

1. For a fixed relative equation error, the sparsity improves as a smaller power p is used;
2. For a fixed level of sparsity, the relative equation error decreases as a smaller power p is used;
3. The performance improvement appears to be distinctly nonlinear with respect to the change in the power p.

SLIDE 27

[Figure: sparse coefficient vectors computed with p = 1 (top) and p = 0.75 (bottom).]

Sparse representation of the “bumps” signal based on ℓ1 and ℓ0.75 reconstruction.

SLIDE 28

For a fair comparison, both solutions yield the same relative equation error of 0.00905. The sparsity achieved was found to be 87.24% for p = 0.75 versus 81.77% for p = 1.

SLIDE 29

OUTLINE

1. INTRODUCTION
2. PRELIMINARIES
3. ALGORITHMS FOR ℓp-ℓ2 OPTIMIZATION
4. SIMULATIONS
5. CONCLUSIONS

SLIDE 30

New algorithms for sparse representation based on ℓp-ℓ2 optimization with 0 < p < 1 are proposed.

In particular, the soft-shrinkage step in MFISTA is replaced by a global solver for the minimization of a 1-D nonconvex ℓp problem, and two efficient techniques for solving the 1-D ℓp problem in question are proposed. Simulation studies of sparse representation are presented to evaluate the performance of the proposed algorithms with various values of p and to compare them with the basis pursuit (BP) benchmark with p = 1.
