ICASSP 2018, Calgary, Canada
Linear Quantization by Effective Resistance Sampling
Yining Wang
Carnegie Mellon University
Joint work with Aarti Singh
❖ The linear model: y = Xβ₀
✴ X: n-by-p “design” matrix, fully known
✴ y: n-dim vector, the sensing result
✴ β₀: p-dim unknown signal to be recovered
❖ The quantized sensing problem:
✴ Measurements of y cannot be made with arbitrary precision
✴ A total budget of k bits is allocated across the measurements yᵢ
✴ Each yᵢ is rounded to its nearest kᵢ-bit representation
[Figure: each yᵢ is quantized over the range of y]
❖ Example applications:
✴ Brain activity measurements: total signal strength is limited
✴ Distributed sensing: communication of the signals is limited
❖ Question: how should the k measurement bits be allocated across the yᵢ to achieve the smallest recovery error?
❖ “Dithering”:
✴ Inject artificial noise so that the quantization error becomes independent, zero-mean statistical noise
✴ Equivalent model: ỹᵢ = xᵢ^⊤β₀ + εᵢ, with E[εᵢ] = 0 and E[εᵢ²] ≤ 4^{−(kᵢ+1)}M² (M: range of y)
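As a minimal sketch of the dithering idea (the step size, dither distribution, and bit-to-level mapping here are illustrative assumptions, not necessarily the exact scheme in the talk), subtractive dithering makes the quantization error zero-mean:

```python
import numpy as np

def dithered_quantize(y, k, M, rng):
    """k-bit quantization of y in [-M, M] with subtractive dithering."""
    delta = 2 * M / (2 ** k)                      # quantization step size
    u = rng.uniform(-delta / 2, delta / 2, size=np.shape(y))
    # Add dither before rounding, subtract it afterwards: the residual
    # error is then uniform on (-delta/2, delta/2), independent of y.
    return delta * np.round((y + u) / delta) - u

rng = np.random.default_rng(0)
y = 0.37
errs = dithered_quantize(np.full(100_000, y), k=3, M=1.0, rng=rng) - y
print(abs(errs.mean()))   # close to 0: the error is unbiased
print(errs.var())         # small: on the order of 4^{-k} M^2
```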
[Figure: the dither is uniform noise between two adjacent quantization levels]
❖ Weighted Ordinary Least Squares (OLS):
✴ β̂ = (X^⊤WX)^{−1}X^⊤Wỹ, weighting each measurement by the inverse of its noise-variance bound:
W = diag(w₁, w₂, …, w_n) = diag(4^{k₁+1}, 4^{k₂+1}, …, 4^{k_n+1})
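A small numerical sketch of the weighted OLS step (the bit budgets and the simulated noise model below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.standard_normal((n, p))
beta0 = np.array([1.0, -2.0, 0.5])

# Illustrative per-measurement bit budgets k_i; more bits -> smaller
# quantization noise -> larger weight w_i = 4^{k_i + 1}.
k_bits = rng.integers(1, 6, size=n)
w = 4.0 ** (k_bits + 1)

# Simulated dithered measurements: noise variance of y_i scales as 1/w_i.
y = X @ beta0 + rng.standard_normal(n) / np.sqrt(w)

# Weighted OLS: beta_hat = (X^T W X)^{-1} X^T W y
WX = w[:, None] * X
beta_hat = np.linalg.solve(X.T @ WX, WX.T @ y)
print(beta_hat)  # close to beta0
```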
❖ Optimal quantization:
✴ Error bound: E‖β̂ − β₀‖₂² ≲ M² · tr[(Σ_{i=1}^n 4^{kᵢ+1} xᵢxᵢ^⊤)^{−1}]
✴ Optimal bit allocation:
min_{k₁,…,k_n} tr[(X^⊤WX)^{−1}]  s.t.  Σ_{i=1}^n kᵢ ≤ k,  kᵢ nonnegative integers
Combinatorial… hard!
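The combinatorial nature is easy to see by brute force: even a toy instance (the sizes below are assumptions for illustration) already requires enumerating thousands of feasible integer allocations.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 6, 2, 8                # toy sizes; real problems are far larger
X = rng.standard_normal((n, p))

def obj(alloc):
    # tr[(sum_i 4^{k_i+1} x_i x_i^T)^{-1}]: the weighted-OLS error bound
    A = sum(4.0 ** (ki + 1) * np.outer(x, x) for ki, x in zip(alloc, X))
    return np.trace(np.linalg.inv(A))

feasible = [a for a in itertools.product(range(k + 1), repeat=n)
            if sum(a) <= k]      # every integer allocation with sum <= k
best = min(feasible, key=obj)
print(len(feasible), best)       # thousands of candidates even for n = 6
```

Since the objective strictly decreases when any kᵢ grows, the best allocation always spends the full budget.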
❖ Continuously relaxed optimal quantization:
min_{k₁,…,k_n ≥ 0} tr[(X^⊤WX)^{−1}]  s.t.  Σ_{i=1}^n kᵢ ≤ k
❖ Still a challenging problem…
✴ Non-convexity of the objective!
❖ A re-formulation (change of variables wᵢ = 4^{kᵢ+1}):
min_{w₁,…,w_n} tr[(Σ_{i=1}^n wᵢxᵢxᵢ^⊤)^{−1}]  s.t.  Σ_{i=1}^n log₄ wᵢ ≤ k + n,  wᵢ ≥ 4
✴ Convex objective, but a non-convex feasible set (the constraint function is concave)
✴ Dualizing the constraint yields a convex objective plus a concave budget term
✴ DC (Difference of Convex functions) programming applies
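One way to exploit the DC structure is the convex–concave procedure: repeatedly linearize the concave budget term at the current iterate and solve the resulting convex subproblem. The sketch below is a minimal illustration under assumed values of the multiplier `lam`, the lower bound `w_min`, and a plain projected-gradient inner solver; it is not the algorithm from the talk.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 10, 2
X = rng.standard_normal((n, p))
lam = 0.05    # assumed Lagrange multiplier for the bit-budget term
w_min = 4.0   # w_i = 4^{k_i+1} >= 4 since k_i >= 0

def f(w):
    # Convex part: tr[(sum_i w_i x_i x_i^T)^{-1}]
    return np.trace(np.linalg.inv(X.T @ (w[:, None] * X)))

def g(w):
    # Concave part: lam * sum_i log4(w_i) (penalized bit budget)
    return lam * np.sum(np.log(w) / np.log(4.0))

def grad_f(w):
    Ainv = np.linalg.inv(X.T @ (w[:, None] * X))
    # d/dw_i tr[A^{-1}] = -x_i^T A^{-2} x_i = -||A^{-1} x_i||^2
    return -np.sum((X @ Ainv) ** 2, axis=1)

w = np.full(n, 10.0)
obj = [f(w) + g(w)]
for _ in range(30):
    # Linearize g at the current iterate: gradient c_i = lam / (w_i ln 4)
    c = lam / (w * np.log(4.0))
    # Convex subproblem min_w f(w) + c^T w, by projected gradient descent
    for _ in range(300):
        w = np.clip(w - 0.2 * (grad_f(w) + c), w_min, None)
    obj.append(f(w) + g(w))
print(obj[0], obj[-1])   # the DC objective decreases across iterations
```

Because the linearization upper-bounds the concave term, each outer iteration can only decrease the true DC objective.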
❖ How to obtain integral solutions? “Sparsify” k*
✴ Idea 1: round each kᵢ* to the nearest integer
✴ Problem: rounding may increase the objective significantly
✴ Idea 2: simple sampling
❖ Repeat k times: sample index i from the distribution proportional to k*, then update kᵢ ← kᵢ + 1
✴ Problem: slow convergence (requires a large budget k)
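The simple-sampling step above can be sketched as follows (the relaxed allocation `k_star` is an illustrative stand-in, not a solved instance):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the relaxed (continuous) bit allocation k* from the solver
k_star = np.array([3.2, 0.7, 1.9, 2.2])
k_total = 8

# Idea 2: draw k_total indices from the distribution proportional to k*
# and give each drawn measurement one more bit.
p = k_star / k_star.sum()
k_int = np.zeros_like(k_star, dtype=int)
for i in rng.choice(len(k_star), size=k_total, p=p):
    k_int[i] += 1

print(k_int)   # integral allocation with k_int.sum() == k_total
```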
✴ Idea 3: effective resistance sampling
✴ Advantage: fast convergence (the required budget k is independent of the condition numbers of X or W)
Effective resistance of measurement i: rᵢ = wᵢ* · xᵢ^⊤[X^⊤W*X]^{−1}xᵢ
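A sketch of the effective-resistance computation and the resulting sampling step (the weights below are illustrative stand-ins for the relaxed solution w*):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 8, 2
X = rng.standard_normal((n, p))

# Stand-in for the relaxed weights w_i* = 4^{k_i*+1}
w = rng.uniform(1.0, 10.0, size=n)

# Effective resistance of measurement i under weights w:
# r_i = w_i * x_i^T (X^T W X)^{-1} x_i  (leverage scores of W^{1/2} X)
A_inv = np.linalg.inv(X.T @ (w[:, None] * X))
r = w * np.einsum("ij,jk,ik->i", X, A_inv, X)
print(r.sum())  # equals p: effective resistances sum to the rank of X

# Sparsify: draw the k bit increments proportionally to r
k_total = 10
draws = rng.choice(n, size=k_total, p=r / r.sum())
k_int = np.bincount(draws, minlength=n)
print(k_int)    # integral allocation with k_int.sum() == k_total
```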
❖ Most important question: how to solve the (continuous) relaxation globally?
❖ Some ideas:
✴ Is the objective quasi-convex or directionally convex?
✴ Are local minima also global, or approximately global? Escaping-saddle-point methods?
✴ Are there adequate convex relaxations?
min tr[(Σ_{i=1}^n 4^{kᵢ+1} xᵢxᵢ^⊤)^{−1}]  s.t.  Σ_{i=1}^n kᵢ ≤ k