Generalized Majorization-Minimization
Sobhan Naderi (Google Research), Kun He (Facebook Reality Labs), Reza Aghajani (UCSD), Stan Sclaroff (Boston University), Pedro Felzenszwalb (Brown University)
ICML 2019, Long Beach, CA, USA
○ Expectation Maximization (EM)
○ Convex Concave Procedure (CCP)
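EM and CCP are both instances of the MM template: repeatedly build an upper bound on the objective and minimize the bound. A minimal sketch on a hypothetical 1-D objective f(x) = (x - 2)²/2 + 0.5|x| (a toy example, not from the paper) illustrates one MM step:

```python
# Toy 1-D objective (hypothetical, chosen for a closed-form MM update):
#   f(x) = (x - a)^2 / 2 + lam * |x|
a, lam = 2.0, 0.5

def f(x):
    return 0.5 * (x - a) ** 2 + lam * abs(x)

def bound(x, x_t):
    # Quadratic majorizer that touches f at x_t (the MM constraint):
    # |x| <= x^2 / (2|x_t|) + |x_t| / 2, with equality at x = x_t.
    return 0.5 * (x - a) ** 2 + lam * (x * x / (2 * abs(x_t)) + abs(x_t) / 2)

x = 1.0
history = [f(x)]
for _ in range(100):
    # Minimizing the touching bound has a closed form for this family:
    # x_{t+1} = a |x_t| / (|x_t| + lam).
    x = a * abs(x) / (abs(x) + lam)
    history.append(f(x))
# x converges to the optimum 1.5, and because each bound touches f at the
# current iterate, the objective values in `history` never increase.
```

The touching condition is what makes the descent argument work: f(x_{t+1}) ≤ g(x_{t+1}; x_t) ≤ g(x_t; x_t) = f(x_t).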
MM constraint: the bound must touch the objective at the current iterate, which guarantees a non-increasing sequence of objective values.
Is this touching constraint necessary?
Family of valid bounds at iteration t.
Bound selection strategies:
○ E.g. MM corresponds to always selecting the bound that touches the objective at the current iterate.
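Writing g_t for the bound selected at iteration t and x_t for the iterate (a reconstruction of the setup in standard MM notation, not necessarily the paper's exact symbols), the selection conditions can be stated as:

```latex
\begin{align*}
  \text{(validity)} \quad        & g_t(x) \ge f(x) \quad \forall x, \\
  \text{(G-MM selection)} \quad  & g_t(x_t) \le g_{t-1}(x_t), \\
  \text{(MM, special case)}\quad & g_t(x_t) = f(x_t).
\end{align*}
```

The MM touching choice automatically satisfies the relaxed G-MM condition, since g_{t-1} dominates f everywhere; G-MM simply allows any other valid bound that does not increase the value at x_t.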
G-MM constraint: the sequence of bound values is non-increasing (Theorems 1 and 2).
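Under the relaxed constraint, the bound at each step need not touch the objective; it only needs to be valid and not increase the bound value at the current iterate. A sketch on the same toy objective f(x) = (x - 2)²/2 + 0.5|x| (hypothetical, not the paper's experiments), where the bound family consists of quadratic majorizers anchored at any z ≠ 0 and anchors are drawn with decaying random perturbations:

```python
import random

a, lam = 2.0, 0.5  # toy objective f(x) = (x - a)^2 / 2 + lam * |x|

def bound(x, z):
    # Majorizer of f anchored at z: dominates f for every z != 0,
    # and touches f only when z = x.
    return 0.5 * (x - a) ** 2 + lam * (x * x / (2 * abs(z)) + abs(z) / 2)

def argmin_bound(z):
    # Closed-form minimizer of bound(., z).
    return a * abs(z) / (abs(z) + lam)

random.seed(0)
x, z_prev = 1.0, 1.0
vals = []                          # accepted bound values b_t(x_t)
for t in range(60):
    # Candidate anchors: the MM choice plus random perturbations whose
    # scale decays over iterations. Keep any candidate satisfying the
    # relaxed (G-MM style) condition bound(x, z) <= bound(x, z_prev).
    cands = [x] + [x + random.gauss(0.0, 0.5 ** t) for _ in range(5)]
    valid = [z for z in cands
             if abs(z) > 1e-9 and bound(x, z) <= bound(x, z_prev) + 1e-12]
    z = random.choice(valid)       # z = x (the touching choice) is always valid
    vals.append(bound(x, z))
    x = argmin_bound(z)
    z_prev = z
# The bound values in `vals` are non-increasing, and x still approaches
# the optimum 1.5 even though most accepted bounds do not touch f.
```

The `random.choice` over valid anchors is where this sketch injects randomness into the optimization; a deterministic preference over the valid set would instead encode a bias.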
Qualitative analysis of the solutions found by MM (figure b) and G-MM (figure c).
○ G-MM is less sensitive to initialization.
○ G-MM converges to solutions that have better objective values and perform better on the task.
○ G-MM can inject randomness into the optimization framework through the choice of bound.
○ G-MM can incorporate biases into the optimization framework through the choice of bound.