SLIDE 1 A Conditional-Gradient-Based Augmented Lagrangian Framework
Alp Yurtsever (alp.yurtsever@epfl.ch)
École Polytechnique Fédérale de Lausanne (EPFL)
Joint work with Olivier Fercoq (Télécom ParisTech) and Volkan Cevher (EPFL)
ICML 2019, Long Beach
SLIDE 2
Algorithm 1: CGM for smooth minimization
Input: x_1 ∈ X
for k = 1, 2, ... do
    η_k = 2/(k + 1)
    s_k = arg min_{x ∈ X} ⟨∇f(x_k), x⟩
    x_{k+1} = x_k + η_k (s_k − x_k)
end for
[Figure: one CGM step, showing the level set {x : f(x) ≤ f(x_k)}, the direction −∇f(x_k), and the points x_k, s_k, x_{k+1} in X]
min_{x ∈ X} f(x)
Conditional Gradient Method (CGM)
(Frank & Wolfe, 1956) (Hazan, 2008) (Jaggi, 2013)
- X ⊂ R^n is a convex compact set
- f : X → R is a smooth convex function
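To make the recursion concrete, here is a minimal CGM sketch (illustrative, not the authors' code); the feasible set, an ℓ1 ball with its signed-vertex lmo, and the quadratic objective are assumptions for this example:

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle for the l1 ball:
    argmin_{||s||_1 <= radius} <grad, s> is a signed vertex."""
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])
    return s

def cgm(grad_f, x0, n_iters=1000, radius=1.0):
    """CGM with the classic step size eta_k = 2/(k + 1)."""
    x = x0.copy()
    for k in range(1, n_iters + 1):
        eta = 2.0 / (k + 1)
        s = lmo_l1_ball(grad_f(x), radius)
        x = x + eta * (s - x)  # convex combination: stays in the ball
    return x

# Example: minimize ||x - b||^2 / 2 over the unit l1 ball.
b = np.array([2.0, 0.5, -0.2])
x_hat = cgm(lambda x: x - b, np.zeros(3))  # -> close to [1, 0, 0]
```

Each iterate is a convex combination of lmo outputs, which is what makes the rank-1/sparse-update structure possible.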
SLIDE 3
(Hazan, 2008) (Yurtsever et al., 2017)
Motivation: Solving Large-Scale SDP

min_{x ∈ X} f(x)
- X ⊂ R^n is a convex compact set
- f : X → R is a smooth convex function
When X is the PSD cone with bounded trace:
→ lmo is cheap (Arithmetic Scalability)
→ updates are rank-1 (Storage Scalability)
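The "cheap lmo" claim can be illustrated with a toy sketch (an assumption-laden example, not the paper's implementation): over {X ⪰ 0, trace(X) ≤ α}, the linear minimization oracle only needs one extreme eigenpair, computed here with dense `eigh` (large-scale codes would use a Lanczos-type solver):

```python
import numpy as np

def lmo_psd_trace(G, alpha=1.0):
    """argmin over {X PSD, trace(X) <= alpha} of <G, X>.
    The minimizer is rank-1: alpha * v v^T for the eigenvector v
    of G's smallest eigenvalue (or 0 if G is already PSD)."""
    w, V = np.linalg.eigh((G + G.T) / 2)  # symmetrize for safety
    if w[0] >= 0:
        return np.zeros_like(G)  # zero matrix is optimal
    v = V[:, 0]
    return alpha * np.outer(v, v)

G = np.diag([-1.0, 2.0])
S = lmo_psd_trace(G, alpha=1.0)  # -> [[1, 0], [0, 0]]
```

With the equality constraint trace(X) = α (as in the MaxCut SDP), the oracle always returns α v vᵀ; either way the output is rank-1, which is the storage-scalability point.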
SLIDE 4
(Hazan, 2008) (Yurtsever et al., 2017)
This paper: A new CGM-type method based on augmented Lagrangian
Motivation: Solving Large-Scale SDP
min_{x ∈ X} f(x)   s.t.   Ax ∈ K

- X ⊂ R^n is a convex compact set
- f : X → R is a smooth convex function
- A : X → R^d is a given linear map
- K is a simple convex set

When X is the PSD cone with bounded trace:
→ lmo is cheap (Arithmetic Scalability)
→ updates are rank-1 (Storage Scalability)
SLIDE 5
(QP formulation)

min_{x ∈ X} f(x) + (λ/2) dist²(Ax, K)
(QP formulation) → (Original problem) as λ → +∞
Start with some λ:
- apply a CGM step w.r.t. the QP formulation
- increase λ at each iteration
(Yurtsever et al., 2018)
CGM via Quadratic Penalty: HCGM

dist(Ax, K) = O(1/√k)  &  |f(x) − f*| = O(1/√k)
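These steps can be sketched in a toy setting (an assumption, not the paper's code: K = {b} so Ax = b, X the probability simplex, and λ grown as λ₀√(k+1)):

```python
import numpy as np

def hcgm(grad_f, A, b, n, lam0=1.0, n_iters=2000):
    """Homotopy CGM sketch: CGM steps on f(x) + (lam/2)||Ax - b||^2
    over the probability simplex, with lam increased every iteration."""
    x = np.ones(n) / n
    for k in range(1, n_iters + 1):
        lam = lam0 * np.sqrt(k + 1)              # penalty schedule
        g = grad_f(x) + lam * A.T @ (A @ x - b)  # penalized gradient
        s = np.zeros(n)
        s[np.argmin(g)] = 1.0                    # simplex lmo: a vertex
        x = x + (2.0 / (k + 1)) * (s - x)
    return x

# Toy instance: minimize ||x - c||^2 / 2 over the simplex s.t. x[0] = 0.5.
c = np.array([0.0, 1.0, 0.0])
x_hat = hcgm(lambda x: x - c,
             np.array([[1.0, 0.0, 0.0]]), np.array([0.5]), n=3)
```

The iterate stays feasible for X at every step; only the constraint Ax ∈ K is satisfied asymptotically, at the O(1/√k) rate above.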
SLIDE 6
Performance of HCGM
[Plots: MaxCut · Clustering · Generalized Eig.Vec.]
SLIDE 7 (AL formulation)
L_λ(x, y) := f(x) + min_{r ∈ K} { ⟨y, Ax − r⟩ + (λ/2) ‖Ax − r‖² }
max_{y ∈ R^d} min_{x ∈ X} L_λ(x, y)
Start with some λ:
- apply a CGM step on x
- apply a dual ascent step on y
- increase λ at each iteration
CGM via augmented Lagrangian: CGAL

dist(Ax, K) = O(1/√k)  &  |f(x) − f*| = O(1/√k)
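A hedged sketch of this loop in the same toy setting as before (K = {b}, probability simplex; the decaying dual step size is a simplified stand-in, not the paper's dual step-size rule):

```python
import numpy as np

def cgal(grad_f, A, b, n, lam0=1.0, n_iters=2000):
    """CGAL sketch for the constraint Ax = b over the simplex:
    a CGM step on the augmented Lagrangian in x, then a damped
    dual ascent step in y, with lam grown every iteration."""
    x = np.ones(n) / n
    y = np.zeros(A.shape[0])
    for k in range(1, n_iters + 1):
        lam = lam0 * np.sqrt(k + 1)
        g = grad_f(x) + A.T @ (y + lam * (A @ x - b))  # grad_x of L_lam
        s = np.zeros(n)
        s[np.argmin(g)] = 1.0                          # simplex lmo
        x = x + (2.0 / (k + 1)) * (s - x)              # primal CGM step
        y = y + (A @ x - b) / np.sqrt(k + 1)           # dual ascent (simplified)
    return x, y

# Same toy instance: minimize ||x - c||^2 / 2 over the simplex s.t. x[0] = 0.5.
c = np.array([0.0, 1.0, 0.0])
x_hat, y_hat = cgal(lambda x: x - c,
                    np.array([[1.0, 0.0, 0.0]]), np.array([0.5]), n=3)
```

Compared with the quadratic penalty, the dual variable y absorbs part of the constraint pressure, which is what allows CGAL to outperform HCGM with the same λ schedule.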
SLIDE 8
Poster today: Pacific Ballroom #194
CGAL vs HCGM
[Plots: MaxCut · Clustering · Generalized Eig.Vec.]
SLIDE 9 Coming soon: Sketchy-CGAL
MaxCut on the (1,441,295 × 1,441,295)-dimensional belgium-osm street network from the DIMACS10 Implementation Challenge Library (Cevher-Tropp-Yurtsever, 2019)
minimize_X   −(1/4)⟨L, X⟩
subject to   diag(X) = 1,  trace(X) = n,  X is PSD.
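For intuition about this formulation, a toy check (a hypothetical 3-node graph, not the belgium-osm instance): any cut vector x ∈ {−1, +1}^n gives a feasible X = x xᵀ, and (1/4)⟨L, X⟩, the quantity the SDP maximizes, counts the cut edges:

```python
import numpy as np

# Hypothetical 3-node path graph (tiny stand-in for belgium-osm).
Adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
L = np.diag(Adj.sum(axis=1)) - Adj    # graph Laplacian

x = np.array([1.0, -1.0, 1.0])        # cut: {1, 3} vs {2}
X = np.outer(x, x)                    # feasible: diag(X) = 1, trace(X) = n, PSD
cut_value = 0.25 * np.trace(L @ X)    # -> 2.0 (both edges are cut)
```

The SDP relaxes the rank-1 set {x xᵀ : x ∈ {−1, +1}^n} to all PSD matrices with unit diagonal, which is what makes it tractable at this scale.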