

  1. Distributed Robustness Analysis. Anders Hansson, Division of Automatic Control, Linköping University. August 22–23, 2016.

  2. Outline: Robustness Analysis; Chordal Sparsity in Semidefinite Programming; Domain- and Range-Space Decomposition; Proximal Splitting Methods; Domain-Space Decomposition Revisited; Interior-Point Methods; Summary.

  3. Robustness Analysis. Consider the following uncertain system:

      p = Gq,   q = ∆(p),                                             (1)

  where G ∈ RH_∞^{p×m} is a transfer function matrix and ∆ : L_2^p → L_2^m is a bounded and causal operator. The uncertain system in (1) is said to be robustly stable if the interconnection between G and ∆ remains stable for all ∆ in some class.

  4. Integral Quadratic Constraints. Let ∆ : L_2^p → L_2^m be a bounded and causal operator. This operator is said to satisfy the IQC defined by Π, i.e. ∆ ∈ IQC(Π), if

      ∫_0^∞ [v; ∆(v)]ᵀ Π [v; ∆(v)] dt ≥ 0,   ∀ v ∈ L_2^p,             (2)

  where Π is a bounded and self-adjoint operator. Assuming that Π is linear time-invariant and has a transfer function matrix representation, the IQC in (2) can be written in the frequency domain as

      ∫_{−∞}^{∞} [v̂(jω); ∆̂(v)(jω)]* Π(jω) [v̂(jω); ∆̂(v)(jω)] dω ≥ 0,   (3)

  where v̂ and ∆̂(v) are the Fourier transforms of the signals.

  5. Stability Theorem. Theorem (IQC analysis). The uncertain system in (1) is robustly stable if
  1. for all τ ∈ [0, 1] the interconnection described in (1), with τ∆, is well-posed;
  2. for all τ ∈ [0, 1], τ∆ ∈ IQC(Π);
  3. there exists ε > 0 such that

      [G(jω); I]* Π(jω) [G(jω); I] ⪯ −εI,   ∀ ω ∈ [0, ∞].             (4)

  Proof: see Megretski and Rantzer, 1997.

  6. Example. If ∆ is a linear operator, i.e. q = ∆p with ∆ = δI, δ ∈ [−1, 1], then

      Π(jω) = [X(jω)  Y(jω); Y(jω)*  −X(jω)],

  where X(jω) = X(jω)* ⪰ 0 and Y(jω) = −Y(jω)*. Typically Π is parameterized with basis functions.
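The structure of this multiplier can be checked numerically. A minimal sketch (with constant real matrices standing in for X(jω) and Y(jω), which in general are frequency dependent): the skew part of Π cancels in the quadratic form for q = δp, leaving (1 − δ²)·pᵀXp ≥ 0 for |δ| ≤ 1.

```python
import numpy as np

# Sanity check: for Pi = [[X, Y], [Y.T, -X]] with X >= 0 and Y = -Y.T,
# the IQC quadratic form for q = delta*p reduces to (1 - delta^2)*p.T X p,
# which is nonnegative whenever |delta| <= 1.
rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n))
X = M @ M.T                      # symmetric positive semidefinite
S = rng.standard_normal((n, n))
Y = S - S.T                      # skew-symmetric
Pi = np.block([[X, Y], [Y.T, -X]])

for delta in np.linspace(-1.0, 1.0, 21):
    p = rng.standard_normal(n)
    vec = np.concatenate([p, delta * p])
    form = vec @ Pi @ vec
    # The skew part of Pi contributes nothing to the quadratic form.
    assert abs(form - (1 - delta**2) * (p @ X @ p)) < 1e-9
    assert form >= -1e-9
```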

  7. Collection of Uncertain Systems. Consider a collection of uncertain systems:

      p_i = G_pq^i q_i + G_pw^i w_i
      z_i = G_zq^i q_i + G_zw^i w_i                                   (5)
      q_i = ∆_i(p_i),

  and let p = (p_1, ..., p_N), q = (q_1, ..., q_N), w = (w_1, ..., w_N) and z = (z_1, ..., z_N).

  8. Interconnection of Uncertain Systems.

      [w_1; w_2; ...; w_N] = [Γ_11 Γ_12 ... Γ_1N; Γ_21 Γ_22 ... Γ_2N; ...; Γ_N1 Γ_N2 ... Γ_NN][z_1; z_2; ...; z_N],   (6)

  i.e. w = Γz, where each block Γ_ij is a 0-1 matrix. Interconnected uncertain system:

      p = G_pq q + G_pw w
      z = G_zq q + G_zw w                                             (7)
      q = ∆(p)
      w = Γz,

  where G_⋆• = diag(G_⋆•^1, ..., G_⋆•^N) and ∆ = diag(∆_1, ..., ∆_N).

  9. Lumped Formulation. Eliminate w:

      p = Ḡq,   q = ∆(p),                                             (8)

  where Ḡ = G_pq + G_pw (I − ΓG_zw)^{−1} ΓG_zq. The interconnected uncertain system is robustly stable if there exists a matrix Π̄ such that

      [Ḡ(jω); I]* Π̄(jω) [Ḡ(jω); I] ⪯ −εI,   ∀ ω ∈ [0, ∞],            (9)

  for some ε > 0. This LMI is dense.
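The elimination of w can be verified at a single frequency. A sketch with random placeholder matrices standing in for the transfer matrix values (sizes and the interconnection Γ are arbitrary choices, not from the slides): solving the loop equations directly must reproduce Ḡq.

```python
import numpy as np

# Check the lumped formula Gbar = G_pq + G_pw (I - Gamma G_zw)^{-1} Gamma G_zq
# against a direct solve of the loop equations z = G_zq q + G_zw w, w = Gamma z.
rng = np.random.default_rng(1)
nq, npp, nz, nw = 4, 4, 3, 3
G_pq = rng.standard_normal((npp, nq))
G_pw = rng.standard_normal((npp, nw))
G_zq = rng.standard_normal((nz, nq))
G_zw = 0.1 * rng.standard_normal((nz, nw))  # small, so I - Gamma G_zw is invertible
Gamma = np.eye(nw, nz)                      # placeholder 0-1 interconnection matrix

Gbar = G_pq + G_pw @ np.linalg.solve(np.eye(nw) - Gamma @ G_zw, Gamma @ G_zq)

# Direct solve of the interconnection for a random q.
q = rng.standard_normal(nq)
w = np.linalg.solve(np.eye(nw) - Gamma @ G_zw, Gamma @ G_zq @ q)
p = G_pq @ q + G_pw @ w
assert np.allclose(Gbar @ q, p)
```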

  10. Sparse Formulation. Theorem. Let ∆ ∈ IQC(Π̄). If there exist Π̄ and X = xI ≻ 0 such that

      [G_pq G_pw; G_zq G_zw; I 0; 0 I]* [Π̄_11 0 Π̄_12 0; 0 −ΓᵀXΓ 0 ΓᵀX; Π̄_21 0 Π̄_22 0; 0 XΓ 0 −X] [G_pq G_pw; G_zq G_zw; I 0; 0 I] ⪯ −εI,   (10)

  for ε > 0 and for all ω ∈ [0, ∞], then the interconnected uncertain system in (7) is robustly stable.

  11. Sparsity in SDPs. General SDP (with a new definition of x):

      minimize_{S,x}   cᵀx                                            (11a)
      subject to   F_0 + Σ_{i=1}^m x_i F_i + S = 0,   S ⪰ 0,          (11b)

  with S ∈ S^n, x ∈ R^m, c ∈ R^m and F_i ∈ S^n for i = 0, ..., m. The slack variable S inherits the sparsity pattern of the problem data. Solvers like DSDP (Benson and Ye, 2005) and SMCP (Andersen, Dahl and Vandenberghe, 2010) make use of this structure.

  12. Sparsity Graph. A sparsity pattern is a set E ⊆ {{i, j} | i, j ∈ {1, 2, ..., n}}. A matrix A ∈ S^n is said to have sparsity pattern E if A_ij = A_ji = 0 whenever i ≠ j and {i, j} ∉ E, or equivalently A ∈ S^n_E. The graph G = (V, E) with V = {1, 2, ..., n} is called the sparsity graph associated with the sparsity pattern.
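Sparsity graphs and their chordality can be explored programmatically. A small sketch using networkx (the 4-cycle example is an illustration, not from the slides): a 4-cycle is the smallest non-chordal graph, and adding one chord makes it chordal with two maximal cliques.

```python
import networkx as nx

# A 4-cycle 1-2-3-4-1 has a chordless cycle of length 4, so it is not chordal.
G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1)])
assert not nx.is_chordal(G)

# Adding the chord {1, 3} splits the cycle into two triangles.
G.add_edge(1, 3)
assert nx.is_chordal(G)

# Maximal cliques of the chordal graph: {1, 2, 3} and {1, 3, 4}.
cliques = sorted(sorted(c) for c in nx.find_cliques(G))
assert cliques == [[1, 2, 3], [1, 3, 4]]
```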

  13. Chordal Graphs and Sparsity Patterns. [8×8 example matrix A: x marks entries in the original sparsity pattern; ∗ marks the fill entries added to make the pattern chordal]

  14. Cliques and Clique Trees. A maximal clique C_i is a maximal subset of V such that its induced subgraph is complete. A tree of maximal cliques in which C_i ∩ C_j, for i ≠ j, is contained in every clique on the path connecting C_i and C_j is said to have the clique intersection property. (Such a tree always exists.)

  15. Sparse Cholesky Factorization. A sparsity pattern E is chordal if and only if any positive definite matrix A ∈ S^n_E has a Cholesky factorization PAPᵀ = LDLᵀ with Pᵀ(L + Lᵀ)P ∈ S^n_E for some permutation matrix P; the permutation is related to the clique intersection property. After permutation, sparse positive definite matrices with a chordal sparsity pattern have sparse Cholesky factorizations with no fill-in.
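The effect of the permutation on fill-in can be shown on a tiny example. A sketch (the arrow matrix and the reversal permutation are illustrative choices): an "arrow" matrix with a dense first row/column has a chordal pattern, but eliminating the dense variable first fills the whole factor, while moving it last gives no fill-in.

```python
import numpy as np

# Arrow matrix: dense first row/column, diagonal elsewhere (positive definite
# by diagonal dominance). Its sparsity pattern is chordal.
n = 6
A = np.eye(n) * n
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = n

def chol_nnz(M):
    """Number of (numerically) nonzero entries in the Cholesky factor."""
    L = np.linalg.cholesky(M)
    return int((np.abs(L) > 1e-12).sum())

# Eliminating the dense corner first couples all remaining variables:
# the Schur complement, and hence the factor, becomes fully dense.
fill_bad = chol_nnz(A)

# Reversal permutation moves the dense row/column last: the sparse variables
# are eliminated without creating any new nonzeros (no fill-in).
P = np.arange(n)[::-1]
fill_good = chol_nnz(A[np.ix_(P, P)])
assert fill_good < fill_bad
```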

  16. Chain of Uncertain Systems. [diagram: a chain of N uncertain systems G_1(s), G_2(s), ..., G_N(s) with uncertainties δ_1, δ_2, ..., δ_N (q_i = δ_i p_i), each coupled to its neighbors through the interconnection signals z_i]

  17. Average CPU Time. [plot: average CPU time in seconds (log scale, 10^−2 to 10^3) versus chain length N (0 to 200) for SeDuMi (lumped), SDPT3 (lumped), DSDP (sparse) and SMCP (sparse)]

  18. Test for Positive Semidefiniteness (Grone et al., 1984). A partially specified matrix A ∈ S^n can be completed to a positive semidefinite matrix if and only if A_{C_i} ⪰ 0, where the C_i are the maximal cliques of the graph of the specified entries. (A_{C_i} denotes the submatrix obtained by picking out the rows and columns indexed by C_i.) Example:

      [1 1/2 ?; 1/2 1 1/3; ? 1/3 1] ⪰ 0 (completable)   ⇔   [1 1/2; 1/2 1] ⪰ 0  and  [1 1/3; 1/3 1] ⪰ 0
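The slide's example can be completed explicitly. A sketch, assuming the unknown entry is set to (1/2)(1/3) = 1/6 (a valid choice for this tridiagonal pattern; the slide itself does not pick a value): both clique submatrices are PSD and the resulting full matrix is PSD.

```python
import numpy as np

# Complete the partial matrix [1 1/2 ?; 1/2 1 1/3; ? 1/3 1] with the
# entry 1/6 and verify that the clique submatrices and the full
# completed matrix are all positive semidefinite.
A = np.array([[1.0, 0.5, 1/6],
              [0.5, 1.0, 1/3],
              [1/6, 1/3, 1.0]])
cliques = [[0, 1], [1, 2]]
for C in cliques:
    # Grone et al.: PSD-ness of the clique submatrices is the whole test.
    assert np.all(np.linalg.eigvalsh(A[np.ix_(C, C)]) >= -1e-12)
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)
```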

  19. Dual SDP. Primal problem again:

      minimize_{S,x}   cᵀx                                            (12a)
      subject to   F_0 + Σ_{i=1}^m x_i F_i + S = 0,   S ⪰ 0,          (12b)

  with chordal S with cliques C_j, j = 1, ..., p. Dual SDP:

      minimize_Z   tr(ZF_0)                                           (13a)
      subject to   tr(ZF_i) = c_i,   i = 1, ..., m                    (13b)
                   Z ⪰ 0                                              (13c)

  20. Domain-Space Decomposition (Fukuda et al., 2000). Write F_i = Σ_{j=1}^p E_j F_j^i E_jᵀ, with E_j containing the columns of the identity matrix indexed by clique C_j. (The splitting is not unique.) Since tr(ZF_i) = Σ_{j=1}^p tr(E_jᵀ Z E_j F_j^i), an equivalent dual problem is:

      minimize_Z   Σ_{j=1}^p tr(Z_{C_j} F_j^0)                        (14a)
      subject to   Σ_{j=1}^p tr(Z_{C_j} F_j^i) = c_i,   i = 1, ..., m (14b)
                   Z_{C_j} ⪰ 0,   j = 1, ..., p                       (14c)

  21. Consensus Constraints. Equivalently, in decoupled form:

      minimize_Z   Σ_{j=1}^p tr(Z_j F_j^0)                            (15a)
      subject to   Σ_{j=1}^p tr(Z_j F_j^i) = c_i,   i = 1, ..., m     (15b)
                   Z_j ⪰ 0,   j = 1, ..., p                           (15c)
                   E_{i,j}ᵀ (E_i Z_i E_iᵀ − E_j Z_j E_jᵀ) E_{i,j} = 0,   ∀ i, j,   (15d)

  where the i are children of j in a clique tree with the clique intersection property, the j range over all non-leaf nodes of the tree, and E_{i,j} contains the columns of the identity matrix indexed by C_i ∩ C_j.
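The splitting behind (14) can be checked numerically. A sketch on the two-clique tridiagonal pattern C_1 = {0, 1}, C_2 = {1, 2} (the concrete F and the choice of assigning the shared diagonal entry to the first clique are illustrative; the splitting is not unique): the clique pieces reassemble F, and the trace identity tr(ZF) = Σ_j tr((E_jᵀZE_j)F_j) holds.

```python
import numpy as np

# Split a tridiagonal F over cliques C1 = {0,1}, C2 = {1,2}: the shared
# (1,1) entry is assigned to F1 (an arbitrary but valid choice).
n = 3
F = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
C = [[0, 1], [1, 2]]
E = [np.eye(n)[:, c] for c in C]          # E_j: columns of I indexed by C_j
F1 = np.array([[2.0, 1.0], [1.0, 3.0]])
F2 = np.array([[0.0, 1.0], [1.0, 2.0]])
assert np.allclose(E[0] @ F1 @ E[0].T + E[1] @ F2 @ E[1].T, F)

# Trace identity: tr(Z F) = sum_j tr((E_j^T Z E_j) F_j) for any symmetric Z.
rng = np.random.default_rng(2)
Z = rng.standard_normal((n, n))
Z = Z + Z.T
lhs = np.trace(Z @ F)
rhs = np.trace(E[0].T @ Z @ E[0] @ F1) + np.trace(E[1].T @ Z @ E[1] @ F2)
assert np.isclose(lhs, rhs)
```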

  22. Range-Space Decomposition (Fukuda et al., 2000). The dual of the previous problem is

      minimize_{x,U}   cᵀx                                            (16a)
      subject to   F_j^0 + Σ_{i=1}^m x_i F_j^i + G_j(U) ⪯ 0,   j = 1, ..., p,   (16b)

  where x ∈ R^m, with

      G_j(U) = E_jᵀ E_{k,j} U_{k,j} E_{k,j}ᵀ E_j − Σ_{i ∈ ch(j)} E_jᵀ E_{i,j} U_{i,j} E_{i,j}ᵀ E_j,

  where U_{i,j} ∈ S^{|C_i ∩ C_j|} and k is the parent of j in the clique tree. (For the root and for the leaves some of the terms are absent.) Often the above LMIs are loosely coupled, i.e. many F_j^i are zero.

  23. Example. Finding x = (x_1, ..., x_4) such that

      [x_1 x_2 0; x_2 x_1 x_3; 0 x_3 x_4] ⪰ 0

  is equivalent to finding (x, u) such that

      [x_1 x_2; x_2 x_1 + u] ⪰ 0   and   [−u x_3; x_3 x_4] ⪰ 0
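This equivalence is easy to probe numerically. A sketch with concrete values (x = (2, 1, 1, 2) and u = −1 are illustrative choices, not from the slides): for any u the two 2×2 blocks reassemble exactly to the 3×3 matrix, and for this x the chosen u makes both blocks PSD.

```python
import numpy as np

# Slide-23 decomposition: the scalar u splits the shared (2,2) entry x1
# between the two clique blocks; their sum reassembles the 3x3 matrix.
x1, x2, x3, x4 = 2.0, 1.0, 1.0, 2.0
u = -1.0
A  = np.array([[x1, x2, 0.0], [x2, x1, x3], [0.0, x3, x4]])
B1 = np.array([[x1, x2], [x2, x1 + u]])
B2 = np.array([[-u, x3], [x3, x4]])
E1 = np.eye(3)[:, [0, 1]]
E2 = np.eye(3)[:, [1, 2]]

# Reassembly holds for any u: (x1 + u) + (-u) = x1 at the shared entry.
assert np.allclose(E1 @ B1 @ E1.T + E2 @ B2 @ E2.T, A)
# For this x, u = -1 certifies PSD-ness of A via the two small blocks.
assert np.all(np.linalg.eigvalsh(B1) >= -1e-12)
assert np.all(np.linalg.eigvalsh(B2) >= -1e-12)
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)
```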

  24. Decomposition and Product-Space Formulation. The feasibility problem from the range-space decomposition can, with v = (x, U), be phrased as

      find   v                                                        (17a)
      subject to   v ∈ C_j,   j = 1, ..., p,                          (17b)

  where

      C_j = { v | F_j^0 + Σ_{i=1}^m x_i F_j^i + G_j(U) ⪯ 0 }.

  Let

      C̄_j = { s_j ∈ R^{|J_j|} | E_{J_j}ᵀ s_j ∈ C_j },   j = 1, ..., p,   (18)

  so that s_j ∈ C̄_j implies E_{J_j}ᵀ s_j ∈ C_j, where E_{J_j} is composed of the rows of the identity matrix indexed by the set J_j, the set of indices i such that v_i is constrained by C_j. Let I_i = { k | i ∈ J_k }, i.e. the set of indices of the constraints that depend on v_i.

  25. Example Revisited. Find (x, u) = (x_1, x_2, x_3, x_4, u) such that

      [x_1 x_2; x_2 x_1 + u] ⪰ 0   and   [−u x_3; x_3 x_4] ⪰ 0.

  Hence J_1 = {1, 2, 5}, J_2 = {3, 4, 5}, and I_1 = {1}, I_2 = {1}, I_3 = {2}, I_4 = {2}, I_5 = {1, 2}.
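The index sets I_i follow mechanically from the J_j. A minimal sketch (variable numbering 1..5 as on the slide) computing each I_i as the set of constraints whose J_j contains i:

```python
# J_j lists the variables entering constraint j; I_i is derived from it as
# the set of constraints that depend on variable i.
J = {1: {1, 2, 5}, 2: {3, 4, 5}}
I = {i: {k for k, Jk in J.items() if i in Jk} for i in range(1, 6)}
assert I == {1: {1}, 2: {1}, 3: {2}, 4: {2}, 5: {1, 2}}
```

Only the shared variable u (index 5) appears in both constraint sets, which is exactly what makes the two LMIs coupled.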
