Tensor completion with hierarchical tensors




  1. Tensor completion with hierarchical tensors. R. Schneider (TUB, Matheon), joint work with H. Rauhut and Z. Stojanac. Berlin, December 2015.

  2. I. Classical and novel tensor formats.
[Figure: dimension tree for d = 5, with transfer tensors B_{1,2,3,4,5}, B_{1,2,3}, B_{4,5}, B_{1,2} and leaf frames U_1, ..., U_5.]
(A format is a representation that is closed under linear algebra manipulations.)

  3. Setting: tensors of order d. High-order tensors are multi-indexed arrays (hypermatrices):
x = (x_1, ..., x_d) ↦ U = U[x_1, ..., x_d] ∈ H, where H := ⊗_{i=1}^d V_i, e.g. H = ⊗_{i=1}^d R^n = R^(n^d).
Main problem: for V_i = R^n, dim H = O(n^d) — the curse of dimensionality! E.g. n = 10 and d = 23, ..., 100, 200 gives dim H ∼ 10^23, ..., 10^100, 10^200; 6.1 · 10^23 is the Avogadro number, and 10^200 is much larger than the estimated number of all atoms in the universe.
Approach: some higher-order tensors can be constructed (data-)sparsely from lower-order quantities. As for matrices, the incomplete SVD
A[x_1, x_2] ≈ Σ_{k=1}^r u_k[x_1] ⊗ v_k[x_2] = Σ_{k=1}^r ũ[x_1, k] · ṽ[x_2, k]
applied to a single split reduces the count only to ♯DOFs ≥ C n^(d/2) = C √N, where N = n^d — still the curse of dimensionality!

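To make the low-rank idea concrete, here is a minimal NumPy sketch of the incomplete (truncated) SVD above; the sizes n_1, n_2 and the rank r are illustrative choices, not values from the talk.

```python
import numpy as np

# Build a test matrix of exact rank r, then recover the rank-r incomplete SVD
# A[x1, x2] ≈ sum_k u_k[x1] (x) v_k[x2]; all sizes here are illustrative.
n1, n2, r = 50, 40, 5
A = np.random.rand(n1, r) @ np.random.rand(r, n2)

W, s, Vt = np.linalg.svd(A, full_matrices=False)
A_r = W[:, :r] * s[:r] @ Vt[:r, :]        # keep the r leading singular pairs

# storage: r*n1 + r*n2 numbers instead of n1*n2
print(np.linalg.norm(A - A_r))            # ~1e-14, since A has exact rank r
```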

  4. Setting (continued). In the same setting, note what we do NOT use: the canonical decomposition for order-d tensors,
U[x_1, ..., x_d] ≈ Σ_{k=1}^r ⊗_{i=1}^d u_i[x_i, k].
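For contrast, a minimal sketch (with illustrative sizes, here d = 3) of what this canonical (CP) form looks like; again, this is not the format used in the talk.

```python
import numpy as np

# Canonical (CP) form: U[x1, x2, x3] = sum_k u_1[x1,k] u_2[x2,k] u_3[x3,k].
n, r = 10, 4
factors = [np.random.rand(n, r) for _ in range(3)]   # u_1, u_2, u_3

U = np.einsum('ik,jk,lk->ijl', *factors)             # sum over the shared index k
print(U.shape)                                       # (10, 10, 10)
```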

  5. Low-rank matrix approximation.
U[x, y] = Σ_{k=1}^r U_1[x, k] U_2[y, k], ♯DOFs = r n_1 + r n_2 ≪ n_1 n_2.
Compressive sensing techniques — matrix completion by Candès, Recht, and others.
There are various ways to reshape U[x_1, ..., x_d] into a matrix. For t ⊂ {1, ..., d} with ♯t =: j, the matricisation is M_t(U) = (A_{x,y}) with x = (x_{t_1}, ..., x_{t_j}) and y collecting the complementary variables; for example, t = {1, ..., j} gives x := (x_1, ..., x_j), y := (x_{j+1}, ..., x_d).
Basic assumption (low-dimensional subspace assumption): M_t(U) ≈ M_t^ε(U), where r_t := rank M_t^ε(U) = O(d) = O(f(ε) log(n^d)) (e.g. f(ε) = 1/ε², motivated by the Johnson–Lindenstrauss lemma).
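A short sketch of this matricisation for the example t = {1, ..., j}: group the first j modes into rows and the rest into columns. The helper name `matricise` is my own.

```python
import numpy as np

def matricise(U, j):
    """M_t(U) for t = {1,...,j}: rows (x_1,...,x_j), columns (x_{j+1},...,x_d)."""
    n = U.shape
    return U.reshape(int(np.prod(n[:j])), int(np.prod(n[j:])))

U = np.random.rand(4, 5, 6, 7)                     # an order-4 tensor
M = matricise(U, 2)                                # M_{1,2}(U)
print(M.shape, np.linalg.matrix_rank(M))           # (20, 42) and its rank
```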

  6. Low-rank matrix approximation (continued).
♯M_t(U) = O(r n^(d−j) + r n^j) — curse of dimensions! A single low-rank matrix factorization cannot circumvent the curse of dimensions. Can we benefit from several matricisations M_{t_1}(U), M_{t_2}(U), ...? Yes, we can!
Idea: replicate the low-rank matrix factorization (HT):
U[x_1, ..., x_j, x_{j+1}, ..., x_d] = Σ_k U_L[x_1, ..., x_j, k] U_R[k, x_{j+1}, ..., x_d],
U_L[x_1, ..., x_j, k] = Σ_{k′} U_LL[k′, x_1, ...] U_LR[..., x_j, k′], etc.
Prototype example: TT (tensor trains):
U[x_1, x_2, ..., x_d] = Σ_{k_1=1}^{r_1} U_1[x_1, k_1] V_1[k_1, x_2, ..., x_d],
V_1[k_1, x_2, x_3, ..., x_d] = Σ_{k_2=1}^{r_2} U_2[k_1, x_2, k_2] V_2[k_2, x_3, ..., x_d], etc., giving
U[x_1, ..., x_d] = Σ_{k_1, ..., k_{d−1}} U_1[x_1, k_1] U_2[k_1, x_2, k_2] ··· U_i[k_{i−1}, x_i, k_i] ··· U_d[k_{d−1}, x_d].

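The recursive splitting above can be sketched as a plain TT-SVD: peel off one mode at a time with a truncated SVD. The function name `tt_svd` and the tolerance `eps` are illustrative, not from the slides.

```python
import numpy as np

def tt_svd(U, eps=1e-10):
    """Decompose U into TT cores U_j[k_{j-1}, x_j, k_j] by successive SVDs."""
    d, ns = U.ndim, U.shape
    cores, r_prev = [], 1
    C = U.reshape(ns[0], -1)                      # first unfolding
    for i in range(d - 1):
        C = C.reshape(r_prev * ns[i], -1)
        Q, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps)))          # truncation rank r_i
        cores.append(Q[:, :r].reshape(r_prev, ns[i], r))
        C = s[:r, None] * Vt[:r, :]               # remainder V_i[k_i, x_{i+1}, ...]
        r_prev = r
    cores.append(C.reshape(r_prev, ns[-1], 1))
    return cores

cores = tt_svd(np.random.rand(3, 4, 5, 6))
print([c.shape for c in cores])                   # chain of (r_{i-1}, n_i, r_i)
```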

  7. Hierarchical subspace approximation, e.g. TT.
Let U ∈ H. For each j = 1, ..., d − 1 we reshape U into a matrix
U[x_1, ..., x_j, x_{j+1}, ..., x_d] =: M_j(U)[x, y] ∈ V_x^j ⊗ (V_y^j)′, where V_x^j := V_1 ⊗ ··· ⊗ V_j, V_y^j := V_{j+1} ⊗ ··· ⊗ V_d.
Low-dimensional subspace assumption: for all j = 1, ..., d − 1 there is a subspace V^j = span{φ_{k_j}[x] = φ_{k_j}[x_1, ..., x_j] : k_j = 1, ..., r_j} ⊂ V_x^j whose dimension r_j is moderate (subspace approximation), with the nestedness property V^{j+1} ⊂ V^j ⊗ V_{j+1}.
Hence we have a tensorial multi-resolution analysis — a tensor MRA, or T-MRA — although we have to modify the classical concept slightly. The unbalanced tree used for TT is only one example of the general dimension trees T.
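The nestedness property can be checked numerically: for a tensor with low TT ranks, an orthonormal basis of V^{j+1} (from the SVD of M_{j+1}(U)) lies in the span of V^j ⊗ V_{j+1}. Everything below — sizes, tolerance, the helper `left_basis` — is an illustrative assumption.

```python
import numpy as np

# Build a random tensor with TT ranks r, then verify V^3 ⊂ V^2 (x) V_3.
n, d, r = 4, 4, 2
ranks = [1] + [r] * (d - 1) + [1]
cores = [np.random.rand(ranks[j], n, ranks[j + 1]) for j in range(d)]
U = cores[0].reshape(n, -1)
for c in cores[1:]:                                # contract cores into full U
    U = (U @ c.reshape(c.shape[0], -1)).reshape(-1, c.shape[2])
U = U.reshape([n] * d)

def left_basis(U, j):                              # orthonormal basis of V^j
    Q, s, _ = np.linalg.svd(U.reshape(n ** j, -1), full_matrices=False)
    return Q[:, s > 1e-10]

B2, B3 = left_basis(U, 2), left_basis(U, 3)
E = np.kron(B2, np.eye(n))                         # spans V^2 (x) V_3
print(np.linalg.norm(B3 - E @ (E.T @ B3)))         # ~0: nestedness holds
```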

  8. Hierarchical subspace approximation (e.g. TT) and tensor MRA.
Nestedness: in a classical MRA one has V_{j+1} ⊂ V_j with V_j = V_{j+1} + W_{j+1}; here V^{j+1} ⊂ V^j ⊗ V_{j+1}, and so far the complement W^{j+1} has been ignored! A recursive SVD (HSVD) yields a two-scale refinement relation: for 1 ≤ k_j ≤ r_j,
φ_{k_j}[x_1, ..., x_{j−1}, x_j] := Σ_{k_{j−1}=1}^{r_{j−1}} Σ_{α_j} U_j[k_{j−1}, α_j, k_j] φ_{k_{j−1}}[x_1, ..., x_{j−1}] ⊗ e_{α_j}[x_j];
for simplicity take e_{α_j}[x_j] = δ_{α_j, x_j}. We need only the cores U_j[k_{j−1}, x_j, k_j], j = 1, ..., d, to define the full tensor U ⇒ complexity O(n r² d):
U[x_1, ..., x_d] = Σ_{k_1, ..., k_{d−1}} U_1[x_1, k_1] U_2[k_1, x_2, k_2] ··· U_i[k_{i−1}, x_i, k_i] ··· U_d[k_{d−1}, x_d].
This is an adaptive MRA, or a non-stationary subdivision-like algorithm, where V^d = span{φ_d}, φ_d[x_1, ..., x_d] = U[x_1, ..., x_d], dim V^d = 1!
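As a sketch of the O(n r² d) storage claim: the cores alone define the full tensor, and a single entry U[x_1, ..., x_d] is a product of d small matrices. All sizes below are illustrative.

```python
import numpy as np

# d cores U_j[k_{j-1}, x_j, k_j] store O(d n r^2) numbers instead of n^d.
d, n, r = 6, 10, 3
ranks = [1] + [r] * (d - 1) + [1]
cores = [np.random.rand(ranks[j], n, ranks[j + 1]) for j in range(d)]

def tt_entry(cores, x):
    v = np.ones((1, 1))
    for core, xi in zip(cores, x):
        v = v @ core[:, xi, :]          # multiply the slice U_j[:, x_j, :]
    return v[0, 0]                      # cost O(d r^2) per entry

print(tt_entry(cores, (0, 1, 2, 3, 4, 5)))
```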

  9. General Hierarchical Tensor (HT) format.
⊲ General hierarchical tensor setting: a given dimension tree ⇒ a manifold!
⊲ Subspace approach (Hackbusch/Kühn, 2009).
[Figure: dimension tree for d = 5 — root B_{1,2,3,4,5} with children B_{1,2,3} and B_{4,5}; B_{1,2,3} branches into B_{1,2} and leaf U_3; B_{1,2} has leaves U_1, U_2; B_{4,5} has leaves U_4, U_5.]
(Example: d = 5, U_i ∈ R^(n × k_i), B_t ∈ R^(k_t × k_{t_1} × k_{t_2}).)
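A sketch of evaluating this d = 5 HT representation by contracting leaves and transfer tensors upward through the tree; all sizes and einsum index labels are illustrative assumptions.

```python
import numpy as np

# Leaves U_i in R^{n x k}, transfer tensors B_t in R^{k_t x k_t1 x k_t2};
# the root rank is 1, so the contraction yields one order-5 tensor.
n, k = 8, 3
U = [np.random.rand(n, k) for _ in range(5)]          # U_1, ..., U_5
B12, B123, B45 = (np.random.rand(k, k, k) for _ in range(3))
Broot = np.random.rand(1, k, k)                       # B_{1,2,3,4,5}

V12  = np.einsum('akl,xk,yl->axy', B12, U[0], U[1])   # frame for modes {1,2}
V123 = np.einsum('bam,axy,zm->bxyz', B123, V12, U[2]) # modes {1,2,3}
V45  = np.einsum('ckl,vk,wl->cvw', B45, U[3], U[4])   # modes {4,5}
T = np.einsum('rbc,bxyz,cvw->xyzvw', Broot, V123, V45)
print(T.shape)                                        # (8, 8, 8, 8, 8)
```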
