Computing the Rank Profile Matrix (and some bonus)


Computing the Rank Profile Matrix (and some bonus)
Journées Nationales du Calcul Formel (JNCF)
Clément Pernet, Laboratoire de l'Informatique du Parallélisme, Univ. Grenoble Alpes, Univ. de Lyon, CNRS, Inria
Cluny, 2 November 2015


Choosing the underlying arithmetic — Using machine word arithmetic

Computing over fixed-size integers: how to compute with (machine-word-size) integers efficiently?
1. Use the CPU's integer arithmetic units + vectorization:
   y += a * b is correct as long as |ab + y| < 2^63, e.g. when |a|, |b| < 2^31
   (scalar: imulq / addq; vectorized: vpmuludq / vpaddq / vpsllq)
2. Use the CPU's floating-point units + vectorization:
   y += a * b is correct as long as |ab + y| < 2^53, e.g. when |a|, |b| < 2^26
   (scalar: mulsd / addsd; vectorized: vmulpd / vaddpd)

C. Pernet (LIP, U. Grenoble Alpes), Computing the Rank Profile Matrix, JNCF, Cluny, 2 nov. 2015


Choosing the underlying arithmetic — Using machine word arithmetic

Exploiting in-core parallelism: ingredients
◮ SIMD (Single Instruction, Multiple Data): one arithmetic unit acting on a vector of data, e.g. 4 × 64 = 256 bits: (x[0], x[1], x[2], x[3]) + (y[0], y[1], y[2], y[3]) computed in one instruction
  ⊲ register widths: MMX 64 bits, SSE 128 bits, AVX 256 bits, AVX-512 512 bits
◮ Pipelining: amortizes the latency of an operation when it is used repeatedly ⇒ throughput of 1 op/cycle for all arithmetic operations considered here
◮ Execution-unit parallelism: multiple arithmetic units acting simultaneously on distinct registers

Choosing the underlying arithmetic — Using machine word arithmetic

SIMD and vectorization: Intel Sandy Bridge micro-architecture
At every clock cycle the scheduler can issue:
◮ 1 floating-point mul × 4 lanes (port 0) and 1 floating-point add × 4 lanes (port 1), on 256-bit registers
or
◮ 1 integer mul × 2 lanes (port 0) and 2 integer adds × 2 lanes (ports 1 and 5), on 128-bit registers

Choosing the underlying arithmetic — Using machine word arithmetic

SIMD and vectorization: Intel Haswell micro-architecture
At every clock cycle the scheduler can issue:
◮ 2 floating-point FMAs × 4 lanes (ports 0 and 1), on 256-bit registers
or
◮ 1 integer mul × 4 lanes and 2 integer adds × 4 lanes (ports 0, 1 and 5)
FMA: fused multiply-accumulate, c += a * b

Choosing the underlying arithmetic — Using machine word arithmetic

SIMD and vectorization: AMD Bulldozer micro-architecture
At every clock cycle the scheduler can issue:
◮ 2 floating-point FMAs × 2 lanes, on 128-bit registers
or
◮ 1 integer mul × 2 lanes and 2 integer adds × 2 lanes
FMA: fused multiply-accumulate, c += a * b

Choosing the underlying arithmetic — Using machine word arithmetic

SIMD and vectorization: Intel Nehalem micro-architecture
At every clock cycle the scheduler can issue:
◮ 1 floating-point mul × 2 lanes (port 0) and 1 floating-point add × 2 lanes (port 1), on 128-bit registers
or
◮ 1 integer mul × 2 lanes and 2 integer adds × 2 lanes


Choosing the underlying arithmetic — Using machine word arithmetic

Summary: 64-bit AXPY throughput

CPU (ISA)                  type  reg. bits  #mul  #add  #FMA  #daxpy/cycle  freq (GHz)  speed of light (Gfops)  in practice (Gfops)
Intel Haswell (AVX2)       INT   256        1     2     -     4             3.5         28                      23.3
                           FP    256        -     -     2     8             3.5         56                      49.2
Intel Sandy Bridge (AVX1)  INT   128        1     2     -     2             3.3         13.2                    12.1
                           FP    256        1     1     -     4             3.3         26.4                    24.6
AMD Bulldozer (FMA4)       INT   128        1     2     -     2             2.1         8.4                     6.44
                           FP    128        -     -     2     4             2.1         16.8                    13.1
Intel Nehalem (SSE4)       INT   128        1     2     -     2             2.66        10.6                    4.47
                           FP    128        1     1     -     2             2.66        10.6                    9.6
AMD K10 (SSE4a)            INT   64         1     2     -     1             2.4         4.8                     -
                           FP    128        1     1     -     2             2.4         9.6                     8.93

Speed of light: CPU freq. × (#daxpy/cycle) × 2.

Choosing the underlying arithmetic — Using machine word arithmetic

Computing over fixed-size integers: resources
◮ Micro-architecture bible: Agner Fog's software optimization resources [www.agner.org/optimize]
◮ Experiments:
  ⊲ dgemm (double): OpenBLAS [http://www.openblas.net/]
  ⊲ igemm (int64_t): Eigen [http://eigen.tuxfamily.org/] and FFLAS-FFPACK [linalg.org/projects/fflas-ffpack]


Choosing the underlying arithmetic — Using machine word arithmetic

Looking into the near future
◮ Intel Skylake & Knights Landing: AVX512-F — 2016 (2017 on Xeons)
  ⊲ enlarges SIMD register width to 512 bits (8 double or int64_t)
  ⊲ same micro-architecture: FMA for FP, separate mul/add for INT
◮ Cannonlake: AVX512-IFMA — beyond 2017
  ⊲ AVX512 extension IFMA (integer FMA): y += a*b on int64_t
  ⊲ but limited to the lower 52 bits of the output (it reuses the FP FMA) ⇒ no advantage for int64_t over double

Choosing the underlying arithmetic — Using machine word arithmetic

Integer packing
32 bits: half the precision, twice the speed (a register holds twice as many float or int32_t lanes as double or int64_t lanes).

Gfops                double  float  int64_t  int32_t
Intel Sandy Bridge   24.7    49.1   12.1     24.7
Intel Haswell        49.2    77.6   23.3     27.4
AMD Bulldozer        13.0    20.7   6.63     11.8


Choosing the underlying arithmetic — Using machine word arithmetic

Computing over fixed-size integers
[Figure: fgemm C = A×B, n = 2000 — speed (Gfops) and bit speed (Gbops) vs. bit size of the modulus p, for float, double and int64 arithmetic mod p; SandyBridge i5-3320M @ 3.3 GHz.]

Take-home message
◮ floating-point arithmetic delivers the highest speed (except in corner cases)
◮ 32 bits is twice as fast as 64 bits
◮ best bit-computation throughput with double-precision floating point


Choosing the underlying arithmetic — Larger field sizes

Larger finite fields: log₂ p ≥ 32
As before:
1. use adequate integer arithmetic
2. reduce modulo p only when necessary

Which integer arithmetic?
1. big integers (GMP)
2. fixed-size multiprecision (Givaro-RecInt)
3. Residue Number Systems (Chinese remainder theorem) ⇒ using moduli that deliver the optimum bit speed

log₂ p    50      100    150
GMP       58.1s   94.6s  140s
RecInt    5.7s    28.6s  837s
RNS       0.785s  1.42s  1.88s
(n = 1000, on an Intel SandyBridge)


Choosing the underlying arithmetic — Larger field sizes

In practice
[Figure: fgemm C = A×B, n = 2000 — speed (Gfops) and bit speed (Gbops) vs. bit size of p for float, double, int64 and RNS arithmetic mod p (up to 35-bit p), and bit speed up to 150-bit p for float, double, int64, RNS and RecInt arithmetic mod p.]

Reductions and building blocks — Outline
1. Choosing the underlying arithmetic: using machine word arithmetic; larger field sizes
2. Reductions and building blocks
3. Gaussian elimination: which reduction; computing rank profiles; algorithmic instances; relation to other decompositions; the small rank case

Reductions and building blocks

Reductions to building blocks
There is a huge number of algorithmic variants for a given computation ⇒ need to structure the design around a set of routines:
◮ focus the tuning effort on a single routine
◮ some operations deliver better efficiency:
  ⊲ in practice: memory access pattern
  ⊲ in theory: better algorithms


Reductions and building blocks

Memory access pattern
[Figure: CPU arithmetic throughput vs. memory speed, 1980–2015.]
◮ the memory wall: communication speed improves more slowly than arithmetic
◮ deep memory hierarchy: RAM → L3 → L2 → L1 → CPU
⇒ need to overlap communication with computation

Design of BLAS 3 [Dongarra et al. 87]
◮ group all operations into matrix products (gemm): work O(n³) >> data O(n²)
⇒ MatMul has become a building block in practice


Reductions and building blocks

Sub-cubic linear algebra
Before 1969: O(n³) for everyone (Gauss, Householder, Danilevskii, etc.)

Matrix multiplication ⇒ O(n^ω):
◮ O(n^2.807) [Strassen 69]
◮ O(n^2.52) [Schönhage 81]
◮ O(n^2.375) [Coppersmith, Winograd 90]
◮ O(n^2.3728639) [Le Gall 14]

Other operations:
◮ Inverse in O(n^ω) [Strassen 69]
◮ QR in O(n^ω) [Schönhage 72]
◮ LU in O(n^ω) [Bunch, Hopcroft 74]
◮ Rank in O(n^ω) [Ibarra et al. 82]
◮ CharPoly in O(n^ω log n) [Keller-Gehrig 85]

⇒ MatMul has become a building block in theoretical reductions


Reductions and building blocks

Reductions: theory and practice
[Diagram: reduction tree — HNF(Z), SNF(Z), Det(Z), LinSys(Z) reduce to MM(Z); LinSys(Z_p), Det(Z_p), MinPoly(Z_p), CharPoly(Z_p) reduce to LU(Z_p), then TRSM(Z_p), then MM(Z_p).]

Common mistrust: fast linear algebra is
✗ never faster
✗ numerically unstable

Lucky coincidence:
✓ the same building block in theory and in practice ⇒ reduction trees are still relevant

Road map towards efficiency in practice:
1. tune the MatMul building block
2. tune the reductions
3. new reductions


Reductions and building blocks

Putting it together: the MatMul building block over Z/pZ
Ingredients [FFLAS-FFPACK library]
◮ compute over Z and delay modular reductions: with ℓ levels of Strassen-Winograd, correct as long as 9^ℓ ⌊k/2^ℓ⌋ ((p−1)/2)² < 2^53
◮ fastest integer arithmetic: double
◮ cache optimizations ⇒ numerical BLAS
◮ Strassen-Winograd, 6n^2.807 + ..., with memory-efficient schedules [Boyer, Dumas, P. and Zhou 09]
  ⊲ trade-offs between extra memory, overwriting the input and the leading constant: e.g. 7.2n^2.807 + ... when fully in place


Reductions and building blocks

Sequential matrix multiplication
[Figure: 2n³/time/10⁹ (Gfops equiv.) vs. matrix dimension, i5-3320M at 2.6 GHz with AVX 1 — FFLAS fgemm over Z/83Z and Z/821Z closely tracks OpenBLAS sgemm; fgemm over Z/1898131Z and Z/18981307Z closely tracks OpenBLAS dgemm.]
◮ p = 83 and p = 1898131: about 1 modular reduction per 10000 multiplications
◮ p = 821 and p = 18981307: about 1 modular reduction per 100 multiplications


Reductions and building blocks

Reductions in dense linear algebra

LU decomposition
◮ block recursive algorithm ⇒ reduces to MatMul ⇒ O(n^ω)

n                1000    5000   10000   15000   20000    ratio 10000→20000
LAPACK-dgetrf    0.024s  2.01s  14.88s  48.78s  113.66s  ×7.63
FFLAS-FFPACK     0.058s  2.46s  16.08s  47.47s  105.96s  ×6.59
(Intel Haswell E3-1270 3.0 GHz, OpenBLAS-0.2.9)

Characteristic polynomial
◮ a new reduction to matrix multiplication in O(n^ω)

n                1000    2000    5000    10000   ratio 5000→10000
magma-v2.19-9    1.38s   24.28s  332.7s  2497s   ×7.5
FFLAS-FFPACK     0.532s  2.936s  32.71s  219.2s  ×6.7
(Intel Ivy Bridge i5-3320 2.6 GHz, OpenBLAS-0.2.9)

Gaussian elimination — Outline
1. Choosing the underlying arithmetic: using machine word arithmetic; larger field sizes
2. Reductions and building blocks
3. Gaussian elimination: which reduction; computing rank profiles; algorithmic instances; relation to other decompositions; the small rank case

Gaussian elimination — Which reduction

The case of Gaussian elimination: which reduction to MatMul?
◮ slab iterative (LAPACK) vs. slab recursive (FFLAS-FFPACK)
◮ tile iterative (PLASMA) vs. tile recursive (FFLAS-FFPACK)


Gaussian elimination — Which reduction

The choice: tile recursive (FFLAS-FFPACK)
◮ sub-cubic complexity: recursive algorithms
◮ data locality

Gaussian elimination — Computing rank profiles

Rank profiles: the first linearly independent columns (resp. rows)
◮ a major invariant of a matrix (echelon form)
◮ Gröbner basis computations (Macaulay matrix)
◮ Krylov methods

Gaussian elimination revealing echelon forms:
◮ A = LSP [Ibarra, Moran and Hui 82]
◮ XA = R [Keller-Gehrig 85]
◮ A = PLE [Jeannerod, P. and Storjohann 13]


Gaussian elimination — Computing rank profiles

Lessons learned (or what we thought was necessary):
◮ treat rows in order
◮ exhaust all columns before considering the next row
◮ slab block splitting required (recursive or iterative)
⇒ similar to partial pivoting; there is a need for more flexible pivoting

Gathering linear-independence invariants
Two ways to look at a matrix, row-wise or column-wise:
◮ row rank profile, column echelon form
◮ column rank profile, row echelon form
Can't a unique invariant capture all this information?

  61. Gaussian elimination Computing rank profiles The rank profile matrix [Dumas, P. and Sultan’15] Definition (Rank Profile matrix) The unique R A ∈ { 0 , 1 } m × n such that any pair of R A ( i, j ) -leading sub-matrix of R A and of A have the same rank. A R 1 2 3 4 1 0 0 0 2 4 5 8 0 0 1 0 1 2 3 4 0 0 0 0 3 5 9 12 0 1 0 0 C. Pernet (LIP, U. Grenoble Alpes) Computing the Rank Profile Matrix JNCF, Cluny, 2 nov. 2015 33 / 51

  62. Gaussian elimination Computing rank profiles The rank profile matrix [Dumas, P. and Sultan’15] Definition (Rank Profile matrix) The unique R A ∈ { 0 , 1 } m × n such that any pair of R A ( i, j ) -leading sub-matrix of R A and of A have the same rank. A R Theorem 1 2 3 4 1 0 0 0 2 4 5 8 0 0 1 0 ◮ RowRP and ColRP read directly on R ( A ) 1 2 3 4 0 0 0 0 3 5 9 12 0 1 0 0 ◮ Same holds for any ( i, j ) -leading submatrix. RowRP = { 1 } ColRP = { 1 } C. Pernet (LIP, U. Grenoble Alpes) Computing the Rank Profile Matrix JNCF, Cluny, 2 nov. 2015 33 / 51

  63. A larger leading submatrix, e.g. the (2,4)-leading one, gives RowRP = {1,2}, ColRP = {1,3}.

  64. The (4,2)-leading submatrix gives RowRP = {1,4}, ColRP = {1,2}.

  65. A PLUQ decomposition can be expanded in block form (block rows separated by “;”):
      A = PLUQ = P · [L 0; M I_{m−r}] · [I_r 0; 0 0] · [U V; 0 I_{n−r}] · Q

  66. Inserting P^T P = I and Q Q^T = I:
      A = P · [L 0; M I_{m−r}] · P^T · P · [I_r 0; 0 0] · Q · Q^T · [U V; 0 I_{n−r}] · Q

  67. Grouping the factors:
      A = ( P·[L 0; M I_{m−r}]·P^T ) · ( P·[I_r 0; 0 0]·Q ) · ( Q^T·[U V; 0 I_{n−r}]·Q ) = 𝓛 · Π_{P,Q} · 𝓤
      with 𝓛 = P·[L 0; M I_{m−r}]·P^T, Π_{P,Q} = P·[I_r 0; 0 0]·Q, and 𝓤 = Q^T·[U V; 0 I_{n−r}]·Q.

  68. Question: when does Π_{P,Q} = R_A?
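Taking the definition literally gives a very naive (O(mn) rank computations, nothing like the fast algorithms of the talk) way to compute R_A: since R_A exists, the rank of each leading submatrix of A equals the number of 1s of R_A in that leading block, so a mixed finite difference of leading-submatrix ranks recovers each entry. A Python sketch with exact rational arithmetic (function names are ours, for illustration):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    m = len(M)
    n = len(M[0]) if m else 0
    r = 0
    for j in range(n):
        # find a pivot in column j, at or below row r
        piv = next((i for i in range(r, m) if M[i][j] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][j] / M[r][j]
            for k in range(j, n):
                M[i][k] -= f * M[r][k]
        r += 1
    return r

def rank_profile_matrix(A):
    """R_A[i][j] = rk(i+1, j+1) - rk(i, j+1) - rk(i+1, j) + rk(i, j),
    where rk(i, j) is the rank of the (i, j)-leading submatrix of A."""
    m, n = len(A), len(A[0])
    rk = [[rank([row[:j] for row in A[:i]]) for j in range(n + 1)]
          for i in range(m + 1)]
    return [[rk[i+1][j+1] - rk[i][j+1] - rk[i+1][j] + rk[i][j]
             for j in range(n)] for i in range(m)]

A = [[1, 2, 3, 4], [2, 4, 5, 8], [1, 2, 3, 4], [3, 5, 9, 12]]
print(rank_profile_matrix(A))
# → [[1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0], [0, 1, 0, 0]]
```

The output matches the R_A of the running example; the point of the talk is to get it for the cost of a single elimination instead.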

  69. Anatomy of a PLUQ decomposition
      Four types of elementary operations:
      Search: finding a pivot

  70. Permutation: moving the pivot to the main diagonal

  71. Normalization: computing L: l_{i,k} ← a_{i,k} / a_{k,k}

  72. Update: applying the elimination: a_{i,j} ← a_{i,j} − a_{i,k} a_{k,j} / a_{k,k}
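The four operations line up one-to-one with the inner loop of an iterative PLUQ elimination. A self-contained Python sketch over the rationals — illustrative only; the search strategy used here (first nonzero entry in row order) is just one of many choices and is not claimed to reveal R_A:

```python
from fractions import Fraction

def pluq(A):
    """Iterative PLUQ sketch showing the four elementary operations.
    Returns (P, M, Q, r): P, Q as permutation lists, and M storing the
    multipliers of L strictly below the diagonal and U on/above it."""
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    P, Q = list(range(m)), list(range(n))
    r = 0
    while r < min(m, n):
        # Search: first nonzero entry of the trailing block, row by row
        pivot = next(((i, j) for i in range(r, m) for j in range(r, n)
                      if M[i][j] != 0), None)
        if pivot is None:
            break
        i, j = pivot
        # Permutation: bring the pivot to position (r, r)
        M[r], M[i] = M[i], M[r]
        P[r], P[i] = P[i], P[r]
        for row in M:
            row[r], row[j] = row[j], row[r]
        Q[r], Q[j] = Q[j], Q[r]
        # Normalization: l_{i,r} <- a_{i,r} / a_{r,r}
        for i in range(r + 1, m):
            M[i][r] /= M[r][r]
        # Update: a_{i,j} <- a_{i,j} - l_{i,r} * a_{r,j}
        for i in range(r + 1, m):
            for jj in range(r + 1, n):
                M[i][jj] -= M[i][r] * M[r][jj]
        r += 1
    return P, M, Q, r

def reconstruct(P, M, Q, r):
    """Undo the factorization: returns the matrix A with A = P L U Q."""
    m, n = len(M), len(M[0])
    L = [[M[i][k] if k < i else (1 if k == i else 0) for k in range(r)]
         for i in range(m)]
    U = [[M[k][j] if j >= k else 0 for j in range(n)] for k in range(r)]
    LU = [[sum(L[i][k] * U[k][j] for k in range(r)) for j in range(n)]
          for i in range(m)]
    A = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            A[P[i]][Q[j]] = LU[i][j]   # LU is the row/column-permuted A
    return A

A = [[1, 2, 3, 4], [2, 4, 5, 8], [1, 2, 3, 4], [3, 5, 9, 12]]
P, M, Q, r = pluq(A)
print(r)  # → 3
assert reconstruct(P, M, Q, r) == A
```

Which of the four operations an algorithm is free to vary is exactly what the next slides classify.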

  73. Impact on the PLUQ decomposition
      Normalization: determines whether L or U is unit diagonal

  74. Update: no impact on the decomposition, only on the scheduling:
      ◮ iterative, tile/slab iterative, recursive
      ◮ left/right looking, Crout

  75. Search: defines the first r values of P and Q

  76. Permutation: impacts all values of P and Q

  77. Problem (reformulation): under what conditions on the Search and Permutation operations does a PLUQ decomposition algorithm reveal RowRP, ColRP, or R_A?

  78. The pivoting matrix
      Definition (pivoting matrix): given a PLUQ decomposition A = PLUQ with rank r, define
      Π_{P,Q} = P · [I_r 0; 0 0] · Q.
      It locates the positions of the pivots in the matrix A.

  79. Problem (rank profile revealing PLUQ decompositions): under which conditions is
      ◮ Π_{P,Q} = R_A ?
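With P and Q stored as permutation lists (position k ↦ row/column index of the k-th pivot), Π_{P,Q} = P·[I_r 0; 0 0]·Q is simply the 0-1 matrix with a 1 at each pivot position. A sketch under that convention (the permutation values below are illustrative, not produced by a specific algorithm):

```python
def pivoting_matrix(P, Q, r, m, n):
    """Pi_{P,Q} = P . [[I_r, 0], [0, 0]] . Q as an m x n 0-1 matrix:
    with P, Q as permutation lists, the k-th pivot (k < r) of A sits at
    row P[k], column Q[k]."""
    Pi = [[0] * n for _ in range(m)]
    for k in range(r):
        Pi[P[k]][Q[k]] = 1
    return Pi

# Hypothetical permutations for a rank-3 PLUQ of a 4x4 matrix:
Pi = pivoting_matrix([0, 1, 3, 2], [0, 2, 1, 3], 3, 4, 4)
print(Pi)
# → [[1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0], [0, 1, 0, 0]]
```

For these particular permutations Π_{P,Q} happens to coincide with the R_A of the running example; in general, whether Π_{P,Q} = R_A depends on the search and permutation strategies — exactly the question the slides pose.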
