

SLIDE 1

A High Performance Computing Course Guided by the LU Factorization

Gregorio Bernabé, Javier Cuenca, Domingo Giménez, Luis P. García and Sergio Rivas

Universidad de Murcia/Universidad Politécnica de Cartagena Scientific Computing and Parallel Programming Group

International Conference on Computational Science June 10-12, 2014

Bernabé et al. (SCPPG) HPC course guided by the LU WTCS, June 10-12, 2014 1 / 24

SLIDE 2

Outline

1. General organization of the course
2. The LU factorization
3. Development of the course
4. Evaluating Teaching


SLIDE 3

Outline

1. General organization of the course
2. The LU factorization
3. Development of the course
4. Evaluating Teaching


SLIDE 4

Course description

Course: Parallel Programming and High Performance Computing
Master in New Technologies in Computer Science, specialization in High Performance Architectures and Supercomputing
- Small class ⇒ high-level students, interested in the subject
- Initiation to research ⇒ techniques for the Master's Thesis
- Guided by the LU factorization


SLIDE 5

Syllabus

- Parallel programming environments: OpenMP, MPI, CUDA
- Matrix computation: sequential algorithms, algorithms by blocks, out-of-core algorithms, parallel algorithms
- Numerical libraries: BLAS, LAPACK, MKL, PLASMA, MAGMA, ScaLAPACK


SLIDE 6

Proposed problem

LU factorization of large matrices on today's heterogeneous computational systems. Students use the LU factorization to develop their own implementations based on:
- efficient use of optimized libraries
- use of different parallel programming paradigms
- out-of-core techniques for large matrices
- combination of the different approaches for clusters of multicore+GPU


SLIDE 7

Outline

1. General organization of the course
2. The LU factorization
3. Development of the course
4. Evaluating Teaching


SLIDE 8

LU factorization by blocks

A basic blocked version of the LU factorization is explained for the students to work with. It is based on four steps:

$$
\begin{pmatrix} A_{00} & A_{01} & A_{02} \\ A_{10} & A_{11} & A_{12} \\ A_{20} & A_{21} & A_{22} \end{pmatrix}
=
\begin{pmatrix} L_{00} & & \\ L_{10} & L_{11} & \\ L_{20} & L_{21} & L_{22} \end{pmatrix}
\begin{pmatrix} U_{00} & U_{01} & U_{02} \\ & U_{11} & U_{12} \\ & & U_{22} \end{pmatrix}
$$

- Step 1: A_{00} = L_{00} U_{00} (unblocked LU factorization of the diagonal block)
- Step 2: A_{0i} = L_{00} U_{0i} (multiple lower triangular systems, giving U_{0i})
- Step 3: A_{i0} = L_{i0} U_{00} (multiple upper triangular systems, giving L_{i0})
- Step 4: A_{ij} ← A_{ij} − L_{i0} U_{0j} (update of the south-east blocks)
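The four steps above translate almost line by line into code. The following is a minimal NumPy sketch (Python is used here only for brevity; the course practicals use C with OpenMP/MPI/CUDA) of the blocked factorization, without the pivoting that a production getrf applies, so it assumes a matrix for which pivoting is unnecessary:

```python
import numpy as np

def lu_unblocked(A):
    """In-place unblocked LU without pivoting (Step 1): L (unit lower
    triangular, diagonal implicit) and U share A's storage."""
    n = A.shape[0]
    for k in range(n - 1):
        A[k+1:, k] /= A[k, k]                               # column of L
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])   # rank-1 update

def lu_blocked(A, b):
    """In-place blocked LU without pivoting, following the four steps."""
    n = A.shape[0]
    for k in range(0, n, b):
        e = min(k + b, n)
        lu_unblocked(A[k:e, k:e])                 # Step 1: diagonal block
        if e < n:
            L00 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
            U00 = np.triu(A[k:e, k:e])
            A[k:e, e:] = np.linalg.solve(L00, A[k:e, e:])        # Step 2: U panel
            A[e:, k:e] = np.linalg.solve(U00.T, A[e:, k:e].T).T  # Step 3: L panel
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]                 # Step 4: update

rng = np.random.default_rng(0)
A0 = rng.random((8, 8)) + 8 * np.eye(8)   # diagonally dominant: safe without pivoting
A = A0.copy()
lu_blocked(A, 3)                           # L and U now stored in A
```

After the call, L (with unit diagonal) and U can be read back from the strictly lower and upper triangles of A and multiplied to recover the original matrix.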


SLIDE 9

Implementations

Different implementations based on the block structure:
- Shared memory: assignment of the work with the blocks to different threads; use of multithread libraries
- Message passing: distribution of blocks to the processes; communication of the blocks needed for local computation
- GPU: use of libraries for GPU; assignment of blocks to CPU and GPU
- Out-of-core: blocks stored in secondary memory and brought to main memory for computation
- Heterogeneous systems: balanced assignment of blocks to the computational components


SLIDE 10

Outline

1. General organization of the course
2. The LU factorization
3. Development of the course
4. Evaluating Teaching


SLIDE 11

Organization and methodology

Students with different knowledge and interests:
- from different universities, degrees and specializations
- from companies
- optional subject
- HPC used in their Master's Thesis
- Master's Thesis on HPC

⇒ Problem-based learning, which favors autonomous work and individual supervision.


SLIDE 12

Initial sessions

- Presentation: the course, its organization, the problem to work with and the tasks to be done by the students
- OpenMP and MPI: two sessions organized outside the general course timetable, for students without previous knowledge of parallel programming


SLIDE 13

Matrix algorithms

- Basic concepts of sparse and dense basic linear algebra routines
- Column- and row-major storage schemes; the concept of leading dimension
- Algorithms by blocks; basic routines
- LU factorization: versions without blocks and by blocks
- Precision issues
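The leading-dimension concept can be illustrated outside C as well. In the small NumPy snippet below (an illustration only, not part of the course material), a block taken from a larger column-major matrix keeps the parent's column stride, which is exactly what the lda argument of BLAS/LAPACK routines describes:

```python
import numpy as np

# Column-major (Fortran-order) storage, as used by BLAS/LAPACK.
ld, n = 6, 4
M = np.zeros((ld, ld), order='F')
B = M[:n, :n]     # an n x n block of the larger matrix: a view, not a copy
# Consecutive elements of a column are contiguous (8 bytes apart for doubles),
# but stepping to the next column jumps ld (not n) elements:
# the block's leading dimension is ld.
print(B.strides)  # (8, 48): 48 = 8 bytes * ld
```

This is why BLAS routines take both the block dimensions (m, n) and the leading dimension lda of the array the block lives in.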


SLIDE 14

Numerical libraries

General structure of numerical libraries, centered on dense linear algebra libraries:

- Basic routines: structure of BLAS; multithread implementations (MKL, GotoBLAS, ATLAS); auto-tuning (ATLAS)
- Higher-level routines: structure of LAPACK; multithread implementations (MKL); alternative approaches (PLAPACK); recent optimization efforts for multicore (PLASMA)


SLIDE 15

Practical on basic algorithms and multithread libraries

Compare the execution times of different versions of the LU:
- sequential, without and with blocks
- by blocks, with the matrix multiplications done by different basic libraries (MKL, GotoBLAS and ATLAS)
- direct calls to the LU in MKL and PLASMA

[Figure: Speed-up of different versions of the LU factorization with respect to the sequential implementation, on a NUMA with 4 hexa-cores.]
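As a rough illustration of how such a comparison can be set up (a Python stand-in for the C practicals; the measured numbers are machine-dependent and only the ordering matters), one can time a library matrix product against a naive one:

```python
import time
import numpy as np

def best_time(f, *args, reps=3):
    """Best-of-reps wall-clock time, to reduce timing noise."""
    times = []
    for _ in range(reps):
        t0 = time.perf_counter()
        f(*args)
        times.append(time.perf_counter() - t0)
    return min(times)

def naive_matmul(A, B):
    """Textbook triple loop: the baseline the optimized libraries beat."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 100
A = np.random.default_rng(1).random((n, n))
Al = A.tolist()
t_blas = best_time(np.matmul, A, A)        # BLAS gemm (MKL, OpenBLAS, ...)
t_naive = best_time(naive_matmul, Al, Al)  # interpreted triple loop
speedup = t_naive / t_blas
```

The same harness, with the problem size increased, is the shape of the measurements behind the speed-up figure above.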


SLIDE 16

GPU

- Basic concepts of GPU programming with CUDA (a course on Advanced Programming of Multicore Architectures runs in the second semester)
- No implementations of the LU for GPU; use of linear algebra libraries for GPU (CULA, CUBLAS, MAGMA)
- Load balancing between CPU and GPU
- Cost of data transfers


SLIDE 17

Shared-memory algorithms

OpenMP versions reusing the ideas from the block algorithms.

Multilevel parallelism:
- two-level OpenMP routines
- OpenMP + multithread libraries
- different numbers of threads at the BLAS level and at the higher level in MKL routines

In the practical, study of the optimal number of OpenMP threads and library threads.

[Figure: Comparison of the execution time of different OpenMP+MKL versions.]
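The two-level combination can be mimicked in a short sketch (Python threads standing in for OpenMP; the tile size and outer thread count are illustrative): each outer-level task updates a disjoint tile of the trailing matrix, while the matrix product inside each task runs on the BLAS library's own threads:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def parallel_update(A, Lcol, Urow, b, outer_threads=2):
    """Step-4-style trailing update A -= Lcol @ Urow, split into b x b tiles.
    Outer level: one task per tile; inner level: the gemm inside each task
    uses the BLAS library's own threads (two levels, as with OpenMP + MKL)."""
    m = A.shape[0]
    def task(i, j):
        # Tiles are disjoint, so concurrent tasks never race on A.
        A[i:i+b, j:j+b] -= Lcol[i:i+b, :] @ Urow[:, j:j+b]
    with ThreadPoolExecutor(outer_threads) as ex:
        futures = [ex.submit(task, i, j)
                   for i in range(0, m, b) for j in range(0, m, b)]
        for f in futures:
            f.result()   # propagate any exception from a task

rng = np.random.default_rng(2)
A = rng.random((6, 6))
Lcol, Urow = rng.random((6, 2)), rng.random((2, 6))
expected = A - Lcol @ Urow
parallel_update(A, Lcol, Urow, b=2)
```

NumPy releases the GIL inside the matrix product, so the outer tasks genuinely overlap; the practical's question of balancing outer threads against library threads appears here as outer_threads versus the BLAS thread count.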


SLIDE 18

Out-of-core algorithms

- Scientific problems with large memory requirements
- Out-of-core linear algebra libraries
- In/Out libraries
- Algorithms for out-of-core LU factorization

In the practical, out-of-core implementations and their combination with OpenMP.

[Figure: Comparison of the execution time of different out-of-core versions.]
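A minimal sketch of the out-of-core idea, using NumPy's memmap as a stand-in for the course's I/O routines: the matrix lives in secondary storage and only one block at a time is brought into main memory, operated on, and written back:

```python
import os
import tempfile
import numpy as np

# Create a matrix in secondary storage (a file on disk).
n, b = 6, 3
path = os.path.join(tempfile.mkdtemp(), "A.dat")
A = np.memmap(path, dtype=np.float64, mode="w+", shape=(n, n))
A[:] = np.arange(n * n, dtype=np.float64).reshape(n, n)
A.flush()
del A                                # nothing kept in main memory

# Bring one b x b block into main memory, compute, write it back.
disk = np.memmap(path, dtype=np.float64, mode="r+", shape=(n, n))
block = np.array(disk[:b, :b])       # explicit copy = the in-core working block
block *= 2.0                         # any in-core computation on the block
disk[:b, :b] = block
disk.flush()
```

A real out-of-core LU repeats this pattern over the blocks of the factorization, scheduling the reads and writes so that blocks are reused while they are in core.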


SLIDE 19

Message-passing algorithms

Some basic ideas for the development of CPU+GPU and message-passing versions of the LU are discussed:
- message-passing linear algebra routines
- libraries for distributed systems (ScaLAPACK)
- distributed-memory LU factorization

In the practical, the paradigms studied are combined with MPI to implement the LU for large matrices on a heterogeneous cluster with 52 cores and 10 GPUs:
- one quad-core + 1 GeForce GPU with 112 cores
- one NUMA node with 4 hexa-cores + 1 Kepler GPU with 2048 cores
- two hexa-cores, each with 1 GeForce GPU with 512 cores
- one node with 2 hexa-cores + 4 GeForce GPUs with 512 cores each + 2 Tesla GPUs with 448 cores each

There are many possible combinations; the students decide which to explore, depending on their interests and the possible application to their work for the Master's Thesis.


SLIDE 20

Final sessions

There are additional sessions:
- Research of the SCPP group, so that students can identify directions to apply in their Master's Thesis:
  - optimization and auto-tuning of parallel linear algebra routines
  - linear algebra routines for more recent systems (Kepler, MIC)
  - application of efficient linear algebra routines to large scientific problems (molecular simulation, electromagnetism, ...)
  - other scientific applications of HPC (parallel metaheuristics, statistical models, ...)
- Control sessions, to discuss the students' approaches and problems when working on the practicals and to guide their work.


SLIDE 21

Outline

1. General organization of the course
2. The LU factorization
3. Development of the course
4. Evaluating Teaching


SLIDE 22

Test to evaluate whether the teaching objectives are fulfilled


SLIDE 23

Conclusions

- The problem-based learning approach has proved positive for a small group of students with different backgrounds and interests.
- The autonomous work has contributed to the understanding of the different issues and to learning how to tackle practical aspects.
- The students are allowed to center some of the practicals on the issues they are most interested in, which allowed us to connect the course with their work for the Master's Thesis.
- The experience is positive, but in successive courses we will try to center the course more on the subject of each student's Master's Thesis. This is difficult because not all the students have decided the subject of their Thesis at the beginning of the course, and because not all subjects can be easily related to High Performance Computing.


SLIDE 24

A High Performance Computing Course Guided by the LU Factorization

Gregorio Bernabé, Javier Cuenca, Domingo Giménez, Luis P. García and Sergio Rivas

Universidad de Murcia/Universidad Politécnica de Cartagena Scientific Computing and Parallel Programming Group

International Conference on Computational Science June 10-12, 2014
