SLIDE 1

University of Iceland High Performance Computing

An introduction
Máni and Hjölli

August 2017

SLIDE 2

In operation

Gardar (decommissioned)

  • Since 2011
  • 12 cores per node
  • 162 nodes (currently)
  • 24GB memory per node

Garpur

  • Since 2016
  • 24/32 cores per node
  • 44 nodes + 3 GPU nodes
  • 128/256GB memory per node
  • 2x Tesla M2090 in each GPU node
  • Is getting an expansion

Jötunn

  • Since 2016
  • 24 cores per node
  • 4 nodes
  • 128GB memory per node

SLIDE 3

Cluster layout - Garpur

SLIDE 4

Cluster Software

Gardar: Rocks Cluster Distribution
Garpur: OpenHPC

  • OS: CentOS 7.2
  • GCC & Intel compilers
  • OpenMPI
  • Python, R, Matlab
  • VASP, GROMACS, PISM

SLIDE 5

Application Process

Are you studying or working at an Icelandic university, or doing a project supported by RANNIS?
  → Send an email to support-hpc@hi.is

Working at a Nordic university?
  → Try the Dellingr resource sharing project: https://dellingr.neic.no/apply/

SLIDE 6

What do I get with an account?

  • SSH login
  • Disk space
      • Home partition: 300GB
      • Work partition: unlimited¹
      • Jötunn disk space is more limited
  • Unlimited CPU hours¹
  • Support from us¹

¹ Within reasonable limits
SLIDE 7

Cluster workflow

You should have received your login credentials by email.

1 Connect with ssh

ssh mani@jotunn.rhi.hi.is

2 Check cluster status

sinfo
squeue

3 Load modules or compile program on login node

module avail
module load ...

4 Create job file

5 Submit job to queue

sbatch myjob.sh

6 Check results

Use slurm directives to send an email when the job completes (a minimal job file sketch follows below).
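Putting steps 4-6 together, a minimal job file might look like the sketch below. This is only an illustration: the job name, node count, email address, modules, and my_program executable are placeholders, not site defaults.

#!/bin/bash
#SBATCH -J myjob                  # job name (placeholder)
#SBATCH -N 1                      # number of nodes
#SBATCH --ntasks-per-node=24      # one task per core (assuming a 24-core node)
#SBATCH --mail-user=mani@hi.is    # where to send the notification
#SBATCH --mail-type=END           # email when the job completes

module purge
module load gnu openmpi           # load the toolchain the program was built with

mpirun ./my_program               # my_program stands in for your own executable

Submit it with sbatch myjob.sh and check the slurm-<jobid>.out file once the completion email arrives.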

SLIDE 8

Modules

Software on the cluster is provided as modules.

Missing software?
  • Only you use it? → install it yourself in your home folder.
  • Other users also need it? → send us a request.

Important commands (example session below):
  module avail
  module load ...
  module purge
  module show ...
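A typical session on the login node might look like this sketch; the gnu and openmpi module names appear elsewhere in these slides, and the prompt follows the convention of the IMB example.

[m@j]$ module avail               # list every module installed on the cluster
[m@j]$ module show gnu            # print what loading the gnu module would change
[m@j]$ module load gnu openmpi    # put the GNU toolchain and OpenMPI on the PATH
[m@j]$ module list                # confirm what is currently loaded
[m@j]$ module purge               # unload everything before switching toolchains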

SLIDE 9

Modules

It is easy to create your own module; a minimal sketch follows below.
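A minimal, purely illustrative sketch of creating a personal modulefile from the shell; mytool, its version, and the install paths are all hypothetical.

# Make a personal modulefile visible to "module avail" (names and paths hypothetical)
mkdir -p ~/modulefiles/mytool
cat > ~/modulefiles/mytool/1.0 <<'EOF'
#%Module1.0
## Tcl modulefile for a program installed under the home directory
prepend-path PATH $env(HOME)/software/mytool/1.0/bin
EOF
module use ~/modulefiles          # add the personal folder to the module search path
module load mytool/1.0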

SLIDE 10

Job Scheduler

Typical slurm job workflow:

1 Decide how many nodes you need and on which partition (himem, default, gpu)

2 Create bash script with slurm directives

#SBATCH -J jobname
#SBATCH -N 2
#SBATCH --ntasks-per-node=2
#SBATCH --mail-user mani@hi.is
#SBATCH --mail-type=END
#SBATCH --array=0-15

3 Submit to queue

sbatch myjob.sh

4 . . . or try running an interactive job

salloc -N 1

Note: this creates a subshell (see the interactive example below).
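A hedged sketch of what an interactive session could look like; running commands through srun inside the allocation is generic slurm usage rather than anything specific to this cluster.

[m@j]$ salloc -N 1                # request one node; a subshell opens once the allocation is granted
[m@j]$ srun hostname              # commands launched with srun run on the allocated node, not the login node
[m@j]$ exit                       # leave the subshell to release the allocation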

SLIDE 11

Rules of thumb

1 Be respectful of others. Don't submit 10 jobs requiring 1 node each at once.

2 Allocate your job to 1 core, half a node, or the whole node (see the sketch below).

3 Keep in mind resources other than CPU cores (e.g. memory).

4 If you know how long your job will run for, allocate only the needed walltime.
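A sketch of how those rules could be expressed as slurm directives; each line shows one option and they are not meant to be combined as-is. The 24-core node size comes from the Garpur/Jötunn figures earlier, and the memory and walltime values are only examples.

#SBATCH -n 1                        # rule 2: a single core
#SBATCH -N 1 --ntasks-per-node=12   # rule 2: half of a 24-core node
#SBATCH -N 1 --exclusive            # rule 2: the whole node
#SBATCH --mem=64G                   # rule 3: request memory explicitly when it is the limiting resource (example value)
#SBATCH -t 02:00:00                 # rule 4: request only the walltime you expect to need (example value)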

SLIDE 12

System status

Check the status of the queue with squeue, or only your own jobs with:

  squeue -u mani

We also have a website with the system status: ihpc.is
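A few common variants, for reference; the partition name comes from the Job Scheduler slide and the user name is the example one used throughout.

[m@j]$ sinfo                      # overview of partitions and node states
[m@j]$ squeue -u mani             # only this user's jobs
[m@j]$ squeue -p gpu              # only jobs waiting or running in the gpu partition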

SLIDE 13

Example - IMB

[mani ~]$ ssh jotunn.rhi.hi.is
[m@j]$ curl -O https://software.intel.com/sites/default/...
[m@j]$ tar -xf IMB_2017_Update2.tgz
[m@j]$ cd imb/src
[m@j]$ sed -i s/mpiicc/mpicc/ make_ict    # build with the GNU MPI wrapper instead of Intel's
[m@j]$ module load gnu openmpi
[m@j]$ make
[m@j]$ vim test-pingpong.sh
...
[m@j]$ chmod +x test-pingpong.sh
[m@j]$ sbatch test-pingpong.sh

SLIDE 14

Example - IMB #2

Contents of test-pingpong.sh:

#!/bin/bash
#SBATCH -J imb                    # job name shown in squeue
#SBATCH -N 2                      # two nodes, so the ping-pong crosses the interconnect
#SBATCH --ntasks-per-node 1       # one MPI rank per node

module purge
module load gnu openmpi

# one thread per rank; --report-bindings prints how ranks were pinned
OMP_NUM_THREADS=1 mpirun --report-bindings IMB-MPI1 PingPong

The job creates a file slurm-34618.out

SLIDE 15

Support

Any questions? Send them to support-hpc@hi.is
