SLIDE 1

Ab Initio modelling of surfaces and surfactants

SLIDE 2

Outline

  • Background
  • System studied
  • Hardware used
  • MPI programming

SLIDE 3

Background

Flotation of minerals is a multi-billion-dollar industry. Much is already known, but there is still much left to investigate.

SLIDE 4

Flotation principles

  • Collectors
  • Frothers
  • Depressants
SLIDE 5

Background - adsorption

[Figure: surfactants adsorbing at the H2O interface, with surfactant concentration C increasing]

  • In-situ adsorption
  • Surfactants

Example of application: ore flotation

  • ZnS, PbS, …
  • Collectors, e.g. xanthates and dithiophosphates

SLIDE 6

Quantum chemical modelling

Modelled structure

[Figure: modelled spectra compared with experimental infrared spectra (absorbance/intensity vs. wavenumber, 1000-1200 1/cm) for adsorbed heptyl xanthate (HepXads) and the HepX ion, with bands marked at 1005, 1023, 1042, 1055, 1066, 1090, 1102, 1131, and 1135 1/cm]

SLIDE 7

Aim of the work

  • Investigate how collectors interact with surfaces
  • Introduce the pseudopotential concept into chemistry
  • Collaboration between experiments and modelling

SLIDE 8

Methods used

  • SCF: Self-Consistent Field
  • MP2: second-order Møller-Plesset perturbation theory
  • DFT: Density Functional Theory (NWChem)
  • DFT in combination with pseudopotentials (AIMPRO)
  • Experimental IR/Raman spectroscopy

SLIDE 9

Geometrical optimization and vibrational mode calculations of ethyl xanthate

SLIDE 10

Execution time for the simulation

Method      Basis set   Functions   Time (s)
SCF         STO-3G      43          8406
SCF         6-311G*     143         100838
SCF         6-311G**    160         143921
MP2         STO-3G      43          25722
MP2         6-311G*     143         402577
MP2         6-311G**    160         455127
DFT LDA     DZVP        106         21408
DFT B3LYP   DZVP        106         70467

Timings using NWChem with different methods and basis sets

SLIDE 11

Molecule in a box vs. cluster calculations

Geometrical optimization of ethyl xanthate:

            Cluster   4 kp    14 kp   Gamma
Timing (s)  233       32154   32580   32379
Speedup     1         137     139     139

  • Box size 15x25x15 Å
  • K-points converge to the gamma point
  • Cluster calculations are about 18 times faster than NWChem using the DZVP basis set and LDA
  • About 400 times faster than NWChem using MP2

Vibrational frequency calculations of ethyl xanthate:

            Cluster   4 kp    14 kp   Gamma
Timing (s)  561       71911   73037   13539
Speedup     1         128     130     131

SLIDE 12

Excellent agreement with both all-electron DFT calculations and experimental results

  • Less than 3.8% deviation from all-electron calculations
  • Less than 4% deviation from experimental results
  • Cluster calculations as accurate as supercell calculations

SLIDE 13

(Potassium) O,O-Dibutyldithiophosphate

  • Several different geometrical conformations
  • Important for the mining industry (flotation process)
  • Short-chained species vital in lubrication

SLIDE 14

Vibrational frequency calculations

Calculated vibrational spectra compared with experimental spectra

SLIDE 15

Adsorption of Heptyl Xanthate on a Germanium surface

  • Calculations of vibrational frequencies
  • Good agreement with experiments
  • ATR-FTIR experiments
  • Bridging conformation on the surface
  • 175 atoms in the supercell
  • 6 k-points
  • Big basis set

SLIDE 16

Hardware used

The HPC2N facilities at Umeå University

  • Sarek 384 processors
  • Seth 256 nodes

The PDC facilities at KTH

  • Lenngren 886 processors
SLIDE 17

Sarek, the HPC2N Opteron Cluster

  • Sarek has a total of 384 processors and 1.54 Tbyte of memory
  • 190 HP DL145 nodes, with dual AMD Opteron 248 (2.2 GHz)
  • 2 HP DL585 nodes, with dual AMD Opteron 248 (2.2 GHz)
  • 1.69 Tflops/s peak performance
  • 1.33 Tflops/s HP Linpack
  • 8 GB memory per node
  • Myrinet 2000 high speed interconnect
SLIDE 18

The network

Myrinet-2000 with MX-2G software

  • MX or MPI latency: 3.2 µs
  • MX or MPI unidirectional data rate
    – 247 MBytes/s (one-port NICs)
    – 495 MBytes/s (two-port NICs)

  • TCP/IP data rate (MX ethernet emulation)
    – 1.98 Gbits/s (one-port NICs)
    – 3.95 Gbits/s (two-port NICs)

SLIDE 19

The nodes

  • 384 CPUs
    – 64-bit AMD Opteron 2.2 GHz
    – 64 kB + 64 kB L1 cache (2-way associative)
    – 1024 kB unified L2 cache (16-way associative)

  • 192 Nodes
  • 11.2 GB/s memory bandwidth
  • 8.8 Gflops/s peak performance
SLIDE 20

Software

  • Ubuntu 6.06 LTS
  • OpenAFS AFS client
  • MX
  • MPICH-MX
  • GotoBLAS
  • ScaLAPACK
  • BLACS
  • FFTW
  • PGI Compiler suite
  • PathScale Compiler suite
SLIDE 21

The Top 500, November 2006

Rank 1: BlueGene/L - eServer Blue Gene Solution (IBM)
  Site: DOE/NNSA/LLNL, United States
  Processors: 131072; Year: 2005; Rmax: 280600 GFlops; Rpeak: 367000 GFlops

Rank 128: Lenngren - PowerEdge 1850, 3.4 GHz, Infiniband (Dell)
  Site: KTH - Royal Institute of Technology, Sweden
  Processors: 886; Year: 2005; Rmax: 4999 GFlops; Rpeak: 6025 GFlops

Sarek is not even on the list any longer

  • Rank 168 in June 2004
  • Rank 224 in November 2004
  • Rank 400 in June 2005
  • Top 30 in 1993
SLIDE 22

MPI: an Introduction

  • Background
  • Basics of MPI message passing
    – Fundamental concepts
    – Simple examples in C
  • Point-to-point communication
  • Collective communication
SLIDE 23

What is MPI?

  • A message-passing library specification
    – not a language or compiler specification
    – not a specific implementation or product
  • For parallel computers, clusters, and heterogeneous networks
  • Designed to provide access to advanced parallel hardware for
    – end users
    – library writers
    – tool developers

SLIDE 24

Why MPI?

  • Early vendor systems were not portable
  • Early portable systems were mainly research efforts by individual groups
  • MPI provides a portable way to express parallel programs
  • The MPI Forum was organized in 1992 with broad participation
  • MPI Standard (1.0) released in 1994
  • MPI Standard (2.0) released in 1997
SLIDE 25

The MPI Architecture

  • SPMD: Single Program, Multiple Data
    – given P processors, run the same program on each processor
  • Datatypes
    – the standard way to describe data in MPI
  • Communicators
    – an abstraction for selecting the participants in a set of communications
  • Two-sided (pair-wise) communication
    – one party sends data and the other receives
  • Collective communication
    – reductions, broadcasts, etc.

SLIDE 26

A minimal MPI Program
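A minimal sketch in C of what such a program looks like (the canonical minimal form, not necessarily the exact code on the slide): every MPI program calls MPI_Init before any other MPI routine and MPI_Finalize at the end.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        /* Initialize the MPI library; must precede any other MPI call */
        MPI_Init(&argc, &argv);

        printf("Hello, world!\n");

        /* Shut down the MPI library; no MPI calls are allowed afterwards */
        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with, e.g., mpirun -np 4, all four processes run this same program: the SPMD model from the previous slide.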

SLIDE 27

A better version
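A sketch of what the better version typically adds (again, not necessarily the slide's exact code): each process asks MPI_COMM_WORLD for its own rank and for the total number of processes.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);

        /* Who am I, and how many processes are there in total? */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }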

SLIDE 28
SLIDE 29

Message Passing

SLIDE 30

MPI identifications

  • A process is identified by its rank in the group associated with a communicator
    – There is a default communicator whose group contains all initial processes, MPI_COMM_WORLD
    – New communicators can be created using MPI_COMM_SPLIT (a sketch follows below)
  • All communications are labeled with a datatype, e.g. MPI_INT
    – This supports communication between processes on machines with different memory representations and lengths of elementary datatypes (heterogeneous communication)
  • The message tag assists the receiving process in identifying the message
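A short C sketch of MPI_COMM_SPLIT in use, dividing MPI_COMM_WORLD into even-rank and odd-rank communicators (the even/odd colouring is only an illustrative choice):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int world_rank, sub_rank, color;
        MPI_Comm subcomm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* Processes passing the same color land in the same new communicator;
           the key (here world_rank) orders the ranks within it */
        color = world_rank % 2;
        MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

        MPI_Comm_rank(subcomm, &sub_rank);
        printf("World rank %d has rank %d in its subcommunicator\n",
               world_rank, sub_rank);

        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }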
SLIDE 31

Blocking send

MPI_SEND(start, count, datatype, dest, tag, comm)

  • The message buffer is described by start, count, datatype
  • The target process is specified by dest, which is the rank of the target process in the communicator specified by comm
  • When this function returns, the data has been delivered to the system and the buffer can be reused. The message may not have been received by the target process.

SLIDE 32

Blocking receive

MPI_RECV(start, count, datatype, source, tag, comm, status)

  • Waits until a matching (on source and tag) message is received from the system, and the buffer can be used
  • source is the rank in the communicator specified by comm
  • status contains further information
  • Receiving fewer than count occurrences of datatype is OK, but receiving more is an error
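A C sketch pairing the blocking send and receive from these two slides: rank 0 sends one integer to rank 1 (the payload 42 and tag 0 are illustrative; at least two processes are assumed).

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* Blocking send: returns once the buffer can be reused */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Blocking receive: waits for a message matching source 0, tag 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("Process 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

The source and tag arguments of MPI_RECV may also be the wildcards MPI_ANY_SOURCE and MPI_ANY_TAG.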

SLIDE 33

MPI is simple

  • Many parallel programs can be written using just these six functions, only two of which are non-trivial:
    – MPI_INIT
    – MPI_FINALIZE
    – MPI_COMM_SIZE
    – MPI_COMM_RANK
    – MPI_SEND
    – MPI_RECV

SLIDE 34

Collective Communication

  • Several collective primitives exist in MPI, for example
    – Broadcast: MPI_BCAST
    – Gather: MPI_GATHER, MPI_GATHERV
    – Scatter: MPI_SCATTER, MPI_SCATTERV
    – All-to-all: MPI_ALLTOALL, MPI_ALLTOALLV
    – Reduction: MPI_REDUCE, MPI_ALLREDUCE
    – Barrier: MPI_BARRIER
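A C sketch of two of these collectives: the root broadcasts a value to all ranks, and a reduction then sums one contribution per rank back onto the root (the values are illustrative).

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, n = 0, contribution, sum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            n = 10;                      /* value only the root knows */

        /* Broadcast: afterwards every rank holds the root's value of n */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

        contribution = rank * n;         /* some per-rank result */

        /* Reduction: sum the contributions from all ranks onto rank 0 */
        MPI_Reduce(&contribution, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Total: %d\n", sum);

        MPI_Finalize();
        return 0;
    }

Unlike point-to-point calls, a collective must be called by every process in the communicator.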

SLIDE 35

MPI Summary

  • The parallel computing community has cooperated on the development of a standard for message-passing libraries
  • There are many implementations, on nearly all platforms
  • MPI subsets are easy to learn and use
  • Lots of MPI material is available