Design of Locality-aware MPI-IO for Scalable Shared File Write Performance
Kohei Sugihara1, Osamu Tatebe2
1 Department of Computer Science, University of Tsukuba 2 Center for Computational Sciences, University of Tsukuba
Design of Locality-aware MPI-IO for Scalable Shared File Write Performance (HPS 2020)
[Figure: file-per-process I/O — P0 and P1 each write their own file — versus a single shared file (SSF) written by both P0 and P1]
[Figure: N-1 write patterns leaving holes in the shared file — (a) N-1 Segmented w/o Resize, (b) N-1 Strided w/ Resize, (c) N-1 Strided w/ Resize]
[Figure: (a) Conventional MPI-IO — P0 and P1 call MPI_File_open(X) and write a single shared File X; (b) Ours: Locality-aware MPI-IO — P0 and P1 call MPI_File_open(X) but write per-process files File X.0 and File X.1]
[Figure: Locality-aware MPI-IO on the Gfarm filesystem — P0 and P1 call MPI_File_open(X); the per-process files File X.0 and File X.1 are stored on the node-local storage of Node #0 and Node #1, respectively]
- Lustre bandwidth saturates when the number of processes exceeds the number of OSTs.
- The proposed locality-aware MPI-IO is scalable.
- BeeOND is not scalable even though it uses the same node-local storage.

[Figure: write bandwidth for IOR (non-collective), S3D-IO, LES-IO, and VPIC-IO]