

  1. 400G Demonstrator for ISC ‘13, Post ISC phase 2013. Wolfgang Wünsch, Technische Universität Dresden; Eduard Beier, T-Systems International

  2. Agenda  Partner  Purpose  Project Structure  Topology (just click!)  Turbine Development  Climate Computing (hyperlinked)  Service Recipient Relations  Data Path  Throughput Targets  The Big Picture  Project Lifetime  Timeline  DATE  Test items – public – E. Beier/ W. Wünsch 400G Demonstrator für ISC’13

  3. Partner Back to Agenda

  4. Purpose The purpose of the project is to demonstrate that bandwidth beyond 100 Gbit/s is feasible and useful Back to Agenda

  5. Project Structure Project Board: Prof. Dr. A. Bode / Prof. Dr. W. Nagel, Dr. A. Kluge, F. Schneider, Prof. Dr. W. Gentzsch, R. Wieneke, M. Zappolino, Dr. A. Geiger, M. Roosen, M. Fuchs, A. Clauberg, T. Weselowski, Jan Heichler Project Management: E. Beier, W. Wünsch, n.n. Work packages:  WP1 Performance Tests (Andy Georgi)  WP2 Parallel Filesystems (Klaus Gottschalk)  WP3 Server & Storage (Beier/Wünsch)  WP4 Transport / WDM (Maskos / Mayer)  WP5 Layer 2/3 (Daniel Nowara)  WP6 SDN & NFV & Security (Ralf Braun)  WP7 Applications (Ferdinand Jamitzky)  WP8 Public Relations (Udo Schäfer) Back to Agenda

  6. WP1: Performance Tests • Performance of subsystems (e.g. storage) and total performance measurements • Feedback for subsystem optimization • Conformance to measurement standards • Input for publications • WP lead: Andy Georgi Back to Project Structure

  7. WP2: Parallel File System • Planning, roll-out, optimization and operation of the Parallel File System in coordination with other WPs and partners • Configure and parameterize the Parallel File System (e.g. TCP buffers) • Coordinate the communication between clusters, file system and network (IP concept) • Input for publications • WP lead: Klaus Gottschalk Back to Project Structure
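The TCP buffer tuning mentioned for WP2 is driven by the bandwidth-delay product of the wide-area link; a minimal sketch, assuming an illustrative 100 Gbit/s path and a 10 ms round-trip time (both placeholder values, not project measurements):

```python
def bdp_bytes(rate_bps: float, rtt_s: float) -> int:
    """Bandwidth-delay product: the amount of data that must be in
    flight to keep a link of the given rate busy over one RTT."""
    return int(rate_bps * rtt_s / 8)

# Illustrative: one 100 Gbit/s path with a 10 ms round-trip time.
print(bdp_bytes(100e9, 0.010))  # 125000000, i.e. ~125 MB per TCP stream
```

Linux sysctls such as net.core.rmem_max and net.ipv4.tcp_rmem would then be raised toward this order of magnitude so that a single stream can fill the pipe.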

  8. WP3: Server & Storage & InfiniBand • Planning, roll-out, optimization and operation of the server, storage and InfiniBand infrastructure in coordination with other WPs and partners • Input for publications • WP lead: Project Management Back to Project Structure

  9. WP4: Transport • Planning, roll-out, optimization and operation of the fiber and WDM infrastructure in coordination with other WPs and partners • Input for publications • WP lead: Stefan Maskos (Planning) / Heinz Mayer (Technology) Back to Project Structure

  10. WP5: Layer 2/3 • Planning, roll-out, optimization and operation of the router infrastructure in coordination with other WPs and partners • Input for publications • WP lead: Daniel Nowara Back to Project Structure

  11. WP6: SDN & NFV & Security • Evaluate SDN and NFV approaches • Set up a security concept in coordination with the partners • Implement that concept • Input for publications • WP lead: Ralf Braun (T-Labs) Back to Project Structure

  12. WP7: Applications • Coordination of the application teams • Input for publications • WP lead: Ferdinand Jamitzky Back to Project Structure

  13. WP8: Public Relations • Coordinate partners and activities towards optimum project marketing • Coordinate press release activities • Produce and maintain project PR material (flyers, articles, etc.) • Coordinate ISC booth activities (flyer, logo, sessions, poster, giveaways, etc.) • Coordinate the ISC application demonstration (incl. Internet access) • WP lead: Udo Schäfer Back to Project Structure

  14. Topology [Diagram: 400G Demonstrator topology; 10GbE for the demonstrator; Computing Center Euro Industriepark München] Back to Agenda

  15. Turbine Development  Cooperation with DLR  Workflow Demonstration  Preprocessing  Solver 1  Solver 2  Postprocessing  Turbine model calculation with n Eigenmodes and m Phase Angles Back to Agenda
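The workflow above runs one preprocessing/solver/postprocessing chain per (eigenmode, phase angle) combination, i.e. n * m independent simulations. A minimal sketch; n = 4 and m = 7 are one hypothetical factorisation of the 28 runs mentioned on a later slide, not figures stated here:

```python
from itertools import product

def turbine_runs(n_eigenmodes: int, m_phase_angles: int):
    """One (preprocessing -> solver 1 -> solver 2 -> postprocessing)
    chain exists per (eigenmode, phase angle) combination."""
    return list(product(range(1, n_eigenmodes + 1),
                        range(1, m_phase_angles + 1)))

runs = turbine_runs(4, 7)  # hypothetical n = 4, m = 7
print(len(runs))           # 28 independent simulation chains
```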

  16. Turbine Development: Benefits of GPFS Usage on 400G Details:  Data volume: ~ 1 TB  Overall workflow:  Multitude of independent simulation runs (HTC).  Simulations running on HPC resources at different sites.  Every simulation produces input data for subsequent simulations.  Subsequent simulations again run at different sites. Thus, to avoid knock-on delays in workflow execution, data should be instantly available at the different sites! GPFS:  Adopted features: Active File Management (AFM) and Stretched Cluster  Cross-site data replication allows running simulations without prior copying  Implicit, consistent data backup via AFM data replication Back to Turbine Development

  17. Turbine Development: Benefits of GPFS Usage on 400G [Chart: number of cores over time (Δt ≈ 240 min); Solver 1 and Solver 2 jobs scheduled with a = 6 and b = 5 parallel slots; n * m = 28 runs on a * b = 30 slots] Back to Turbine Development

  18. Turbine Development: Benefits of GPFS Usage on 400G 400G: Bandwidth requirements for different job distribution setups  Extreme/HTC setup with a = n * m = 300, b = 1: Assuming all jobs write an avg. file size of 150 MB to disk within 15 min (i.e. write peak): Required bandwidth: 400 Gbit/s Required machine size: > 19200 cores (when single jobs run on 64 cores)  "Gentle" setup with a = 50, b = 6: Assuming jobs with an avg. runtime of 240 min continuously writing 150 MB of data to disk, to represent runtime differences over larger values of b: Required bandwidth: 4 Gbit/s Required machine size: > 3200 cores (when single jobs run on 64 cores) Back to Turbine Development
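The extreme-setup figure can be approximated with a back-of-the-envelope calculation; the ~1 s burst window below is our reading of the "write peak" assumption, not a number stated on the slide:

```python
def peak_gbps(jobs: int, file_bytes: float, window_s: float) -> float:
    """Aggregate write bandwidth needed if all jobs flush their
    output files within the same time window."""
    return jobs * file_bytes * 8 / window_s / 1e9

# Extreme/HTC setup: 300 jobs, 150 MB each, flushed in a ~1 s burst.
print(peak_gbps(300, 150e6, 1.0))  # 360.0, i.e. of the order of 400 Gbit/s
```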

  19. Turbine Development Setup [Diagram:] 1. Preprocessing @ DSI 2. Flow Model Calculation (Solver 1) 3. Flow Model Calculation (Solver 2) @ Chemnitz 4. Postprocessing @ DSI Back to Turbine Development

  20. Turbine Development & GPFS [Diagram: sites, incl. Chemnitz, connected via the GPFS parallel distributed file system] Back to Turbine Development

  21. Climate Computing  On the order of 30 different models are used worldwide  Experiments with these models produce 10s of PBytes of data  100s of PBytes of data need to be compared between multiple sites worldwide  Movement of data should be possible within months*  *Otherwise the questions will be forgotten ;-) Transfer rate / time to transport 1 PByte: 10 Mbps ~ 27 years  1 Gbps ~ 97 days  100 Gbps ~ 23 hours Statistics taken from: "BER Network Requirements Workshop", LBNL report LBNL-4089E, 2010, p. 33 (recommended reading) Extremely high bandwidth requirements: 'Very Big Data' Back to Agenda
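The transfer-time table can be re-derived from first principles; the sketch below uses decimal units (1 PByte = 10^15 bytes) and ignores protocol overhead, which is presumably why its results differ slightly from the slide's rounded figures:

```python
def transfer_seconds(payload_bytes: float, rate_bps: float) -> float:
    """Ideal time to move a payload over a link of the given rate,
    ignoring protocol overhead and retransmissions."""
    return payload_bytes * 8 / rate_bps

PB = 1e15  # decimal petabyte
print(transfer_seconds(PB, 10e6) / (365 * 86400))  # ~25.4 years at 10 Mbps
print(transfer_seconds(PB, 1e9) / 86400)           # ~92.6 days at 1 Gbps
print(transfer_seconds(PB, 100e9) / 3600)          # ~22.2 hours at 100 Gbps
```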

  22. Climate Computing Application Setup [Diagram: CMIP folders 1–3 federated across sites; preallocation; model post-processing and CMIP analysis; visualisation @ ISC ’13 Leipzig] Back to Climate Computing

  23. CCA & GPFS & iRODS [Diagram: GPFS and/or iRODS providing a global namespace] Back to Climate Computing

  24. Service Recipient Relations [Diagram: services mapped to recipients]  Turbine Development: TRACE on HPC resources  distributed folders  PREP & POST on cloud resources  client evaluating results, e.g. TECPLOT  Climate Computing: distributed folders  federation, preallocation  research client resources Back to Agenda
