Building manycore processor-to-DRAM networks using monolithic silicon photonics


  1. Building manycore processor-to-DRAM networks using monolithic silicon photonics
     Ajay Joshi†, Christopher Batten†, Vladimir Stojanović†, Krste Asanović‡
     †MIT, 77 Massachusetts Ave, Cambridge MA 02139
     ‡UC Berkeley, 430 Soda Hall, MC #1776, Berkeley, CA 94720
     {joshi, cbatten, vlada}@mit.edu, krste@eecs.berkeley.edu
     High Performance Embedded Computing (HPEC) Workshop, 23-25 September 2008

  2. Manycore systems design space

  3. Manycore system bandwidth requirements

  4. Manycore systems – bandwidth, pin count and power scaling
     Server & HPC design point: 1 Byte/Flop, 8 Flops/core @ 5 GHz
     Mobile client design point
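A rough back-of-the-envelope, sketched below, shows the bandwidth demand implied by the server/HPC design point on this slide; the 256-core count is borrowed from the 22 nm design point used later in the talk, and every number here is an illustrative assumption rather than a measured result.

```python
# Hedged sketch: memory bandwidth implied by 1 Byte/Flop at 8 Flops/core and
# 5 GHz. The 256-core count is an assumption taken from a later slide.
flops_per_core_per_cycle = 8
clock_hz = 5e9
bytes_per_flop = 1.0
cores = 256

per_core_bw = flops_per_core_per_cycle * clock_hz * bytes_per_flop  # 40 GB/s
chip_bw = per_core_bw * cores                                        # ~10 TB/s

print(f"per-core demand: {per_core_bw / 1e9:.0f} GB/s")
print(f"chip-level demand: {chip_bw / 1e12:.1f} TB/s")
```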

  5. Interconnect bottlenecks
     [Diagram: manycore system cores -> interconnect network -> caches -> interconnect network -> DRAM DIMMs]
     Bottlenecks due to energy and bandwidth density limitations

  6. Interconnect bottlenecks
     [Same diagram as slide 5]
     Bottlenecks due to energy and bandwidth density limitations
     Need to jointly optimize the on-chip and off-chip interconnect networks

  7. Outline
     - Motivation
     - Monolithic silicon photonic technology
     - Processor-memory network architecture exploration
     - Manycore system using silicon photonics
     - Conclusion

  8. Unified on-chip/off-chip photonic link
     - Supports dense wavelength-division multiplexing that improves bandwidth density
     - Uses monolithic integration that reduces energy consumption
     - Utilizes the standard bulk CMOS flow

  9. Optical link components: 65 nm bulk CMOS chip designed to test various optical devices

  10. Silicon photonics area and energy advantage

      Link type                                           Energy (pJ/b)   Bandwidth density (Gb/s/μm)
      Global on-chip photonic link                        0.25            160-320
      Global on-chip optimally repeated electrical link   1               5
      Off-chip photonic link (50 μm coupler pitch)        0.25            13-26
      Off-chip electrical SERDES (100 μm pitch)           5               0.1
      On-chip/off-chip seamless photonic link             0.25            -
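To make the table concrete, the sketch below converts the per-bit energies into link power at a common target bandwidth; the 1 TB/s target is an assumption chosen only for illustration, not a figure from the talk.

```python
# Hedged sketch: link power = energy-per-bit x bandwidth, using the table's
# pJ/b figures. The 1 TB/s (8 Tb/s) target bandwidth is an assumed example.
energy_pj_per_bit = {
    "on-chip photonic link": 0.25,
    "on-chip repeated electrical link": 1.0,
    "off-chip photonic link": 0.25,
    "off-chip electrical SERDES": 5.0,
}

target_bits_per_s = 8e12  # 1 TB/s

for link, pj in energy_pj_per_bit.items():
    power_w = pj * 1e-12 * target_bits_per_s
    print(f"{link:34s}: {power_w:5.1f} W at 1 TB/s")
```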

  11. Outline
      - Motivation
      - Monolithic silicon photonic technology
      - Processor-memory network architecture exploration
        - Baseline electrical mesh topology
        - Electrical mesh with optical global crossbar topology
      - Manycore system using silicon photonics
      - Conclusion

  12. Baseline electrical system architecture
      [Figure: mesh physical view and mesh logical view; C = core, DM = DRAM module]
      - Access point (AP) per DM, distributed across the chip
      - Two on-chip electrical mesh networks
      - Request path: core -> access point -> DRAM module
      - Response path: DRAM module -> access point -> core

  13. Interconnect network design methodology
      - Ideal throughput and zero-load latency used as design metrics
      - An energy-constrained approach is adopted
      - Energy components in a network:
        - Mesh energy (Em): router-to-router links (RRL) and routers
        - I/O energy (Eio): logic-to-memory links (LML)
      [Flow diagram: total energy budget -> flit width and LML width -> mesh, router, and LML energy -> mesh throughput, I/O throughput, and zero-load latency]
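The sizing loop sketched below follows the flow on this slide: split an energy budget between the mesh and the I/O links, size the flit and LML widths to fit it, then derive throughput and zero-load latency. All per-bit energies, channel counts, hop counts, and the budget split are assumptions for illustration; the talk's actual values come from its 22 nm link models.

```python
# Hedged sketch of the energy-constrained sizing flow (all constants assumed).
def size_network(budget_j_per_cycle,
                 mesh_fraction=0.5,           # assumed mesh vs. I/O budget split
                 e_rrl=1.0e-12,               # J/bit, router-to-router link
                 e_router=0.5e-12,            # J/bit, router datapath
                 e_lml=5.0e-12,               # J/bit, logic-to-memory link
                 mesh_channels=512, io_channels=16,
                 avg_hops=8, msg_bits=512, freq_hz=2.5e9):
    e_mesh = budget_j_per_cycle * mesh_fraction
    e_io = budget_j_per_cycle * (1.0 - mesh_fraction)

    # Widths that just exhaust each budget if every channel switches each cycle.
    flit_width = e_mesh / (mesh_channels * (e_rrl + e_router))
    lml_width = e_io / (io_channels * e_lml)

    mesh_bw = flit_width * mesh_channels * freq_hz    # bits/s
    io_bw = lml_width * io_channels * freq_hz         # bits/s

    # Zero-load latency: hop traversal plus on-chip and off-chip serialization.
    zll_cycles = avg_hops + msg_bits / flit_width + msg_bits / lml_width
    return min(mesh_bw, io_bw), zll_cycles

bw, zll = size_network(8e-9)   # the deck's 8 nJ/cycle budget
print(f"throughput ~ {bw / 1e12:.1f} Tb/s, zero-load latency ~ {zll:.0f} cycles")
```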

  14. Network throughput and zero-load latency
      (22 nm tech, 256 cores @ 2.5 GHz, 8 nJ/cycle energy budget)
      - System throughput limited by the on-chip mesh or the I/O links
      - On-chip mesh could be over-provisioned to overcome the mesh bottleneck
      - Zero-load latency limited by data serialization

  15. Network throughput and zero-load latency (build slide; same content as slide 14)

  16. Network throughput and zero-load latency (build slide; plot annotated with over-provisioning factors OPF:1, OPF:2, OPF:4)

  17. Network throughput and zero-load latency (build slide; plot annotated with the on-chip and off-chip serialization components)
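The OPF annotations on these plots suggest the simple relationship sketched below: over-provisioning multiplies mesh bandwidth (at extra mesh energy) until the off-chip I/O links take over as the limiter. The bandwidth values are normalized and assumed, not taken from the talk.

```python
# Hedged sketch of the over-provisioning factor (OPF) sweep; numbers assumed.
def system_throughput(mesh_bw, io_bw, opf):
    return min(mesh_bw * opf, io_bw)

mesh_bw, io_bw = 1.0, 2.5   # normalized, illustrative
for opf in (1, 2, 4):
    tput = system_throughput(mesh_bw, io_bw, opf)
    limiter = "mesh" if mesh_bw * opf < io_bw else "I/O"
    print(f"OPF:{opf} -> throughput {tput:.1f} (limited by {limiter})")
```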

  18. Outline
      - Motivation
      - Monolithic silicon photonic technology
      - Processor-memory network architecture exploration
        - Baseline electrical mesh topology
        - Electrical mesh with optical global crossbar topology
      - Manycore system using silicon photonics
      - Conclusion

  19. Optical system architecture
      [Figure: mesh physical view and mesh logical view; C = core, DM = DRAM module]
      - Off-chip electrical links replaced with optical links
      - Electrical-to-optical conversion at the access point
      - Wavelengths in each optical link distributed across various core-DRAM module pairs

  20. Network throughput and zero-load latency
      - Reduced I/O cost improves system bandwidth
      - Latency reduced due to lower serialization latency
      - The on-chip network is the new bottleneck

  21. Network throughput and zero-load latency (build slide; same content as slide 20)

  22. Optical multi-group system architecture
      [Figure: Ci = core in group i, DM = DRAM module, S = global crossbar switch]
      - Break the single on-chip electrical mesh into several groups
      - Each group has its own smaller mesh
      - Each group still has one AP for each DM
      - More APs -> each AP is narrower (uses fewer λs)
      - Use the optical network as a very efficient global crossbar
      - Need a crossbar switch at the memory for arbitration
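A small sketch of the grouping arithmetic: more groups mean more, narrower access points. The 256-core / 16-DRAM-module configuration matches the simulation slides, but the 64-wavelength budget per DRAM module is an assumption borrowed from the waveguide slide in the backup section.

```python
# Hedged sketch: how grouping splits cores, access points (APs), and
# wavelengths. The 64-wavelength budget per DRAM module is assumed.
cores, dram_modules = 256, 16
lambdas_per_dm = 64

for groups in (1, 4, 16):
    cores_per_group = cores // groups
    aps_per_dm = groups                      # one AP per DM in every group
    lambdas_per_ap = lambdas_per_dm // aps_per_dm
    print(f"{groups:2d} groups: {cores_per_group:3d} cores/group, "
          f"{aps_per_dm:2d} APs per DM, {lambdas_per_ap:2d} lambdas per AP")
```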

  23. Network throughput vs zero-load latency
      - Grouping moves traffic from energy-inefficient mesh channels to energy-efficient photonic channels
      - Grouping and silicon photonics together provide a 10x-15x throughput improvement
      - Grouping reduces zero-load latency in the photonic range, but increases it in the electrical range

  24. Simulation results
      (256 cores, 16 DRAM modules, uniform random traffic)
      - Grouping -> 2x improvement in bandwidth at comparable latency
      - Overprovisioning -> 2x-3x improvement in bandwidth for small group counts at comparable latency; minimal improvement for large group counts

  25. Simulation results
      (256 cores, 16 DRAM modules, uniform random traffic)
      - Replacing off-chip electrical links with photonics (Eg1x4 -> Og1x4): 2x improvement in bandwidth at comparable latency
      - Using the opto-electrical global crossbar (Eg4x2 -> Og16x1): 8x-10x improvement in bandwidth at comparable latency

  26. Outline
      - Motivation
      - Monolithic silicon photonic technology
      - Processor-memory network architecture exploration
      - Manycore system using silicon photonics
      - Conclusion

  27. Simplified 16-core system design

  28. Simplified 16-core system design

  29. Simplified 16-core system design

  30. Simplified 16-core system design

  31. Simplified 16-core system design

  32. Full 256-core system design

  33. Outline
      - Motivation
      - Monolithic silicon photonic technology
      - Processor-memory network architecture exploration
      - Manycore system using silicon photonics
      - Conclusion

  34. Conclusion
      - On-chip network design and memory bandwidth will limit manycore system performance
      - A unified on-chip/off-chip photonic link is proposed to solve this problem
      - Grouping with an optical global crossbar improves system throughput
      - For an energy-constrained approach, photonics provide an 8x-10x improvement in throughput at comparable latency

  35. Backup

  36. MIT Eos1 65 nm test chip
      - Texas Instruments standard 65 nm bulk CMOS process
      - First ever photonic chip in sub-100 nm CMOS
      - Automated photonic device layout
      - Monolithic integration with electrical modulator drivers

  37. [Die photo callouts: two-ring filter, vertical coupler grating, digital driver, ring modulator, one-ring filter, photodetector, paperclips, waveguide crossings, M-Z test structures, 4 ring filter banks]

  38. Optical waveguide
      [Figures: SEM image of a polysilicon waveguide; cross-sectional view of a photonic chip]
      - Waveguide made of polysilicon
      - Silicon substrate under the waveguide is etched away to provide optical cladding
      - 64 wavelengths per waveguide in opposite directions
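A quick, hedged estimate of per-waveguide bandwidth: the per-wavelength data rate is not stated on this slide, so the 2.5 Gb/s used below (the 2.5 GHz core clock from the earlier design point) is purely an assumption.

```python
# Hedged arithmetic: per-waveguide bandwidth with 64 wavelengths, assuming an
# illustrative 2.5 Gb/s per wavelength (not stated in the deck).
wavelengths = 64
gbps_per_lambda = 2.5   # assumed
print(f"~{wavelengths * gbps_per_lambda:.0f} Gb/s per waveguide")  # ~160 Gb/s
```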

  39. Modulators and filters
      [Figures: double-ring resonant filter; resonant racetrack modulator]
      - 2nd-order ring filters used
      - Rings tuned using sizing and heating
      - Modulator is tuned using charge injection
      - Sub-100 fJ/bit energy cost for the modulator driver
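For scale, the sub-100 fJ/bit figure translates into the modulator-driver power estimated below; the per-wavelength data rate is again an assumption, not a number from the deck.

```python
# Hedged arithmetic: power implied by a 100 fJ/bit modulator driver at an
# assumed 2.5 Gb/s per wavelength.
energy_j_per_bit = 100e-15
rate_bps = 2.5e9
print(f"driver power per wavelength < {energy_j_per_bit * rate_bps * 1e3:.2f} mW")
```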

  40. Photodetectors
      - Embedded SiGe used to create photodetectors
      - Monolithic integration enables good optical coupling
      - Sub-100 fJ/bit energy cost required for the receiver
