
BigStation: Enabling Scalable Real-time Signal Processing in Large MU-MIMO Systems


1. BigStation: Enabling Scalable Real-time Signal Processing in Large MU-MIMO Systems
Qing Yang (MSRA and CUHK, Hong Kong), Xiaoxiao Li (MSRA and Tsinghua University, Beijing), Hongyi Yao (MSRA and USTC, Hefei, Anhui), Ji Fang (MSRA and BJTU, Beijing), Kun Tan, Wenjun Hu, Jiansong Zhang, Yongguang Zhang (Microsoft Research Asia, Beijing, China)

2. Motivation
• Demand for more wireless capacity
– Proliferation of mobile devices: wireless access is primary
– Data-intensive applications: video, tele-presence
– "amount of net traffic carried on wireless will exceed the amount of wired traffic by 2015" (Cisco VNI 2011-2016)

3. Motivation
• Demand for more wireless capacity
– Proliferation of mobile devices: wireless access is primary
– Data-intensive applications: video, tele-presence
– "amount of net traffic carried on wireless will exceed the amount of wired traffic by 2015" (Cisco VNI 2011-2016)
• Can we engineer the next wireless network to match the existing wired network: giga-bit wireless throughput to every user?

4. How to Gain More Wireless Capacity
• More spectrum (DSA)
– Spectrum is a scarce, shared resource, and there is a limit
• Spectrum reuse (micro cells, pico cells, ...)
– Existing cells are already small (like Wi-Fi)
– Increased deployment and management complexity
• Spatial multiplexing (MU-MIMO)
– More promising

5. Background: MU-MIMO
• An access point (AP) with m antennas transmits to / receives from multiple mobile stations with n total client antennas, using joint signal processing
• Uplink (zero-forcing detection): $Y = HS$, recover $\hat{S} = (H^*H)^{-1}H^*Y$
• Downlink (zero-forcing precoding): transmit $X = H^*(HH^*)^{-1}S$, so each client receives $Y = HX = S$
• In theory, capacity scales linearly with the number of AP antennas
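To make the zero-forcing algebra above concrete, here is a minimal NumPy sketch; the dimensions, channel matrix, and QPSK symbols are made-up examples rather than values from the talk:

```python
# A minimal sketch of the zero-forcing math on slide 5, using NumPy.
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 4                      # m AP antennas, n total client antennas
H = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))   # uplink channel

# Uplink: AP receives Y = H S and recovers S by zero-forcing.
S = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=(n,))  # QPSK symbols
Y = H @ S
S_hat = np.linalg.inv(H.conj().T @ H) @ H.conj().T @ Y       # (H*H)^-1 H* Y
assert np.allclose(S_hat, S)

# Downlink: AP precodes X = H*(H H*)^-1 S so each client sees its own symbol
# (channel reciprocity is assumed here).
Hd = H.conj().T                  # n x m downlink channel
X = Hd.conj().T @ np.linalg.inv(Hd @ Hd.conj().T) @ S
assert np.allclose(Hd @ X, S)
```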

6. How Many Antennas Do We Need
• ... for a giga-bit wireless link per user (PHY rates by channel width and antenna count):

  # of antennas       1      2      4      8      16     32     64     128
  20MHz  (802.11n)    72.2M  144M   289M   578M   1.2G   2.3G   4.6G   9.2G
  40MHz  (802.11n)    150M   300M   600M   1.2G   2.4G   4.8G   9.6G   19.2G
  80MHz  (802.11ac)   325M   650M   1.3G   2.6G   5.2G   10.4G  20.8G  41.6G
  160MHz (802.11ac)   650M   1.3G   2.6G   5.2G   10.4G  20.8G  41.6G  83.2G

• Large-scale MU-MIMO systems: giga-bit to 20 concurrent users requires a 160MHz channel with at least 40 antennas

7. Challenge
• Can we build a scalable AP to support such large-scale MU-MIMO operation when n, and with it m, grows large?
[Figure: AP with m antennas and joint signal processing serving mobiles with n total client antennas]

8. Computation and Throughput Requirements: a Back-of-the-Envelope Estimate
• Setting: 160MHz channel, 40 antennas
• Data path:
– 160MHz channel width → ~5 Gbps of samples per antenna
– 40 antennas → 200 Gbps in total
• Computation (m AP antennas, n client antennas, s sample rate):
– Channel inversion (once every frame): $O(mn^2)$ → 269 GOPS
– Spatial demultiplexing/precoding: $O(mns)$ → 1.5 TOPS
– Channel decoding: $O(ns)$ → 5.5 TOPS
– 7.27 TOPS in total!
• A state-of-the-art multi-core CPU achieves only ~50 GOPS
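The data-path figures can be re-derived with one line of arithmetic; the 16-bit I/Q sample width below is an assumption, since the slide only gives the ~5 Gbps and ~200 Gbps totals:

```python
# A rough re-derivation of the data-path numbers on slide 8.
CHANNEL_HZ = 160e6          # 160 MHz channel -> 160 Msamples/s at Nyquist rate
BITS_PER_SAMPLE = 2 * 16    # complex sample: 16-bit I + 16-bit Q (assumed)
ANTENNAS = 40

per_antenna_bps = CHANNEL_HZ * BITS_PER_SAMPLE      # ~5.1 Gbps per antenna
total_bps = per_antenna_bps * ANTENNAS              # ~205 Gbps aggregate
print(f"{per_antenna_bps / 1e9:.1f} Gbps per antenna, "
      f"{total_bps / 1e9:.0f} Gbps total")          # matches the ~5 / ~200 Gbps
```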

9. A Single Central Processing Unit
[Figure: all m AP antennas feed a single joint signal processing unit at the AP, serving mobiles with n total client antennas]

10. BigStation: Parallelizing to Scale
[Figure: the BigStation AP replaces the single unit with many simple processing units connected by an inter-connecting network, sitting between the m AP antennas and the n total client antennas]

11. Outline
• Parallel architecture
• Parallel algorithms and optimization
• Performance
• Conclusion

12. Naive Architecture
• A pool of processing servers
– Send all samples of the same frame to one server
• Enough processing capability with $\lceil t_p / t_f \rceil$ servers (frame processing time $t_p$ over frame duration $t_f$)

13. Naive Architecture
• Issue: long processing latency for each frame (~1 s)
• Wireless protocols require responses within milliseconds

14. Our Approach: Distributed Pipeline
• Parallelize MU-MIMO processing into a 3-stage pipeline: channel inversion → spatial demultiplexing → channel decoding
• At each stage, the computation is further parallelized among multiple servers
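A toy rendering of the 3-stage pipeline idea: each stage runs in its own thread and hands work downstream through a queue, so frame k+1 can be inverted while frame k is still being demultiplexed or decoded. The stage bodies are stand-ins, not BigStation's actual computations:

```python
import queue, threading

def stage(fn, inbox, outbox):
    # Pull items until the None sentinel, process, and pass downstream.
    while (item := inbox.get()) is not None:
        outbox.put(fn(item))
    outbox.put(None)                       # propagate shutdown downstream

q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
for fn, i, o in [(lambda f: f, q0, q1),    # channel inversion (stand-in)
                 (lambda f: f, q1, q2),    # spatial demultiplexing (stand-in)
                 (lambda f: f, q2, q3)]:   # channel decoding (stand-in)
    threading.Thread(target=stage, args=(fn, i, o), daemon=True).start()

for frame in range(3):
    q0.put(frame)
q0.put(None)
print([q3.get() for _ in range(3)])        # frames emerge in order: [0, 1, 2]
```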

15. Data Partitioning across Servers
• Exploiting data parallelism inside MU-MIMO
• Partitioning the OFDM signal by subcarriers across the channel inversion, spatial demultiplexing, and channel decoding stages

16. Data Partitioning across Servers
• Exploiting data parallelism inside MU-MIMO
• Partitioning the OFDM signal by spatial streams across the channel inversion, spatial demultiplexing, and channel decoding stages (see the sketch below)
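The two partitioning schemes on slides 15-16 amount to slicing the frequency-domain samples along different axes; a toy sketch with made-up server counts and dimensions:

```python
import numpy as np

N_SUBCARRIERS, N_STREAMS = 468, 20      # e.g. a 160 MHz channel, 20 users
N_SERVERS = 4

# One OFDM symbol worth of frequency-domain samples: streams x subcarriers.
symbol = np.arange(N_STREAMS * N_SUBCARRIERS).reshape(N_STREAMS, N_SUBCARRIERS)

# Subcarrier partitioning: server k handles subcarriers k, k+N, k+2N, ...
by_subcarrier = [symbol[:, k::N_SERVERS] for k in range(N_SERVERS)]

# Spatial-stream partitioning: each server handles a block of streams.
by_stream = np.array_split(symbol, N_SERVERS, axis=0)

print(by_subcarrier[0].shape)   # (20, 117): all streams, 1/4 of subcarriers
print(by_stream[0].shape)       # (5, 468):  1/4 of streams, all subcarriers
```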

17. Example
• Giga-bit to 20 users
– 160MHz → 468 parallel subcarriers
• Subcarrier partitioning
– Each server needs to handle a minimum of 10Mbps of data
• Spatial-stream partitioning
– Each server needs to handle 5Gbps of data
• Generally within an existing server's processing capability
– Multi-core (4~16 cores)
– 10G NIC

18. Summary
• Distributed pipeline for low latency
• Exploit data parallelism across servers at each processing stage
• If a single datum is still beyond the capability of a single processing unit
– Build a deeper pipeline (see the paper for details)

19. Outline
• Parallel architecture
• Parallel algorithms and optimization
• Performance
• Conclusion

20. Computation Partitioning in a Server
• Three key operations in MU-MIMO
– Matrix multiplication
– Matrix inversion
– Viterbi decoding (channel decoding)

21. Parallel Matrix Multiplication
• Divide-and-conquer: split $H$ into row blocks $H_1$ (core 1) and $H_2$ (core 2); then
$H^*H = \begin{bmatrix} H_1^* & H_2^* \end{bmatrix}\begin{bmatrix} H_1 \\ H_2 \end{bmatrix} = H_1^*H_1 + H_2^*H_2$
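A minimal NumPy check of this identity; the split into two row blocks mirrors the two cores on the slide:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))

H1, H2 = H[:4], H[4:]                     # row blocks for core 1 and core 2
partial1 = H1.conj().T @ H1               # computed independently on core 1
partial2 = H2.conj().T @ H2               # computed independently on core 2

# The partial products sum to the full Gram matrix H* H.
assert np.allclose(partial1 + partial2, H.conj().T @ H)
```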

22. Parallel Matrix Inversion
• Based on the Gauss-Jordan method
[Figure: the augmented matrix $[H \mid I]$, with its columns partitioned between core 1 and core 2]

23. Parallel Matrix Inversion
• Based on the Gauss-Jordan method
[Figure: row operations reduce the augmented matrix $[H \mid I]$ to $[I \mid H^{-1}]$, each core updating only its own columns]
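A serial Gauss-Jordan sketch that marks where the column split would go; the partial pivoting is an addition for numerical stability, not something the slide shows:

```python
import numpy as np

def gauss_jordan_inverse(H):
    n = H.shape[0]
    aug = np.hstack([H.astype(complex), np.eye(n, dtype=complex)])
    for col in range(n):
        # Partial pivoting for numerical stability.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]
        # This row update touches every column independently -- in the
        # slides' scheme each core applies it only to its own column slice.
        rows = np.arange(n) != col
        aug[rows] -= np.outer(aug[rows, col], aug[col])
    return aug[:, n:]     # right half now holds H^{-1}

H = np.random.default_rng(2).normal(size=(4, 4))
assert np.allclose(gauss_jordan_inverse(H) @ H, np.eye(4))
```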

24. Parallel Viterbi Decoding
• Challenge: sequential operations on a continuous (soft-)bit stream
• Solution:
– Artificially divide the bit stream into blocks, decoded in parallel (core 1, core 2, ...)

25. Parallel Viterbi Decoding
• Challenge: sequential operations on a continuous (soft-)bit stream
• Solution:
– Artificially divide the bit stream into blocks
– Add overlaps between blocks to ensure the decoder converges to the optimal path
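A tiny helper illustrating the overlapped blocking: each core decodes its block plus L extra bits on each side, and only the middle M bits are kept. The M and L values are arbitrary examples:

```python
def overlapped_blocks(stream, M, L):
    # Yield (block, (keep_lo, keep_hi)): the block includes up to L bits of
    # overlap on each side; the keep range marks the M bits actually emitted.
    for start in range(0, len(stream), M):
        lo, hi = max(0, start - L), min(len(stream), start + M + L)
        yield stream[lo:hi], (start - lo, start - lo + min(M, len(stream) - start))

bits = list(range(20))
for block, (keep_lo, keep_hi) in overlapped_blocks(bits, M=8, L=2):
    print(block[keep_lo:keep_hi])   # -> [0..7], [8..15], [16..19]
```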

26. Parallel Viterbi Decoding
• How to choose the right block size?
– A tradeoff between latency and overhead
• Goal: fully utilize the computation capacity while keeping the block size $M$ minimal
• Optimal size: $M^* = 2Lv/(nw - v)$, where $v$ is the stream bit rate, $w$ the processing rate per core, $n$ the number of cores, and $L$ the per-side overlap length
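Plugging numbers into the formula; all the concrete rates below are made-up examples, and the derivation in the docstring is one way to recover the slide's expression:

```python
def optimal_block_size(v_bps, w_bps, n_cores, overlap_bits):
    """Smallest block size that keeps n cores fully utilized.

    Each block of M bits costs M + 2*L bits of decoding work, and blocks
    arrive at rate v/M, so n cores keep up when n*w >= v*(M + 2*L)/M,
    i.e. M >= 2*L*v/(n*w - v).
    """
    spare = n_cores * w_bps - v_bps
    if spare <= 0:
        raise ValueError("cores cannot keep up with the stream at any block size")
    return 2 * overlap_bits * v_bps / spare

# Example: a 100 Mbps coded stream, 4 cores at 30 Mbps each, 64-bit overlaps.
print(optimal_block_size(100e6, 30e6, 4, 64))   # -> 640.0 bits
```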

27. Optimization: Lock-free Computing Structure
• Complex interaction between communication and computation threads causes contention at the output buffer
• A lock-free structure removes this contention, yielding a 1.31x throughput improvement
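The slide shows no code; as one common lock-free pattern, here is a single-producer/single-consumer ring buffer in which each index has exactly one writing thread, so producer and consumer never need a lock. A C/C++ version would use atomics with acquire/release ordering; in CPython, plain attribute stores are already atomic under the GIL:

```python
class SpscRing:
    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)   # one slot wasted to tell full from empty
        self.head = 0                        # next slot to read  (consumer-owned)
        self.tail = 0                        # next slot to write (producer-owned)

    def push(self, item):                    # called only by the producer thread
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False                     # full: caller retries, never blocks
        self.buf[self.tail] = item
        self.tail = nxt                      # publish only after the slot is filled
        return True

    def pop(self):                           # called only by the consumer thread
        if self.head == self.tail:
            return None                      # empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```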

28. Optimization: Communication
• Parallelizing communication among multiple cores
• Dealing with the incast problem
– Application-level flow control
• Isolating communication and computation on different cores

29. Outline
• Parallel architecture
• Parallel algorithms and optimization
• Performance
• Conclusion

30. Micro-benchmarks
• Platform: Dell server with an Intel Xeon E5520 CPU (2.26 GHz, 4 cores)
[Figure: channel inversion throughput]

31. Micro-benchmarks
[Figures: spatial demultiplexing and Viterbi decoding throughput]

32. Micro-benchmarks
[Figures: end-to-end results for 6 users at 100Mbps, 20 users at 600Mbps, and 50 users at 1Gbps]

33. Prototype
• Software radio: Sora MIMO Kit
– 4x phase-coherent radio chains
– Extensible with an external clock

34. Capacity Gain
• Capacity gain is capped at a constant value due to random user selection!

35. Capacity Gain
• Up to a 6.8x capacity gain with overprovisioned AP antennas

36. Processing Delay
• ~860 µs under light load (1 frame per 10ms)
• Also measured under heavy load (back-to-back frames)
