

  1. Mobile AR/VR with Edge-based Deep Learning Jiasi Chen Department of Computer Science & Engineering University of California, Riverside CNSM Oct. 23, 2019

  2. Outline • What is AR/VR? • Edge computing can provide... 1. Real-time object detection for mobile AR 2. Bandwidth-efficient VR streaming with deep learning • Future directions

  3. What is AR/VR?

  4. Multimedia is… audio, on-demand video, live video, and now virtual and augmented reality. The pipeline runs from content creation through compression, storage, and distribution over the Internet to end users.

  5. What is AR/VR? The reality–virtuality continuum: reality | augmented reality | augmented virtuality | virtual reality. The span between the two extremes is mixed reality.

  6. Who’s Using Virtual Reality? Smartphone-based hardware: Google Cardboard, Google Daydream. High-end hardware: PlayStation VR, HTC Vive.

  7. Why VR now? Portability: (1) have to go somewhere → (2) watch it at home → (3) carry it with you. Movies followed this trend, and VR is doing the same: CAVE (1992), Virtuality gaming (1990s), Oculus Rift (2016). A similar portability trend for VR, driven by hardware advances from the smartphone revolution.

  8. Who’s Using Augmented Reality? Smartphone-based: Pokémon Go, Google Translate (text processing), Snapchat filters (face detection). High-end hardware: Microsoft HoloLens, Google Glass.

  9. Is it all just fun and games? • AR/VR has applications in many areas: data visualization, education, public safety • What are the engineering challenges? • AR: process input from the real world (related to computer vision, robotics) • VR: output the virtual world to your display (related to computer graphics)

  10. How AR/VR Works. VR pipeline: 1. Virtual world generation → 3. Render → 4. Display. AR pipeline: 1. Device tracking → 2. Real object detection → 4. Render → 5. Display.

  11. What systems functionality is currently available in AR/VR?

  12. Systems Support for VR (pipeline: 1. Virtual world generation → 3. Render → 4. Display). Virtual world generation: game engines (Unity, Unreal). Render/display: mobile GPUs, Qualcomm VR/AR chips.

  13. Systems Support for AR (pipeline: 1. Device tracking → 2. Real object detection → 4. Render → 5. Display). Real object detection: computer vision / machine learning libraries (Vuforia, OpenCV, TensorFlow). Device tracking: Google ARCore, Apple ARKit, Microsoft HoloLens. Render/display: Microsoft HoloLens, Magic Leap, smartphones.

  14. What AR/VR functionality is needed by researchers?

  15. Research Space in AR
  • Real object detection: typically done using deep learning (in research, not yet in industry). Can edge computing help?
    • Slow: 600 ms per frame on a smartphone
    • Energy drain: 1% battery per minute on a smartphone
    • Related work: MARLIN (SenSys’19), Liu et al. (MobiCom’19), DeepDecision (INFOCOM’18), DeepMon (MobiSys’17)
  • Device tracking: typically done using SLAM (combining camera + IMU sensors). Can edge computing help?
    • Slow: 30 ms per frame on a smartphone
    • Energy drain: > 1.5 W on a smartphone
    • Related work: ShareAR (HotNets’19), MARVEL (SenSys’18), OverLay (MobiSys’15)

  16. Research Space in AR • [Figures: example of slow object detection; comparison of different apps’ energy drain] • Take-home message: machine learning is useful in AR • as part of the AR processing pipeline (object detection) • at the expense of energy

  17. Research Space in VR
  • Pipeline split: 1a. virtual world generation and 1b. transmission over the network (Internet) happen on a content/edge server; 3. render and 4. display happen on the mobile device
  • High bandwidth: up to 25 Mbps on YouTube at max resolution
  • Can machine learning help with VR traffic optimization? Rubiks (MobiSys’18), FLARE (MobiCom’18), Characterization (SIGCOMM workshop’17), FlashBack (MobiSys’16)
  • Take-home message: machine learning is useful in VR, to help with user predictions and traffic management

  18. Outline • Overview of AR/VR • Edge computing can provide... 1. Real-time object detection for mobile AR 2. Bandwidth-efficient VR streaming with deep learning • Future directions

  19. How AR Works • Object detection is a computational bottleneck for AR • Current AR can only detect flat planes or specific object instances • Can we do more powerful processing on a server? (Pipeline: 1. Device tracking → 2. Real object detection → 4. Render → 5. Display)

  20. Reducing lag for augmented reality
  • Augmented and virtual reality require a lot of computational power to run expensive computer vision and machine learning algorithms
  • Run on the device? Too slow!
  • Run on the cloud (e.g., an AWS datacenter)? Too far → too slow!
  • Run on the edge? Offload to a nearby edge compute node
  Xukan Ran, Haoliang Chen, Xiaodan Zhu, Zhenming Liu, Jiasi Chen, “DeepDecision: A Mobile Deep Learning Framework”, IEEE INFOCOM, 2018.

  21. Challenges with current approaches
  • Current approaches for machine learning on mobile devices:
    • Local-only processing (e.g., Apple Photos, Google Translate; relies on GPU speedup): slow! (~600 ms/frame)
    • Remote-only processing (e.g., Apple Siri, Amazon Alexa): doesn’t work when the network is bad
  • Our observations:
    • Different AR apps have different accuracy and latency requirements
    • Network latency is often higher than CPU/GPU processing time on the edge server
    • Video streams and deep learning models can scale gracefully

  22. Problem Statement • Problem: How should the mobile device be configured to meet the lag requirements of the AR app and the user? • Solution: Periodically (1) perform offline performance characterization, (2) run the online optimization, and (3) update the configuration (see the sketch below).
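  As a rough illustration only (not code from the talk), the profile → optimize → update cycle could be structured like the following Python sketch; all function names, profiled numbers, and thresholds here are hypothetical placeholders rather than DeepDecision's actual implementation.

```python
# Hypothetical sketch of the periodic profile -> optimize -> update loop.
# All profiles, thresholds, and function names are illustrative placeholders.
import time

def profile_device():
    # 1. Offline performance characterization: latency (ms) and accuracy
    #    measured once per (model, resolution) pair.
    return {
        ("local_tiny", 240): {"latency_ms": 120, "accuracy": 0.55},
        ("local_tiny", 480): {"latency_ms": 300, "accuracy": 0.65},
        ("remote_big", 480): {"latency_ms": 60,  "accuracy": 0.80},  # server compute only
    }

def measure_network():
    # Placeholder probe of the path to the edge server.
    return {"rtt_ms": 40}

def choose_configuration(profiles, net, min_accuracy=0.6):
    # 2. Online optimization: lowest end-to-end latency that meets the accuracy target.
    best = None
    for (model, res), stats in profiles.items():
        latency = stats["latency_ms"] + (net["rtt_ms"] if model.startswith("remote") else 0)
        if stats["accuracy"] >= min_accuracy and (best is None or latency < best[1]):
            best = ((model, res), latency)
    return best

if __name__ == "__main__":
    profiles = profile_device()
    for _ in range(3):                       # in practice this loop runs continuously
        config = choose_configuration(profiles, measure_network())
        print("3. Update configuration to:", config)
        time.sleep(1.0)                      # re-optimize periodically
```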

  23. Online decision framework
  • Degrees of freedom: video resolution, neural net model size, offloading decision
  • Metrics: detection accuracy, energy consumption
  • Constraints: current network conditions, application requirements
  • Inputs to the optimization (the decision is re-run periodically over time):
    • Network condition: bandwidth, latency
    • App requirements: latency, accuracy, energy
    • Video characteristics: frame rate, resolution, bit rate
    • Deep learning characteristics: model size, model latency/energy, model accuracy

  24. System design • Front-end device: takes the input live video, runs tiny deep learning locally, and shows results on the output display • Edge server: runs big deep learning on offloaded frames • The online decision framework on the front-end device uses the user’s battery constraint, current network conditions, the app’s latency and accuracy requirements, and the performance characterization of the big deep learning model to decide the configuration

  25. AR Object Detection Quality Metrics • Accuracy: classification and location are both important for AR; measured using the intersection over union (IoU) metric; ground truth is big deep learning running on the highest resolution • Timing: latency is the time from when the frame is sent until the result is received; frame rate is 1 / (time between consecutive frames)
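  For reference, here is a minimal sketch (not from the talk) of how IoU between two axis-aligned bounding boxes can be computed; the (x_min, y_min, x_max, y_max) box format is an assumption made for the example.

```python
# Minimal sketch of intersection-over-union (IoU) for two axis-aligned boxes.
# Boxes are assumed to be (x_min, y_min, x_max, y_max) tuples in pixels.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a detection compared against ground truth from the big model.
print(iou((0, 0, 100, 100), (50, 0, 150, 100)))  # -> 0.333...
```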

  26. 1. Offline performance characterization: How do latency and energy change with video resolution? Finding: energy and latency increase with (pixels)² for local processing.

  27. 1. Offline performance characterization: How does accuracy change with bit rate and resolution? • Encoded videos at different bit rates and resolutions • [Accuracy plots for big deep learning and for tiny deep learning] • Finding: accuracy increases more with resolution than with bit rate, especially for big deep learning

  28. 1. Offline performance characterization: How does accuracy change with latency? • The result from deep learning is stale: a frame captured at t = 0 ms is only processed after the deep learning processing delay (e.g., at t = 100 ms), by which time the scene has changed • Measured accuracy as the deep learning processing latency increased • Finding: accuracy decreases as latency increases
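  To make the staleness effect concrete, consider a hypothetical example (not from the talk): an object covered by a 100 × 100 pixel bounding box moves 50 pixels during a 100 ms processing delay, so the stale detection and the object’s true position overlap by only half their width:

  $\text{IoU} = \dfrac{50 \times 100}{2\,(100 \times 100) - 50 \times 100} = \dfrac{5000}{15000} \approx 0.33$

  That is well below a typical 0.5 IoU threshold, even though the detection was perfect for the frame it was computed on.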

  29. 2. Online decision framework: Optimization problem
  From the offline performance characterization:
  • $a_j(p, r, f)$: accuracy function of model $j$
  • $l_j(p)$: latency function of model $j$
  • $e_j(p, r, f)$: battery (energy) function of model $j$
  Maximize $f + \alpha \sum_j a_j(p, r, f) \cdot x_j$
  Subject to:
  • End-to-end latency: $t_j = l_j(p)$ (local processing time) for the local model, and $t_j = l_j(p) + \text{network transmission time}$ for remote models
  • Finish processing a frame before the next frame arrives: $\sum_j t_j \cdot x_j \le 1/f$
  • Don’t use more than $\mathcal{B}$ battery: $\sum_j e_j(p, r, f) \cdot x_j \le \mathcal{B}$
  • Meet the application accuracy requirement: $a_j(p, r, f) \ge A \cdot x_j \;\; \forall j$
  • Meet the application frame rate requirement: $f \ge F$
  • Don’t use more than $R$ bandwidth: $r \cdot x_j \le R$
  • Choose exactly one model: $\sum_j x_j = 1$
  Variables: $p$ (video resolution), $r$ (video bit rate), $f$ (frame rate), all $\ge 0$; $x_j \in \{0, 1\}$ selects which deep learning model to run (local or remote)
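  Since the decision variables come from small discrete sets in practice, one simple way to solve such a problem is exhaustive search over candidate configurations. Below is a minimal grid-search sketch of that idea; the profiled accuracy/latency/energy numbers, candidate values, and the transmission-time model are illustrative placeholders, not DeepDecision’s measured profiles or its actual solver.

```python
# Minimal grid-search sketch of the configuration optimization described above.
# All profiles, candidate values, and constants are illustrative placeholders.
from itertools import product

# Profiled functions, indexed by (model, resolution): accuracy, latency (s), energy (J/frame).
PROFILES = {
    ("local_tiny", 240): {"acc": 0.50, "lat": 0.10, "energy": 0.8},
    ("local_tiny", 480): {"acc": 0.60, "lat": 0.25, "energy": 1.5},
    ("remote_big", 480): {"acc": 0.80, "lat": 0.05, "energy": 0.4},  # server compute only
    ("remote_big", 720): {"acc": 0.85, "lat": 0.08, "energy": 0.5},
}
FRAME_RATES = [5, 10, 15, 30]          # candidate f values (fps)
BITRATES = [1.0, 2.0, 5.0]             # candidate r values (Mbps)

def best_config(bandwidth_mbps, net_rtt_s, min_acc=0.55, min_fps=5,
                power_budget_w=2.0, alpha=10.0):
    best, best_obj = None, float("-inf")
    for (model, res), prof in PROFILES.items():
        for f, r in product(FRAME_RATES, BITRATES):
            # End-to-end latency: add network transmission time for remote models.
            lat = prof["lat"]
            if model.startswith("remote"):
                lat += r / (f * bandwidth_mbps) + net_rtt_s  # per-frame tx time + RTT
                if r > bandwidth_mbps:                       # bandwidth constraint
                    continue
            # Constraints: frame deadline, accuracy, frame rate, power budget.
            if lat > 1.0 / f or prof["acc"] < min_acc or f < min_fps:
                continue
            if prof["energy"] * f > power_budget_w:          # average power = J/frame * fps
                continue
            obj = f + alpha * prof["acc"]                    # frame rate + weighted accuracy
            if obj > best_obj:
                best, best_obj = (model, res, f, r), obj
    return best

print(best_config(bandwidth_mbps=8.0, net_rtt_s=0.04))
```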

  30. After: [results figure]

  31. Key Take-Aways
  • Real-time video analysis using local deep learning is slow (~600 ms/frame on current smartphones)
  • The relationship between degrees of freedom and metrics is complex and requires profiling
  • Choose the right device configuration (resolution, frame rate, deep learning model) to meet QoE requirements

  32. Outline • Overview of AR/VR • Edge computing can provide... 1. Real-time object detection for mobile AR 2. Bandwidth-efficient VR streaming using deep learning • Future directions
