Optimizing Motion-to-Photon Latency on DAQRI Smart Glasses





SLIDE 1

Optimizing Motion-to-Photon Latency on DAQRI Smart Glasses

Heinrich Fink

heinrich.fink@daqri.com

XDC 2018


SLIDE 2

System Overview

[Diagram: motion-to-photon pipeline on the hardware platform, running Linux 4.14.x — Camera + Sensors → Visual Inertial Odometry (VIO) tracking, fed by motion samples → Pose → User App Rendering → Frame → Display → Photons]

Ideal motion-to-photon latency: AR < 5 ms [1,3], VR < 20 ms [2]

Hardware: Intel SKL Gen9 (Core m7-6Y75) → DisplayPort → Himax display controller + LCOS

SLIDE 3

[Timeline diagram: basic-mode pipeline across display frames at T, T + 11.1 ms, T + 22.2 ms, T + 33.3 ms]

Pipeline: VIO Host (30 / 800 Hz) → Application renders on the GPU at render-pose (45–90 Hz) → Himax Display Controller (90 Hz, scan-out & deinterleave) → LCOS display (540 Hz field rate)

Demo :: Basic Mode

Motion-to-photon latency: ~26 ms, and different for each color → rainbow effect

SLIDE 4

[Timeline diagram: optimized-mode pipeline across display frames at T, T + 11.1 ms, T + 22.2 ms, T + 33.3 ms]

Pipeline: VIO Host (30 / 800 Hz) → Application renders at render-pose (45–90 Hz) → Compositor warps on the GPU at warp-poses (90 Hz) → Himax Display Controller applies a 2D-shift at shift-pose (90 Hz, scan-out & deinterleave) → LCOS display (540 Hz)

Demo :: Optimized Mode

Motion-to-photon latency: ~8 ms

SLIDE 5

Challenges of AR Compositor Architecture


  • Custom interface between App and Compositor
      • Need to attach pose / time to the app's render buffer (for downstream consumers)
      • Let the compositor pace the app's render cycles
      • Decouple render rate from display rate
      • No prevalent standard exists for this (yet?)
  • Keep the end-to-end pipeline short
      • No triple buffering, no intermediate copies
      • Update poses as late as possible without stalling the full pipeline
  • Compositor and app compete for GPU resources
      • Pre-emption likely needed
SLIDE 6

The Path Ahead

  • Remove the desktop render path
      • Use KMS directly to flip and to access timing info
  • Use DRM format modifiers for optimal end-to-end buffer formats
      • Can import into EGL / Vulkan for applications
  • Use dma_fence for downstream/upstream sync and traceability
  • Use KMS-exposed hardware planes for simple compositing
      • Although varying timing requirements might be tricky
  • Observability through standard tools (GPUView, GPUtop, …?)


SLIDE 7

References

[1] Bailey, R. E., Arthur, J. J., & Williams, S. P. (2004, August). Latency requirements for head-worn display S/EVS applications. In Enhanced and Synthetic Vision 2004 (Vol. 5424, pp. 98-110). International Society for Optics and Photonics.

[2] Yao, R., Heath, T., Davies, A., Forsyth, T., Mitchell, N., & Hoberman, P. (2014). Oculus VR Best Practices Guide. Oculus VR, 4. http://static.oculusvr.com/sdk-downloads/documents/OculusBestPractices.pdf

[3] Lincoln, P. C. (2017). Low Latency Displays for Augmented Reality (Doctoral dissertation, The University of North Carolina at Chapel Hill).

[4] Presentation about [3]: https://www.microsoft.com/en-us/research/video/low-latency-displays-augmented-reality/

[5] Wagner, D. (2018). Motion to Photon Latency in Mobile AR and VR. https://medium.com/@DAQRI/motion-to-photon-latency-in-mobile-ar-and-vr-99f82c480926
