
23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS



  1. 23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS. Sergii Bykov, Technical Lead Machine Learning. 12 Oct 2017

  2. Product Vision

  3. Company Introduction. Apostera GmbH, headquartered in Munich, was established in March 2017 together with 3 affiliated R&D centers to leverage 10+ years of engineering experience in complex software development for the automotive industry. Apostera's engineering and business experience in the Driver Experience, Navigation and Telecommunication domains, together with unique IP and mathematical talent, supports an advanced product portfolio aimed at bringing the mobility world into a new era of autonomy. Apostera's target today is to reshape Automotive Perception, Visualization, Path Planning, V2X and, finally, Autonomous Driving in an open and collaborative manner.
    • Perception: Advanced Surround View Monitoring, Software Smart Camera
    • Visualization: Software Augmented Guidance (HUD and LCD) and Sensor Fusion
    • Quality: Automated Testing System (AR, ADAS, AD)
    • Mobility: Software Managed Autonomous Driving

  4. APOSTERA Product Lines: Basics
    • IA ADAS: Informational ADAS
    • AC ADAS: Active ADAS
    • ADAS components Platform
    • HAD: Highly Automated Driving

  5. Representation For The Driver
    • LCD screen in car: past, outdated
    • HUD in car: fast developing market (today)
    • Smart Glasses: alternative, ongoing development
    • Real-depth HUD with wide FOV in car: +2 years

  6. Key Challenges For In-Vehicle AR
    • Usability: the augmented reality subsystem is continuously observed and must not disturb the driver
    • Hardware limitations: computation, power consumption, zero latency (HUD)
    • Requirements for precise environmental model estimation to avoid occlusion errors
    • Dependency on inaccurate map and navigation data
    • Distributed HW architectures and platform flexibility requirements
    • High precision absolute and relative positioning requirements
    • Component synchronization and latency avoidance
    • Embedded memory usage limitations and different memory models
    • Algorithms must be both configurable and efficient
    • Specific rendering requirements not covered by general purpose frameworks
    • Variety of inputs across different platforms
    • Out-of-vehicle simulation (natural simulation, as in classical navigation, is not available)

  7. System Concept

  8. Unique Automotive Augmented Reality Solution. A solution capable of creating an Augmented, mixed visual Reality for drivers and passengers based on Computer Vision, vehicle sensors, map data, V2X and navigation guidance, combined through Data Fusion. Figure – Solution roadmap (Step I, in progress, further steps) spanning vehicle displays, automotive cameras, Navigation System/Map Data, 360 projection on the windshield (HUD), Path Planning and AR, integration of V2X information, and motorbike helmets, built on Sensors/CAN, Telematics/V2X and the ADAS platform.

  9. Scientific and Engineering Expertise
    • Recognition and Tracking: road boundaries and lane detection; slopes estimation; vehicle recognition and tracking; distance & time to collision estimation; pedestrian detection and tracking; facade recognition and texture extraction; road signs recognition
    • Computer Vision Approaches: real-time feature extraction from video; road scene semantic segmentation; adaptability and confidence estimation of output data; GPU optimization for different platforms
    • Sensor Fusion: flexible fusion of data from internal sensors and external sources; LIDAR data merging; 3D-environment model reconstruction based on different sensors; latency compensation & data extrapolation
    • Positioning: precise relative and absolute positioning; flexible data fusion and smooth map matching; automotive constrained SLAM; video-based digital gyroscope
    • Machine Learning Specifics: CNN and DNN approaches; supervised MRF parameters adjustment; CSP-based structure & parameters adjustment (both supervised and unsupervised); weak classifiers boosting & others
    • Augmented Reality: LCD, HUD & further output devices; natural navigation hints & infographics; collision, lane departure and blind spot warnings, etc.; POIs and supportive information (facade and parking slot highlighting, etc.)
    • Integration with HD Maps: HD Maps utilization for precise positioning, map matching, path planning and junction assistance; data generation for HD Maps; contribution to ADAS attributes structure, NDS (HERE)
    • Predictable Environmental Model, Safety Apps (V2X): BSM transmitting/receiving; remote vehicle trajectory prediction; basic safety applications based on collision detection

  10. System Overview
    • Quick-install demonstration solution
    • Platform for AR, designed to be portable
    • Integration with Head Units
    • Integration with vehicle networks
    • Use of the solution's own sensors if needed
    Architecture: the ADAS/AR Engine runs on an ECU (e.g. Jetson TX2) behind a Sensor Abstraction Layer. A Web Interface provides SW update, configuration and diagnostics via the Head Unit. Inputs are live vehicle data (CAN data, sensors, video stream) plus navigation data and preprocessed sensor data; the output is a video stream with augmented objects sent to the HUD/LCD, with control/settings flowing back.
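As an illustration of the Sensor Abstraction Layer idea, here is a minimal publish/subscribe sketch in Python. The class, source names and message fields are hypothetical; the actual interface is not shown in the slides.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SensorSample:
    source: str        # hypothetical source id, e.g. "can.speed", "camera.front"
    timestamp_us: int  # common monotonic clock, needed for synchronization
    payload: object    # decoded sensor value(s)

class SensorAbstractionLayer:
    """Decouples the ADAS/AR engine from concrete vehicle buses and drivers."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[SensorSample], None]]] = {}

    def subscribe(self, source: str, handler: Callable[[SensorSample], None]) -> None:
        self._subscribers.setdefault(source, []).append(handler)

    def publish(self, sample: SensorSample) -> None:
        # Platform-specific drivers (CAN, camera, GPS) call this; engine-side
        # consumers never see bus-level details, which keeps the engine portable.
        for handler in self._subscribers.get(sample.source, []):
            handler(sample)
```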

  11. Perception Concept

  12. Sensor Fusion: Data Inference. An optimal fusion filter parameter adjustment problem was formulated and solved to fit different car models with different chassis geometries and steering wheel models/parameters. Features:
    • Absolute and relative positioning
    • Dead reckoning
    • Fusion with available automotive grade sensors: GPS, steering wheel angle, steering wheel rate, wheel sensors
    • Fusion with navigation data
    • Support for rear movements
    • Identification of complex steering wheel models; ability to integrate with provided models
    • GPS error correction
    • Stability and robustness in complex conditions such as tunnels and urban canyons
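To make the dead reckoning and GPS fusion concrete, here is a toy sketch assuming a simple bicycle model and a fixed blending gain. The production filter and its per-vehicle parameter adjustment described above are considerably more elaborate; all names and constants here are illustrative.

```python
import numpy as np

def predict(state, v, delta, dt, wheelbase=2.7):
    """Propagate state [x, y, heading] from wheel speed v and steering angle delta
    using a kinematic bicycle model (an assumed stand-in for the real chassis model)."""
    px, py, yaw = state
    px += v * np.cos(yaw) * dt
    py += v * np.sin(yaw) * dt
    yaw += (v / wheelbase) * np.tan(delta) * dt
    return np.array([px, py, yaw])

def correct(state, gps_xy, gain=0.3):
    """Blend a noisy, low-rate GPS fix into the predicted position.
    A fixed gain stands in for the full Kalman-style update."""
    state = state.copy()
    state[:2] += gain * (np.asarray(gps_xy) - state[:2])
    return state
```

Between GPS fixes (e.g. in a tunnel), only `predict` runs, which is exactly the dead-reckoning mode listed above.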

  13. Sensor Fusion: Advanced Augmented Objects Positioning. Solving map accuracy problems.
    • Position clarification: camera motion model, video-based gyroscope, positioner component, road model, objects tracking
    • Placing: road model, vehicle detection, map data
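One of the listed inputs, the video-based gyroscope, can be approximated with standard computer vision building blocks. The OpenCV sketch below estimates inter-frame camera rotation from tracked features; it assumes a calibrated camera matrix K and a mostly static scene, and is only a stand-in for the component named on the slide.

```python
import cv2
import numpy as np

def frame_rotation(prev_gray, gray, K):
    """Estimate the rotation between two consecutive grayscale frames."""
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=8)
    if pts0 is None:
        return np.eye(3)  # no trackable features; assume no rotation
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts0, None)
    good0 = pts0[status.ravel() == 1]
    good1 = pts1[status.ravel() == 1]
    # Essential matrix with RANSAC rejects outliers such as moving vehicles.
    E, mask = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC)
    _, R, _, _ = cv2.recoverPose(E, good0, good1, K, mask=mask)
    return R  # 3x3 rotation between the two frames
```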

  14. Sensor Fusion: Comparing Solutions
    • Reference solution: update frequency ~4-5 Hz
    • Apostera solution: update frequency ~15 Hz (plus extrapolation at any fps)
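The "+extrapolation at any fps" note can be read as: the fusion filter refreshes pose at ~15 Hz, while the renderer extrapolates between updates at its own frame rate. A minimal sketch, with illustrative names and a simple constant-velocity assumption:

```python
def extrapolate_pose(last_pose, last_velocity, t_update, t_render):
    """Linearly extrapolate a fused pose to the renderer's frame time.
    last_pose and last_velocity are (x, y, heading) tuples; times in seconds."""
    dt = t_render - t_update  # time elapsed since the last filter update
    return tuple(p + v * dt for p, v in zip(last_pose, last_velocity))

# e.g. a 60 fps renderer calls extrapolate_pose every ~16.7 ms even though
# the fused pose itself only refreshes every ~66 ms (15 Hz).
```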

  15. Lane Detection: Adaptability and Confidence

  16. Lane Detection: 3D-scene Recognition Pipeline
    • Low level invariant features: single camera, stereo data, point clouds
    • Structural analysis: real-world features, physical objects
    • Probabilistic models: 3D scene reconstruction, road situation
    • 3D space scene fusion (different sensors input)
    • Backward knowledge propagation from high levels
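As a rough illustration of the first two stages (low level features feeding structural analysis), the OpenCV sketch below extracts edge features and searches for line hypotheses on a single camera frame. The probabilistic 3D-scene stages and the backward propagation of the actual pipeline are beyond this toy example; all thresholds are placeholders.

```python
import cv2
import numpy as np

def lane_line_hypotheses(bgr_frame):
    """Return candidate lane line segments as (x1, y1, x2, y2) tuples."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)   # low-level invariant edge features
    h = edges.shape[0]
    edges[: h // 2, :] = 0             # crude region of interest: keep the road half
    # Structural analysis stand-in: probabilistic Hough line search.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=40, maxLineGap=100)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```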

  17. Vehicle Detection
    • Convolutional neural network for vehicle detection
    • GPU acceleration with CUDA, running in real time on an NVidia Jetson TX2
    • Inference speedup on the embedded (TX2) GPU vs CPU is ~3x, with more potential from new libraries (e.g. TensorRT)
    • Training speedup on a desktop GPU vs CPU is ~20x
    • Classifier accuracy (about 50k images, 960x540, ~55-60 deg HFOV): positive 99.65%, negative 99.82%
    • Detections down to 30 pix; detection range of about 60 m
    Figure – Vehicle detection examples
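The actual network is not public, so the following PyTorch sketch only illustrates the general shape of a small vehicle/non-vehicle patch classifier with GPU execution. The architecture, layer sizes and deployment details (CUDA, TensorRT on the TX2) are assumptions, not Apostera's design.

```python
import torch
import torch.nn as nn

class VehicleNet(nn.Module):
    """Tiny illustrative CNN for binary vehicle/non-vehicle patch classification."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # accepts patches of varying size
        )
        self.classifier = nn.Linear(64, 2)    # logits: vehicle / not vehicle

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Run on GPU when available, mirroring the embedded GPU-vs-CPU speedup point.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = VehicleNet().to(device).eval()
```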

  18. Road Scene Semantic Segmentation
    • Deep fully convolutional neural network for semantic pixel-wise segmentation
    • Road scene understanding use cases: models appearance, shape, and spatial relationships between classes
    • Inference speedup on GPU vs CPU is ~3x
    Figure – Road scene segmentation examples
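For a runnable point of reference, the sketch below performs pixel-wise segmentation with an off-the-shelf fully convolutional network from torchvision, standing in for the proprietary road-scene model; the class count and input size are placeholders.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

# Untrained FCN used purely to show the pixel-wise inference shape;
# a real road-scene model would be trained on driving data.
model = fcn_resnet50(num_classes=21).eval()

with torch.no_grad():
    image = torch.rand(1, 3, 540, 960)   # placeholder for a 960x540 camera frame
    logits = model(image)["out"]         # (1, num_classes, H, W)
    labels = logits.argmax(dim=1)        # per-pixel class ids (road, car, ...)
```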

  19. HMI Concept

  20. Rendering Component Structure. Figure – Rendering component

  21. Augmented Objects Primitives: Barrier, Lane Line, Street Name, Lane Arrow, Fishbone

  22. Augmented Objects Primitives And HMI

  23. Head Up Display Concept: HUD vs LCD
    • Hardware limitations: HUD devices are rarely available on the market; FOV and object size
    • Timings: zero latency; driver eye position
    • Driver perception: virtual image distance; information balance
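Why driver eye position and virtual image distance matter follows from the basic projection geometry: the overlay must be drawn where the ray from the eye to the real object crosses the HUD's virtual image plane. A toy sketch, assuming an x-forward coordinate frame and illustrative names throughout:

```python
import numpy as np

def hud_point(world_pt, eye_pt, virtual_image_distance):
    """Project a world point onto the HUD virtual image plane along the eye ray.
    world_pt and eye_pt are (x, y, z) with x as the forward axis (an assumption)."""
    eye = np.asarray(eye_pt, dtype=float)
    ray = np.asarray(world_pt, dtype=float) - eye
    scale = virtual_image_distance / ray[0]  # stretch ray to the virtual plane
    hit = eye + scale * ray
    return hit[1], hit[2]  # lateral / vertical position on the virtual image

# Moving eye_pt (head tracking) shifts the drawn symbol so the overlay
# stays locked onto the real-world object instead of drifting.
```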

  24. HUD Image Correction (Dewarping). The slight distortion in the HUD image needs to be corrected. A custom warp map was built by projecting a test pattern through the HUD, recording it with a camera, and deriving the correction from the recorded image. Figure – Uncorrected image. Figure – Corrected image
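Applying such a warp map to every rendered frame is a standard remapping operation. A minimal OpenCV sketch, where map_x/map_y are assumed to come from the calibration procedure described above:

```python
import cv2
import numpy as np

def dewarp(frame, map_x, map_y):
    """Apply a precomputed warp map to one rendered HUD frame.
    map_x/map_y are float32 lookup tables with the output image's shape."""
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# The maps might be loaded once at startup, e.g. from a hypothetical
# "hud_warp.npz" produced by calibration; identity maps (map_x[i, j] = j,
# map_y[i, j] = i) would leave the frame unchanged.
```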

  25. Demo Application (LCD)

  26. Summary: Key Technology Advantages
    • Proven understanding of the pragmatic intersection and synergy between fundamental theoretical results and final product requirements
    • Formal mathematical approaches complemented by deep learning
    • Solid GPU optimization
    • Automotive grade solutions integrated with all in-vehicle data sources through data fusion approaches
    • High robustness in various weather and road conditions; confidence is estimated for efficient fusion
    • Closed loops designed and implemented to enhance the speed and robustness of each component
    • Integration with V2X and various navigation systems
    • System architecture supports distributed HW setups and integration with existing in-vehicle components if required (environmental model, object detection, navigation, positioner, etc.)
    • Hierarchical Algorithmic Framework design highly optimizes computations on embedded platforms
    • Collaboration with scientific groups to integrate cutting edge approaches

  27. BRINGING THE MOBILITY WORLD TO A NEW ERA OF AUTONOMY. Sergii Bykov, Technical Lead Machine Learning. sergii.bykov@apostera.com
