23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS



SLIDE 1

23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS

Sergii Bykov, Technical Lead Machine Learning – 12 Oct 2017

SLIDE 2

Product Vision

SLIDE 3

Company Introduction

Apostera GmbH, headquartered in Munich, was established in March 2017 together with three affiliated R&D centers to leverage 10+ years of engineering experience in complex software development for the automotive industry. Apostera's engineering and business experience in the driver experience, navigation, and telecommunication domains, together with unique IP and mathematical talent, underpins an advanced product portfolio intended to bring the mobility world into a new era of autonomy. Apostera's current target is to reshape automotive perception, visualization, path planning, V2X, and ultimately autonomous driving in an open and collaborative manner.

  • Perception: Advanced Surround View Monitoring, Software Smart Camera and Sensor Fusion
  • Visualization: Software Augmented Guidance (HUD and LCD)
  • Quality: Automated Testing System (AR, ADAS, AD)
  • Mobility: Software Managed Autonomous Driving

SLIDE 4

APOSTERA product lines – Basics

  • IA – Informational ADAS
  • AAC – Active ADAS components
  • HAD – Highly Automated Driving

ADAS Platform

SLIDE 5

Representation For The Driver

  • LCD screen: outdated (past)
  • Smart glasses: alternative, fast-developing market (today)
  • HUD in car: ongoing development
  • Real-depth HUD with wide FOV in car: +2 years

SLIDE 6

Key Challenges For In-Vehicle AR

  • Usability – the AR subsystem must not disturb the driver, since it is continuously in view
  • Hardware limitations – computation, power consumption, zero latency (HUD)
  • Precise environmental model estimation required to avoid occlusion artifacts
  • Dependency on inaccurate map and navigation data
  • Distributed HW architectures, platform flexibility requirements
  • High-precision absolute and relative positioning requirements
  • Component synchronization and latency avoidance
  • Embedded memory usage limitations, different memory models
  • Algorithms must be both configurable and efficient
  • Specific rendering requirements not covered by general-purpose frameworks
  • Variety of inputs across different platforms
  • Out-of-vehicle simulation (AR does not allow natural simulation the way classical navigation does)

SLIDE 7

System Concept

SLIDE 8

Unique Automotive Augmented Reality Solution

A solution that creates augmented, mixed visual reality for drivers and passengers by fusing computer vision, vehicle sensors, map data, V2X, and navigation guidance.

  • Automotive cameras
  • Sensors/CAN
  • Navigation system/map data
  • Vehicle displays
  • Projection on windshield – HUD
  • Telematics/V2X

ADAS Platform roadmap:

  • Step I: in progress
  • Further steps: integration of V2X information, motorbike helmets, path planning and AR 360

SLIDE 9

Recognition and Tracking

  • Road boundaries and lane detection
  • Slopes estimation
  • Vehicle recognition and tracking
  • Distance & time to collision estimation (see the sketch after this list)
  • Pedestrian detection and tracking
  • Facade recognition and texture extraction
  • Road signs recognition
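
Distance and time-to-collision estimation, in its simplest form, divides the current distance by the closing speed. A minimal sketch under a constant-relative-speed assumption (names and numbers are illustrative, not Apostera's actual API):

```python
# Hedged sketch: time-to-collision (TTC) from two consecutive distance
# measurements of a tracked vehicle, assuming constant relative speed.

def time_to_collision(d_prev_m: float, d_curr_m: float, dt_s: float) -> float:
    """TTC = distance / closing speed; +inf when the gap is not closing."""
    closing_speed = (d_prev_m - d_curr_m) / dt_s  # m/s, positive when approaching
    if closing_speed <= 0.0:
        return float("inf")
    return d_curr_m / closing_speed

# Example: target at 58 m, then 56 m one frame (66 ms) later -> TTC ~ 1.85 s
print(time_to_collision(58.0, 56.0, 0.066))
```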

Positioning

  • Precise relative and absolute positioning
  • Flexible data fusion and smooth map matching
  • Automotive-constrained SLAM
  • Video-based digital gyroscope
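
The video-based digital gyroscope above can be illustrated with sparse optical flow: under a small-angle, near-pure-rotation assumption, the median horizontal flow divided by the focal length approximates the yaw increment. A hedged OpenCV sketch (calibration and robustness handling omitted; not the production component):

```python
import cv2
import numpy as np

# Hedged sketch of a video-based yaw estimate from sparse optical flow.
# Assumes a calibrated pinhole camera with focal length fx in pixels and
# approximately pure rotation between frames; illustrative only.

def estimate_yaw_deg(prev_gray: np.ndarray, curr_gray: np.ndarray, fx: float) -> float:
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    dx = np.median(nxt[ok, 0, 0] - pts[ok, 0, 0])  # median horizontal flow (px)
    return float(np.degrees(dx / fx))              # small-angle: yaw ~ dx / fx
```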

Predictable Environmental Model, Safety Apps - V2X

  • BSM transmitting/receiving
  • Remote Vehicles trajectory prediction
  • Basic safety applications based on collision detection

Integration with HD Maps

  • HD Maps utilization for precise positioning, map matching and path planning, junction assistance
  • Data generation for HD Maps
  • Contribution to the ADAS attributes structure – NDS (HERE)

Augmented Reality

  • LCD, HUD & further output devices
  • Natural navigation hints & infographics
  • Collision, lane departure, blind spot warnings, etc.
  • POIs and supportive information (facade and parking slot highlighting, etc.)

Computer Vision Approaches

  • Real-time feature extraction from video sensors
  • Road scene semantic segmentation
  • Adaptability and confidence estimation of output data
  • GPU optimization for different platforms

Sensor Fusion

  • Flexible fusion of data from internal and external sources
  • LIDAR data merging
  • 3D environment model reconstruction based on different sensors
  • Latency compensation & data extrapolation

Machine Learning Specifics

  • CNN and DNN approaches
  • Supervised MRF parameter adjustment
  • CSP-based structure & parameter adjustment (both supervised and unsupervised)
  • Weak classifier boosting & others

Scientific and Engineering Expertise

SLIDE 10

System Overview

Live data from vehicle:

  • CAN data, sensors
  • Video stream

ECU (e.g. Jetson TX2) running the ADAS/AR Engine:

  • Sensor Abstraction Layer
  • Web interface: SW update, configuration, diagnostics

Outputs and connections:

  • Video stream with augmented objects → HUD/LCD
  • Navigation data, preprocessed sensor data, control/settings ↔ Head Unit

Key properties:

  • Quick-install demonstration solution
  • Platform for AR (designed to be portable)
  • Integration with head units
  • Integration with vehicle networks
  • Use of its own sensors if needed
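
One way to picture the Sensor Abstraction Layer above: the engine consumes timestamped samples through a single interface, regardless of whether they come from CAN, a camera, or the demo unit's own sensors. A minimal sketch (all interface names are assumptions for illustration, not the actual Apostera API):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, List

# Hedged sketch of a sensor abstraction layer: every source delivers
# uniformly timestamped samples so the ADAS engine never depends on
# transport details. All names here are illustrative.

@dataclass
class Sample:
    timestamp_s: float   # common clock, needed for cross-sensor synchronization
    channel: str         # e.g. "can.speed", "camera.front"
    payload: Any

class SensorSource(ABC):
    @abstractmethod
    def poll(self) -> List[Sample]:
        """Return all samples that arrived since the last poll."""

class ConstantSpeedCAN(SensorSource):
    """Toy CAN source emitting a fixed vehicle speed, for demonstration."""
    def __init__(self) -> None:
        self._t = 0.0
    def poll(self) -> List[Sample]:
        self._t += 0.1
        return [Sample(self._t, "can.speed", 12.0)]
```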

SLIDE 11

Perception Concept

SLIDE 12

Sensor Fusion: Data Inference

The optimal fusion filter parameter adjustment problem was stated and solved so that the filter fits different car models with different chassis geometries and steering wheel models/parameters.

Features:

  • Absolute and relative positioning
  • Dead reckoning
  • Fusion with available automotive-grade sensors – GPS, steering wheel angle, steering wheel rate, wheel sensors
  • Fusion with navigation data
  • Rear movement support
  • Identification of complex steering wheel models; ability to integrate with provided models
  • GPS error correction
  • Stability and robustness under complex conditions – tunnels, urban canyons
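
As a concrete illustration of dead reckoning corrected by GPS, here is a deliberately minimal linear Kalman filter on 2D position. It shows one standard way to realize the idea; the state, noise values, and structure are illustrative assumptions, not the production filter:

```python
import numpy as np

# Hedged sketch: dead reckoning from speed/heading odometry, corrected by
# sparse GPS fixes with a linear Kalman filter on 2D position (east, north).

def predict(x, P, v_mps, yaw_rad, dt_s, Q):
    """Dead reckoning: advance position along the current heading."""
    x = x + dt_s * np.array([v_mps * np.cos(yaw_rad), v_mps * np.sin(yaw_rad)])
    return x, P + Q  # uncertainty grows while dead reckoning

def update_gps(x, P, z, R):
    """Correct the dead-reckoned position with a GPS fix z = (east, north)."""
    S = P + R                         # innovation covariance (H = identity)
    K = P @ np.linalg.inv(S)          # Kalman gain
    return x + K @ (z - x), (np.eye(2) - K) @ P

x, P = np.zeros(2), np.eye(2) * 4.0           # initial position guess
Q, R = np.eye(2) * 0.05, np.eye(2) * 9.0      # process / GPS noise (m^2)
x, P = predict(x, P, v_mps=12.0, yaw_rad=0.1, dt_s=0.1, Q=Q)  # 10 Hz odometry
x, P = update_gps(x, P, z=np.array([1.3, 0.2]), R=R)          # GPS correction
```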

SLIDE 13

Sensor Fusion: Advanced Augmented Objects Positioning

Solving map accuracy problems

Placing:

  • Road model
  • Vehicle detection
  • Map data

Position clarification:

  • Camera motion model: video-based gyroscope
  • Positioner component: road model, object tracking
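
Placing an augmented object ultimately means projecting a world-space anchor point (from the road model or map) into the image with the latest clarified camera pose. A minimal pinhole-projection sketch (all parameter values are illustrative assumptions, not calibrated data):

```python
import numpy as np

# Hedged sketch: project a world-frame anchor point into pixel coordinates
# with a pinhole camera model. R_cw/t_cw transform world -> camera frame;
# fx, fy, cx, cy are intrinsics. Values below are illustrative only.

def project(point_w: np.ndarray, R_cw: np.ndarray, t_cw: np.ndarray,
            fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    p_c = R_cw @ point_w + t_cw            # world -> camera frame
    u = fx * p_c[0] / p_c[2] + cx          # perspective projection
    v = fy * p_c[1] / p_c[2] + cy
    return np.array([u, v])

# Example: a lane-arrow anchor 20 m ahead of a camera looking along +Z
uv = project(np.array([0.0, 1.2, 20.0]), np.eye(3), np.zeros(3),
             fx=1000.0, fy=1000.0, cx=480.0, cy=270.0)
```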
SLIDE 14

Sensor Fusion: Comparing Solutions

  • Apostera solution: update frequency ~15 Hz (+ extrapolation at any fps)
  • Reference solution: update frequency ~4-5 Hz
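
The "+ extrapolation at any fps" above means the renderer does not wait for the next ~15 Hz fusion update: the latest pose is advanced by its estimated velocity at display rate. A minimal constant-velocity sketch (names are illustrative):

```python
# Hedged sketch: constant-velocity pose extrapolation between fusion updates,
# so augmented objects can be drawn at display rate (e.g. 60 fps) even though
# the fused position only refreshes at ~15 Hz.

def extrapolate(pos_m: float, vel_mps: float,
                t_update_s: float, t_render_s: float) -> float:
    return pos_m + vel_mps * (t_render_s - t_update_s)

# Fusion update at t = 0.0 s; render a frame 16 ms later at 60 fps
print(extrapolate(pos_m=10.0, vel_mps=12.0, t_update_s=0.0, t_render_s=0.016))
```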

SLIDE 15

Lane Detection: Adaptability and Confidence

SLIDE 16

Lane Detection: 3D-scene Recognition Pipeline

Low-level invariant features:

  • Single camera
  • Stereo data
  • Point clouds

Structural analysis / probabilistic models:

  • Real-world features
  • Physical objects
  • 3D scene reconstruction
  • Road situation

3D space scene fusion (input from different sensors)
Backward knowledge propagation from higher levels
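
As a toy stand-in for the low-level feature stage of this pipeline, here is a classical lane-segment extractor with a crude confidence score. The real pipeline layers probabilistic models and 3D fusion on top; this sketch is an illustrative assumption, not Apostera's implementation:

```python
import cv2
import numpy as np

# Hedged sketch: Canny edges + probabilistic Hough transform as a minimal
# lane-segment detector, with confidence growing with supporting evidence.

def detect_lane_segments(bgr: np.ndarray):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=40, maxLineGap=12)
    segs = [] if segs is None else segs[:, 0, :]   # (N, 4) endpoints
    confidence = min(1.0, len(segs) / 20.0)        # crude, saturating score
    return segs, confidence
```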

SLIDE 17

Vehicle Detection

  • Convolutional neural network for vehicle detection
  • GPU acceleration – CUDA; runs in real time on NVIDIA Jetson TX2
  • Inference speedup on the embedded (TX2) GPU vs CPU is ~3x; more potential with new libraries (e.g. TensorRT)
  • Training speedup on a desktop GPU vs CPU is ~20x
  • Classifier accuracy (about 50k images, 960x540, ~55-60° HFOV):

  • Positive: 99.65%
  • Negative: 99.82%

Detection size down to 30 px; detection range of about 60 m

Figure – Vehicle detection examples
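
The reported ~3x embedded-GPU inference speedup can be checked with a simple timing harness. A hedged sketch using a stock torchvision backbone as a stand-in (the actual network, weights, and measurement method are not public):

```python
import time
import torch
import torchvision

# Hedged sketch: time a stand-in CNN on CPU vs GPU for one 960x540 frame.
# Results will vary by platform; this only illustrates the measurement.

model = torchvision.models.mobilenet_v2(weights=None).eval()
frame = torch.randn(1, 3, 540, 960)  # one 960x540 RGB frame, as on the slide

def seconds_per_frame(device: str, iters: int = 20) -> float:
    m, x = model.to(device), frame.to(device)
    with torch.no_grad():
        m(x)  # warm-up pass (builds kernels/caches)
        if device == "cuda":
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(iters):
            m(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters

print("CPU s/frame:", seconds_per_frame("cpu"))
if torch.cuda.is_available():
    print("GPU s/frame:", seconds_per_frame("cuda"))
```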

SLIDE 18

Road Scene Semantic Segmentation

  • Deep fully convolutional neural network for semantic pixel-wise segmentation
  • Road scene understanding use cases: modeling appearance, shape, and spatial relationships between classes
  • Inference speedup GPU vs CPU is ~3x

Figure – Road scene segmentation examples
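
Downstream, the pixel-wise output is typically turned into an overlay by taking a per-pixel argmax over class scores and mapping each class to a color. A minimal sketch (the class list and colors are illustrative assumptions):

```python
import numpy as np

# Hedged sketch: convert per-pixel class scores into a color mask for a
# road-scene overlay. Classes/colors below are illustrative.

CLASS_COLORS = np.array([[128, 64, 128],   # road
                         [244, 35, 232],   # sidewalk
                         [0, 0, 142]],     # vehicle
                        dtype=np.uint8)

def colorize(scores: np.ndarray) -> np.ndarray:
    """scores: (num_classes, H, W) network output -> (H, W, 3) color mask."""
    labels = scores.argmax(axis=0)          # pixel-wise class decision
    return CLASS_COLORS[labels]             # look up one color per pixel

overlay = colorize(np.random.rand(3, 540, 960))  # toy usage example
```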

SLIDE 19

HMI Concept

SLIDE 20

Rendering Component Structure

Figure – Rendering component

SLIDE 21

Augmented Objects Primitives

Barrier, Lane Line, Lane Arrow, Fishbone, Street Name

SLIDE 22

Augmented Objects Primitives And HMI

SLIDE 23

Head Up Display Concept. HUD vs LCD

Hardware limitations

  • HUD devices are rarely available on the market
  • FOV and object size

Timings

  • Zero latency
  • Driver eye position

Driver perception

  • Virtual image distance
  • Information balance
SLIDE 24

HUD Image Correction (Dewarping)

Figure – Corrected image
Figure – Uncorrected image

The HUD image exhibits slight distortion that needs correction. A custom warp map was built by projecting a test pattern through the HUD, recording it with a camera, and deriving the pixel remapping from the recorded image.
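
Once such a warp map exists, every frame can be corrected with a single remap before it reaches the HUD. A hedged OpenCV sketch, where a synthetic displacement field stands in for the measured map (the real map comes from the calibration procedure above):

```python
import cv2
import numpy as np

# Hedged sketch of the dewarping step: each output pixel (y, x) is sampled
# from input location (map_y, map_x). The sinusoidal field below is an
# illustrative stand-in for the map measured from the test pattern.

H, W = 540, 960
ys, xs = np.mgrid[0:H, 0:W].astype(np.float32)
map_x = xs + 3.0 * np.sin(ys / 40.0)   # stand-in horizontal displacement
map_y = ys + 3.0 * np.sin(xs / 40.0)   # stand-in vertical displacement

def dewarp(frame: np.ndarray) -> np.ndarray:
    """Apply the (measured) inverse distortion before sending to the HUD."""
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)

corrected = dewarp(np.zeros((H, W, 3), dtype=np.uint8))  # toy usage example
```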

SLIDE 25

Demo Application (LCD)

SLIDE 26

Summary: Key Technology Advantages

  • Proven understanding of the pragmatic intersection and synergy between fundamental theoretical results and final requirements
  • Formal mathematical approaches complemented by deep learning
  • Solid GPU optimization
  • Automotive-grade solutions integrated with all the data sources in the vehicle – data fusion approaches
  • High robustness in various weather and road conditions; confidence is estimated for efficient fusion
  • Closed loops designed and implemented to enhance the speed and robustness of each component
  • Integration with V2X and various navigation systems
  • System architecture supports distributed HW setups and integration with existing in-vehicle components if required (environmental model, object detection, navigation, positioner, etc.)
  • Hierarchical Algorithmic Framework design that highly optimizes computation on embedded platforms
  • Collaboration with scientific groups to integrate cutting-edge approaches

SLIDE 27

Sergii Bykov, Technical Lead Machine Learning – sergii.bykov@apostera.com

BRINGING THE MOBILITY WORLD TO A NEW ERA OF AUTONOMY