SLIDE 1

TRAINING AND VALIDATING AUTOMATED DRIVING APPLICATIONS USING PHYSICS-BASED SENSOR SIMULATION

Martijn Tideman – Product Director

NVIDIA GTC Europe – October 11, 2017

www.tassinternational.com

SLIDE 2

Confidential

TASS International: Connecting Simulation & Testing

TASS International supports the automotive industry in making vehicles safer and smarter by offering software and services for the development and validation of Automated Driving and Integrated Safety systems:

  • Research & Concepts
  • Engineering & Development
  • Testing & Verification
  • Validation & Certification

SLIDE 3

TASS International & Siemens

  • As of September 1, 2017, TASS International and Siemens have joined forces
  • Offering a complete development chain for mechanics and electronics
  • Integrated solutions for verification and validation of automated driving systems
  • AI and Deep Learning are key focus points

SLIDE 4

Connecting Simulation & Testing

Why testing? Testing is needed to verify and demonstrate that the physical product complies with specific requirements and quality standards (often in an emulated environment representing a subset of real-life use cases).

SLIDE 5

Connecting Simulation & Testing

Why simulation? Simulation is needed to make quick and cost-effective design iterations and to validate the product against all relevant real-life use cases in an environment that is safe and offers perfectly reproducible conditions.

SLIDE 6

TASS International Simulation Solutions

  • World & Sensor Modelling: environmental sensors perceiving the world and delivering input to Automated Driving decision & control logic
  • Tyre Modelling: tyres transferring Automated Driving control commands to the road
  • Human Modelling: human drivers and passengers travelling safely and comfortably from A to B
  • V2X Modelling: receivers and transmitters facilitating wireless communication

SLIDE 7

Simulation Platform: PreScan™

  • World & Sensor Modelling: environmental sensors perceiving the world and delivering input to Automated Driving decision & control logic
  • V2X Modelling: receivers and transmitters facilitating wireless communication

SLIDE 8

PreScan™ Simulation Platform: Main Capabilities

  • Easy world modelling, scenario building & import
  • Extensive sensor model library, with models of varying fidelity up to a physics-based camera: camera, radar, lidar, ultrasonic, infrared, V2X, GPS, etc.
  • Interfaces with 3rd-party solutions: vehicle dynamics, maps, traffic, etc.

Workflow example: real scenario → virtual scenario → virtual camera image.

SLIDE 9

PreScan™ Application Examples

  • Adaptive Cruise Control
  • Pedestrian AEB based on radar-camera fusion
  • Lane Keeping Assistance
  • Parking Assistance

SLIDE 10

Application Area: Deep Learning

Deep learning is gaining momentum:

  • Deep Learning is increasingly being applied for ADAS and HAD; almost all major OEMs and Tier-1 suppliers have established dedicated Deep Learning teams
  • It is widely recognized that simulation is necessary to train HAD algorithms, especially for "corner cases" (critical situations with low probability)
  • Deep Learning is currently mainly applied to camera data, but the industry is also looking at using radar and lidar data, and at raw sensor data fusion

Source: DFKI. Example: logos of companies recently presenting about deep learning at conferences.

SLIDE 11

Needed for successfully applying Deep Learning:

1. Lots of training data (e.g. camera, radar, lidar data)
   • Plenty of real-world data is available for high-probability cases, but there is insufficient real-world data for critical, low-probability situations ("corner cases")
   • Simulation can provide this data very easily
2. Reference (ground truth) data, a.k.a. "labels" or "tags"
   • Manually tagging/labelling images is an expensive and tedious process (even when outsourced to low-wage countries)
   • Simulation solves this by providing a "free" ground-truth signal
3. Test coverage & final validation/certification
   • High-Performance Clusters (HPCs) capable of running large numbers of scenarios and variations for validation purposes
   • Open question: can we develop a virtual homologation methodology & environment?
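The "free" ground-truth point can be sketched in a few lines. Here `render_training_pair` is a hypothetical stand-in for one simulator step (plain NumPy, not a PreScan API): every rendered frame comes with a pixel-perfect label map at zero labelling cost.

```python
import numpy as np

def render_training_pair(scenario_seed, size=(4, 4)):
    """Stand-in for one simulator step: returns a sensor frame plus its
    pixel-perfect ground-truth label map (the 'free' labels from simulation)."""
    rng = np.random.default_rng(scenario_seed)
    frame = rng.random(size)                 # synthetic camera frame
    labels = (frame > 0.5).astype(np.uint8)  # exact per-pixel class IDs
    return frame, labels

# Build a labelled dataset with zero manual annotation effort.
dataset = [render_training_pair(seed) for seed in range(100)]
```

In a real pipeline the label map would come from a reference sensor rendered in lockstep with the camera, which is exactly what the ISS described later in this deck provides.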

SLIDE 12

(Repeat of the "Needed for successfully applying Deep Learning" overview from slide 11.)

SLIDE 13

PreScan™ Physics Based Camera (PBC)

PreScan™ PBC during night-time driving · PreScan™ PBC during tunnel entrance/exit

SLIDE 14

PreScan™ Physics Based Camera (PBC)

The PreScan™ Physics Based Camera offers:

  • Full-spectrum world simulation (incl. non-visual wavelengths such as IR)
  • Camera component models (e.g. lens, filters, imager)

SLIDE 15

PreScan™ Physics Based Radar (PBR)

Side by side: camera image from the "radar's point of view", and PreScan™ PBR simulated radar data processed to a Range-Doppler map.

Note: this is a 12 s scenario, played 5x slower. The radar has a much wider field of view than the camera.

SLIDE 16

PreScan™ Physics Based Radar (PBR) Capabilities

  • Multipath simulation up to any number of bounces
  • Multistatic antenna configurations (MIMO)
  • Fully customizable waveforms (FMCW, Fast Chirp Modulation, etc.)
  • Physical material properties, including polarization effects
  • Clutter simulation
  • Micro-Doppler effects
  • Interference between different radar sets
  • Non-perfect component behaviour
  • Configurable trade-off between fidelity and performance
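As a toy illustration of the Range-Doppler processing shown on the previous slide (plain NumPy, assuming nothing about PreScan's internals): a single FMCW point target yields a beat tone per chirp whose phase rotates chirp to chirp, and a 2-D FFT recovers its range and Doppler bins.

```python
import numpy as np

# Toy FMCW range-Doppler processing: fast-time tone encodes range,
# slow-time (chirp-to-chirp) phase rotation encodes Doppler.
n_samples, n_chirps = 64, 32
range_bin, doppler_bin = 10, 5            # where the target should appear

t = np.arange(n_samples)                  # fast-time sample index
c = np.arange(n_chirps)                   # slow-time chirp index
cube = np.exp(2j * np.pi * (range_bin * t[None, :] / n_samples
                            + doppler_bin * c[:, None] / n_chirps))

# 2-D FFT: range FFT along fast time, Doppler FFT along slow time.
rd_map = np.abs(np.fft.fft2(cube))
peak = np.unravel_index(np.argmax(rd_map), rd_map.shape)
# peak lands at (doppler_bin, range_bin)
```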

SLIDE 17

PreScan™ LIDAR Model

Example: the PreScan lidar model simulating a Velodyne lidar sensor.

SLIDE 18

(Repeat of the "Needed for successfully applying Deep Learning" overview from slide 11.)

SLIDE 19

PreScan™ Image Segmentation Sensor (ISS)

  • PreScan's Image Segmentation Sensor (ISS) generates segmented images
  • Two modes:
    1. Object mode: each object gets a unique ID, name, and color
    2. Type mode: objects are grouped according to object type
  • ISS can be combined with other "reference sensors" (e.g. bounding boxes, depth cameras)
  • Not only for camera simulation, but also usable for radar and lidar simulation

Images: camera image; ISS image based on object types; ISS image based on unique objects.
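The two ISS modes relate through a simple lookup. A toy example (object IDs and class assignments invented for illustration) derives a type-mode image from an object-mode image:

```python
import numpy as np

# Object-mode image: a unique ID per object (0 = background).
object_img = np.array([[1, 1, 0],
                       [2, 3, 0],
                       [2, 3, 3]])

# Grouping of object IDs into object types, e.g. 1 = vehicle, 2 = pedestrian.
object_to_type = {0: 0, 1: 1, 2: 1, 3: 2}

# Vectorized lookup table: map every object ID to its type ID.
lut = np.array([object_to_type[i] for i in range(4)])
type_img = lut[object_img]
```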
SLIDE 20

Image Segmentation Sensor: Example Application

SLIDE 21

Using PreScan™ data for deep learning: joint projects with DFKI & Siemens

Main questions:

1. Are synthetic camera images generated by PreScan suitable for training deep-learning-based classifiers? What criteria do they need to comply with?
2. Does adding synthetic images to a set of real images offer added value?

Approach:

  • Training based on Convolutional Neural Networks (CNNs) for image segmentation and driving-scenario classification
  • Different models were trained on real and synthetic data, mixed in various ratios
  • Performance evaluated on a set of real test images using confusion matrices and Intersection over Union (IoU) criteria
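The IoU criterion used in the evaluation can be computed per class directly from two label maps; a minimal sketch with invented toy data:

```python
import numpy as np

def iou(pred, target, cls):
    """Intersection over Union for one class between two label maps."""
    p, t = (pred == cls), (target == cls)
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else 1.0

pred   = np.array([[1, 1, 0],
                   [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
score = iou(pred, target, cls=1)   # intersection 2 / union 4
```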

SLIDE 22

Using PreScan™ data for deep learning: joint projects with DFKI & Siemens

Images: real images from an automotive camera; synthetic images from the PreScan Physics Based Camera (PBC) model; segmented images from the PreScan Image Segmentation Sensor (ISS).

SLIDES 23–25

(Further examples of real automotive camera images alongside PreScan PBC synthetic images and PreScan ISS segmented images.)

SLIDE 26

Using PreScan™ data for deep learning: joint projects with DFKI & Siemens

Some results & findings:

  • Training on only synthetic data yields models that do not perform well in the real world
  • Adding synthetic data to real training data increases the quality of the model, compared to using only real training data
  • Models trained on larger numbers of synthetic images performed better, provided that the synthetic input is balanced against reality
  • Artefacts and imperfections seen in the real world should also be present in the synthetic data (both in the environment model and in the sensor model)

Note: these are first steps for PreScan in the field of deep learning... many more to follow!
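Building the mixed real/synthetic training sets mentioned above can be sketched as follows; `mix_dataset` is an illustrative helper, not part of any TASS tooling:

```python
import random

def mix_dataset(real, synthetic, synth_ratio, seed=0):
    """Blend synthetic samples into a real training set so that
    synth_ratio is the fraction of the final set that is synthetic."""
    n_synth = int(len(real) * synth_ratio / (1 - synth_ratio))
    rng = random.Random(seed)
    mixed = real + rng.sample(synthetic, min(n_synth, len(synthetic)))
    rng.shuffle(mixed)
    return mixed

real = [("real", i) for i in range(80)]
synth = [("synth", i) for i in range(200)]
train = mix_dataset(real, synth, synth_ratio=0.2)   # 80 real + 20 synthetic
```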

SLIDE 27

(Repeat of the "Needed for successfully applying Deep Learning" overview from slide 11.)

SLIDE 28

Virtual homologation methodology & environment

  • Field data analysis: >90% test coverage?
  • Simulation of "corner cases" (plus SW updates): adds coverage of rare & critical scenarios?
  • Still missing: massive physics-based parametric cluster scenario evaluation, scenario generation, and data analysis
  • Goal: "guaranteed" 100% test coverage as a basis for homologation/certification
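The "parametric cluster scenario evaluation" idea amounts to sweeping a scenario parameter space and dispatching every combination as one simulation run. A minimal sketch with an invented parameter grid (none of these parameter names come from PreScan):

```python
import itertools

# Hypothetical scenario parameter grid; each combination becomes one
# simulation run on the cluster.
grid = {
    "ego_speed_kph":    [30, 50, 80],
    "pedestrian_gap_m": [5, 10, 20],
    "weather":          ["clear", "rain", "fog"],
}

# Cartesian product of all parameter values -> full sweep.
scenarios = [dict(zip(grid, combo))
             for combo in itertools.product(*grid.values())]
# 3 * 3 * 3 = 27 scenario variants ready for cluster dispatch
```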

SLIDE 29

PreScan™ simulation platform connected to NVIDIA Drive PX for verification and validation

PreScan synthetic sensor data injection (PreScan PC → CAN → Drive PX):

  • Injection of PreScan™ synthetic sensor data into the NVIDIA Drive PX as an alternative or addition to road testing with real sensors
  • Virtual verification of algorithms for environmental perception
  • Virtual validation of control logic
  • Closed-loop real-time HIL simulation

The PreScan™ – Drive PX injection setup is demonstrated at our booth.
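At the byte level, injecting a simulated detection over CAN means packing it into a fixed 8-byte frame payload. The layout below is entirely invented for illustration (a real setup would follow the sensor's or OEM's actual DBC message definitions):

```python
import struct

def pack_object_frame(obj_id, rel_dist_m, rel_speed_mps):
    """Pack one simulated detection into an 8-byte CAN-style payload:
    uint16 object ID, int16 distance in cm, int16 speed in cm/s, 2 pad bytes."""
    return struct.pack(">Hhhxx", obj_id,
                       int(rel_dist_m * 100), int(rel_speed_mps * 100))

def unpack_object_frame(payload):
    """Inverse of pack_object_frame, back to SI-ish units."""
    obj_id, dist_cm, speed_cms = struct.unpack(">Hhhxx", payload)
    return obj_id, dist_cm / 100, speed_cms / 100

frame = pack_object_frame(7, 32.5, -1.25)   # object 7, 32.5 m, closing at 1.25 m/s
```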

SLIDE 30

PreScan™ simulation platform connected to NVIDIA Drive PX for verification and validation

Virtual validation of environmental perception algorithms running on the Drive PX.

SLIDE 31

PreScan™ simulation platform connected to NVIDIA Drive PX for verification and validation

SLIDE 32

(Repeat of the "Needed for successfully applying Deep Learning" overview from slide 11.)

SLIDE 33

Outlook

  • In addition to PreScan's camera model, also using PreScan's physics-based radar and lidar models to generate synthetic input for deep learning purposes
  • Establishing PreScan injection setups for deep learning based on raw sensor data fusion
  • Using trained neural networks to automatically generate virtual PreScan scenarios and the corresponding synthetic sensor data
  • Using the latest HPC and GPU technologies to maximize the number of "virtual miles" driven per hour/day/week/month/year

SLIDE 34

Questions? Live demo? Please visit us at booth E.41

martijn.tideman@tassinternational.com