  1. TRAINING AND VALIDATING AUTOMATED DRIVING APPLICATIONS USING PHYSICS-BASED SENSOR SIMULATION – Martijn Tideman, Product Director – NVIDIA GTC Europe, October 11th, 2017 – www.tassinternational.com

  2. TASS International: Connecting Simulation & Testing
  Research & Concepts | Engineering & Development | Testing & Verification | Validation & Certification
  TASS International supports the automotive industry in making vehicles safer and smarter by offering software and services for the development and validation of Automated Driving and Integrated Safety systems. Confidential 2

  3. TASS International & Siemens
   As of September 1st, 2017, TASS International and Siemens joined forces
   Offering a complete development chain for mechanics and electronics
   Integrated solutions for verification and validation of automated driving systems
   AI and Deep Learning are key focus points

  4. Connecting Simulation & Testing – Why testing? Testing is needed to verify and demonstrate that the physical product complies with specific requirements and quality standards (often in an emulated environment representing a subset of real-life use cases).

  5. Connecting Simulation & Testing – Why simulation? Simulation is needed to make quick and cost-effective design iterations and to validate the product against all relevant real-life use cases, in an environment that is safe and offers perfectly reproducible conditions.

  6. TASS International Simulation Solutions
   World & Sensor Modelling: environmental sensors perceiving the world and delivering input to Automated Driving decision & control logic
   Tyre Modelling: tyres transferring Automated Driving control commands to the road
   V2X Modelling: receivers and transmitters facilitating wireless communication
   Human Modelling: human drivers and passengers traveling safely and comfortably from A to B

  7. Simulation Platform: PreScan™
   World & Sensor Modelling: environmental sensors perceiving the world and delivering input to Automated Driving decision & control logic
   V2X Modelling: receivers and transmitters facilitating wireless communication

  8. PreScan™ Simulation Platform: Workflow – Sensor models with varying fidelity
  Main capabilities:
  • Easy world modeling, scenario building & import
  • Extensive sensor model library (camera, radar, lidar, ultrasonic, infrared, V2X, GPS, etc.)
  • Interfaces with 3rd-party solutions (vehicle dynamics, maps, traffic, etc.)
  Example: physics-based camera – virtual camera image generated from a virtual scenario, shown alongside the real scenario

  9. PreScan™ Application Examples: Adaptive Cruise Control; Pedestrian AEB based on radar-camera fusion; Lane Keeping Assistance; Parking Assistance

  10. Application area: Deep Learning
  Deep learning is gaining momentum: it is increasingly being applied for ADAS and HAD
   Almost all big OEMs / Tier-1s have established dedicated teams for Deep Learning (example: logos of companies recently presenting about deep learning at conferences)
  It is widely recognized that simulation is necessary to train HAD algorithms
   Especially for the "corner cases" (i.e. critical situations with low probability)
  Deep Learning is currently mainly applied to camera data, but the industry is also looking at:
   Using radar and lidar data
   Raw sensor data fusion
  Source: DFKI

  11. Needed for successfully applying Deep Learning:
  1. Lots of training data (e.g. camera, radar, lidar data, etc.)
  • There is lots of real-world data available about high-probability cases, but insufficient real-world data available for critical situations with low probability ("corner cases")
  • Simulation can provide this data very easily!
  2. Reference (ground truth) data, aka "labels" or "tags"
  • Manually tagging/labeling images is an expensive and boring process (even if outsourced to low-wage countries)
  • Simulation solves this by providing a "free" ground-truth signal!
  3. Test coverage & final validation/certification
  • High Performance Clusters (HPCs) capable of running large numbers of scenarios & variations for validation purposes
  • Open question: are we able to develop a virtual homologation methodology & environment?
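The "free ground truth" point can be sketched in a few lines: a simulator that renders a camera frame can emit the matching label mask in the same step, so the (image, label) pairs a network trains on never pass through a manual tagging stage. The file layout below is purely hypothetical for illustration, not PreScan's actual output format.

```python
# Sketch (hypothetical file layout): pairing simulated camera frames with the
# ground-truth label masks a simulator emits alongside them, so no manual
# tagging step is needed.
from dataclasses import dataclass

@dataclass
class Sample:
    frame: str   # path to the rendered camera image
    label: str   # path to the auto-generated ground-truth mask

def pair_simulated_data(frame_ids):
    """Build (image, label) pairs; in simulation the label comes for free."""
    return [Sample(f"camera/{i:06d}.png", f"labels/{i:06d}.png")
            for i in frame_ids]

pairs = pair_simulated_data(range(3))
```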


  13. PreScan™ Physics Based Camera (PBC): PreScan™ PBC during night-time driving; PreScan™ PBC during tunnel entrance/exit

  14. PreScan™ Physics Based Camera (PBC)
  The PreScan™ Physics Based Camera offers:
   Full-spectrum world simulation (incl. non-visual wavelengths such as IR)
   Camera component models (e.g. lens, filters, imager)

  15. PreScan™ Physics Based Radar (PBR)
  Camera image from the radar's point of view; PreScan™ PBR simulated radar data, processed to Range-Doppler
  Note: this is a 12 s scenario, played 5x slower. The radar has a much wider field of view than the camera.

  16. PreScan™ Physics Based Radar (PBR) – Capabilities
   Multipath simulation up to any number of bounces
   Multistatic antenna configurations (MIMO)
   Fully customizable waveforms (FMCW, Fast Chirp Modulation, etc.)
   Physical material properties, including polarization effects
   Clutter simulation
   Micro-Doppler effects
   Interference between different radar sets
   Non-perfect component behaviour
   Configurable tradeoff between fidelity and performance
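To make the FMCW and Range-Doppler terminology on the last two slides concrete, here is a minimal, generic sketch (not PreScan's implementation; all radar parameters are made up): a single target's range is recovered from the beat signal of one chirp via a DFT. This is the range axis of a Range-Doppler map; the Doppler axis would come from a second DFT across consecutive chirps.

```python
# Minimal FMCW range-recovery sketch with illustrative parameters.
import cmath
import math

c = 3e8            # speed of light (m/s)
B = 150e6          # chirp bandwidth (Hz) -> range resolution c/(2B) = 1 m
T = 1e-4           # chirp duration (s)
N = 256            # samples per chirp
fs = N / T         # sample rate (Hz)

target_range = 40.0                        # metres (ground truth)
f_beat = (B / T) * (2 * target_range / c)  # beat frequency for that range

# Sampled beat signal for one chirp (noise-free, single target)
sig = [cmath.exp(2j * math.pi * f_beat * n / fs) for n in range(N)]

def dft_mag(x, k):
    """Magnitude of the k-th DFT bin of x."""
    return abs(sum(v * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n, v in enumerate(x)))

# Peak bin maps back to range: R = k * (fs/N) * c*T / (2B)
peak = max(range(N // 2), key=lambda k: dft_mag(sig, k))
estimated_range = peak * (fs / N) * c * T / (2 * B)
```

With these numbers the beat frequency lands exactly on bin 40, so the estimate matches the 40 m ground truth; with off-bin targets one would interpolate around the peak.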

  17. PreScan™ LIDAR model – Example: PreScan LIDAR model simulating a Velodyne LIDAR sensor
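What a rotating-lidar model like this ultimately outputs is a point cloud, built by converting each simulated (range, azimuth, elevation) return to Cartesian coordinates. The conversion below is the standard geometry common to rotating lidars, not PreScan- or Velodyne-specific code, and the sample values are invented.

```python
# Generic lidar return -> x/y/z point conversion (illustration values only).
import math

def to_cartesian(r, azimuth_deg, elevation_deg):
    """Spherical lidar return (range, azimuth, elevation) to Cartesian x/y/z."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

point = to_cartesian(10.0, 90.0, 0.0)  # a return 10 m away, directly to the left
```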


  19. PreScan™ Image Segmentation Sensor (ISS)
  • PreScan's Image Segmentation Sensor (ISS) generates segmented images
  • Two modes:
  1. Object mode: each object gets a unique ID, name, and color
  2. Type mode: objects are grouped according to object type
  Camera image; ISS image based on object types; ISS image based on unique objects
   ISS can be combined with other "reference sensors" (e.g. bounding boxes, depth cameras)
   Not only for camera simulation, but also usable for radar and lidar simulation!
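The two ISS modes relate through a simple object-to-type mapping, which can be sketched as follows. The IDs, type names, and the tiny mask below are invented for illustration (the real sensor outputs images); the snippet also shows how a bounding box falls out of a per-object mask for free.

```python
# Sketch: object-mode mask -> type-mode mask, plus a free bounding box.
# IDs, types, and mask values are hypothetical.
object_types = {0: "background", 1: "car", 2: "car", 3: "pedestrian"}

id_mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 3],
    [2, 2, 0, 3],
]

# Type mode: replace each object ID by its type label
type_mask = [[object_types[v] for v in row] for row in id_mask]

# Bounding box for object 1: min/max of its pixel coordinates
pix = [(r, c) for r, row in enumerate(id_mask)
       for c, v in enumerate(row) if v == 1]
bbox = (min(r for r, _ in pix), min(c for _, c in pix),
        max(r for r, _ in pix), max(c for _, c in pix))
```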

  20. Image Segmentation Sensor: Example Application

  21. Using PreScan™ data for deep learning – Joint projects with DFKI & Siemens
  Main questions:
  1. Are synthetic camera images generated by PreScan suitable for training deep-learning-based classifiers? What criteria do they need to satisfy?
  2. Does the addition of synthetic images to a set of real images offer added value?
  Approach:
  • Training based on Convolutional Neural Networks (CNNs) for image segmentation and driving-scenario classification
  • Different models were trained on real and synthetic data, mixed in various ratios
  • Performance evaluated on a set of real test images using confusion matrices and Intersection over Union (IoU) criteria
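The two evaluation criteria mentioned in the approach are directly related: per-class IoU can be read off a confusion matrix. A minimal sketch (the 3x3 matrix below is made up for illustration; rows are ground truth, columns are predictions):

```python
# Per-class Intersection over Union (IoU) from a confusion matrix.
def iou_per_class(cm):
    """cm[i][j] = pixels of true class i predicted as class j."""
    ious = []
    n = len(cm)
    for k in range(n):
        tp = cm[k][k]
        fn = sum(cm[k]) - tp                       # true k, predicted other
        fp = sum(cm[r][k] for r in range(n)) - tp  # predicted k, actually other
        union = tp + fp + fn
        ious.append(tp / union if union else 0.0)
    return ious

cm = [
    [50,  5,  0],
    [ 3, 40,  2],
    [ 0,  4, 46],
]
ious = iou_per_class(cm)   # e.g. class 0: 50 / (50 + 3 + 5)
```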

  22. Using PreScan™ data for deep learning – Joint projects with DFKI & Siemens
  Real images from an automotive camera; synthetic images from the PreScan Physics Based Camera (PBC) model; segmented images from the PreScan Image Segmentation Sensor (ISS)


