Visual Perception for Autonomous Driving on the NVIDIA DrivePX2 and using SYNTHIA




  1. Visual Perception for Autonomous Driving on the NVIDIA DrivePX2 and using SYNTHIA. Dr. Juan C. Moure, Dr. Antonio Espinosa. http://grupsderecerca.uab.cat/hpca4se/en/content/gpu http://adas.cvc.uab.es/elektra/ http://www.synthia-dataset.net

  2. Our Background & Current Research Work. Computer Architecture Group: GPU acceleration for bioinformatics, computer vision and image compression. Computer Vision Group: CV algorithms + deep learning for camera-based ADAS. Goal: camera-based perception for autonomous driving, combining a robotized car (the Elektra car with a DrivePX2), GPU-accelerated algorithms, and a deep-learning & simulation infrastructure (SYNTHIA).

  3. Overview of the Presentation. GPU-accelerated perception: depth computation; semantic & slanted stixels (collaboration with Daimler); speeding up a MAP estimation problem solved by DP using CNNs. SYNTHIA toolkit: new datasets, new ground-truth data, LIDARs, …

  4. Stereo Vision for Depth Computation. Disparity: the distance between the same point in the left and right images; a higher disparity means the object is closer. (Figure: example disparity map, annotated at 10 meters.)
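To make the relation concrete, here is a minimal NumPy sketch of the standard pinhole-stereo conversion from disparity to depth (depth = focal length × baseline / disparity); the focal length and baseline below are illustrative placeholders, not the calibration of the actual stereo rig.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation: depth = f * B / d (larger disparity = closer)."""
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(d, np.inf)      # zero disparity = point at infinity
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Illustrative calibration values, not the real rig's:
print(disparity_to_depth([64.0, 8.0], focal_px=1000.0, baseline_m=0.5))
# -> [ 7.8125 62.5 ]: the point with 8x the disparity is 8x closer.
```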

  5. Semi-Global Matching (SGM) on GPU: Parallelism. (Figure: from matching cost to smoothed cost, exposing large-, medium- and fine-grain parallelism.) [Hernández ICCS-2016]
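As a rough CPU reference for what the GPU kernels compute, the sketch below aggregates the matching-cost volume along a single left-to-right path with the usual SGM penalties P1/P2 (illustrative values, not those of [Hernández ICCS-2016]). The full method sums several such paths (the slides use 4 directions); the independence across image rows and paths is the large-grain parallelism, while the per-disparity work is the fine grain.

```python
import numpy as np

def sgm_aggregate_left_to_right(cost, P1=10.0, P2=120.0):
    """Aggregate a matching-cost volume cost[H, W, D] along one path (left -> right).
    Returns the smoothed cost for this single path; full SGM adds several paths."""
    H, W, D = cost.shape
    L = np.empty((H, W, D), dtype=np.float64)
    L[:, 0, :] = cost[:, 0, :]
    for x in range(1, W):
        prev = L[:, x - 1, :]                                # (H, D)
        prev_min = prev.min(axis=1, keepdims=True)           # best previous cost
        same = prev                                          # keep the same disparity
        up = np.pad(prev[:, 1:], ((0, 0), (0, 1)), constant_values=np.inf) + P1
        down = np.pad(prev[:, :-1], ((0, 0), (1, 0)), constant_values=np.inf) + P1
        jump = prev_min + P2                                 # arbitrary disparity change
        L[:, x, :] = (cost[:, x, :]
                      + np.minimum(np.minimum(same, up), np.minimum(down, jump))
                      - prev_min)
    return L
```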

  6. SGM on GPU: Results. (Chart: performance in frames/second for 960x360, 1280x480 and 1920x720 images on Tegra X1 (DrivePX) and Tegra Parker (DrivePX2), with a real-time threshold marked.) Configuration: SGM with 4 path directions, maximum disparity = image height / 4. Tegra Parker improves performance ≈4x over Tegra X1, thanks to 3.5x higher effective memory bandwidth and higher execution overlap among kernels.

  7. Stixel World: Compact Representation of the World. (Figure: stereo images → stereo disparity → stixels; a column is labelled into ground, objects and sky, with the horizon and road slope marked.) Stixel = stick + pixel: fixed width, with a variable number of stixels per column. First proposed by a research group at Daimler [Pfeiffer BMVC-2011]. (A minimal data-structure sketch follows below.)
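A minimal sketch of what this compact representation could look like in code; the field names and example values are illustrative, not the actual data layout used on the car.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Stixel:
    """One stick-like segment of an image column (illustrative fields)."""
    v_top: int        # top image row of the segment
    v_bottom: int     # bottom image row of the segment
    kind: str         # 'ground', 'object' or 'sky'
    disparity: float  # representative disparity of the segment

# A 720-row column compressed into three stixels instead of 720 pixels:
column: List[Stixel] = [
    Stixel(v_top=400, v_bottom=719, kind='ground', disparity=30.0),
    Stixel(v_top=250, v_bottom=399, kind='object', disparity=42.0),
    Stixel(v_top=0,   v_bottom=249, kind='sky',    disparity=0.0),
]
```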

  8. Semantic Stixels: Unified Approach. (Figure: stereo images → stereo disparity + semantic segmentation → semantic stixels; class labels include sky, building, pedestrian and road, with horizon and slope marked.) [Schneider IV-2016]

  9. Enhanced Model: Slanted Stixels. MAP estimation problem joining semantics & depth: a Bayesian model converted to energy minimization, i.e. minimizing the negative log-likelihood E(s) = −log P(s | D, L) of a column segmentation s given the disparity data D and the semantic labels L. The stixel disparity model now includes a slant b: d(v) = a + b·v for image row v. The energy function is redefined accordingly and enforces prior assumptions: no sky below the horizon, objects stand on the road. [Hernández BMVC-2017] Best Industrial Paper.
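As an illustration of the slanted disparity model, the sketch below scores a single stixel hypothesis by fitting d(v) = a + b·v over its rows; a plain least-squares fit stands in for the paper's probabilistic formulation, and the semantic data term and the priors are omitted.

```python
import numpy as np

def slanted_depth_term(column_disparities, v_top, v_bottom):
    """Depth data term of one stixel hypothesis under a slanted model:
    fit d(v) = a + b*v over rows v_top..v_bottom and score the residual."""
    v = np.arange(v_top, v_bottom + 1)
    d = np.asarray(column_disparities)[v]
    b, a = np.polyfit(v, d, deg=1)          # slant b and offset a
    residual = d - (a + b * v)
    return float(np.sum(residual ** 2)), a, b

# Example: a synthetic ground-like column whose disparity shrinks with the row index.
col = np.linspace(60.0, 20.0, 100)
cost, a, b = slanted_depth_term(col, 10, 59)   # near-zero cost, b < 0
```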

  10. New SYNTHIA-San Francisco Dataset. A San Francisco-like city designed with the SYNTHIA toolkit: 2224 photorealistic images featuring slanted roads, with pixel-level depth & semantic ground truth. Generating equivalent real-data images would be very expensive.

  11. Results: Quantitative & Visual. On SYNTHIA-SF, disparity error (%) drops from 30.9 to 12.9 and IoU (%) improves from 46 to 48.5; accuracy remained the same on other datasets. (Figures: 3D representation of the results; left image vs. slanted stixels vs. original stixels.)

  12. Computation Complexity: Dynamic Programming. (Figure: disparity image and semantic segmentation, with ground/object/sky regions over a column of height h pixels.) Work complexity per column: O(h²), with h the image height. Each column is processed independently, and a dynamic-programming strategy efficiently evaluates all the possible configurations, as sketched below.
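A minimal per-column dynamic-programming sketch of the O(h²) evaluation described above; segment_cost is a hypothetical stand-in for the combined depth + semantic data term and the priors, which the slide does not spell out.

```python
def best_column_segmentation(h, segment_cost, kinds=('ground', 'object', 'sky')):
    """Optimal stixel segmentation of one column of height h.
    segment_cost(u, v, kind) -> cost of a single stixel covering rows u..v.
    All O(h^2) segment hypotheses are evaluated, hence the quadratic work."""
    INF = float('inf')
    best = [INF] * (h + 1)       # best[v] = cost of segmenting rows 0..v-1
    best[0] = 0.0
    choice = [None] * (h + 1)
    for v in range(1, h + 1):
        for u in range(v):       # last stixel covers rows u..v-1
            for kind in kinds:
                c = best[u] + segment_cost(u, v - 1, kind)
                if c < best[v]:
                    best[v] = c
                    choice[v] = (u, kind)
    stixels, v = [], h           # backtrack the optimal cuts
    while v > 0:
        u, kind = choice[v]
        stixels.append((u, v - 1, kind))
        v = u
    return best[h], list(reversed(stixels))
```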

  13. Stixel (DP) Algorithm on GPU: Parallelism. (Figure: one CTA per image column provides large-grain parallelism; within each CTA, the DP advances over steps 1 … h of the column, a sequential operation with decreasing medium- and fine-grain parallelism.)

  14. Performance Results. (Charts: frames/second for 960x360, 1280x480 and 1920x720 images on Tegra X1 (DrivePX) and Tegra Parker (DrivePX2); one chart for the original stixel model and one for the slanted + semantic stixel model, which includes the time for semantic inference.) Real-time performance on DrivePX2 for all image sizes (≈6x-7x faster on DrivePX2 than on DrivePX). For the complex stixel model, 60-70% of the time goes to the stixel algorithm and 30-40% to semantic inference.

  15. Improving Computation Complexity: Pre-segmentation. (Figure: disparity image and semantic segmentation; a pre-segmentation reduces a column of height h to h' candidate segments.) Idea: infer possible stixel cuts (a pre-segmentation) from the inputs and avoid checking all possible stixel combinations. Work complexity per column drops to O(h'×h') with h' << h, and the naïve pre-segmentation itself costs O(h). However, accuracy degrades by 10-20% when using this naïve pre-segmentation. (A restricted-DP sketch follows below.)
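The sketch below restricts the same dynamic program to a set of pre-segmented candidate cuts, so the per-column work depends on h' rather than h; it reuses the hypothetical segment_cost of the slide-12 sketch.

```python
def best_segmentation_with_cuts(h, cuts, segment_cost, kinds=('ground', 'object', 'sky')):
    """Same DP as before, but stixel boundaries may only lie on the candidate
    cuts proposed by the pre-segmentation: O(h'^2) work instead of O(h^2)."""
    bounds = sorted(set(cuts) | {0, h})      # always allow the column ends
    INF = float('inf')
    best, choice = {0: 0.0}, {}
    for i, v in enumerate(bounds[1:], start=1):
        best[v] = INF
        for u in bounds[:i]:
            for kind in kinds:
                c = best[u] + segment_cost(u, v - 1, kind)
                if c < best[v]:
                    best[v] = c
                    choice[v] = (u, kind)
    stixels, v = [], h
    while v > 0:
        u, kind = choice[v]
        stixels.append((u, v - 1, kind))
        v = u
    return best[h], list(reversed(stixels))
```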

  16. Pre-segmentation Using a DNN. (Figure: disparity image, semantic segmentation and a DNN-based pre-segmentation reducing the column height h to h' candidates.) The possible stixel cuts are now inferred from the inputs by a DNN that exploits general data relations, also among columns. With this learned pre-segmentation, accuracy improves slightly. (See the sketch below.)
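One plausible way to turn a network's per-row boundary scores into the candidate cut list used above; the slides do not describe the exact network head, so the per-row probability output assumed here is purely illustrative.

```python
import numpy as np

def cuts_from_dnn(cut_prob_column, threshold=0.5, max_cuts=32):
    """Convert a DNN's per-row 'stixel boundary' probabilities for one column
    into the candidate cut list fed to the restricted DP above (assumed output)."""
    probs = np.asarray(cut_prob_column)
    rows = np.flatnonzero(probs >= threshold)
    if len(rows) > max_cuts:                             # keep the strongest proposals
        keep = np.argsort(probs[rows])[::-1][:max_cuts]
        rows = np.sort(rows[keep])
    return rows.tolist()
```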

  17. Improved Performance Results. (Charts: frames/second for 960x360, 1280x480 and 1920x720 images on Tegra X1 (DrivePX) and Tegra Parker (DrivePX2) for the slanted + semantic stixel model with pre-segmentation, including the time for semantic inference.) Pre-segmentation improves performance on both DrivePX and DrivePX2 (≈2x). Now 15-30% of the time goes to the stixel algorithm and 70-85% to semantic inference. The increase in inference time is almost negligible (<10%), since most of the CNN for pre-segmentation is shared with the CNN for semantic segmentation.

  18. SYNTHIA Dataset Toolkit. An image generator of precisely annotated data for training DNNs on autonomous-driving tasks. Ground-truth data: RGB images plus per-pixel depth, semantic class and optical flow, and 3D bounding boxes; fully compatible with the Cityscapes classes. Generation of LIDAR data. Customization to specific problems: SYNTHIA-San Francisco. www.synthia-dataset.net

  19. Summary: Video of a Real Sequence.

  20. Thank you. Dr. Juan C. Moure, juancarlos.moure@uab.es, http://grupsderecerca.uab.cat/hpca4se/en/content/gpu. Autonomous University of Barcelona.
