Teaching a Car to Drive: An application of End-to-End Deep Learning – PowerPoint PPT Presentation

SLIDE 1

Teaching a Car to Drive: An application of End-to-End Deep Learning

Larry Jackel NVIDIA, Holmdel NJ 07733

ljackel@nvidia.com

arXiv.org > cs > arXiv:1604.07316

SLIDE 3

Status

  • Mean Autonomous Distance > 50 km on highways
  • Good enough that passengers doze off
  • Need to get Mean Autonomous Distance > 1,000,000 km
SLIDE 4

[Diagram] DriveAV: NVIDIA Self-Driving Software. Modules include Mapping, Localization, Route Planning, Actuator Controllers, …, and Lane Following (the focus here).

SLIDE 5

Lane Following

[Diagram] Camera → LaneNet → Path Estimation → Steering Controller

SLIDE 6

Lane Following

Use diverse systems with different strengths and weaknesses to get the required performance.

Best case (independent failures): error rate = error rate(LaneNet) × error rate(PilotNet)

[Diagram] Camera feeds both LaneNet and PilotNet; their outputs are fused, then Path Estimation and the Steering Controller follow.
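The best-case arithmetic above can be made concrete with a toy calculation. The error rates below are illustrative placeholders, not NVIDIA measurements:

```python
# Toy illustration of best-case fusion of two diverse systems.
# If LaneNet and PilotNet fail independently (on different inputs),
# the combined system fails only when both fail at once.
# These error rates are made up for the example, not measured values.

lanenet_error = 1e-3    # hypothetical failure rate of LaneNet
pilotnet_error = 2e-3   # hypothetical failure rate of PilotNet

# Best case, assuming statistical independence of the failures:
combined_error = lanenet_error * pilotnet_error

print(combined_error)   # 2e-06: three orders of magnitude better
```

In practice the two systems' failures are correlated, so the real gain sits somewhere between the better single system and this independent-failure bound.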

SLIDE 7

How we got to PilotNet: A long journey in Neural Nets

SLIDE 8

Bell Labs, Holmdel NJ

6,000 employees in Holmdel; ~300 in Research; ~30 in Machine Learning

SLIDE 9

1986 – Early Results on Recurrent Nets

  • Smallest associative memory
  • Extremely high density
  • Months to make
  • 144 “synapses” in a 6×6 micron cell

Computer experiments revealed problems:

  • Inefficient use of resources for pattern storage
  • Not as good as matched filters
  • No learning

Initial results: recurrent nets were disappointing.

SLIDE 10

1986 – The Hype Begins: The Neural Net Fad

  • Lots of coverage in the press
  • Minimal science
  • No practical results

“Bell Labs breakthrough: Chips that work like the brain”

SLIDE 11

1988: First Applications

  • Graf builds 54-neuron programmable chip at Holmdel (analog hardware feature extraction)
  • Good results on handwritten character recognition with hand-crafted, Hubel-Wiesel-like features (a ConvNet without learning)
  • LeCun joins the group
  • Builds first “LeNet” OCR engine with learned features
  • Early benchmarks indicate excellent performance

SLIDE 12

LeNet’s feature-extraction kernels are learned.

It combines the architecture of Fukushima’s Neocognitron (1981) with gradient-descent learning of the weights.
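The step from hand-crafted to learned kernels can be sketched in a few lines. This is a minimal illustration, not LeNet itself: a randomly initialized 3×3 kernel is fit by gradient descent to reproduce the output of a hand-crafted edge filter on random images. All sizes, rates, and the teacher filter are invented for the sketch.

```python
import numpy as np

# Minimal sketch of "learned kernels": fit a random 3x3 kernel by
# gradient descent so that it matches a hand-crafted edge filter.
rng = np.random.default_rng(0)

teacher = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])  # hand-crafted Sobel-style filter

def conv2d_valid(img, k):
    """Plain 'valid' 2-D correlation by explicit loops."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

kernel = 0.1 * rng.normal(size=(3, 3))  # the kernel to be learned
lr = 0.05

for _ in range(500):
    img = rng.normal(size=(8, 8))       # random training "image"
    err = conv2d_valid(img, kernel) - conv2d_valid(img, teacher)
    # Gradient of the squared error w.r.t. the kernel: each output
    # error weights the input patch it was computed from.
    grad = np.zeros_like(kernel)
    for i in range(err.shape[0]):
        for j in range(err.shape[1]):
            grad += err[i, j] * img[i:i + 3, j:j + 3]
    kernel -= lr * grad / err.size

max_err = np.abs(kernel - teacher).max()  # shrinks toward zero
```

The point of the sketch: nothing about the edge filter is coded by hand into the learner; the same loop would recover any other linear feature the data demanded.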

SLIDE 13

1989–1990: OCR: excellent results

But no systematic understanding of what governed the learning process

SLIDE 14

1989–1990: OCR: excellent results

But no systematic understanding of what governed the learning process. Then came Vladimir Vapnik.

SLIDE 15

How we came to view learning (Largely Vladimir Vapnik’s Influence)

  • Choosing the right structure
  • “Structural Risk Minimization”
  • Bringing prior knowledge into the learning machine
  • Capacity control
  • Matching learning machine complexity to available data
  • Examining learning curves

SLIDE 16

With Vladimir Vapnik’s help, by the early 1990s we had a much better understanding of the learning process and its applications

  • AT&T OCR has become a product
  • Holmdel group ~30 people
  • Applications include pen computing, finance, network fault prediction
  • SVM soft-margin and kernel classifiers invented

(This time not hype!!)

SLIDE 17


SLIDE 18

The 1995 Bets

In 2000 there was a fancy dinner for four, paid for by Jackel (we couldn’t prove why Yann’s multi-layer nets work so well). In 2005 there was a fancy dinner for four, paid for by Vapnik (Yann’s nets are still in use and keep getting better). Yann and Leon didn’t bet, but they got to eat too.


SLIDE 19

Autonomous Driving

SLIDE 20

The usual approach for self-driving cars

Now often combined using Convolutional Nets

[Diagram] Sensors → Feature Extraction → Object Recognition → Cost Map → Path Planning → Actuation; Detailed Maps and Localization feed into the pipeline.

SLIDE 21

Learning a complete control loop

  • Learn to steer by observing a human
  • Minimal use of hand-crafted rules
  • Minimal need for hand labeling

[Diagram] Camera → Deep Convolutional Nets → Steering Actuation

DARPA seedling project – Yann LeCun and Urs Muller: “DAVE”
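The idea of learning the whole control mapping by observing a human reduces to plain supervised regression. In the sketch below, a linear model and synthetic "features" stand in for the ConvNet and camera images; every name and number is invented for illustration:

```python
import numpy as np

# Behavior-cloning sketch: regress recorded human steering commands
# from sensor features, with no hand-crafted rules and no hand labels
# beyond what the human driver already provides by driving.
rng = np.random.default_rng(1)

true_policy = rng.normal(size=16)          # the "human driver" we observe
features = rng.normal(size=(2000, 16))     # stand-in for camera features
steering = features @ true_policy          # recorded steering angles

# Fit by gradient descent on mean-squared error.
w = np.zeros(16)
lr = 0.1
for _ in range(300):
    grad = features.T @ (features @ w - steering) / len(features)
    w -= lr * grad

max_err = np.max(np.abs(w - true_policy))  # learned policy matches driver
```

The human's steering log is the label stream, which is why the approach needs so little manual annotation.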

SLIDE 22

Early examples

  • ALVINN
  • Pomerleau – CMU, ~1990; low-res image, fully-connected nets
  • DAVE
  • Muller, LeCun – 2003
  • Higher-res images
  • ConvNets

[Show DAVE video]

SLIDE 23

Example: Road Following

Good-quality lane markers and good driving conditions: traditional lane-detection-based systems work well. Poor-quality lane markers: lane-detection-based systems struggle, while end-to-end learning empowers the network to use additional cues.

SLIDE 24

Training the network


SLIDE 25

NVIDIA PilotNet

~250,000 distinct weights; ~27,000,000 connections
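The weight count can be reproduced from the layer sizes in the arXiv paper (66×200×3 YUV input; conv layers of 24/36/48/64/64 filters, the first three 5×5 stride 2, the last two 3×3; fully connected layers of 100/50/10 units; one steering output). The sketch below just tallies parameters; the exact NVIDIA implementation may differ in detail:

```python
# Tally PilotNet-style parameter counts from the published layer sizes.

def conv_params(kh, kw, c_in, c_out):
    return kh * kw * c_in * c_out + c_out   # weights + one bias per filter

def fc_params(n_in, n_out):
    return n_in * n_out + n_out             # weights + biases

layer_params = [
    conv_params(5, 5, 3, 24),     # conv1, 5x5 stride 2
    conv_params(5, 5, 24, 36),    # conv2, 5x5 stride 2
    conv_params(5, 5, 36, 48),    # conv3, 5x5 stride 2
    conv_params(3, 3, 48, 64),    # conv4, 3x3
    conv_params(3, 3, 64, 64),    # conv5, 3x3 -> 1x18x64 feature map
    fc_params(1 * 18 * 64, 100),  # fc1 on the flattened 1152 features
    fc_params(100, 50),           # fc2
    fc_params(50, 10),            # fc3
    fc_params(10, 1),             # steering output
]

total = sum(layer_params)
print(total)  # 252219, i.e. the "~250,000 distinct weights" above
```

The connection count is far larger than the weight count because each convolutional weight is reused at every output position of its feature map.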

SLIDE 26

Neural Network Driving the Car in New Jersey

SLIDE 27

Visualization

What the network pays attention to

SLIDE 28

ATYPICAL VEHICLE CLASS

SLIDE 29

Integrating PilotNet: The Rail

[Diagram: the rail, extending 45 meters ahead of the vehicle]

PilotNet was modified and trained to have 41 outputs specifying the center of the path to be followed, in 1-meter increments. To drive, the 10 most recent rails (the last ~1/3 second) are averaged to get a target position at a set distance in front of the vehicle. The rail is computed at 30 Hz.
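The averaging step described above can be sketched as a sliding window over the most recent rails. This is a minimal sketch, not NVIDIA's code; the function names and the 20 m lookahead are assumptions:

```python
from collections import deque

# Sketch of rail smoothing: the network emits a "rail" of 41
# path-center points (1 m apart) at 30 Hz; the controller averages
# the 10 most recent rails (~1/3 s) before picking a target point.
RAIL_POINTS = 41
HISTORY = 10

recent_rails = deque(maxlen=HISTORY)   # oldest rails fall off automatically

def smoothed_rail(new_rail):
    """Add the latest rail and return the point-wise average of the window."""
    assert len(new_rail) == RAIL_POINTS
    recent_rails.append(new_rail)
    n = len(recent_rails)
    return [sum(r[i] for r in recent_rails) / n for i in range(RAIL_POINTS)]

def target_point(avg_rail, lookahead_m=20):
    # Target position at a set distance in front of the vehicle
    # (the 20 m default is an assumed value for illustration).
    return avg_rail[lookahead_m]
```

Averaging over ~1/3 s of rails trades a little latency for a steadier steering target than any single network output.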

SLIDE 30

Driving the rail from Holmdel to Woodbridge

Note performance when lane markings are missing

SLIDE 31

Summary

  • Good progress in driving
  • A lot more work needed to reach the required level of safety
  • Looks like diversity of parallel methods is the key

ljackel@nvidia.com