Learning Steering for Parallel Autonomy: Handling Ambiguity in End-to-End Driving

Alexander Amini


SLIDE 1

Learning Steering for Parallel Autonomy: Handling Ambiguity in End-to-End Driving

Alexander Amini

SLIDE 2

Motivation

Autonomous systems need the ability to handle a wide range of scenarios. Leveraging large datasets, we learn an underlying representation of driving based on what humans actually did.

No Lane Markings Rainy Weather Night-time Driving

SLIDE 3

Autonomous Driving Pipeline

Sensor Fusion
  • What’s happening around me?

Detection
  • Where are obstacles?

Localization
  • Where am I relative to the obstacles?

Planning
  • Where do I go?

Actuation
  • What control signals to take?

Separate problem into smaller sub-modules, tackle each independently

[1-3] [4-6] [7, 8] [9, 10] [11, 12]

SLIDE 4

End-to-End Learning

Learn the control directly from raw sensor data

Learned Model

Underlying representation of how humans drive

[13-16]

Sensor Fusion
  • What’s happening around me?
[1-3]

Actuation
  • What control signals to take?

[11, 12]

Deep Neural Network

SLIDE 5

End-to-End Learning

Learn the control directly from raw sensor data

Deep Neural Network

pixel values

Raw images: front facing camera


Actuation
  • What control signals to take?

[11, 12]

Learned Model

Underlying representation of how humans drive

[13-16]

SLIDE 6

End-to-End Learning

Learn the control directly from raw sensor data: pixel values → steering

Raw images: front facing camera


Actuation
  • What control signals to take?

[11, 12]

Deep Neural Network Learned Model

Underlying representation of how humans drive

[13-16]

SLIDE 7

Challenges

Uncertainty

SLIDE 8

Challenges

Uncertainty Vision

SLIDE 9

Challenges

Edge Cases Uncertainty Vision

SLIDE 10

Talk Outline

1. Parallel Autonomy

SLIDE 11

Talk Outline

1. Parallel Autonomy
2. Learning Bounds

SLIDE 12

Talk Outline

1. Parallel Autonomy
2. Learning Bounds
3. Uncertainty

SLIDE 13

Parallel Autonomy

Shared robot-human control

SLIDE 14

Guardian Angel

[17] Hyundai: Dad’s Sixth Sense. 2014.

SLIDE 15

Parallel Autonomy: Architecture

[Diagram: Human Input, Drive-by-wire Interface, Series Autonomy, Shared Controller, Low-Level Tracking Control, Hardware]

SLIDE 16

Parallel Autonomy: Hardware

SLIDE 17

Parallel Autonomy: Hardware

[20]

  • 5x LIDAR Laser Scanners [21-23]
  • 3x GMSL Cameras [24]
  • 1x GPS [25]
  • 1x Inertial Measurement Unit [26]
  • 2x Wheel Encoders

SLIDE 18

Parallel Autonomy: Hardware

NVIDIA Drive PX2 [27]

GPU enabled computing platform

  • 5x LIDAR Laser Scanners [21-23]
  • 3x GMSL Cameras [24]
  • 1x GPS [25]
  • 1x Inertial Measurement Unit [26]
  • 2x Wheel Encoders

SLIDE 19

Shared ≠ Binary Control

SLIDE 20

Possible Approaches

Direct actuation with motors
  • Responsiveness
  • Reliability
  • Difficulty designing for manual override

CAN messages
  • Interference and contradictory information from other ECUs
  • Built-in software safeguards
  • Requires reprogrammed ECUs from Toyota (TRI)

SLIDE 21

Possible Approaches

Direct actuation with motors
  • Responsiveness
  • Reliability
  • Difficulty designing for manual override

CAN messages
  • Interference and contradictory information from other ECUs
  • Built-in software safeguards
  • Requires reprogrammed ECUs from Toyota (TRI)

Spoof input systems
  • Requires physical access to cables transmitting sensor data
  • Requires reverse engineering systems

[20]

SLIDE 22

Autonomous Modes

Manual ↔ Parallel Autonomy ↔ Computer

SLIDE 23

Learning Steering Bounds

SLIDE 24

Related Work: End-to-End Learning

Predict single control command given image frame [13]
  • No temporal information
  • Real-world implementation

Compute policy from long short-term memory (LSTM) [15]
  • Crowdsourced dataset
  • No simulation or real-world evaluation

Imitation learning from the experts [16]
  • Simulated driving courses
  • Suffers from cascading errors, oscillating actions

SLIDE 25

Related Work: End-to-End Learning

Differentiating Problem:

Prior approaches are unable to integrate ambiguous decisions into higher-level navigational control

SLIDE 26

Learning a Steering Distribution

Discretize the action space of all steering commands to handle ambiguity. Transform into continuous probability distributions and extract bounds.

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018

SLIDE 27

Discrete Action Learning

Single image xᵢ from dataset D = {xᵢ}ᵢ₌₁ᴺ
Neural network f, with parameters θ
Output distribution f(xᵢ; θ)

min_θ − (1/N) Σᵢ₌₁ᴺ pᵢ log f(xᵢ; θ)

  • pᵢ: true distribution at frame i
  • f(xᵢ; θ): estimated distribution at frame i

Optimization through backpropagation

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018
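As a concrete sketch of the loss above: the network outputs a distribution over K discretized steering bins, and training minimizes the cross-entropy between the (possibly multimodal) human distribution pᵢ and the predicted distribution f(xᵢ; θ). The bin count and toy values below are illustrative assumptions, not numbers from the talk.

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    """Mean cross-entropy between true and predicted steering-bin distributions.

    y_true, y_pred: arrays of shape (N, K), each row a distribution
    over K discretized steering bins.
    """
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

# Toy example: 2 frames, 4 steering bins.
y_true = np.array([[1.0, 0.0, 0.0, 0.0],    # human steered hard left
                   [0.0, 0.0, 0.5, 0.5]])   # ambiguous: straight or right
y_pred = np.array([[0.7, 0.1, 0.1, 0.1],
                   [0.1, 0.1, 0.4, 0.4]])
loss = cross_entropy_loss(y_true, y_pred)
```

In a full pipeline this scalar would be minimized over θ by backpropagation; here it just scores one hypothetical batch.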

SLIDE 28

Multimodal Distributions

We want to learn multimodal distributions but only have access to a single control that the human made

[Plot: predicted distribution, t = 0]

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018

SLIDE 29

Multimodal Distributions

We want to learn multimodal distributions but only have access to a single control that the human made

[Plots: human distribution and predicted distribution, t = 1]

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018

SLIDE 30

Multimodal Distributions

We want to learn multimodal distributions but only have access to a single control that the human made

[Plots: human distribution and predicted distribution, t = 2]

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018

SLIDE 31

Multimodal Distributions

We want to learn multimodal distributions but only have access to a single control that the human made

[Plots: human distributions and predicted distribution, t = 3]

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018

SLIDE 32

Multimodal Distributions

We want to learn multimodal distributions but only have access to a single control that the human made

[Plots: human distributions and predicted distribution, t = 4]

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018

SLIDE 33

Multimodal Distributions

We want to learn multimodal distributions but only have access to a single control that the human made

[Plots: human distributions and predicted distribution, t = 5]

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018

SLIDE 34

Multimodal Distributions

We want to learn multimodal distributions but only have access to a single control that the human made

[Plots: human distributions and predicted distribution, t → ∞]

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018

SLIDE 35

Advantages of this approach

  • Don’t need to see the same exact intersection multiple times in order to learn
  • We learn an underlying representation of drivable space under ambiguity
  • Not constrained to pre-defined intersection models (T-intersection, 4-way, etc.)

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018

SLIDE 36

Dataset Collection

  • 7 hours of driving data
  • Greater Boston metropolitan area
  • Different seasons, times, weather
  • Highway & city roads
  • Fine-tuned on 1 minute of data of roads without lane markers
  • Trained for 10 epochs
  • Data parallelism with multi-GPUs
  • ~1 hour on NVIDIA DGX-1

[28] [14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018

SLIDE 37

Discrete to Continuous

Model discrete distribution as samples from a mixture of continuous Gaussians

p(x) = Σᵢ₌₁ᴷ φᵢ 𝒩(x | μᵢ, σᵢ²)

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018
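The mixture density above can be evaluated directly. The two-mode weights, means, and sigmas below are hypothetical stand-ins for a learned steering distribution, not values from the paper.

```python
import numpy as np

def gmm_pdf(x, weights, means, sigmas):
    """Density of a 1-D Gaussian mixture: p(x) = sum_i phi_i * N(x | mu_i, sigma_i^2)."""
    x = np.asarray(x, dtype=float)[..., None]   # trailing axis broadcasts over components
    comp = np.exp(-0.5 * ((x - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return np.sum(weights * comp, axis=-1)

# Hypothetical two-mode steering distribution at an intersection:
# one mode for turning left (-0.4 rad), one for going straight (0.0 rad).
weights = np.array([0.3, 0.7])
means   = np.array([-0.4, 0.0])
sigmas  = np.array([0.05, 0.08])
density = gmm_pdf(np.linspace(-1.0, 1.0, 5), weights, means, sigmas)
```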

SLIDE 38

Variational Bayes Mixture Models

Iteratively remove unnecessary mixtures from the posterior. Start with the maximum number of mixtures to expect.

Converge to the minimum number of mixtures that model the distribution.

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018
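One way to realize this pruning behavior is scikit-learn's `BayesianGaussianMixture`, which drives the posterior weights of unneeded components toward zero when given a sparse weight-concentration prior. This is a generic sketch with synthetic data and arbitrary hyperparameters, not the talk's exact implementation.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic steering samples drawn from two well-separated modes,
# but we allow up to 6 mixtures (the "maximum number to expect").
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-0.4, 0.05, 300),
                          rng.normal(0.1, 0.05, 300)]).reshape(-1, 1)

vb = BayesianGaussianMixture(
    n_components=6,                   # start with the maximum number of mixtures
    weight_concentration_prior=1e-3,  # sparse prior that prunes unused mixtures
    max_iter=500,
    random_state=0,
).fit(samples)

# Mixtures whose posterior weight collapses toward zero have been pruned away.
effective = int(np.sum(vb.weights_ > 0.05))
```

The fit converges to roughly the minimum number of components needed; unused components survive only with negligible weight.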

SLIDE 39

Results

  • Gaussian Mixture Model (GMM) fit with Variational Bayes
  • Number of modes optimized
  • Each mode encodes a “macro-action” that could be taken
  • Bounds of steering extracted from each mode
  • Parallel autonomy framework
[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018

SLIDE 40

SLIDE 41

Bounds for Parallel Autonomy

Each probabilistic mode contributes to the set of valid controls. Autonomy is fused with the human in a shared controller.

[14] Amini et al. “Learning Steering Bounds for Parallel Autonomous Systems”. 2018
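A minimal sketch of turning modes into steering bounds for the shared controller: each significant mixture mode yields one interval of valid steering commands, and the human input is accepted when it falls inside any interval. The mean ± 2σ rule, the weight threshold, and the mode values below are illustrative assumptions, not necessarily the paper's exact extraction rule.

```python
import numpy as np

def mode_bounds(weights, means, sigmas, n_std=2.0, min_weight=0.05):
    """Extract a steering interval [mu - n*sigma, mu + n*sigma] per significant mode.

    Modes below min_weight are ignored; each surviving interval is one
    "macro-action" the shared controller treats as a valid set of controls.
    """
    return [(m - n_std * s, m + n_std * s)
            for w, m, s in zip(weights, means, sigmas) if w >= min_weight]

def human_input_valid(steering, bounds):
    """Shared-control check: is the human command inside any learned mode?"""
    return any(lo <= steering <= hi for lo, hi in bounds)

# Hypothetical three-component GMM; the third mode is too weak to count.
bounds = mode_bounds(np.array([0.60, 0.38, 0.02]),
                     np.array([-0.4, 0.1, 0.9]),
                     np.array([0.05, 0.08, 0.3]))
```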

SLIDE 42

Model Uncertainty

When can I trust my model?

SLIDE 43

Why Care About Uncertainty?

ℙ(cat) OR ℙ(dog)

SLIDE 44

Why Care About Uncertainty?

ℙ(cat) = 0.2   ℙ(dog) = 0.8

Remember: ℙ(cat) + ℙ(dog) ≡ 1. What if we see data which was not represented during training?

SLIDE 45

Why Care About Uncertainty?

ℙ(cat) = 0.2   ℙ(dog) = 0.8

Remember: ℙ(cat) + ℙ(dog) ≡ 1. What if we see data which was not represented during training?

Neural network “probabilities” do not measure confidence

SLIDE 46

Bayesian Deep Learning

Input image Predicted Depth Model Uncertainty

[29]

SLIDE 47

End-to-End Steering Control

We’ve already seen how we can estimate control from a single input image

Issues:
1) No uncertainty measure, and can fail catastrophically without warning
2) Only functional in unambiguous scenarios (lane following)

SLIDE 48

Integrating Uncertainty Estimation

How can we maintain the end-to-end framework but reliably estimate uncertainty?

[30] Amini et al. “Spatial Uncertainty Sampling for End-to-End Control”. 2017

SLIDE 49

A Bayesian Outlook on End-to-End Control

Network tries to learn steering control, y, directly from raw data, X
Find mapping, f, parameterized by weights W, such that min ℒ(y, f(X; W))
Bayesian neural networks aim to learn a posterior over weights, ℙ(W | X, y):

ℙ(W | X, y) = ℙ(y | X, W) ℙ(W) / ℙ(y | X)

[30] Amini et al. “Spatial Uncertainty Sampling for End-to-End Control”. 2017

SLIDE 50

A Bayesian Outlook on End-to-End Control

Network tries to learn steering control, y, directly from raw data, X
Find mapping, f, parameterized by weights W, such that min ℒ(y, f(X; W))
Bayesian neural networks aim to learn a posterior over weights, ℙ(W | X, y):

ℙ(W | X, y) = ℙ(y | X, W) ℙ(W) / ℙ(y | X)

Intractable! Approximate the posterior ℙ(W | X, y) by sampling [18, 19]

[30] Amini et al. “Spatial Uncertainty Sampling for End-to-End Control”. 2017

SLIDE 51

Elementwise Dropout for Uncertainty

Evaluate T stochastic forward passes through the network, {Wₜ}ₜ₌₁ᵀ

Dropout as a form of stochastic sampling: zᵢ,ₜ ~ Bernoulli(p) ∀ i ∈ W

Unregularized kernel W ⊙ Bernoulli dropout zₜ = stochastic sample Wₜ

𝔼[y(x)] ≈ (1/T) Σₜ₌₁ᵀ f(x; Wₜ)
Var[y(x)] ≈ (1/T) Σₜ₌₁ᵀ f(x; Wₜ)² − 𝔼[y(x)]²

[30] Amini et al. “Spatial Uncertainty Sampling for End-to-End Control”. 2017
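The two moment estimators on this slide can be reproduced with a toy one-layer "network" and elementwise Bernoulli masks on the kernel. Everything here (kernel size, keep probability, number of passes) is an illustrative assumption, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny stand-in "network": a single linear layer with fixed kernel W.
W = rng.normal(size=(8, 1))
x = rng.normal(size=(1, 8))

def stochastic_forward(x, W, keep_prob=0.8):
    """One stochastic pass: elementwise Bernoulli mask on the kernel (W ⊙ z)."""
    z = rng.binomial(1, keep_prob, size=W.shape)
    return (x @ (W * z)) / keep_prob   # rescale so the expectation matches x @ W

T = 1000
preds = np.array([stochastic_forward(x, W).item() for _ in range(T)])

mean = preds.mean()                    # E[y(x)]   ≈ (1/T) Σ f(x; W_t)
var = (preds ** 2).mean() - mean ** 2  # Var[y(x)] ≈ (1/T) Σ f(x; W_t)² − E[y(x)]²
```

The spread of the T predictions, not the network's output itself, is what serves as the uncertainty measure.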

SLIDE 52

Elementwise Dropout for Uncertainty

Evaluate T stochastic forward passes through the network, {Wₜ}ₜ₌₁ᵀ

Dropout as a form of stochastic sampling: zᵢ,ₜ ~ Bernoulli(p) ∀ i ∈ W

Unregularized kernel W ⊙ Bernoulli dropout zₜ = stochastic sample Wₜ

Large spatial correlation between adjacent pixels!

𝔼[y(x)] ≈ (1/T) Σₜ₌₁ᵀ f(x; Wₜ)
Var[y(x)] ≈ (1/T) Σₜ₌₁ᵀ f(x; Wₜ)² − 𝔼[y(x)]²

[30] Amini et al. “Spatial Uncertainty Sampling for End-to-End Control”. 2017

SLIDE 53

Spatial Dropout for Uncertainty

Instead of ignoring spatial correlations between pixels in feature maps, sample over entire convolutional kernels: z_k,ₜ ~ Bernoulli(p) ∀ k ∈ W

Each sample drops an entire feature map (e.g. a single edge detector)

𝔼[y(x)] ≈ (1/T) Σₜ₌₁ᵀ f(x; Wₜ)
Var[y(x)] ≈ (1/T) Σₜ₌₁ᵀ f(x; Wₜ)² − 𝔼[y(x)]²

[30] Amini et al. “Spatial Uncertainty Sampling for End-to-End Control”. 2017
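Spatial (channel-wise) dropout differs from the elementwise version only in the granularity of the Bernoulli sample: one draw per feature map rather than per pixel, so spatially correlated activations are dropped together. A sketch with an arbitrary feature-map shape and keep probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature maps from a hypothetical conv layer: (channels, height, width).
features = rng.normal(size=(16, 8, 8))

def spatial_dropout(features, keep_prob=0.75):
    """Sample one Bernoulli per channel and zero entire feature maps,
    instead of independent pixels, so each drop removes a whole
    learned filter response (e.g. one edge detector)."""
    z = rng.binomial(1, keep_prob, size=(features.shape[0], 1, 1))
    return features * z / keep_prob   # rescale kept maps to preserve expectation

dropped = spatial_dropout(features)
```

Averaging many such stochastic passes and taking the variance gives the same moment estimators as on the previous slides, but with samples that respect the spatial correlation of the feature maps.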

SLIDE 54

Training Results

[Plots: accelerated training convergence; steering precision analysis; uncertainty by steering angle]

Uncertainty increases on more dynamic turns (requiring more precision) and decreases with more data

[30] Amini et al. “Spatial Uncertainty Sampling for End-to-End Control”. 2017

SLIDE 55

Summary

Parallel Autonomy + Steering Bounds + Uncertainty Estimation

  • Learn control directly from data
  • Handle ambiguity efficiently
  • Enable higher-level navigation
  • Understand when the model is likely to fail
  • Interpret model “confidence”
  • Integration onto full-scale autonomous vehicle
  • Demonstration of guardian angel capabilities

[29, 30] [14] [20]

SLIDE 56

SLIDE 57

Summary

Acknowledgements:

All references used in this presentation can be found here: https://goo.gl/faoA6o

Daniela Rus, Sertac Karaman, Liam Paull, Ava Soleimany, Thomas Balch, MIT Distributed Robotics Lab, Toyota Research Institute, National Science Foundation
