SLIDE 1

Self-improving Learners (VSLab)

Min Sun, National Tsing Hua University, 2nd AII Workshop

SLIDE 2

Challenges of Modern AI

  • Large-scale labelled dataset
SLIDE 3

Challenges of Modern AI

  • Large-scale labelled dataset
  • Talent-Intensive Workforce
SLIDE 4

Weapons to Tackle the Challenges

  • Sensory data from realistic user scenarios

SLIDE 5

Weapons to Tackle the Challenges

  • Sensory data from realistic user scenarios
  • Exponential trends in computing

SLIDE 6

Outline

  • Self-Supervised Learning of Depth from 360° Videos (Sensory, Pitch)
  • DPP-Net: Device-aware Progressive Search for Pareto-optimal Neural Architectures (Compute)
SLIDE 7

Self-Supervised Learning of Depth from 360° Videos (VSLab)

Min Sun, National Tsing Hua University, Under Submission

SLIDE 8

Our Goal

360 Vision

  • 1. Well-Calibrated
  • 2. Low-Cost
  • 3. High-Resolution
  • 4. Large FoV

Image credits: https://hackernoon.com/mit-6-s094-deep-learning-for-self-driving-cars-2018-lecture-2-notes-e283b9ec10a0
SLIDE 9

Our Model

[Model diagram: DNet predicts depth D, PNet predicts camera motion R, T; point clouds Q are compared across frames]

I: Equirectangular / Cube image, D: Depth, P: Camera motion, Q: Point Cloud

[1] Zhou et al., Unsupervised Learning of Depth and Ego-Motion from Video, CVPR 2017
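The training signal follows Zhou et al. [1]: DNet's depth and PNet's motion (R, T) are supervised only by how well they let one frame be re-synthesized from a neighboring frame. Below is a minimal NumPy sketch of that view-synthesis (photometric) loss for an equirectangular frame; the coordinate conventions, nearest-neighbor sampling, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def equi_rays(H, W):
    """Unit-sphere viewing direction for every equirectangular pixel."""
    v, u = np.mgrid[0:H, 0:W]
    lon = (u + 0.5) / W * 2.0 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / H * np.pi        # latitude in [-pi/2, pi/2]
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)              # (H, W, 3)

def warp_to_target(I_src, depth_tgt, R, t):
    """Synthesize the target view from the source frame via depth + motion."""
    H, W = depth_tgt.shape
    pts = depth_tgt[..., None] * equi_rays(H, W)     # point cloud Q in the target frame
    pts = pts @ R.T + t                              # rigid motion (R, T) from PNet
    r = np.linalg.norm(pts, axis=-1) + 1e-8
    lon = np.arctan2(pts[..., 0], pts[..., 2])
    lat = np.arcsin(np.clip(pts[..., 1] / r, -1.0, 1.0))
    u = np.clip(np.round((lon + np.pi) / (2.0 * np.pi) * W).astype(int), 0, W - 1)
    v = np.clip(np.round((np.pi / 2.0 - lat) / np.pi * H).astype(int), 0, H - 1)
    return I_src[v, u]                               # nearest-neighbor sampling

def photometric_loss(I_tgt, I_src, depth_tgt, R, t):
    """L1 view-synthesis loss that trains DNet/PNet without depth labels."""
    return np.abs(I_tgt - warp_to_target(I_src, depth_tgt, R, t)).mean()

# Toy usage on random data.
H, W = 64, 128
loss = photometric_loss(np.random.rand(H, W), np.random.rand(H, W),
                        depth_tgt=np.full((H, W), 2.0), R=np.eye(3), t=np.zeros(3))
```

Minimizing a loss of this kind over large amounts of unlabeled 360° video is what makes the approach self-supervised.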

SLIDE 10

Dataset – PanoSUNCG

[Figure: example frames with their inverse depth maps]

SLIDE 11

Quantitative Results – Depth

SLIDE 12

Efficiency – Speedup Ratio

SLIDE 13

Qualitative Results – PanoSUNCG

[Figure columns: Frame, EQUI, Ours, GT]

SLIDE 14

Qualitative Results – Real-world Videos

[Figure columns: Frame, Our prediction]

SLIDE 15

DPP-Net: Device-aware Progressive Search for Pareto-optimal Neural Architectures

Jin-Dong (Mark) Dong¹, An-Chieh Cheng¹, Da-Cheng Juan², Wei Wei², Min Sun¹
¹National Tsing-Hua University, ²Google
ICLR Workshop 2018
https://markdtw.github.io/pppnet.html

Slides by Mark (markdtw)

SLIDE 16

Hot Trend – Neural Architecture Search

  • Barret Zoph, et al., “Neural Architecture Search with Reinforcement Learning”, In ICLR 2017
    NAS used 800 GPUs for 28 days
  • Irwan Bello, et al., “Neural Optimizer Search with Reinforcement Learning”, In ICML 2017
    NASNet used 450 GPUs for 3–4 days (i.e. 32,400–43,200 GPU hours)
  • Hieu Pham, et al., “Efficient Neural Architecture Search via Parameter Sharing”, In arXiv 2018
    ENAS used 1 GTX 1080 Ti for 10 hours

SLIDE 17

What’s Missing

  • Current works mostly focus on achieving high classification accuracy regardless of other factors: a single objective -> multiple objectives (accuracy, inference time, etc.)
  • Demand for ubiquitous model inference is rising. However, designing suitable NNs for all devices (HPC, cloud, embedded systems, mobile phones, etc.) remains challenging.
  • Therefore, we aim to automatically design such models for different devices, considering multiple objectives (a latency-measurement sketch follows below).
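To make the multi-objective point concrete, here is a minimal PyTorch sketch of the kind of device-level objective (average inference latency) that would sit alongside accuracy; the helper name and the stand-in candidate network are hypothetical, not DPP-Net's code.

```python
import time
import torch
import torch.nn as nn

def measure_latency_ms(model, input_shape=(1, 3, 32, 32), runs=50, device="cpu"):
    """Average forward-pass wall-clock time on one target device:
    a per-device objective that complements accuracy in a multi-objective search."""
    model = model.eval().to(device)
    x = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        for _ in range(5):                     # warm-up runs
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs * 1000.0

# Stand-in candidate network (not a searched architecture).
candidate = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
print(f"{measure_latency_ms(candidate):.2f} ms per forward pass")
```

On a GPU one would also synchronize before reading the timer; the point is only that every candidate receives its own latency number on each target device.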

SLIDE 18

Our Approach: Search Space

  • Cells are connected following CondenseNet by Huang et al.: (1) layers with different resolutions are also directly connected; (2) the growth rate G doubles when the feature map shrinks.
  • This connection scheme improves computational efficiency.

Cell repetitions C and growth rate G (channel bookkeeping sketched below)
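A rough illustration of the channel bookkeeping this scheme implies; the numbers and the helper are illustrative, not the paper's exact configuration. Each cell contributes G new channels by dense concatenation, and G doubles whenever the network moves to a stage with a smaller feature map.

```python
def stage_channels(in_ch, growth, cells_per_stage, stages=3):
    """Channel count entering each stage when the growth rate doubles per stage."""
    plan = []
    ch, g = in_ch, growth
    for s in range(stages):
        plan.append((s, ch, g))
        ch += cells_per_stage * g   # dense concatenation: each cell adds g channels
        g *= 2                      # growth rate G doubles when the feature map shrinks
    return plan

# Example plan: C = 4 cells per stage, initial growth rate G = 8.
for stage, ch_in, g in stage_channels(in_ch=16, growth=8, cells_per_stage=4):
    print(f"stage {stage}: {ch_in} channels in, growth {g}")
```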

SLIDE 19

Our Approach: Search Space

  • Designed a new cell search space that covers well-known compact CNNs.
  • Search for a cell instead of a whole architecture (a toy cell construction is sketched below).
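A toy sketch of what "search for a cell" could look like: a cell is just a short list of operator choices from a candidate set, and the full network reuses that one cell under the macro plan above. The operator set and names here are illustrative assumptions, much smaller than the actual search space.

```python
import torch
import torch.nn as nn

# Hypothetical operator set; the real space covers compact CNN building blocks.
OPS = {
    "conv3":    lambda c_in, c_out: nn.Conv2d(c_in, c_out, 3, padding=1),
    "dwsep3":   lambda c_in, c_out: nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in),   # depthwise
        nn.Conv2d(c_in, c_out, 1)),                         # pointwise
    "maxpool3": lambda c_in, c_out: nn.Sequential(
        nn.MaxPool2d(3, stride=1, padding=1),
        nn.Conv2d(c_in, c_out, 1)),
}

def build_cell(spec, c_in, growth):
    """spec is a list of op names, e.g. ["dwsep3", "conv3"]; the cell emits `growth` channels."""
    layers, c = [], c_in
    for name in spec:
        layers += [OPS[name](c, growth), nn.ReLU(inplace=True)]
        c = growth
    return nn.Sequential(*layers)

cell = build_cell(["dwsep3", "conv3"], c_in=16, growth=8)
print(cell(torch.randn(1, 16, 32, 32)).shape)   # torch.Size([1, 8, 32, 32])
```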
SLIDE 20

Our Approach: Search Algorithm

  • Sequential Model-based Optimization.
  • Sequential: progressively add layers.
  • Model-based: an RNN regressor predicts accuracy.
  • Select K networks by Pareto optimality (see the sketch below).
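Putting these bullets together, here is a minimal sketch of one progressive search step: candidates are grown layer by layer, a surrogate (standing in for the RNN regressor) predicts accuracy, a stand-in latency model supplies the device objective, and the Pareto-optimal K candidates survive to the next depth. Everything below is illustrative, not the paper's implementation.

```python
import random

def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly better on one.
    Objectives here: (predicted accuracy, -latency), both to be maximized."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates, score):
    scored = [(c, score(c)) for c in candidates]
    return [c for c, s in scored
            if not any(dominates(s2, s) for _, s2 in scored if s2 != s)]

def score(cand):
    """Fake objectives for illustration: surrogate accuracy + stand-in latency."""
    predicted_acc = 0.9 + 0.005 * len(cand) - 0.01 * random.random()
    latency_ms = 5.0 * len(cand)
    return (predicted_acc, -latency_ms)

random.seed(0)
layer_choices = ["conv3", "dwsep3", "maxpool3"]
beams = [["conv3"]]                                 # start from 1-layer cells
for depth in range(2, 4):                           # Sequential: progressively add layers
    expanded = [cell + [op] for cell in beams for op in layer_choices]
    beams = pareto_front(expanded, score)[:5]       # select K = 5 networks by Pareto optimality
print(beams)
```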
SLIDE 21

Experiment Settings

  • Test DPP-Net on 3 different devices.
  • Train on CIFAR-10.
SLIDE 22

CIFAR-10 Experiment

SLIDE 23
CIFAR-10 Experiment

  • DPP-Net-PNAS selects the model with the highest accuracy.

SLIDE 24
CIFAR-10 Experiment

  • DPP-Net-PNAS selects the model with the highest accuracy.
  • DPP-Net-Device-A runs the fastest on its target device.

SLIDE 25
CIFAR-10 Experiment

  • DPP-Net-PNAS selects the model with the highest accuracy.
  • DPP-Net-Device-A runs the fastest on its target device.
  • DPP-Net-Panacea performs relatively well on every objective.

SLIDE 26

ImageNet Experiment

  • DPP-Net-Panacea outperforms CondenseNet on every objective except number of parameters and memory usage.

SLIDE 27

ImageNet Experiment

  • DPP-Net-Panacea outperforms CondenseNet on every objective except number of parameters and memory usage.
  • DPP-Net-Panacea outperforms NASNet-A on every objective.
SLIDE 28
Conclusion

  • Use widely available sensory data (without labels) to self-improve your systems.
  • Leverage the exponential increase in computation to reduce the effort required from talent.