EdgeL3: Compressing L3-Net for Mote-Scale Urban Noise Monitoring

edgel 3 compressing l 3 net for mote scale urban noise
SMART_READER_LITE
LIVE PREVIEW

EdgeL 3 : Compressing L 3 -Net for Mote-Scale Urban Noise Monitoring - - PowerPoint PPT Presentation

EdgeL 3 : Compressing L 3 -Net for Mote-Scale Urban Noise Monitoring Sangeeta Kumari Dhrubojyoti Roy Mark Cartwright Ohio State University Ohio State University New York University Juan Pablo Bello Anish Arora New York University Ohio


SLIDE 1

EdgeL3: Compressing L3-Net for Mote-Scale Urban Noise Monitoring

Sangeeta Kumari

Ohio State University

Dhrubojyoti Roy

Ohio State University

Mark Cartwright

New York University

Juan Pablo Bello

New York University

Anish Arora

Ohio State University

May 24, 2019

  • S. Kumari, D. Roy et al.

PAISE 2019 Workshop May 24, 2019 1 / 26

SLIDE 2

Outline

1. Introduction
2. L3-Net
3. Approach
4. Results
5. Mote-scale Implementation
6. Python Package
7. Conclusion




SLIDE 6

Urban Noise Monitoring

Credit: Getty Images

- 70 million people across the USA were exposed to noise levels beyond what the EPA considers harmful (2014)
- In 2016, NYC's 311 service line received an average of 48 noise complaints per hour
- Limitations of 311 reporting:
  - Inaccurate information on the sources of disruptive noise
  - Verifying that noise complaints are authentic


SLIDE 7

SONYC

Sounds of New York City (SONYC) aims to continuously monitor, analyze, and mitigate urban noise pollution

Figure 1: Acoustic sensing unit deployed on a New York City street



SLIDE 11

Machine Listening Goals

- Low-cost and battery/solar-powered sensing
- Real-time multi-label noise classification
  - Noise: traffic, sirens, construction, unnecessary honking, social noise, etc.
- Addressing the lack of annotated data
- Limited flash (2 MB) and RAM (1 MB) on edge devices (ARM Cortex-M7)
  - "mote-scale" devices




SLIDE 16

Look, Listen, and Learn (L3-Net)

Audio subnetwork (input: 1 s Mel-spectrogram, size 256 x 199 x 1): Batch Normalization -> Conv 64 (3,3) + BN + ReLU (x2) -> Max pool (2,2) -> Conv 128 (3,3) + BN + ReLU (x2) -> Max pool (2,2) -> Conv 256 (3,3) + BN + ReLU (x2) -> Max pool (2,2) -> Conv 512 (3,3) + BN + ReLU (x2) -> Max pool (28,28)

Video subnetwork (input: single video frame, size 224 x 224 x 3): same convolutional stack as the audio subnetwork, ending in a Max pool of (32,24)

Fusion layers: Concatenate -> Dense 128 + ReLU -> Dense 2 + SoftMax -> Correspond? (Yes/No)

Figure 2: Architecture of the L3-Net embedding models

- L3-Net trains an audio embedding by learning associations between audio snippets and video frames [1]
  - the Audio-Visual Correspondence (AVC) task
- The audio embedding is then used to train downstream classifiers where labeled data is limited
- Downstream datasets:
  - US8K: 8732 audio clips divided into 10 cross-validation folds
  - ESC-50: 2000 clips divided into 5 folds
- Downstream accuracy: US8K 75.91% | ESC-50 73.65%
- The L3-Net audio model has 4,688,066 parameters and is 18 MB

[1] Arandjelovic, Relja and Zisserman, Andrew. "Look, Listen and Learn". IEEE ICCV, 2017.
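As a cross-check, the 4,688,066-parameter figure can be reproduced from the layer widths in Figure 2. This is a sketch assuming 3x3 kernels with biases and two trainable batch-norm parameters (gamma, beta) per channel, including the input batch normalization:

```python
# Recompute the L3-Net audio subnetwork parameter count from the
# layer widths in Figure 2 (assumed: 3x3 kernels with biases, plus
# gamma/beta for every batch norm, including the input BN).
widths = [64, 64, 128, 128, 256, 256, 512, 512]

params = 2  # input batch normalization over 1 channel: gamma, beta
in_ch = 1
for out_ch in widths:
    params += 3 * 3 * in_ch * out_ch + out_ch  # conv weights + biases
    params += 2 * out_ch                       # per-conv BN gamma/beta
    in_ch = out_ch

print(params)  # 4688066, matching the slide
print(round(params * 4 / 2**20, 1))  # 17.9 (MB at 4 bytes per weight)
```

At 32-bit floats this gives the ~18 MB figure quoted above.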




SLIDE 21

Non-sparse Audio Model

Depth Reduction:

- conv8 alone has 2,359,808 params (50% of the total)
- The embedding can instead be generated from the penultimate layer or earlier

Width Reduction:

- Filters with smaller kernel weights produce feature maps with weaker activations [2]
- Drop filters whose absolute weight sum falls below a threshold

[2] Li, Hao et al. "Pruning Filters for Efficient ConvNets". ICLR, 2017.
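The filter-dropping rule can be sketched as below. This is an illustrative NumPy sketch, not the authors' code: the Keras-style kernel layout, the keep-count of 48, and ranking filters by L1 norm are assumptions consistent with [2].

```python
import numpy as np

def prune_filters(kernel, keep):
    """Keep the `keep` filters with the largest absolute weight sum.

    kernel: conv weights of shape (kh, kw, in_ch, out_ch), Keras-style.
    Returns the reduced kernel and the indices of the surviving filters.
    """
    scores = np.abs(kernel).sum(axis=(0, 1, 2))     # L1 norm per filter
    keep_idx = np.sort(np.argsort(scores)[-keep:])  # strongest `keep`
    return kernel[..., keep_idx], keep_idx

# Illustrative shapes: shrink a 64-filter conv layer to 48 filters,
# mirroring the width-reduced models reported later in Table 2.
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3, 64, 64))
w_small, idx = prune_filters(w, keep=48)
print(w_small.shape)  # (3, 3, 64, 48)
```

A dropped filter also removes the matching input channel of the next layer's kernels, which is where most of the weight savings come from.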



SLIDE 23

Sparse Audio Model

Figure 3: Pruning weights [3]

- Prune potentially unimportant connections [3]
- Zero out weights whose absolute magnitude is below a threshold

Figure 4: Impact of pruning individual layers (conv1 through conv8) on AVC accuracy, over compression levels from 20% to 95%. Original accuracy: 77.81%.

- Each layer is pruned independently over a range of sparsity values to determine its sensitivity
- conv1 is the most sensitive layer; conv8 is the least sensitive

[3] Han, Song et al. "Learning both weights and connections for efficient neural network". NIPS, 2015.
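The zeroing step can be sketched in NumPy as follows; the conv8-like shape and the 60% sparsity are illustrative, and choosing the threshold as a per-layer quantile of the magnitudes is an assumption.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute magnitude (per-layer magnitude pruning, as in [3])."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3, 512, 512))  # a conv8-sized layer
w_sparse = magnitude_prune(w, sparsity=0.60)

print(round((w_sparse == 0).mean(), 2))  # 0.6 of the weights are now zero
```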



SLIDE 25

Re-Training Methodology

- Fine-tuning: retrain L3-Net on the AVC task with the pruned audio model while freezing the video model
- Knowledge distillation: minimize the mean squared error (MSE) between the embeddings of the original L3-Net audio model and the sparse one


Figure 5: Knowledge Distillation setup with original L3 audio as teacher and pruned audio model as student for audio embedding approximation.
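The distillation objective of Figure 5 reduces to an MSE between embeddings. A NumPy sketch, where the batch size, the 512-dimensional embeddings, and the perturbation level are all illustrative:

```python
import numpy as np

def distillation_loss(teacher_emb, student_emb):
    """Mean squared error between teacher and student audio embeddings."""
    return np.mean((teacher_emb - student_emb) ** 2)

# Illustrative batch of 512-dimensional embeddings: the student is a
# noisy approximation of the teacher, as after pruning.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(32, 512))
student = teacher + 0.1 * rng.normal(size=(32, 512))

print(round(distillation_loss(teacher, student), 3))  # ~0.01
```

In training, this loss is minimized with respect to the student's (sparse) weights while the teacher stays frozen.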



SLIDE 27

Depth Reduction

Given a trained L3-Net audio model, generate the embedding from the 7th, 6th, or 5th convolution layer instead of the 8th

Model    | conv1 conv2 conv3 conv4 conv5 conv6 conv7 conv8 | Reduction in Weights (%) | US8K (%) | ESC-50 (%)
Original |  64    64   128   128   256   256   512   512   |   NA  |  75.91  |  73.65
Depth    |  64    64   128   128   256   256   512    -    | 50.42 |  74.38  |  74
         |  64    64   128   128   256   256    -     -    | 72.34 |  71.74  |  68.7
         |  64    64   128   128   256    -     -     -    | 86.66 |  68.77  |  66.6

Table 1: Downstream accuracy of L3-Net depth-reduction experiments.


SLIDE 28

Width Reduction

Model    | conv1 conv2 conv3 conv4 conv5 conv6 conv7 conv8 | Reduction in Weights (%) | US8K before / after FT (%) | ESC-50 before / after FT (%)
Original |  64    64   128   128   256   256   512   512   |   NA  | 75.91 / NA    | 73.65 / NA
Width    |  64    48    64    64   128   128   256   256   | 43.14 | 51.39 / 74    | 34 / 71.25
         |  64    48    64    64   128   128   128   128   | 64.54 | 52.16 / 71.45 | 33.3 / 64.7
         |  64    48    64    64    64    64   128   128   | 69.89 | 48.87 / 72.46 | 32.6 / 67.2

Table 2: Downstream accuracy of width-reduced L3-Net before and after fine-tuning (FT).


SLIDE 29

Sparse Models

Sparsities in Audio Convolution Layers (%), "-" = layer left unpruned:

conv1 conv2 conv3 conv4 conv5 conv6 conv7 conv8 | Reduction in Weights (%) | Memory (MB)
  -     -     -     -     -     -     -     -   |   NA   | 18
  -    30    40    50    30    50    50    60   | 53.49  |  8.317
  -    40    50    60    40    60    60    70   | 63.48  |  6.530
  -    40    50    60    40    70    70    80   | 72.29  |  4.955
  -    60    60    70    50    70    70    80   | 73.55  |  4.730
  -    70    70    75    60    80    80    85   | 80.87  |  3.421
  -    80    80    85    40    85    85    95   | 87.08  |  2.310
 30    85    85    90    60    90    90    95   | 90.51  |  1.697
  -    85    85    85    75    95    98    98   | 95.45  |  0.814
  -    93    94    96    97  95.97   98    97   | 97.00  |  0.536
  -    95    96    97    97    98  98.65   98   | 98.00  |  0.357
  -    94    96    99    99    99    99   99.2  | 99.00  |  0.179

Table 3: Decomposition plan for layerwise pruning of the L3-Net audio subnetwork. The first row corresponds to the original model with 18 MB of weights.
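A quick consistency check on Table 3: the memory column closely tracks the fraction of surviving weights in the 18 MB dense model, with the small residual presumably being storage overhead:

```python
# Sanity-check three rows of Table 3: sparse model size is roughly
# the dense size times the fraction of weights that survive pruning.
dense_mb = 18.0
rows = [(53.49, 8.317), (87.08, 2.310), (95.45, 0.814)]  # (reduction %, MB)

for reduction, reported_mb in rows:
    approx_mb = dense_mb * (1 - reduction / 100)
    print(f"{reduction}% -> {approx_mb:.2f} MB (reported: {reported_mb} MB)")
```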


SLIDE 30

Re-training Performance on AVC

Figure 6: Improvement in L3-Net AVC through fine-tuning (FT). The red dotted line corresponds to the baseline model performance.


SLIDE 31

EdgeL3

Table 4: The EdgeL3 audio model (the 95.45% weight-reduction row of the plan in Table 3) is only 0.8 MB and meets our flash memory constraint.


SLIDE 32

Re-training Performance on Downstream Task

Improvements on downstream tasks through fine-tuning (FT) and knowledge distillation (KD)

Figure 7: UrbanSound8K
Figure 8: ESC-50




SLIDE 36

Mote-scale Implementation

- Fixed-point quantization
- Change in input representation
- Incremental computation of activations
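The deck does not specify the exact fixed-point format; a plausible sketch uses Q1.15 (int16), a format commonly used on ARM Cortex-M parts. The helper names `to_q15` and `from_q15` are hypothetical:

```python
import numpy as np

def to_q15(w):
    """Quantize floats in [-1, 1) to Q1.15 fixed point (int16).
    Illustrative only; the deck does not state the Q-format used."""
    q = np.round(np.clip(w, -1.0, 1.0 - 2**-15) * 2**15)
    return q.astype(np.int16)

def from_q15(q):
    """Recover the float approximation of Q1.15 values."""
    return q.astype(np.float32) / np.float32(2**15)

w = np.array([0.5, -0.25, 0.123456], dtype=np.float32)
err = np.abs(from_q15(to_q15(w)) - w)
print(err.max() <= 2**-15)  # True: error bounded by one quantization step
```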



SLIDE 38

edgel3 Python Package

Reference model for generating audio embeddings for edge computing.

Install with: pip install edgel3

import edgel3
import soundfile as sf

audio, sr = sf.read('/path/to/file.wav')

# Get embedding out of EdgeL3 (95.45% sparse fine-tuned L3)
emb, ts = edgel3.get_embedding(audio, sr, retrain_type='ft', sparsity=95.45)

# Get embedding out of 81.0% sparse knowledge-distilled L3 audio
emb, ts = edgel3.get_embedding(audio, sr, retrain_type='kd', sparsity=81.0)




SLIDE 43

Conclusion

- EdgeL3 has 213,491 parameters and is only 0.814 MB
- Negligible loss of 0.22% in AVC performance; 1.4% (1.9%) drop in ESC-50 (US8K) downstream accuracy
- Pruned models are available in the edgel3 package
- Ongoing work targets a realistic mote-scale realization of EdgeL3
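The headline numbers are mutually consistent, assuming 4 bytes per stored weight:

```python
# Cross-check the conclusion against the original 4,688,066-parameter
# L3-Net audio model quoted earlier in the deck.
orig_params, edge_params = 4_688_066, 213_491

reduction = 100 * (1 - edge_params / orig_params)  # % weights removed
size_mb = edge_params * 4 / 2**20                  # 4 bytes per weight

print(round(reduction, 2))  # 95.45
print(round(size_mb, 3))    # 0.814
```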


SLIDE 44

Questions?
