

SLIDE 1

Fakultät für Informatik

Technische Universität München

Deep Learning in Smart Spaces

Markus Loipfinger

Advisor(s): Marc-Oliver Pahl, Stefan Liebald
Supervisor: Prof. Dr.-Ing. Georg Carle
Chair of Network Architectures and Services
Department of Informatics
Technical University of Munich (TUM)

[1]

SLIDE 2

Outline

  • Motivation
  • Analysis
    • Background
    • Application
  • Related Work
  • Design & Implementation
  • Evaluation
    • Quantitative
    • Qualitative
  • Conclusion

SLIDE 3

Motivation

Why Deep Learning in Smart Spaces?

  • Enable interesting use cases, such as facilitating daily life routines (e.g. self-adapting rooms)
  • Find complex relationships in the data
  • Goal: Enable users with little or even no prior knowledge to build and train a neural network
  • Approach: Provide easy-to-use machine learning functionality
    • Modularize into suitable building blocks
    • Enable mash-up

[9] [11]

SLIDE 4

Analysis: How does Deep Learning work?

Deep Learning – Feed-forward Neural Network

[Figure: feed-forward neural network with inputs x1, x2, x3, weights w1, w2, w3, bias b, and activation function a; the forward pass computes the output, the backward pass propagates the error]

Forward pass: y = a(x · W + b)

where x is the input vector, W the weight matrix, b the bias vector, and a the activation function.
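As a concrete illustration of the forward pass, here is a minimal NumPy sketch of a single layer; the layer sizes and the tanh activation are illustrative assumptions, not values from the thesis.

```python
import numpy as np

x = np.random.rand(1, 3)           # input vector x = (x1, x2, x3)
W = np.random.randn(3, 4) * 0.1    # weight matrix W (3 inputs, 4 units)
b = np.zeros(4)                    # bias vector b

# Forward pass: y = a(x * W + b), here with a = tanh
y = np.tanh(x @ W + b)
print(y.shape)  # (1, 4)
```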

SLIDE 5

Analysis

Deep Learning Architectures – Recurrent Neural Network

[Figure: recurrent neural network unrolled over time; the character sequence 'n e t' is mapped to the shifted sequence 'e t w' (next-character prediction)]
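A minimal sketch of the recurrence behind such next-character prediction; all names and sizes below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

feature_size, hidden_size = 26, 32   # e.g. one-hot characters (assumptions)
W_x = np.random.randn(feature_size, hidden_size) * 0.1
W_h = np.random.randn(hidden_size, hidden_size) * 0.1
b = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # The hidden state carries information about earlier inputs, so the
    # prediction after 'n', 'e', 't' can depend on the whole prefix.
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

h = np.zeros(hidden_size)
for x_t in np.eye(feature_size)[[13, 4, 19]]:   # one-hot 'n', 'e', 't'
    h = rnn_step(x_t, h)
```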

SLIDE 6

Analysis

Deep Learning Architectures – Deep Belief Network

[Figure: deep belief network of stacked layers with a classifier as the top layer]
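A DBN is commonly pre-trained layer by layer with restricted Boltzmann machines. The sketch below shows one generic contrastive divergence (CD-1) update for a single binary RBM layer in NumPy; it is the textbook formulation, not the code from this work.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0, W, b_h, b_v, lr=0.1):
    """One contrastive divergence (CD-1) step for a binary RBM."""
    h0_prob = sigmoid(v0 @ W + b_h)                  # hidden given data
    h0 = (np.random.rand(*h0_prob.shape) < h0_prob).astype(v0.dtype)
    v1_prob = sigmoid(h0 @ W.T + b_v)                # reconstruction of visible units
    h1_prob = sigmoid(v1_prob @ W + b_h)             # hidden given reconstruction
    n = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
    b_h += lr * (h0_prob - h1_prob).mean(axis=0)
    b_v += lr * (v0 - v1_prob).mean(axis=0)
    return W, b_h, b_v

# Illustrative layer: 784 visible units, 100 hidden units (assumptions)
v = (np.random.rand(300, 784) > 0.5).astype(float)
W = np.random.randn(784, 100) * 0.1
b_h, b_v = np.zeros(100), np.zeros(784)
W, b_h, b_v = cd1_update(v, W, b_h, b_v)
```

Each layer is trained this way on the activations of the layer below; supervised fine-tuning of the classifier on top then follows, as in the related work.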

SLIDE 7

Analysis

Application of machine learning / deep learning in smart spaces

  • Knowledge required in
    • machine learning / deep learning
    • the machine learning framework / library
    • the design of the neural network, based on
      • the particular problem
        → Each problem requires an appropriate learning algorithm
      • the available training data
  • Provide machine learning as a service
    • Reduce the complexity of machine learning
    • Users do not require prior knowledge
    • Ensure usability & reusability
    • Rapid prototyping
SLIDE 8

Related Work

Use cases mentioned in related work

→ e.g. Health & Home Care, Comfort, Security, Energy Saving, etc.

  • Supporting (disabled) people [2], [3]

→ Security (e.g. fire alarm) by an FFNN
→ Automation (e.g. controlling all devices) by an RNN

  • Recognizing human activities [4], [5], [6]

→ Pre-training of a DBN (unsupervised learning)
→ Fine-tuning of the DBN (supervised learning)
→ Output: 1 out of 10 activities

  • Predicting human behaviour [7]

→ Pre-training & fine-tuning of a DBN
→ Hybrid architecture with a classifier on top of the DBN
→ RNN: takes previous actions into account

  • Smart Grid: Q-Learning, Reinforcement Learning [8]
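For context on [8]: tabular Q-learning boils down to a single update rule. The sketch below is the standard formulation with illustrative state and action counts; it is not taken from the cited paper.

```python
import numpy as np

n_states, n_actions = 10, 4   # illustrative sizes (assumptions)
alpha, gamma = 0.1, 0.9       # learning rate and discount factor
Q = np.zeros((n_states, n_actions))

def q_update(s, a, reward, s_next):
    # Move Q(s, a) toward the observed reward plus the discounted
    # value of the best action available in the next state.
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

q_update(s=0, a=1, reward=1.0, s_next=2)
```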
SLIDE 9

Design

Modular Approach

  • Separation of the state of a neural network and the respective learning algorithm
  • Usability & reusability
  • All parameters and hyperparameters of a neural network are contained in a configuration file
  • Three machine learning services:
    • Feedforward Neural Network (FFNN)
    • Deep Belief Network (DBN)
    • Recurrent Neural Network (RNN)
SLIDE 10

Aside: Neural Network - Parameters & Hyperparameters

Neural Network:
  • Learning rate
  • Cost function
  • Number of hidden layers
  • Optimization technique
  • Regularization technique
  • Type of activation function
  • Number of training epochs
  • Size of mini-batch
  • Weight initialization
  • Bias initialization
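Purely as an illustration of how these parameters and hyperparameters map onto a concrete network, here is a sketch using today's Keras API with the values from the example configuration file at the end of this deck; the thesis services build their networks directly with TensorFlow, and the dummy training data here is an assumption.

```python
import numpy as np
import tensorflow as tf

# Dummy MNIST-shaped data, only to make the sketch runnable (assumption)
x_train = np.random.rand(300, 784).astype("float32")
y_train = np.eye(10)[np.random.randint(0, 10, 300)].astype("float32")

model = tf.keras.Sequential([
    # 1 hidden layer with 100 units, tanh activation,
    # weights ~ N(0.0, 0.1) with seed 123, biases initialized to 0.0
    tf.keras.layers.Dense(
        100, activation="tanh", input_shape=(784,),
        kernel_initializer=tf.keras.initializers.RandomNormal(
            mean=0.0, stddev=0.1, seed=123),
        bias_initializer="zeros"),
    # softmax output unit with output_size = 10
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Optimization technique: gradient descent; cost function: squared errors
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.08),
              loss="mean_squared_error")

# Number of training epochs and size of mini-batch
model.fit(x_train, y_train, batch_size=300, epochs=25, verbose=0)
```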

SLIDE 11

Functionality

SLIDE 12

Implementation

Machine Learning Services

  • One service for each type of neural network
  • Based on a context model
  • Neural network implementation with the help of TensorFlow
  • Configuration file reader
  • Prepare training data
    → Bring the data into the right shape; each neural network imposes different conditions on the input (see the sketch below):
    • FFNN: (data, label) pairs with data of shape [batch_size, feature_size] and labels of shape [batch_size, label_size]
    • DBN: data of shape [batch_size, feature_size]
    • RNN: (data, label) pairs with data of shape [batch_size, num_steps, feature_size] and labels of shape [batch_size, label_size]
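A small NumPy sketch of these shape conditions, assuming MNIST-like data (feature_size = 784, label_size = 10, as in the example configuration file) and 28 time steps for the RNN; the reshaping is generic, not the service's actual preparation code.

```python
import numpy as np

batch_size, num_steps = 300, 28
feature_size, label_size = 784, 10

raw = np.random.rand(batch_size * feature_size)   # flat raw sensor/image data
labels = np.eye(label_size)[np.random.randint(0, label_size, batch_size)]

# FFNN and DBN: data of shape [batch_size, feature_size];
# labels (FFNN only) of shape [batch_size, label_size]
ffnn_data = raw.reshape(batch_size, feature_size)

# RNN: data of shape [batch_size, num_steps, feature_size per step]
rnn_data = ffnn_data.reshape(batch_size, num_steps, feature_size // num_steps)
```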

SLIDE 13

Evaluation

Main aspects: Performance, Usability & Reusability

Quantitative Evaluation

  • Latency of the modular approach
    ➢ Maybe a bit slower due to modularization
  • Accuracy and training time
    ➢ Probably comparable
  • Lines of code for implementing the use case by using my machine / deep learning modules
    ➢ Significantly less
  • Time for
    ➢ implementing the use cases
    ➢ creating the corresponding neural networks

Qualitative Evaluation

  • Experience with the concept
  • Usability
  • Effect and application of reusability

SLIDE 14

Evaluation

Accuracy / Reconstruction Error – Comparison

Approach       Accuracy / Reconstruction Error   Lines of Code
Service FFNN   97.50 %                           2
Regular FFNN   97.58 %                           85
Service DBN    0.02                              2
Regular DBN    0.02                              148
Service RNN    98.50 %                           2
Regular RNN    98.44 %                           128

SLIDE 15

Evaluation

Training Time - Comparison

[Figure: training time in seconds (500-3000 s axis) over training iterations (x10, 200-800 axis) for Regular vs. Service FFNN, DBN, and RNN]

SLIDE 17

Evaluation

Training Time - Comparison

[Figure: same training-time plot with zoomed insets at 50 iterations (x10), showing roughly 16.0-16.8 s, 86-92 s, and 860-890 s for the three network types]

SLIDE 18

Evaluation

Running Time - Comparison

[Figure: runtime in seconds (2-12 s axis) per machine learning approach: Regular vs. Service FFNN, DBN, and RNN]

SLIDE 19

Evaluation

Running Time - Comparison

[Figure: runtime per approach, one panel per network type: Regular vs. Service FFNN (0.25-0.50 s), DBN (2-8 s), and RNN (3-11 s)]

SLIDE 20

Evaluation

Main aspects: Performance, Usability & Reusability

Quantitative Evaluation

  • Latency of the modular approach
    ➢ Maybe a bit slower due to modularization
    → Difference of about 0.3 s
  • Accuracy and training time
    ➢ Probably comparable
    → Accuracy: similar; training time: comparable
  • Lines of code for implementing the use case by using my machine / deep learning modules
    ➢ Significantly less
    → 83, 146, and 126 lines of code less (1)
  • Time for
    ➢ implementing the use cases → Service: ~30 s, Regular: 5-10 min
    ➢ creating the corresponding neural networks → Service: ~30 s, Regular: 2-5 min

Qualitative Evaluation

  • Experience with the concept
    ➢ Service handling: ++
    ➢ Code understanding: +/0/0 (1)
    ➢ Service understanding: +
    ➢ Configuration file understanding: ++
    ➢ Configuration file modifying: ++
    ➢ Neural network creation: +
    ➢ Neural network training: +/++/0 (1)
    ➢ Understanding without ML/DL knowledge: +
  • Usability: ++/++/+ (1)
  • Reusability: ++/++/+ (1)

(1) FFNN/DBN/RNN

SLIDE 21

Conclusion

  • Realization of three machine learning services which
    • are easy to use
    • do not require knowledge of
      • machine learning
      • a machine learning library
    • yield
      • high usability
      • reusability
      • good performance
  • Separation of the state of a neural network and the corresponding learning algorithm → configuration file
  • Structure of the learning algorithms and the neural networks is already implemented

SLIDE 22

Thank you

SLIDE 23

Sources

[1] https://storiesbywilliams.com/tag/ibm-watson-supercomputer/
[2] A. Hussein, M. Adda, M. Atieh, and W. Fahs, "Smart home design for disabled people based on neural networks," Procedia Computer Science, vol. 37, pp. 117-126, 2014.
[3] A. Badlani and S. Bhanot, "Smart home system design based on artificial neural networks," in Proc. of the World Congress on Engineering and Computer Science, 2011.
[4] H. Fang and C. Hu, "Recognizing human activity in smart home using deep learning algorithm," in Proceedings of the 33rd Chinese Control Conference, July 2014, pp. 4716-4720.
[5] H. D. Mehr, H. Polat, and A. Cetin, "Resident activity recognition in smart homes by using artificial neural networks," in 2016 4th International Istanbul Smart Grid Congress and Fair (ICSG), April 2016, pp. 1-5.
[6] H. Fang, L. He, H. Si, P. Liu, and X. Xie, "Human activity recognition based on feature selection in smart home using back-propagation algorithm," ISA Transactions, vol. 53, no. 5, pp. 1629-1638, 2014, ICCA 2013.
[7] S. Choi, E. Kim, and S. Oh, "Human behavior prediction for smart homes using deep learning," in 2013 IEEE RO-MAN, Aug 2013, pp. 173-179.
[8] D. Li and S. K. Jayaweera, "Reinforcement learning aided smart-home decision-making in an interactive smart grid," in 2014 IEEE Green Energy and Systems Conference (IGESC), Nov 2014, pp. 1-6.
[9] https://www.google.de/search?q=pr%C3%A4sentation+m%C3%A4nnchen+idee&source=lnms&tbm=isch&sa=X&ved=0ahUKEwihgLvYyPzUAhWIh7QKHZ07D68Q_AUICigB&biw=1855&bih=990#imgrc=dTd6OykL30b4QM:
[11] http://www.allnetflats-in-deutschland.de/smarthome

SLIDE 24

Configuration File

[Neural Network]
type = Feed Forward Neural Network

[Input Layer]
feature_size = 784

[Hidden Layers]
hidden_layer_1 = 100
number_of_hidden_layers = 1

[Output Layer]
output_size = 10
softmax_unit = True
no_activation = False

[Weight]
mean = 0.0
standard_deviation = 0.1
seed = 123

[Bias]
constant = 0.0

[Cost Function]
cross_entropy = False
squared_errors = True

[Optimization Technique]
gradient_descent = True
momentum = False
adagrad_optimizer = False

[Regularization Technique]
dropout = False
weight_decay = False

[Additional Methods]
learning_rate_decay = False
early_stopping = True
k_cross_validation = False

[Hyperparameters]
activation_fct_tanh = True
activation_fct_sigmoid = False
activation_fct_relu = False
learning_rate = 0.08
number_of_training_epochs = 25
mini-batch = 300
momentum_rate = 0.8
p_keep = 0.75
wc_factor = 0.6
lr_decay_step = 100000
lr_decay_rate = 0.96
early_stopping_rounds = 100
early_stopping_metric_loss = True
early_stopping_metric_accuracy = False
validation_k = 10

[Parameters]
display_step = 10
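Since the file uses the INI format, it can be read with Python's standard configparser module; a minimal sketch (the section and option names match the file above, the file name is an assumption):

```python
import configparser

config = configparser.ConfigParser()
config.read("ffnn.ini")  # hypothetical file name

feature_size = config.getint("Input Layer", "feature_size")                 # 784
output_size = config.getint("Output Layer", "output_size")                  # 10
learning_rate = config.getfloat("Hyperparameters", "learning_rate")         # 0.08
epochs = config.getint("Hyperparameters", "number_of_training_epochs")      # 25
early_stopping = config.getboolean("Additional Methods", "early_stopping")  # True
```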