
slide-1
SLIDE 1

Incremental and adaptive learning for online monitoring of embedded software

Monica Loredana Angheloiu Supervisors: Marie-Odile Cordier Laurence Rozé

20/06/2012 1

slide-2
SLIDE 2

Outline

Introduction Context of the internship Previous work Proposed approach Empirical Results Conclusion

20/06/2012 2

slide-3
SLIDE 3

Introduction

  • Data is being collected continuously
  • Useful information is “hidden” in it
  • Human analysts can no longer extract that knowledge by hand

The solution: machine learning

20/06/2012 3

slide-4
SLIDE 4

Context of the internship - Manage YourSelf

20/06/2012 4

[Architecture diagram: a fleet of mobile smartphones and PDAs runs monitoring, diagnosis, and repair modules and produces functioning reports; a server learns crash rules from the reports, acquires prevention rules, and sends the prevention rules back to the fleet.]

slide-5
SLIDE 5

Problem statement

  • Input data

– Reports are generated by smartphones (or PDAs)

  • each time a problem appears
  • at regular time stamps in case of nominal behavior

– Reports are sent in batches at regular time stamps

  • Objectives

– Improve learning on server module using incremental learning

20/06/2012 5

Example report:

<rapport>
  <Date value="1302174135" />
  <Application name="Appli_Birds" value="running" />
  <OS value="Android 2.2" />
  <Battery value="1" />
  <Crash />
</rapport>
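A report like the `<rapport>` example above can be flattened into a feature dictionary for learning. The following is a minimal sketch (the `parse_report` helper and the resulting feature names are hypothetical, not part of the Manage YourSelf system):

```python
import xml.etree.ElementTree as ET

# Hypothetical report matching the <rapport> example from the slide.
report = """
<rapport>
  <Date value="1302174135" />
  <Application name="Appli_Birds" value="running" />
  <OS value="Android 2.2" />
  <Battery value="1" />
  <Crash />
</rapport>
"""

def parse_report(xml_text):
    """Flatten one <rapport> element into a feature dictionary."""
    root = ET.fromstring(xml_text)
    features = {}
    for child in root:
        if child.tag == "Application":
            # One feature per application, keyed by its name attribute.
            features["app_" + child.get("name")] = child.get("value")
        elif child.tag == "Crash":
            features["crash"] = True        # presence of the tag marks a crash
        else:
            features[child.tag.lower()] = child.get("value")
    features.setdefault("crash", False)     # nominal-behavior reports lack <Crash/>
    return features

print(parse_report(report))
```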

slide-6
SLIDE 6

Manage YourSelf - general structure

20/06/2012 6

  • batch learning vs. incremental learning: at each learning step,

      – batch-learning systems re-examine all examples
      – incremental systems examine only the newly arrived training examples

Previous approach for the server module:

  • Decision trees are used for batch learning
  • All examples are stored
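The batch vs. incremental distinction above can be sketched minimally as follows (the class names and the stand-in `induce` function are hypothetical, not the system's actual learner):

```python
# Minimal sketch of batch vs. incremental learning; `induce` is a hypothetical
# stand-in for a real learner (decision tree, AQ21, ...).

def induce(examples):
    # Report how many examples were examined at this learning step.
    return {"examined": len(examples)}

class BatchLearner:
    """Re-examines every example it has ever seen at each learning step."""
    def __init__(self):
        self.all_examples = []

    def learn(self, new_examples):
        self.all_examples.extend(new_examples)
        return induce(self.all_examples)

class IncrementalLearner:
    """Examines only the new examples plus a small retained memory."""
    def __init__(self, capacity=4):
        self.memory = []
        self.capacity = capacity

    def learn(self, new_examples):
        model = induce(self.memory + new_examples)
        # Placeholder policy: keep the most recent examples up to `capacity`;
        # the approach in these slides keeps band border examples instead.
        self.memory = (self.memory + new_examples)[-self.capacity:]
        return model
```

The batch learner's work grows without bound, while the incremental learner's per-step cost stays bounded by the memory capacity plus the batch size.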
slide-7
SLIDE 7

Challenges

reduce the storage requirements
stabilize the processing time
detect concept drifts

20/06/2012 7

slide-8
SLIDE 8

A few definitions

Incremental learning

20/06/2012 8

New example(s) → Classification algorithm → New concept description

slide-9
SLIDE 9

A few definitions

Incremental learning Instance memory

20/06/2012 9

New example(s) + Stored example(s) → Classification algorithm → New concept description + New stored example(s)

slide-10
SLIDE 10

A few definitions

Incremental learning Instance memory Concept memory

20/06/2012 10

New example(s) + Stored concept description → Classification algorithm → New concept description

slide-11
SLIDE 11

A few definitions

Incremental learning Instance memory Concept memory

20/06/2012 11

New example(s) + Stored example(s) + Stored concept description → Classification algorithm → New concept description + New stored example(s)

slide-12
SLIDE 12

A few definitions

Incremental learning Instance memory Concept memory Online learning

  • Incremental learning
  • Real time processing
  • Incoming order
  • Drift detection

20/06/2012 12

slide-13
SLIDE 13

A few definitions

Incremental learning Concept memory Instance memory Online learning Concept drift

  • Hidden context changes

20/06/2012 13

slide-14
SLIDE 14

Representative approaches

20/06/2012 14

According to Maloof et al., 2004 [1]

slide-15
SLIDE 15

A comparison of representative incremental methods with partial instance memory

20/06/2012 15

AQ11-PM
  – Algorithm: generalizes training examples maximally
  – Examples: stores only positive extreme examples
  – Disadvantage: may cause overtraining
FLORA
  – Algorithm: generalizes training examples when needed
  – Examples: stores examples over a window of time
  – Interesting feature: keeps old stable concepts
  – Disadvantage: may delete an available concept description
IB
  – Algorithm: derived from nearest neighbor; does not generalize training examples
  – Examples: stores specific examples
  – Interesting feature: uses a similarity function
  – Disadvantage: computationally expensive
FACIL
  – Algorithm: derived from AQ11-PM
  – Examples: stores both positive and negative examples, not necessarily extreme
  – Interesting feature: uses the growth of a rule
  – Disadvantage: has user-defined parameters that are hard to tune
DARLING
  – Algorithm: builds a classification tree
  – Examples: stores specific neighboring examples
  – Interesting feature: uses a specific weight forgetting mechanism
  – Disadvantage: may not delete outdated concepts

slide-16
SLIDE 16

Proposed approach: incremental learning with partial instance memory and no concept memory

20/06/2012 16

  • Classification basis (non-incremental algorithm): AQ21
  • Selection and storage of band border examples: similar to AQ11-PM
  • Partial instance memory → the memory requirement is decreased and bounded; the learning time is diminished
  • Rule induction (AQ family) → the whole search space is not analyzed; rules are quickly created and deleted
  • Forgetting mechanism → concept drifts are detected

slide-17
SLIDE 17

Selection of examples

20/06/2012 17

slide-18
SLIDE 18

Selection of examples

20/06/2012 18

keep all examples of rules covering less than θ examples

slide-19
SLIDE 19

keep band border examples, with distance < ε, for rules covering over θ examples (similar to AQ11-PM)

fix a maximum number of stored examples (Ki for positive and Ke for negative)

  • Distance function:

Selection of examples

20/06/2012 19

slide-20
SLIDE 20

keep all examples of rules covering less than θ examples

keep band border examples, with distance < ε, for rules covering over θ examples (similar to AQ11-PM)

fix a maximum number of stored examples (Ki for positive and Ke for negative)

age forgetting mechanism (when needed)

  • Distance function:

Selection of examples

20/06/2012 20
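The selection policy above (θ, ε, Ki, Ke) can be sketched as follows. This is an illustration under assumptions: rules are taken to be intervals over numeric attributes, and the distance function is a hypothetical stand-in, since the slide's own distance formula is not reproduced in this transcript:

```python
# Illustrative sketch of the example-selection policy; θ, ε, Ki, Ke are the
# slide's parameters, the distance function is a hypothetical stand-in.

def distance_to_border(example, rule):
    """Hypothetical distance: smallest slack to any interval bound of the rule."""
    return min(min(example[attr] - lo, hi - example[attr])
               for attr, (lo, hi) in rule.items())

def select_examples(examples, rule, theta, eps, cap):
    covered = [e for e in examples
               if all(lo <= e[a] <= hi for a, (lo, hi) in rule.items())]
    if len(covered) < theta:
        kept = covered                      # small rule: keep all its examples
    else:
        kept = [e for e in covered          # large rule: keep band border only
                if distance_to_border(e, rule) < eps]
    return kept[:cap]                       # bound memory by Ki (or Ke)

rule = {"battery": (0, 7)}                  # hypothetical rule: battery <= 7
examples = [{"battery": b} for b in range(8)]
print(select_examples(examples, rule, theta=5, eps=2, cap=250))
```

With these toy values the rule covers 8 ≥ θ examples, so only the examples within ε of the interval bounds are retained.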

slide-21
SLIDE 21

What we want to assess through experiments

  • Non-incremental
      – Get results similar to the behavior rules used for simulation
  • Incremental in a stationary environment
      – Keep the memory limited
      – Keep the learning time almost constant
      – Get results similar to the non-incremental approach
  • Incremental in a drifting environment
      – Achieve rules similar to the behavior rules of the current iteration
      – Detect and track drifts using a forgetting mechanism

20/06/2012 21

slide-22
SLIDE 22

Behavior rules for simulating input data

A total of 11 rules:

  • General
      – If battery < 3% then crash low battery
      – If memory RAM > 95% then crash memory full
      – If memory ROM > ROM size – 1000 MB then crash memory full
  • Operating system
      – If Appli_Incompatible_Android open and OS = Android then crash application
      – If Appli_Incompatible_IOS open and OS = IOS then crash application
      – If Appli_Incompatible_MWP open and OS = MWP then crash application
  • Brand
      – If brand = Sony and battery < 8% then crash low battery
      – If brand = Apple and Appli_GPS open and Appli_Incompatible_GPS open then crash application
  • Model
      – If model = Omnia7 and Appli_Incompatible_Omnia open then crash application
      – If model = GalaxyMini and Appli_GSM open and Appli_WIFI open and Appli_GPS open and battery < 10% then crash low battery
  • Specific
      – If Appli_Incompatible_Telephone open then crash application

20/06/2012 22
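Behavior rules like the ones above can be encoded as predicates over a report dictionary and used to drive the simulation. A minimal sketch, covering only a few of the 11 rules, with simplified hypothetical field names:

```python
# Sketch: a few of the behavior rules encoded as predicates over a report
# dictionary (field names are hypothetical simplifications of the slide's).

RULES = [
    ("crash low battery", lambda r: r["battery"] < 3),
    ("crash memory full", lambda r: r["ram"] > 95),
    ("crash low battery", lambda r: r["brand"] == "Sony" and r["battery"] < 8),
    ("crash application", lambda r: "Appli_Incompatible_Telephone" in r["open_apps"]),
]

def simulate_crash(report):
    """Return the first crash type triggered by the report, if any."""
    for crash_type, condition in RULES:
        if condition(report):
            return crash_type
    return None

report = {"battery": 7, "ram": 40, "brand": "Sony", "open_apps": []}
print(simulate_crash(report))   # the Sony low-battery rule fires
```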

slide-23
SLIDE 23

Experimentations

Non-incremental vs. incremental in a stationary environment

  • 6 steps of incremental learning
  • the input of one step includes:
      – approximately 30,800 new incoming reports
        » 88 different smartphones, each simulated for approximately 5 days
        » 350 reports per smartphone

20/06/2012 23

Non-incremental reference: learning time 60 m; 11,211 positive and 145,757 negative examples (156,968 stored in total); 12 important rules out of 24; precision 100%; recall 99.67%
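As a reminder of how the reported precision (100%) and recall (99.67%) relate to raw counts, here is a minimal sketch; the true/false positive counts below are hypothetical, chosen only to reproduce figures of that order:

```python
# Precision = TP / (TP + FP); Recall = TP / (TP + FN).

def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical counts consistent with precision = 100%, recall ≈ 99.67%:
p, r = precision_recall(tp=299, fp=0, fn=1)
print(f"precision={p:.2%} recall={r:.2%}")
```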

slide-24
SLIDE 24

Empirical results

20/06/2012 24

No  | θ  | ε   | Ki  | Ke  | Time       | Time, last incr. step | Mean positive | Mean negative | Mean stored examples | Important rules | Total rules | Precision | Recall
ref | –  | –   | –   | –   | 60 m       | –                     | 11,211        | 145,757       | 156,968              | 12              | 24          | 100%      | 99.67%
1   | 25 | –   | 250 | 250 | 111 m 20 s | 19 m 38 s             | 1,229.6       | 904.5         | 32,309.4             | 11              | 41          | 56.47%    | 93.22%
2   | 25 | 0.5 | 250 | 250 | 143 m 29 s | 31 m 35 s             | 2,359.4       | 3,859.5       | 35,315.4             | 24              | 62          | 99.67%    | 95.07%
3   | 25 | 1   | 250 | 250 | 131 m 33 s | 27 m 12 s             | 2,341         | 5,616         | 36,527.6             | 25              | 62          | 99.82%    | 96.75%
4   | 25 | 1.5 | 250 | 250 | 115 m 37 s | 18 m 58 s             | 2,297.8       | 5,981.6       | 36,643.6             | 23              | 61          | 99.94%    | 96.47%
5   | 25 | 2   | 250 | 250 | 119 m 10 s | 24 m 22 s             | 2,279.3       | 5,997.3       | 36,643.6             | 24              | 64          | 99.94%    | 96.47%
6   | 25 | 2.5 | 250 | 250 | 122 m 23 s | 29 m 1 s              | 2,384.2       | 6,291.8       | 36,676.8             | 27              | 68          | 99.86%    | 96.56%
7   | 30 | –   | 100 | 100 | 97 m 40 s  | 11 m 19 s             | 880.8         | 499.8         | 31,857.6             | 11              | 42          | 98.77%    | 93.66%
8   | 30 | 0.5 | 100 | 100 | 106 m 52 s | 18 m 27 s             | 1,467.8       | 1,984.8       | 33,256.4             | 17              | 57          | 99.00%    | 96.29%
9   | 30 | 1   | 100 | 100 | 106 m 1 s  | 24 m 8 s              | 1,457.8       | 2,644.1       | 33,809.6             | 19              | 60          | 99.65%    | 95.29%
10  | 30 | 1.5 | 100 | 100 | 104 m 6 s  | 25 m 4 s              | 1,437.3       | 2,895.5       | 33,845.6             | 18              | 68          | 99.71%    | 93.28%
11  | 30 | 2   | 100 | 100 | 101 m 18 s | 24 m 16 s             | 1,437.3       | 2,895.5       | 33,845.6             | 18              | 68          | 99.71%    | 93.28%
12  | 30 | 2.5 | 100 | 100 | 101 m 5 s  | 24 m 12 s             | 1,437.3       | 2,895.3       | 33,845.6             | 18              | 68          | 99.71%    | 93.28%
Avg | –  | –   | –   | –   | –          | 23 m 11 s             | 1,855.3       | 3,753.3       | 34,782.2             | 20              | 57          | –         | –

("ref" is the non-incremental reference run.)

slide-25
SLIDE 25

Empirical results

20/06/2012 25

(Same results table as Slide 24.)

slide-26
SLIDE 26

Empirical results

20/06/2012 26

(Same results table as Slide 24.)

slide-27
SLIDE 27

Empirical results

20/06/2012 27

(Same results table as Slide 24.)

slide-28
SLIDE 28

Empirical results

20/06/2012 28

slide-29
SLIDE 29

The overtraining impact

  • The rules achieved incrementally have good precision and recall, but are not as general as those retrieved non-incrementally, because of the stored border examples

– The non-incremental rule:

  • ( batterie <= 7 ) and ( brand = 'Sony' or brand = 'Nokia' ) and ( numero <= 50000000 )

– The incremental set of 12 rules:

  • ( batterie between 4 and 7 ) and ( brand = 'Sony' ) and ( "app_Appli_GPS" = 'running' )
  • ( batterie <= 7 ) and ( brand = 'Sony' ) and ( memoirephysique >= 677 ) and ( "app_Appli_WIFI" = 'running' )
  • ( batterie <= 7 ) and ( brand = 'Sony' or brand = 'Apple' ) and ( memoirephysique <= 1400 ) and ( "app_Appli_Call" = 'running' )
  • ( batterie between 4 and 7 ) and ( brand = 'Sony' or brand = 'Apple' ) and ( memoirephysique >= 1402 ) and ( "app_Appli_Call" = 'running' )
  • ( batterie <= 7 ) and ( brand = 'Sony' or brand = 'Apple' ) and ( memoirephysique between 1402 and 1872 ) and ( "app_Appli_Call" = 'running' )

20/06/2012 29

slide-30
SLIDE 30

Experimentations in a drifting environment

The fleet of smartphones is replaced each time the concept description is recomputed: we pass from one smartphone model to another at each incremental step. The age forgetting mechanism is used.

  • 8 steps of incremental learning
  • the input of one step includes:
      – approximately 20,000 reports: all the reports generated for one smartphone model
        » Models: GalaxyMini, Galaxy S2, IPhone 4S, Omnia 7, Lumia 900, Lumia 800, Xperia Pro, Xperia Mini
      – each model includes simulations of 11 different smartphones over one month
  • In the experiments, the age parameter of the forgetting mechanism is set to 3

20/06/2012 30
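The age forgetting mechanism with the age parameter set to 3 can be sketched as follows. This is a minimal sketch under an assumed representation (each stored example is tagged with the incremental step at which it was stored; the `<` vs. `<=` cutoff is a guess, since the slide does not specify it):

```python
# Sketch of an age-based forgetting mechanism: examples carry the incremental
# step at which they were stored, and anything older than `max_age` steps is
# dropped before the next learning pass.

def forget_by_age(memory, current_step, max_age=3):
    """Keep only examples stored within the last `max_age` steps."""
    return [(step, ex) for step, ex in memory
            if current_step - step < max_age]

memory = [(1, "a"), (2, "b"), (4, "c")]
print(forget_by_age(memory, current_step=5))
```

Dropping old examples is what lets the learner track a drift: rules supported only by examples from obsolete smartphone models lose their support and disappear.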

slide-31
SLIDE 31

Empirical results

20/06/2012 31

Experiment 1: Lumia 800

Models order: GalaxyMini, Galaxy S2, Xperia Pro, Xperia Mini, IPhone 4S, Omnia 7, Lumia 900, Lumia 800

  • Behavior rules for the last iteration
  • ( batterie < 3 )
  • (memory RAM > 950)
  • (memory ROM > 15000)
  • ( "app_Appli_Incompatible_MicrosoftWindowsPhone" = 'running' )
  • ( "app_Appli_Incompatible_Telephone" = 'running' )
  • Rules achieved
  • ( batterie <= 2 )
  • ( memoirevive >= 922 )
  • ( "app_Appli_Incompatible_MicrosoftWindowsPhone" = 'running' )
  • ( "app_Appli_Incompatible_Telephone" = 'running' )
slide-32
SLIDE 32

Empirical results

20/06/2012 32

Experiment 2: Xperia Mini

Models order: GalaxyMini, Galaxy S2, IPhone 4S, Omnia 7, Lumia 900, Lumia 800, Xperia Pro, Xperia Mini

  • Behavior rules for the last iteration
  • ( batterie < 3 )
  • (memory RAM > 950)
  • (memory ROM > 3000)
  • ( "app_Appli_Incompatible_Android" = 'running' )
  • ( batterie < 8 ) and ( modelu = 'XperiaMini' )
  • ( "app_Appli_Incompatible_Telephone" = 'running' )
  • Rules achieved
  • ( batterie <= 7 ) and ( modelu <> 'Lumia800' )
  • ( memoirevive >= 933 )
  • ( batterie <= 2 )
  • ( memoirevive <= 543 ) and ( "app_Appli_Incompatible_Android" = 'running' )
  • ( version <> 'Android23' ) and ( "app_Appli_Incompatible_MicrosoftWindowsPhone" = 'running' )

  • ( "app_Appli_Incompatible_Telephone" = 'running' )
slide-33
SLIDE 33

Conclusions

The proposed approach

  • uses simulated data
  • limits the memory requirements
  • keeps the learning time almost constant
  • stores band border examples (similar to AQ11-PM)
  • incorporates a forgetting mechanism
  • detects and adapts to concept drifts

20/06/2012 33

slide-34
SLIDE 34

Future work

  • Improve feature selection

» drop irrelevant dimensions

  • Develop a concept memory approach

» filter rules according to some specific parameters
» limit the number of saved rules for each iteration
» merge current rules with previous ones

  • Deal with examples which are masked by prevention rules

20/06/2012 34

slide-35
SLIDE 35

Thank you !

20/06/2012 35
