SociableSense: Exploring the Trade-offs of Adaptive Sampling and Computation Offloading for Social Sensing (PowerPoint PPT Presentation)




SLIDE 1

SociableSense: Exploring the Trade-offs of Adaptive Sampling and Computation Offloading for Social Sensing

Kiran K. Rachuri, Cecilia Mascolo, Mirco Musolesi, Peter J. Rentfrow
Presented by Mateusz Grabowski

The images come from the original presentation (http://www.sigmobile.org/mobicom/2011/slides/58-socialsense-slides.pptx) or they are trademarks or available under a reuse licence.
SLIDE 2

How sociable am I?
How to measure it quantitatively?
How to provide real-time feedback to users about their sociability?
How does information about my sociability affect my performance at work?

Idea

SLIDE 3

Smartphone sensing

SLIDE 4

Smartphone sensing

SLIDE 5

Applications

  • Microphone - Speaker Recognition
  • Bluetooth - Colocation Detection
  • Accelerometer - Activity Recognition

SLIDE 6

Challenges

Frequency of sensor sampling

  • limited energy
  • sufficient accuracy

Where to perform the computation

  • locally on the phone
  • remotely in the cloud
SLIDE 7

Sensor sampling

SLIDE 8

Sensor sampling

[Timeline figure: the sensor alternates between SENSING and SLEEPING periods of varying length]

SLIDE 9

Learning-based Adaptation

ACTION: sampling a sensor
SUCCESS: caught an interesting (unmissable) event
FAILURE: caught missable data

SLIDE 10

Learning-based Adaptation

P = probability of sensing

SUCCESS: P := P + a(1 - P)
FAILURE: P := P - aP

  • 0 < a < 1
  • 0.1 < P < 0.9
SLIDE 11

Learning-based Adaptation

PROB    ACTION    EVENT       RESULT    PROB ADJUSTING
0.5     sleeping  missable    -         0.5
0.5     sensing   unmissable  success   0.5 + 0.5 * (1 - 0.5) = 0.75
0.75    sensing   missable    failure   0.75 - 0.5 * 0.75 = 0.375
0.375   sleeping  missable    -         0.375
0.375   sleeping  unmissable  -         0.375
0.375   sensing   unmissable  success   0.375 + 0.5 * (1 - 0.375) = 0.6875
0.6875  sensing   unmissable  success   0.6875 + 0.5 * (1 - 0.6875) = 0.84375
...     ...       ...         ...       ...

a = 0.5
SUCCESS: P := P + a(1 - P)
FAILURE: P := P - aP
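The trace above can be reproduced with a short Python sketch. The function name is illustrative; the clamp to 0.1 < P < 0.9 follows the bounds given on the previous slide:

```python
def update_sensing_probability(p, success, a=0.5, p_min=0.1, p_max=0.9):
    """Linear reward-penalty update of the sensing probability P.

    success=True  -> P := P + a * (1 - P)
    success=False -> P := P - a * P
    P is kept inside [p_min, p_max] so the sensor is never switched
    fully on or permanently off.
    """
    p = p + a * (1 - p) if success else p - a * p
    return min(max(p, p_min), p_max)

# Replaying the slide's trace (sleeping steps leave P unchanged):
p = 0.5
p = update_sensing_probability(p, success=True)   # 0.75
p = update_sensing_probability(p, success=False)  # 0.375
p = update_sensing_probability(p, success=True)   # 0.6875
p = update_sensing_probability(p, success=True)   # 0.84375
```

Because P stays strictly between the bounds, the sensor always has some chance of sampling, so the scheme can recover if the environment changes.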

SLIDE 12

Computation distribution

SLIDE 13

Computation distribution

Where to perform the computation? Which factors to consider when deciding? How to measure the impact of these factors on the future?

SLIDE 14

Computation distribution

Let's consider:

  • energy consumption
  • latency
  • total data sent over the network

and assume that they are pre-calculated and available beforehand!

SLIDE 15

Computation distribution

Let's divide a task into subtasks: T = [t1, t2, t3, ..., tn]. There are 2^n combinations in which T can be computed. For n = 2:

Configuration  Subtask 1  Subtask 2
C1             locally    locally
C2             locally    remotely
C3             remotely   locally
C4             remotely   remotely
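Enumerating the 2^n local/remote assignments is a one-liner with the standard library (the function name is illustrative):

```python
from itertools import product

def configurations(n):
    # all 2**n local/remote assignments for n subtasks
    return list(product(["locally", "remotely"], repeat=n))

# For n = 2 this reproduces C1..C4 from the table above:
# [('locally', 'locally'), ('locally', 'remotely'),
#  ('remotely', 'locally'), ('remotely', 'remotely')]
```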

SLIDE 16

Utility function

u_C(i) = w_energy * u_e(i) + w_latency * u_l(i) + w_data_over_network * u_d(i)

  • fixed weights: w_energy + w_latency + w_data_over_network = 1
  • utility value for energy (analogously for latency and data):

u_e(i) = (e_min - e(i)) / e(i), where e_min = min {e(i) | i = 1, 2, ..., 2^n}

  • its range is [-1, 0]
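A sketch of how the best configuration could be picked with this utility; the function names and the cost tuples are illustrative, not from the paper:

```python
def utility(x, values):
    # u(i) = (min - x) / x, in [-1, 0]; 0 for the cheapest option
    return (min(values) - x) / x

def best_configuration(costs, weights):
    """costs: one (energy, latency, data) tuple per configuration.
    weights: (w_energy, w_latency, w_data), summing to 1.
    Returns the index of the configuration maximising u_C(i)."""
    energies, latencies, datas = zip(*costs)
    def score(i):
        e, l, d = costs[i]
        return (weights[0] * utility(e, energies)
                + weights[1] * utility(l, latencies)
                + weights[2] * utility(d, datas))
    return max(range(len(costs)), key=score)
```

With equal weights the configuration that is best on more metrics tends to win; shifting weight towards energy can flip the choice to the low-energy configuration, which is exactly what the rule-based weight adaptation on the next slide exploits.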

SLIDE 17

Adaptation of weights

<rules>
  <condition="battery_left > 80 and data_sent < 50MB">
    <weight metric="energy">33.3</weight>
    <weight metric="latency">33.3</weight>
    <weight metric="data">33.3</weight>
  </condition>
  <condition="battery_left < 20">
    <weight metric="energy">60</weight>
    <weight metric="latency">20</weight>
    <weight metric="data">20</weight>
  </condition>
  <condition="data_sent > 50MB">
    <weight metric="energy">20</weight>
    <weight metric="latency">20</weight>
    <weight metric="data">60</weight>
  </condition>
  <condition="default">
    <weight metric="energy">33.3</weight>
    <weight metric="latency">33.3</weight>
    <weight metric="data">33.3</weight>
  </condition>
</rules>

Changing weights in time
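A tiny first-match interpreter equivalent to the rule table; the function name and the returned dict shape are illustrative:

```python
def pick_weights(battery_left, data_sent_mb):
    # First matching rule wins, mirroring the rule order in the XML:
    # a low battery takes priority over high data usage.
    if battery_left < 20:
        return {"energy": 0.6, "latency": 0.2, "data": 0.2}
    if data_sent_mb > 50:
        return {"energy": 0.2, "latency": 0.2, "data": 0.6}
    # covers both the "battery_left > 80 and data_sent < 50MB" rule
    # and the default rule (equal weights in either case)
    return {"energy": 1/3, "latency": 1/3, "data": 1/3}
```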

SLIDE 18

Evaluation

  • Microbenchmarks:

    ○ Sensor Sampling Benchmarks
    ○ Computation Distribution Benchmarks

  • Social psychology study
SLIDE 19

Experimental datasets

  • 231 hours of raw accelerometer sensor data (i.e., X, Y, Z coordinates)
  • 241 hours of Bluetooth data (i.e., Bluetooth identifiers)
  • 151 hours of microphone data (i.e., audio recordings)
  • 10 users carrying a Samsung Galaxy S or Nokia 6210 Navigator phone

SLIDE 20

Categorization of Events

Sensor         Unmissable event                          Missable event
Microphone     audible data                              silence
Bluetooth      change in the number of colocated users   no change
Accelerometer  movement of a user                        user is stationary

SLIDE 21

Sensor sampling: Performance Metrics

Accuracy: the percentage of missed events
Energy consumption: measured using the Nokia Energy Profiler
Latency: the delay in detecting a change of the event sequence from missable to unmissable and vice versa

For all three metrics, the lower the better.

SLIDE 22

Variation of the alpha parameter

selected "a" parameter: 0.5

SLIDE 23

Techniques used for the comparison

Type         Back-off function  Advance function
Linear       k * x              x / k
Quadratic    x^2                sqrt(x)
Exponential  e^x                ln(x)

+ dynamic adaptation technique

x - sleep time
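The function pairs above can be sketched as follows. The assumption (not stated on the slide) is that the system backs off, i.e. sleeps longer, after missable events and advances, i.e. sleeps less, after unmissable ones; all names are illustrative:

```python
import math

BACKOFF = {
    "linear":      lambda x, k: k * x,
    "quadratic":   lambda x, k: x ** 2,
    "exponential": lambda x, k: math.exp(x),
}
ADVANCE = {
    "linear":      lambda x, k: x / k,
    "quadratic":   lambda x, k: math.sqrt(x),
    "exponential": lambda x, k: math.log(x),
}

def next_sleep_time(x, event_missable, kind="linear", k=2.0):
    # Back off (sleep longer) after a missable event; advance
    # (sleep less) after an unmissable one.
    f = BACKOFF[kind] if event_missable else ADVANCE[kind]
    return f(x, k)
```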

SLIDE 24

Combinations of advance and back-off functions

SLIDE 25

Computation Distribution: Performance Metrics

Lifetime of the phone: the total time until the battery is completely discharged from a fully charged state
Average data sent over the network: the average number of bytes sent by the system over the 3G network to process a sensor task
Average latency: the average time taken to process a sensor task

SLIDE 26

Tasks used in the benchmarks

  • activity recognition (1 subtask)
  • colocation detection (1 subtask)
  • speaker identification (2 subtasks)

    ○ converting the recorded audio sample to a Perceptual Linear Predictive (PLP) coefficients file
    ○ comparing the extracted coefficients file with the speaker models of all users

SLIDE 27

Performance of the configurations

Conf  AR  CD  SI1  SI2    Conf  AR  CD  SI1  SI2
C1    L   L   L    L      C9    R   L   L    L
C2    L   L   L    R      C10   R   L   L    R
C3    L   L   R    L      C11   R   L   R    L
C4    L   L   R    R      C12   R   L   R    R
C5    L   R   L    L      C13   R   R   L    L
C6    L   R   L    R      C14   R   R   L    R
C7    L   R   R    L      C15   R   R   R    L
C8    L   R   R    R      C16   R   R   R    R

(L = locally, R = remotely)

Weights set  w_energy  w_latency  w_data_over_network
S1           1         0          0
S2           0         1          0
S3           0         0          1
S4           0.33      0.33       0.33

SLIDE 28

Sociability measurements

SLIDE 29

Sociability measurements

Network constraint: p_ij is the proportion of time i spent with j.
Two dimensions:

  • colocation
  • interaction

SLIDE 30

Network constraint - examples

[Figure: a triangle network of nodes A, B, C with tie proportions 0.5, 0.5, 1.0, 1.0]

A: (0.5)^2 + (0.5)^2 = 0.5
B: (1.0)^2 + (1.0 * 0.5)^2 = 1.25
C: (1.0)^2 + (1.0 * 0.5)^2 = 1.25

[Figure: a five-node network A-E with tie proportions between 0.2 and 1.0]

C: 0.69, A: 0.83, B: 0.83, D: 1.13, E: 1.32
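The worked numbers on this slide are consistent with Burt's network constraint, c_i = Σ_j (p_ij + Σ_q p_iq * p_qj)^2. The deck does not spell the formula out, so the sketch below is an assumption based on those numbers, and the names are illustrative:

```python
def network_constraint(p, i):
    """Burt's constraint for node i.

    p is a dict of dicts: p[i][j] is the proportion of time i spent
    with j. c_i = sum over j != i of (p_ij + sum_q p_iq * p_qj)^2,
    counting only nodes j that i reaches directly or via a shared
    contact q.
    """
    total = 0.0
    for j in p:
        if j == i:
            continue
        direct = p[i].get(j, 0.0)
        indirect = sum(p[i].get(q, 0.0) * p[q].get(j, 0.0)
                       for q in p if q not in (i, j))
        if direct or indirect:
            total += (direct + indirect) ** 2
    return total

# The triangle example from the slide: A splits its time between
# B and C, while B and C each spend all their time with A.
p = {"A": {"B": 0.5, "C": 0.5},
     "B": {"A": 1.0},
     "C": {"A": 1.0}}
```

A low constraint (A here) means a more diverse, less redundant network, which is why the deck reads lower values as higher sociability capital.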

SLIDE 31

Social Psychology Study

  • two working weeks with 10 users
  • 2 phases:
    1. SociableSense system disabled
    2. feedback mechanism enabled, showing users: sociability, strength of their relations, activity levels, and alerts about the users in sociable locations

SLIDE 32

EmotionSense App

SLIDE 33

Results and Discussion