Programming Sensor Networks: A Tale of Two Perspectives, by Ramesh Govindan - PowerPoint PPT Presentation



SLIDE 1

Programming Sensor Networks: A Tale of Two Perspectives

Ramesh Govindan

ramesh@usc.edu
Embedded Networks Laboratory
http://enl.usc.edu

SLIDE 2

Wireless Sensing: Applications

Lots of applications

SLIDE 3

Wireless Sensing: Platforms

Lots of platforms

Motes: 8- or 16-bit sensor devices
32-bit embedded single-board computers

SLIDE 4

Wireless Sensing Research

Processor Platforms, Radios, Sensors, Operating Systems, Localization, Time Synchronization, Medium Access, Calibration, Collaborative Signal Processing, Data-centric Routing, Data-centric Storage, Querying, Triggering, Aggregation and Compression, Collaborative Event Processing, Monitoring, Security, Programming Systems

Lots of research!

SLIDE 5

… some of it from our Lab

Architecture

[Figure: Kairos architecture. A centralized, annotated program is processed by the Kairos preprocessor + language compiler into a localized binary, which is linked and distributed to the Kairos runtime on each sensor node across the multi-hop wireless network. Each node's runtime hosts a thread with control, sync, and read/write paths over cached and managed objects, plus a queue manager exchanging requests and replies.]

Macro-programming, Structural Health Monitoring, Routing and Data Dissemination, Data-Centric Storage, Measurements and Testbeds

SLIDE 6

But, there is a problem!

Six of the 158 pages of code from a wireless structural data acquisition system called Wisden

Programming these networks is hard!

SLIDE 7

Three Responses

Event-based programming on an OS that supports no isolation, preemption, memory management, or a network stack is hard.

Therefore, we need OSes that support preemption and memory management, we need virtual machines, we need higher-level communication abstractions.

OS/Middleware

SLIDE 8

Three Responses

Tiny sensor nodes (motes) are resource-constrained, and we cannot possibly be re-programming them for every application. Therefore, we need a network architecture that constrains what you can and cannot do on the motes.

Networking

SLIDE 9

Three Responses

Today, we’re programming sensor networks in the equivalent of assembly language. What we need is a macroprogramming system, where you program the network as a whole, and hide all the complexity in the compiler and the runtime

Programming Languages

SLIDE 10

Three Responses

Programming Languages, OS/Middleware, Networking: The Tenet Architecture and the Pleiades Macroprogramming System

SLIDE 11

The Tenet Architecture

Omprakash Gnawali, Ben Greenstein, Ki-Young Jang, August Joki, Jeongyeup Paek, Marcos Vieira, Deborah Estrin, Ramesh Govindan, Eddie Kohler, The TENET Architecture for Tiered Sensor Networks, In Proceedings of the ACM Conference on Embedded Networked Sensor Systems (Sensys), November 2006.

SLIDE 12

The Problem

Sensor data fusion within the network

… can result in energy-efficient implementations

But implementing collaborative fusion on the motes for each application separately

… can result in fragile systems that are hard to program, debug, re-configure, and manage

We learnt this the hard way, through many trial deployments

SLIDE 13

An Aggressive Position

Why not design systems without sensor data fusion on the motes? A more aggressive position: Why not design an architecture that prohibits collaborative data fusion on the motes? Questions:

How do we design this architecture? Will such an architecture perform well?

No more on-mote collaborative fusion

SLIDE 14

Tiered Sensor Networks

Motes

Low-power, short-range radios
Contain sensing and actuation

Masters

32-bit CPUs (e.g. PC, Stargate)
Higher-bandwidth radios
Larger batteries or powered

Enable flexible deployment of dense instrumentation
Provide greater network capacity, larger spatial reach

Many real-world sensor network deployments are tiered. Real-world deployments at:

Great Duck Island (UCB, [Szewczyk, '04]), James Reserve (UCLA, [Guy, '06]), ExScal project (OSU, [Arora, '05]), …

Future large-scale sensor network deployments will be tiered

SLIDE 15

Tenet Principle

Multi-node data fusion functionality and multi-node application logic should be implemented only in the master tier. The cost and complexity of implementing this functionality in a fully distributed fashion on motes outweighs the performance benefits of doing so.

Aggressively use tiering to simplify the system!

SLIDE 16

Tenet Architecture

Motes process data and may return responses; no multi-node fusion at the mote tier. Masters control motes. Applications run on masters, and masters task motes.
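To make this division of labor concrete, here is a minimal Python sketch of the mote/master split. The helper names (`mote_run`, `master_fuse`) and the task semantics are invented for illustration; they are not Tenet APIs.

```python
# Sketch of the Tenet split: motes do only node-local processing and may
# return responses; masters do all multi-node fusion. The helper names
# (mote_run, master_fuse) are invented for illustration, not Tenet APIs.

def mote_run(task, samples):
    """A mote applies a node-local task to its own samples; it never
    fuses data from other motes."""
    return [r for s in samples if (r := task(s)) is not None]

def master_fuse(responses_by_mote):
    """All multi-node logic lives in the master tier, e.g. a
    network-wide maximum over per-mote responses."""
    return max(v for resp in responses_by_mote.values() for v in resp)

# A simple node-local task: report readings above a threshold.
task = lambda s: s if s > 50 else None

motes = {"mote1": [10, 60, 55], "mote2": [70, 20], "mote3": [30]}
responses = {m: mote_run(task, s) for m, s in motes.items()}
print(master_fuse(responses))  # 70
```

Note how nothing in `mote_run` ever looks at another mote's data; the cross-mote computation happens only at the master.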

SLIDE 17

What do we gain ?

Simplifies application development: application writers do not need to write or debug embedded code for the motes

– Applications run on less-constrained masters

SLIDE 18

What do we gain ?

Enables significant code re-use across applications: a simple, generic, and re-usable mote tier

– Multiple applications can run concurrently with simplified mote functionality

Robust and scalable network subsystem

– Networking functionality is generic enough to support various types of applications

SLIDE 19

Challenges

Fusion: more bits communicated than necessary? Communication over longer hops?

Not an issue. In most deployments there is significant temporal correlation, but little spatial correlation. Mote-local processing can achieve significant compression, so there is little additional gain from mote-tier fusion; mote-local processing provides most of the aggregation benefits. Typically the diameter of the mote tier will be small, and more aggressive processing at the motes can compensate. The costs will be small, as we shall see…
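The temporal-correlation argument can be illustrated with a toy mote-local compressor. Delta encoding here is a stand-in chosen for illustration, not Tenet's actual compression scheme.

```python
# Illustrative mote-local compression that exploits temporal correlation:
# consecutive readings change slowly, so storing deltas keeps numbers
# small. Delta encoding is a stand-in here, not Tenet's actual scheme.

def delta_encode(readings):
    """Keep the first reading, then successive differences."""
    if not readings:
        return []
    return [readings[0]] + [b - a for a, b in zip(readings, readings[1:])]

def delta_decode(deltas):
    """Invert delta_encode with a running sum."""
    out, total = [], 0
    for d in deltas:
        total += d
        out.append(total)
    return out

readings = [100, 101, 101, 102, 104, 103]    # temporally correlated
deltas = delta_encode(readings)
assert delta_decode(deltas) == readings       # lossless round-trip
print(deltas)  # [100, 1, 0, 1, 2, -1]
```

The small deltas are cheap to transmit, which is why a single mote can recover most of the compression benefit without fusing data from its neighbors.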

SLIDE 20

System Overview

Tenet System:

Tasking Subsystem (Tasking Language, Task Parser, Tasklets and Runtime): how to express tasks?
Networking Subsystem (Reliable Transport, Routing, Task Dissemination): how to disseminate tasks and deliver responses?

SLIDE 21

Tasking Language

Linear data-flow language allowing flexible composition of tasklets

A tasklet specifies an elementary sensing, actuation, or data processing action. Tasklets can have several parameters, hence flexible. Tasklets can be composed to form a task:

  Sample(500ms, REPEAT, ADC0, LIGHT) -> Send()

No loops, branches: eases construction and analysis

Not Turing-complete: aggressively simple, but supports wide range of applications

Data-flow style language natural for sensor data processing
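A minimal sketch of this linear composition model in Python. The tasklet behaviors below are assumptions made for illustration, not Tenet's actual tasklet semantics.

```python
# Illustrative sketch of Tenet's linear data-flow tasking style: a task
# is a chain of tasklets applied left to right to a packet, with no
# loops or branches. All tasklet behaviors below are assumptions made
# for illustration, not Tenet's actual tasklet semantics.

def run_task(tasklets, packet):
    """Run a packet through a tasklet chain; a tasklet returning None
    deletes the packet (mimicking DeleteIf-style tasklets)."""
    for tasklet in tasklets:
        packet = tasklet(packet)
        if packet is None:
            return None
    return packet

def sample(packet):               # Sample(...): attach a sensor reading
    return {**packet, "value": 42}

def stamp_time(packet):           # StampTime: attach a timestamp
    return {**packet, "time": 1000}

def delete_if_below(threshold):   # LT(t) -> DeleteIf, folded into one step
    return lambda p: p if p["value"] >= threshold else None

def send(packet):                 # Send: deliver the packet to the master
    return packet

# Analogue of: Sample -> LT(50) -> DeleteIf -> StampTime -> Send
task = [sample, delete_if_below(50), stamp_time, send]
print(run_task(task, {"node": 7}))  # None: the reading 42 is below 50
```

Because a task is just a straight-line chain, the parser and runtime never need to reason about loops or branches, which is what makes mote-side analysis simple.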

SLIDE 22

Task Composition

CntToLedsAndRfm: Wait -> Count -> Lights -> Send
SenseToRfm: Sample -> Send
With time-stamp and seq. number: Sample -> Count -> StampTime -> Send
Get memory status for node 10: MemStats -> Address -> NEQ(10) -> DeleteIf -> Send
If sample value is above 50, send sample data, node-id and time-stamp: Sample -> LT(50) -> DeleteATaskIf -> Address -> StampTime -> Send

SLIDE 23

The Tenet Stack

SLIDE 24

Application Case Study: PEG

Goal

Compare performance with an implementation that performs in-mote multi-node fusion

Pursuit-Evasion Game

Pursuers (robots) collectively determine the location of evaders, and try to corral them

SLIDE 25

Mote-PEG vs. Tenet-PEG

[Figure: Mote-PEG: motes detect the evader, run leader election among themselves, and the elected leader reports the evader's position to the pursuer. Tenet-PEG: the pursuer (a master) tasks the motes, receives "evader detected" responses directly, and re-tasks the motes as the evader moves.]

SLIDE 26

PEG Results

[Figure: left, CDF of error in position estimate: fraction of reports (0.1-0.5) vs. positional error (1-6) for Mote-PEG and Tenet-PEG; right, reporting message overhead: min/avg/max msg/min (50-400) for both.]

Comparable positional estimate error. Comparable reporting message overhead.

SLIDE 27

PEG Results

Latency is nearly identical

A Tenet implementation of an application can perform as well as an implementation with in-mote collaborative fusion

SLIDE 28

Real-world Tenet deployment on Vincent Thomas Bridge

[Figure: bridge schematic, oriented N/W, with mote and master placements; spans of 570 ft, 120 ft, and 30 ft.]

Ran successfully for 24 hours. 100% reliable data delivery. Deployment time: 2.5 hours. Total sensor data received: 860 MB.

SLIDE 29

Interesting Observations

Fundamental mode agrees with previously published measurements. Consistent modes across sensors. One faulty sensor!

SLIDE 30

Summary

Applications

Simplifies application development

Networking Subsystem: robust and scalable network. Tasking Subsystem: re-usable generic mote tier.

Simple, generic and re-usable system

SLIDE 31

Software Available

Master tier

Cygwin, Linux (Fedora Core 3), Stargate, MacOS X (Tiger)

Mote tier

Tmote Sky, MicaZ, Maxfor, Mica2, Imote-2 (in progress)

http://tenet.usc.edu

SLIDE 32

The Pleiades Macroprogramming System

Nupur Kothari, Ramakrishna Gummadi, Todd Millstein, Ramesh Govindan, Reliable and Efficient Programming Abstractions for Wireless Sensor Networks, Proceedings of the SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2007.

SLIDE 33

What is Macroprogramming?

Conventional sensornet programming: a node-local program written in nesC, compiled to a mote binary

SLIDE 34

What is Macroprogramming?

A central program specifies application behavior. The compiler + runtime translate it into node-local programs written in nesC, compiled to mote binaries.

Simplifies programming by offloading concurrency, reliability, and energy efficiency to the compiler and runtime

SLIDE 35

Change of Perspective

```
int val LOCAL;

void main() {
  node_list all = get_available_nodes();
  int max = 0;
  for (int i = 0, node n = get_node(all, i); n != -1;
       n = get_node(all, ++i)) {
    if (val@n > max)
      max = val@n;
  }
}
```

Easily recognizable maximum computation loop

SLIDE 36

Pleiades: Contributions

The Pleiades programming language

Centralized as opposed to node-level

Automatic program partitioning and control-flow migration

Minimizes energy

Easy-to-use and reliable concurrency primitive

Ensures consistency under concurrent execution

Mote-based implementation

Evaluated several realistic applications

SLIDE 37

Pleiades Constructs

val LOCAL: a node-local variable (vs. central variables). nodelist: a list of nodes in the network. node: a network node. val@n: access the node-local variable val at node n. cfor: a concurrent-for loop; a cfor execution corresponds to some sequential execution of the loop's iterations (serializability).

```
int val LOCAL;

void main() {
  nodelist all = get_available_nodes();
  int max = 0;
  cfor (int i = 0, node n = get_node(all, i); n != -1;
        n = get_node(all, ++i)) {
    if (val@n > max)
      max = val@n;
  }
}
```
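As a rough model of these constructs, the following Python sketch mimics val@n reads and the concurrent max computation. The node table and accessor names are invented for illustration; they are not part of the Pleiades runtime.

```python
# Rough model of the Pleiades constructs: val_at(n) stands in for val@n,
# and the loop plays the role of the cfor over all nodes. The node table
# and accessor names are invented for illustration.

network = {1: 17, 2: 42, 3: 8}        # node id -> that node's local val

def get_available_nodes():
    return list(network)

def val_at(n):                         # analogue of Pleiades' val@n
    return network[n]

max_val = 0
for n in get_available_nodes():        # cfor: iterations may run
    if val_at(n) > max_val:            # concurrently, but must appear to
        max_val = val_at(n)            # run in some sequential order
print(max_val)  # 42
```

The point of the centralized view is exactly this: the programmer writes the loop as if all node-local state were directly addressable, and the compiler/runtime decide where each access actually executes.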

SLIDE 38

Pleiades: Main Challenges

The Pleiades Compiler and Runtime. Partitioning: how to efficiently partition code and migrate control-flow during program execution. Concurrency: how to achieve serializability.

SLIDE 39

Program Execution

Control-flow migration as well as data movement: migrating control-flow lets each part of the program access node-local variables from nearby nodes.

Sequential program:

```
void main() {
  ...
  val@n1 = a;
  n3 = val@n2;
  val@n3 = b;
  val@n4 = c;
  ...
}
```

These statements are partitioned into nodecuts spanning nodes n1-n4. How does the compiler partition code into nodecuts? How does the runtime know where to execute each nodecut? The compiler uses the property that the location of variables within a nodecut is known before its execution; the runtime attempts to find the lowest-communication node, based on the location of variables in the nodecut and topology information.
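A toy version of that placement heuristic: pick, for each nodecut, the node minimizing total communication given variable locations and hop counts. The hop table and cost model are assumptions for illustration, not Pleiades' actual algorithm.

```python
# Toy version of nodecut placement: given the nodes whose variables a
# nodecut touches and pairwise hop counts, execute the cut at the node
# with lowest total communication. Hop table and cost model are assumed.

hops = {("n1", "n2"): 1, ("n1", "n3"): 2, ("n1", "n4"): 3,
        ("n2", "n3"): 1, ("n2", "n4"): 2, ("n3", "n4"): 1}

def dist(a, b):
    """Symmetric hop count between two nodes."""
    if a == b:
        return 0
    return hops.get((a, b)) or hops.get((b, a))

def place_nodecut(var_nodes):
    """Pick the node minimizing total hops to every variable the
    nodecut reads or writes (their locations are known in advance)."""
    return min(set(var_nodes), key=lambda c: sum(dist(c, v) for v in var_nodes))

# A nodecut touching val@n1, val@n2, val@n3:
print(place_nodecut(["n1", "n2", "n3"]))  # n2 (total 2 hops)
```

This only works because variable locations within a nodecut are known before the cut runs, which is the property the slide calls out.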

SLIDE 40

Cfor Execution

[Flow graph, shown twice on the slide: start -> all = get_available_nodes() -> max = 0 -> n = get_first(all) -> if (n != NULL) -> if (temp@n > max) -> max = temp@n -> n = get_next(all) -> end.]

A nodecut encountering a cfor forks a thread for each iteration. Challenge: to ensure serializability during concurrent execution. Approach: distributed locking, with multiple-reader/single-writer locks. On completion, cfor iterations send a DONE message to the originating node.
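A single-machine sketch of cfor's serializability guarantee, using one Python lock in place of Pleiades' distributed multiple-reader/single-writer locks; the setup is illustrative, not the actual runtime.

```python
# Single-machine sketch of cfor's guarantee: each iteration runs in its
# own thread, and a lock makes the read-test-write of the shared max
# atomic, so the outcome matches some sequential iteration order. A
# stand-in for Pleiades' distributed multiple-reader/single-writer locks.
import threading

values = {1: 17, 2: 42, 3: 8, 4: 42}   # node id -> temp@n
state = {"max": 0}
lock = threading.Lock()

def cfor_iteration(n):
    with lock:                          # serialize the whole update
        if values[n] > state["max"]:
            state["max"] = values[n]

threads = [threading.Thread(target=cfor_iteration, args=(n,)) for n in values]
for t in threads:
    t.start()
for t in threads:
    t.join()                            # analogue of cfor's DONE messages
print(state["max"])  # 42
```

Without the lock, two iterations could both read the stale max and one update could be lost; holding the lock across the read-test-write is what makes the result equivalent to some sequential ordering.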

SLIDE 41

Implementation and Evaluation

Compiler built as an extension to the CIL infrastructure for C analysis and transformation. The Pleiades compiler generates nesC code. Pleiades was evaluated on TelosB motes. Experience with several applications: pursuit-evasion, car parking, etc.

SLIDE 42

Pursuit-Evasion in Pleiades

Pleiades-PEG vs. nesC-PEG

[Figure: normalized comparison (0.5-2.5) of lines of code, avg. error, latency, and message overhead for Pleiades-PEG vs. nesC-PEG.]

SLIDE 43

Summary

The Pleiades Compiler

Partitioning: automated nodecut generation and dynamic control-flow migration. Concurrency: programmer-directed concurrency and compiler-generated locking.

SLIDE 44

Which is Better?

Programming Languages vs. Networking: The Tenet Architecture and the Pleiades Macroprogramming System

SLIDE 45

Head-to-Head

Expressivity: Tenet low, by design; Pleiades high.
Cuteness: Tenet low (some interesting protocol design questions, but the focus is on simplicity); Pleiades high (lots of interesting compiler optimization questions, consistency models).
Time-to-develop: ~3 student years each.
Papers: Tenet 2; Pleiades 3, with potential for more.

SLIDE 46

Head-to-Head

Missing components: Tenet lacks sleep scheduling and security; Pleiades lacks any-to-any routing, energy management, and robustness.
Maturity: Tenet has seen two deployments and has external users; Pleiades code still needs much handholding.
What I believe in: Tenet. What I like: Pleiades.

SLIDE 47

http://enl.usc.edu