Programming tiny devices


SLIDE 1

Software

SLIDE 2

Programming tiny devices

You would like to have:

- structured programs with multiple concurrent threads
- good responsiveness to events (some applications pose RT requirements)
- readable, reusable code

SLIDE 3

Programming tiny devices

So what is the problem? Practically, the only problem is multithreading, i.e., keeping track of multiple concurrent activities. This is because such activities normally require a considerable amount of RAM for their stacks.
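To make the RAM pressure concrete, here is a back-of-the-envelope sketch in C. The device sizes (2 KB of RAM, 512 B of globals, 256 B worst-case stack per thread) are invented for illustration and are not taken from the slides.

```c
#include <assert.h>

/* Hypothetical numbers, for illustration only: a device with 2 KB of
   RAM and 512 B of global data, where every conventional thread must
   reserve its worst-case stack (256 B here) up front. */
enum { TOTAL_RAM = 2048, GLOBAL_DATA = 512, MAX_STACK_PER_THREAD = 256 };

/* How many preemptible threads fit once each one is given its full
   worst-case stack, whether it ever uses that much or not? */
int max_threads (void) {
    return (TOTAL_RAM - GLOBAL_DATA) / MAX_STACK_PER_THREAD;
}
```

With these (generous) numbers only six threads fit, and most of the reserved stack space sits idle most of the time, which is exactly the waste the following slides point at.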

SLIDE 4

Thread = code + data + stack (x N threads)

- code: constant, lives in flash, shared; not a problem
- data: global, shared; its size is determined by the application, not by the program structure
- stack: per thread, implied by the program structure, needed as a save area for context switches; wasted from the viewpoint of the application

SLIDE 5

Allocating stack to threads …

… results in nasty memory fragmentation:

- threads can be preempted every which way
- you never know when a thread will be preempted, so you have to allocate to it the maximum amount of stack it may ever need

SLIDE 6

Note that interrupts fare better

… because they use the stack in strict LIFO order: an interrupt that arrives later will not receive control until the earlier one is gone …

… so they can all reasonably share the same stack.

SLIDE 7

TinyOS

This is one popular operating system for devices with very little RAM. Here is its approach to multithreading:

- Well, there isn't any: what they call "tasks" are chunks of code executed back to back (not co-existing)
- All the truly fine-grained parts of the program are implemented as interrupt service routines
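The run-to-completion "task" idea can be illustrated with a minimal sketch in plain C (this is my own illustration, not TinyOS source): tasks are function pointers executed back to back from a queue, so they can all share one stack because no task ever preempts another.

```c
#include <assert.h>

/* A sketch of run-to-completion tasks: function pointers dispatched
   back to back from a small ring buffer. */
typedef void (*task_t) (void);

#define QLEN 8
static task_t queue [QLEN];
static int qhead, qtail;

/* Enqueue a task; returns 0 if the queue is full. */
int post (task_t t) {
    int next = (qtail + 1) % QLEN;
    if (next == qhead)
        return 0;
    queue [qtail] = t;
    qtail = next;
    return 1;
}

/* The whole "scheduler": run queued tasks one after another.
   Each task runs to completion before the next one starts. */
void run_all (void) {
    while (qhead != qtail) {
        task_t t = queue [qhead];
        qhead = (qhead + 1) % QLEN;
        t ();
    }
}

static int counter;
static void step (void) { counter++; }
```

The price of this simplicity is the point made on the next slides: anything that must interleave finely with other activities cannot be a task and ends up in interrupt service routines.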

SLIDE 8

In our OS, dubbed PicOS, …

… there are threads, but their preemption mechanism is restrictive: a thread can only lose the CPU at a moment when it has given up the stack.

SLIDE 9

In TinyOS:

- A lot of finesse ("multithreading") causes stack containment problems
- Too little finesse may make the program respond poorly to events
- This is because the interrupt stack is the critical place for implementing all concurrency in the system

The moral: the stack provides a poor way of saving thread context for preemption.

SLIDE 10

Reactive thread model in PicOS:

A thread is a finite state machine (FSM): its code is a collection of states, and in each state the thread can

- wait for events
- wake up
- declare events
- do things

[Diagram: thread = FSM, a graph of states with transitions between them]
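The FSM idea above can be sketched in plain C (a minimal illustration of the model, not PicOS source): the thread body is a function the scheduler calls with the current state; it does a bounded amount of work, returns the next state instead of blocking, and therefore needs no private stack between activations.

```c
#include <assert.h>

/* States of one illustrative thread. */
enum { ST_WAIT, ST_WORK, ST_DONE };

static int work_left = 2;   /* pretend there are two work items */

/* The thread body: runs one state, then RETURNS the next state to the
   scheduler rather than blocking on its own stack. */
int fsm_thread (int state) {
    switch (state) {
    case ST_WAIT:
        return ST_WORK;      /* an event arrived: wake up */
    case ST_WORK:
        if (--work_left > 0)
            return ST_WORK;  /* re-enter the same state later */
        return ST_DONE;
    default:
        return ST_DONE;
    }
}

/* A trivial scheduler loop: dispatch until the thread finishes.
   Returns the number of activations, i.e., state entries. */
int run_to_completion (void) {
    int s = ST_WAIT, steps = 0;
    while (s != ST_DONE) {
        s = fsm_thread (s);
        steps++;
    }
    return steps;
}
```

Because the thread's entire context between activations is "which state comes next" plus its global data, no per-thread stack needs to be preserved across a context switch.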

SLIDE 11

PicOS thread (strand) example:

strand (sniffer, sess_t)

  char c;

  entry (RC_TRY)

    data->packet = tcv_rnp (RC_TRY, efd);
    data->length = tcv_left (data->packet);

  entry (RC_PASS)

    if (data->user != US_READY) {
      wait (&data->user, RC_PASS);
      delay (1000, RC_LOCKED);
      release;
    }
    c = 1;
    ...

  entry (RC_ENP)

    tcv_endp (data->packet);
    signal (&data->packet);
    proceed (RC_TRY);

endstrand

SLIDE 12

Two types of threads:

thread (name)
  a thread has no private (differentiating) data; usually, a thread exists in a single copy at a time

strand (name, data_pointer_type)
  a strand has a private data pointer (representing its specific data structure)
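The distinction can be sketched in plain C (hypothetical names, not the PicOS macros): a strand body receives its own data block, so several instances can co-exist, while a thread body touches only shared globals and exists in one copy.

```c
#include <assert.h>

/* Per-strand private data, as in: strand (name, sess_t) */
typedef struct {
    int id;
    int count;
} sess_t;

/* Strand-style body: operates on the private block it was given,
   so each runstrand()-style instance keeps its own state. */
void strand_body (sess_t *data) {
    data->count++;
}

/* Thread-style body: no private data, only shared globals;
   one copy of the state for the whole program. */
static int global_count;
void thread_body (void) {
    global_count++;
}
```

Two strand instances advance independently while every activation of the thread touches the same global state.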

SLIDE 13

Co-existing threads:

[Diagram: two threads interleaved state by state, thread A (states A0 through A4) alternating with thread B (states B0 through B3)]

Hitting the end of a state amounts to a temporary "return" from the thread's code.

SLIDE 14

IPC

thread (coordinator)

  entry (MN_START)

    mon = runthread (monitor);
    for (int i = 0; i < NP; i++) {
      runstrand (consumer, bufptr [i]);
      runstrand (producer, bufptr [i]);
    }

  entry (MN_WAIT)

    if (!running (consumer) || !running (producer)) {
      killall (producer);
      killall (consumer);
      ptrigger (mon, 0);
      finish;
    }
    joinall (consumer, MN_WAIT);
    joinall (producer, MN_WAIT);

endthread

SLIDE 15

Flowcharts

thread (hrate)

  address packet;

  entry (HR_INIT)

    delay (3 * 1024, HR_SEND);
    release;

  entry (HR_SEND)

    if ((HeartRate = hrc_get ()) > 255)
      HeartRate = 255;
    if (XWS == 0) { // Don't do this if a sample is being sent
      packet = tcv_wnp (HR_SEND, BSFD, 2);
      put1 (packet, PT_HRATE);
      put1 (packet, HeartRate);
      tcv_endp (packet);
    }
    proceed (HR_INIT);

endthread

[Flowchart: INIT: wait 3 sec, then SEND: calculate HR; if XWS is set, skip sending; otherwise get a xmt buffer and send HR; loop back to INIT]

SLIDE 16

Layer-less I/O programming

[Diagram: VNETI sits between the application (the API side) and the devices (the PHY side), with plug-ins (NULL plug-in, TARP plug-in, …) in between; the set of plug-ins and devices is open-ended]
SLIDE 17

Plugin structure

typedef struct {
  int (*tcv_ope) (int, int, ...);                       // how to open a session
  int (*tcv_clo) (int, int);                            // how to close a session
  int (*tcv_rcv) (int, address, int, int*, tcvadp_t*);  // preprocessing upon reception
  int (*tcv_frm) (address, int, tcvadp_t*);             // application packet boundary
  int (*tcv_out) (address);                             // preprocessing for output
  int (*tcv_xmt) (address);                             // after packet transmission
  int (*tcv_tmt) (address);                             // on timeout
  int tcv_info;
} tcvplug_t;
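A struct of function pointers like the one above is a plug-in registry in miniature. Here is a simplified, self-contained sketch in C (the names register_plugin, dispatch and the two-field plug_t are my own illustration, not the VNETI API) showing how a plug-in installed through such a table gets invoked indirectly.

```c
#include <assert.h>
#include <stddef.h>

/* A cut-down plug-in descriptor: just two operations for illustration. */
typedef struct {
    int (*open_session) (int phy);
    int (*on_receive) (const unsigned char *pkt, int len);
} plug_t;

/* A trivial "null" plug-in: accepts any PHY and any non-empty packet. */
static int null_open (int phy) { return phy; }
static int null_rcv (const unsigned char *pkt, int len) {
    (void) pkt;                 /* contents ignored by this plug-in */
    return len > 0;
}
static const plug_t null_plugin = { null_open, null_rcv };

/* One registry slot; a real system would keep a table of these. */
static const plug_t *registered;

void register_plugin (const plug_t *p) { registered = p; }

/* Dispatch an incoming packet through whatever plug-in is installed;
   returns -1 if no plug-in has been registered. */
int dispatch (const unsigned char *pkt, int len) {
    if (registered == NULL)
        return -1;
    return registered->on_receive (pkt, len);
}
```

The core never needs to know which plug-in is present: it only calls through the table, which is what makes the set of plug-ins open-ended.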
SLIDE 18

Praxis view

Open a session, specifying the PHY and the Plug-in:

SFD = tcv_open (RS_RETRY, 1, 2);

Use the session ID for receiving and sending packets:

packet = tcv_rnp (RS_RDP, SFD);
...
tcv_endp (packet);

packet = tcv_wnp (RS_WRP, SFD, length);
...
tcv_endp (packet);

SLIDE 19

VNETI

- Implements a tagged pool of packet buffers
- Packets can be queued at sessions (for reception by the praxis)
- Packets can be queued at PHYs for dispatching to the device
- Plug-in functions can manipulate those packets by returning disposition codes
- They can also modify packet contents and determine header boundaries

SLIDE 20

Plug-ins tell VNETI what to do with their packets

int tcv_out_oep (address pbuff) {
  ...
  return TCV_DSP_XMT;
  ...
  return TCV_DSP_DROP;
  ...
}

int tcv_rcv_oep (int phy, address rawp, int len, int *ses, tcvadp_t *bounds) {
  if (not_ours (rawp))
    return TCV_DSP_PASS;
  ...
  *ses = ...;
  bounds->head = ...;
  bounds->tail = ...;
  return TCV_DSP_RCV;
}

SLIDE 21

System layout

SLIDE 22

VUEE (or VUE2): virtual execution

This part can be re-compiled and executed in a virtual environment called VUEE (Virtual Underlay Execution Engine)

SLIDE 23

This means that …

… you can emulate a whole network of nodes

- possibly running a multiplicity of praxes
- in a way practically indistinguishable from a real execution

In particular, any programs (OSS) developed to communicate with the nodes can easily be fooled into talking to the model instead.

SLIDE 24

How it works

[Diagram: the SMURPH/VUEE library plus a wrapper runs many instances of PicOS code (the nodes) together with a data set; agents connect over the Internet to node interfaces (e.g., the UART of node 10) and to an OSSI program]

SLIDE 25

Real time aspects of PicOS

Criticism: the limited preemptibility of threads gets in the way of RT response. First note that interrupts are handled outside of threads:

[Diagram: the interrupt handler retrieves data (into a buffer) and triggers an event (waking the thread); the scheduler then runs the thread, which waits for the event and handles it]
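This split between a minimal interrupt handler and a thread that does the real work can be sketched in plain C (a simplified single-slot illustration, with hypothetical names, not PicOS source):

```c
#include <assert.h>

static volatile int event_pending;  /* "trigger event (wake the thread)" */
static volatile int rx_buffer;      /* "retrieve data (into a buffer)" */
static int processed_sum;           /* result of the thread's real work */

/* What the interrupt handler would do: store the datum and raise the
   event flag; no processing happens at interrupt level. */
void on_interrupt (int datum) {
    rx_buffer = datum;
    event_pending = 1;
}

/* What the thread does when the scheduler wakes it: consume the event
   and do the actual (possibly lengthy) processing at thread level.
   Returns 1 if an event was handled, 0 if there was nothing to do. */
int thread_body (void) {
    if (!event_pending)
        return 0;                   /* wait for event */
    event_pending = 0;
    processed_sum += rx_buffer;     /* handle event */
    return 1;
}
```

Because the handler only buffers and signals, interrupt latency stays small even though the thread itself is only scheduled at state boundaries.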

SLIDE 26

Reducing the granularity of states

Coarse version (one state does all the work):

  entry (RCV_READ)

    do {
      Pkt = tcv_rnp (RCV_READ, Ses);
      update (Pkt [SensorValue]);
      tcv_endp (Pkt);
    } while (Pkt [MsgType] != MSG_LAST);
  ...

Finer version (one packet per state entry):

  entry (RCV_READ)

    Pkt = tcv_rnp (RCV_READ, Ses);
    update (Pkt [SensorValue]);
    tcv_endp (Pkt);
    if (Pkt [MsgType] != MSG_LAST)
      proceed (RCV_READ);
  ...

SLIDE 27

Event polling

... entry (RCV_READ) while (MoreToDo) { lots_of_work (...); } ... ... entry (RCV_READ) while (MoreToDo) { if (ImportantEventPending) proceed (RCV_READ); lots_of_work (...); } ... allows the scheduler to run more important threads