

Adaptive QoS Control in Distributed Real-Time Middleware
Chenyang Lu
Department of Computer Science and Engineering
Washington University in St. Louis

HW2: Adaptive QoS Control
- Due on 4/25, 4 pm.
- Submit to the hand-in bin located next to Lopata 508 (use the drawer marked CSE467S).
- Hard deadline: no homework accepted after the due date/time.
- Discussion and collaboration on homework are not allowed. Each student must turn in his/her own work.
- Graded homework will be handed back via students' pendaflex folders.

Challenges for Real-Time Systems
- Classical real-time scheduling theory relies on accurate knowledge about the workload and platform.
- New challenges under uncertainties:
  - Maintain robust real-time properties in the face of unknown and varying workload, system failure, and system upgrade, e.g., SCADA for power grid management, total ship computing environment.
  - Certification and testing of the real-time properties of adaptive systems.

Challenge 1: Workload Uncertainties
- Task execution times
  - Heavily influenced by sensor data or user input
  - Unknown and time-varying
- Disturbances
  - Aperiodic events
  - Resource contention from subsystems
  - Denial of Service attacks

Challenge 2: System Failure
- Only maintaining functional reliability is not sufficient. Must also maintain robust real-time properties!
- (Scenario diagram: 1. Norbert fails. 2. Its tasks move to other processors. hermione & harry are now overloaded!)

Challenge 3: System Upgrade
- Goal: portable applications across HW/OS platforms; the same application "works" on multiple platforms.
- Existing real-time middleware:
  - Supports functional portability
  - Lacks QoS portability: applications must be manually reconfigured for each platform to achieve the desired real-time properties (profile execution times, determine/implement allocation and task rates, test/analyze schedulability). Time-consuming and expensive!

Challenge 4: Certification
- Uncertainties call for adaptive solutions. But...
  - Adaptation can make things worse.
  - Adaptive systems are difficult to test and certify.
- (Plot: CPU utilization vs. time in sampling periods; an unstable adaptive system oscillates around the set point, while an offline, manual configuration stays steady.)

Example: nORB Middleware Application
- (Diagram: CORBA objects on a client invoke operations on a server through nORB; timer, worker, and connection threads serve priority queues and operation request lanes at priorities P1 and P2; tasks T1: 2 Hz and T2: 12 Hz.)

Adaptive QoS Control
- Develop software feedback control in middleware
  - Achieve robust real-time properties for many applications
  - Apply control theory to design and analyze control algorithms
  - Facilitate certification of embedded software
- FCS/nORB: single-server control
- FC-ORB: distributed systems with end-to-end tasks

Adaptive QoS Control Middleware
- (Diagram: applications sit on top of the adaptive QoS control middleware, which sits on drivers/OS/HW. Despite uncertain sensor/human input, disturbances, varying available resources, and HW failures, the middleware maintains QoS guarantees without accurate knowledge about the workload/platform and without hand tuning.)

Feedback Control Real-Time Scheduling (FCS)
- Developers specify
  - Performance specs, e.g., CPU utilization = 70%; deadline miss ratio = 1%.
  - Tunable parameters (a configuration sketch follows these slides)
    - Range of task rate: digital control loop, video/data display
    - Quality levels: image quality, filters
    - Admission control
- FCS guarantees the specs by tuning the parameters based on online feedback.
  - Automatic: no need for hand tuning
  - Transparent to developers
  - Performance portability!

A Feedback Control Loop
- (Diagram: the Controller compares the measured utilization U(k) against the spec U_s = 70% and drives the Actuator, which sets the task rates {R_i(k+1)} of the service; the Monitor feeds U(k) back from the application/middleware/OS/HW stack. Rate ranges: R_1: [1, 5] Hz, R_2: [10, 20] Hz.)
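For concreteness, the following is a minimal C++ sketch of the kind of configuration the FCS slide above says developers supply: the performance specs plus, per task, the rate range the middleware is allowed to tune within. The type and field names are illustrative assumptions rather than nORB's or FCS/nORB's real API; the numbers are the ones quoted on the slides.

    // Illustrative only: not nORB/FCS's actual configuration API.
    #include <vector>

    struct FcsSpec {
        double utilization_set_point;  // U_s, e.g. 0.70 for "CPU utilization = 70%"
        double miss_ratio_set_point;   // M_s, e.g. 0.01 for "deadline miss ratio = 1%"
    };

    struct TunableTask {
        const char* name;
        double min_rate_hz;            // lowest acceptable invocation rate
        double max_rate_hz;            // highest useful invocation rate
    };

    int main() {
        FcsSpec spec{0.70, 0.01};
        // Rate ranges from the loop diagram: R_1 in [1, 5] Hz, R_2 in [10, 20] Hz.
        std::vector<TunableTask> tasks{
            {"data_display", 1.0, 5.0},    // e.g. a video/data display task
            {"control_loop", 10.0, 20.0},  // e.g. a digital control loop
        };
        (void)spec;
        (void)tasks;  // a feedback scheduler (see FC-U below) tunes rates inside these ranges
        return 0;
    }

The point of stating ranges rather than fixed rates is exactly the portability argument made above: the developer states what is acceptable, and the middleware picks the operating point for the platform at hand.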

The Family of FCS Algorithms
- FC-U controls utilization
  - Performance spec: U(k) = U_s
  - Meets all deadlines if U_s ≤ the schedulable utilization bound
  - Relatively low utilization if the utilization bound is pessimistic
- FC-M controls the miss ratio
  - Performance spec: M(k) = M_s
  - High utilization
  - Does not require the utilization bound to be known a priori
  - Small but non-zero deadline miss ratio: M(k) > 0
- FC-UM combines FC-U and FC-M
  - Performance specs: U_s, M_s
  - Allows higher utilization than FC-U
  - No deadline misses in the "nominal" case
  - Performance bounded by FC-M

The FC-U Algorithm
- U_s: utilization reference; K_u: control parameter; R_i(0): initial rate
- 1. Get the utilization U(k) from the Utilization Monitor.
- 2. Utilization Controller: B(k+1) = B(k) + K_u * (U_s - U(k))   /* integral controller */
- 3. Rate Actuator adjusts the task rates: R_i(k+1) = (B(k+1)/B(0)) * R_i(0)
- 4. Inform clients of the new task rates.
- (A runnable sketch of this loop follows these slides.)

Control Analysis
- Rigorously designed based on feedback control theory
- Analytic guarantees on
  - Stability
  - Steady-state performance
  - Transient state: settling time and overshoot
  - Robustness against variation in execution times
- Does not assume accurate knowledge of execution times
- Lu, Stankovic, Tao, and Son, "Feedback Control Real-Time Scheduling: Framework, Modeling, and Algorithms," Real-Time Systems, 23(1/2), July/September 2002.

Dynamic Response
- (Plot: controlled variable vs. time against the reference; the transient state ends at the settling time, after which the steady state exhibits a bounded steady-state error; stability and overshoot are read off the same response.)

FCS/nORB Architecture
- (Diagram: application CORBA objects on client and server; FCS/nORB adds a deadline miss monitor and a utilization monitor feeding a controller, whose output drives a rate assigner on the server and a rate modulator on the client over a feedback lane; timer, worker, and connection threads serve priority queues and operation request lanes.)

Implementation
- Running on top of COTS Linux
- Deadline Miss Monitor
  - Instruments the operation request lanes
  - Time-stamps the operation request and response on each lane
- CPU Utilization Monitor (a sketch of such a monitor also follows below)
  - Interfaces with the Linux /proc/stat file
  - Counts idle time; "coarse" granularity: one jiffy (10 ms)
- Only controls server-side delay
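To make the FC-U steps above concrete, here is a small, self-contained C++ sketch of one run of the loop. The integral control law B(k+1) = B(k) + K_u*(U_s - U(k)) and the actuation rule R_i(k+1) = (B(k+1)/B(0))*R_i(0) are taken from the slide; the simulated "plant" (utilization proportional to the total requested rate through a gain the controller never sees) is an added assumption so the sketch can run end to end, and every name is illustrative rather than FCS/nORB's real code.

    // Sketch of the FC-U loop against a simulated, unknown workload.
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        const double Us = 0.70;                  // utilization reference U_s
        const double Ku = 0.5;                   // control parameter K_u
        std::vector<double> r0   = {5.0, 20.0};  // initial rates R_i(0), in Hz
        std::vector<double> rmin = {1.0, 10.0};  // lower rate bounds
        std::vector<double> rmax = {5.0, 20.0};  // upper rate bounds
        std::vector<double> r = r0;

        const double B0 = 1.0;                   // initial control input B(0)
        double B = B0;

        // Utilization contributed per Hz of requested rate. The controller
        // never reads this value; that is the point of the feedback loop.
        const double secret_gain = 0.04;

        for (int k = 0; k < 20; ++k) {
            // 1. Monitor: measured utilization U(k) under the current rates.
            double total_rate = 0.0;
            for (double ri : r) total_rate += ri;
            double U = std::min(1.0, secret_gain * total_rate);

            // 2. Utilization Controller (integral): B(k+1) = B(k) + Ku*(Us - U(k)).
            B = B + Ku * (Us - U);

            // 3. Rate Actuator: R_i(k+1) = (B(k+1)/B(0)) * R_i(0), kept in range.
            for (std::size_t i = 0; i < r.size(); ++i)
                r[i] = std::clamp((B / B0) * r0[i], rmin[i], rmax[i]);

            // 4. In the real middleware the new rates would now be sent to clients.
            std::printf("k=%2d  U(k)=%.3f  B(k+1)=%.3f  R1=%.2f Hz  R2=%.2f Hz\n",
                        k, U, B, r[0], r[1]);
        }
        return 0;
    }

Run as is, U(k) settles at the 70% reference within a few sampling periods. Choosing K_u too large can make the loop overshoot and oscillate, which is the kind of misbehaving adaptation shown on the "Challenge 4" slide and what the control analysis above is designed to rule out.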

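The Implementation slides above derive CPU utilization from the idle counter in Linux /proc/stat. Below is a sketch of such a monitor, assuming only the standard /proc/stat field layout: it samples the aggregate "cpu" line twice, one sampling period apart, and reports the non-idle fraction of the elapsed jiffies. Function names and the 4-second period are illustrative, not FCS/nORB's actual implementation, and like the original it is limited to jiffy (10 ms) granularity.

    // Sketch of a /proc/stat-based CPU utilization monitor (Linux only).
    #include <chrono>
    #include <cstddef>
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <thread>
    #include <vector>

    // Read the jiffy counters from the first ("cpu") line of /proc/stat:
    // user nice system idle iowait irq softirq ...
    static std::vector<long long> read_cpu_jiffies() {
        std::ifstream stat("/proc/stat");
        std::string line;
        std::getline(stat, line);        // first line aggregates all CPUs
        std::istringstream in(line);
        std::string label;
        in >> label;                     // skip the "cpu" label
        std::vector<long long> v;
        long long x;
        while (in >> x) v.push_back(x);
        return v;
    }

    int main() {
        auto before = read_cpu_jiffies();
        std::this_thread::sleep_for(std::chrono::seconds(4));   // sampling period
        auto after = read_cpu_jiffies();

        long long total = 0;
        for (std::size_t i = 0; i < before.size() && i < after.size(); ++i)
            total += after[i] - before[i];
        long long idle = (before.size() > 3 && after.size() > 3)
                             ? after[3] - before[3]              // 4th field: idle jiffies
                             : 0;

        double u = (total > 0) ? 1.0 - double(idle) / double(total) : 0.0;
        std::cout << "U(k) over the last sampling period: " << u << "\n";
        return 0;
    }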
Offline or Online?
- Offline
  - FCS executed in a testing phase on the new platform
  - Turned off after entering steady state
  - No run-time overhead
  - Cannot deal with varying workload
- Online
  - Run-time overhead (actually small...)
  - Robust in the face of changing execution times

Set-up
- OS: Red Hat Linux
- Hardware platform
  - Server A: 1.8 GHz Celeron, 512 MB RAM
  - Server B: 1.99 GHz Pentium 4, 256 MB RAM
  - Same client
  - Connected via a 100 Mbps LAN
- Experiments
  1. Overhead
  2. Steady execution time (offline case)
  3. Varying execution time (online case)

Server Overhead
- Overhead: FC-UM > FC-M > FC-U
- FC-UM increases CPU utilization by <1% for a 4 s sampling period.
- (Chart: server overhead per sampling period, in ms, for FC-U, FC-M, and FC-UM; sampling period = 4 sec.)

Performance Portability: Steady Execution Time
- Same CPU utilization (and no deadline misses) on different platforms without hand tuning!
- (Plots: U(k), B(k), and M(k) over time for FC-U on Server A, 1.8 GHz Celeron with 512 MB RAM, and on Server B, 1.99 GHz Pentium 4 with 256 MB RAM; U_s = 70%, sampling period = 4 sec.)

Steady-State Deadline Miss Ratio (Server A)
- FC-M enforces the miss ratio spec
- FC-U and FC-UM cause no deadline misses
- (Chart: average deadline miss ratio in steady state; FC-M at 1.49% against M_s = 1.5%; FC-U and FC-UM at 0%.)

Steady-State CPU Utilization (Server A)
- FC-U and FC-UM enforce the utilization spec
- FC-M achieves higher utilization
- (Chart: average CPU utilization in steady state; FC-U 70.01% against U_s = 70%, FC-UM 74.97% against U_s = 75%, FC-M 98.93%.)
