



Adaptive QoS Control in Distributed Real-Time Middleware

Chenyang Lu
Department of Computer Science and Engineering
Washington University in St. Louis


Challenges for Real-Time Systems

- Classical real-time scheduling theory relies on accurate knowledge about the workload and platform.
- New challenges under uncertainties:
  - Maintain robust real-time properties in the face of unknown and varying workloads, system failures, and system upgrades.
  - Certification and testing of the real-time properties of adaptive systems.


Challenge 1: Workload Uncertainties

- Task execution times are heavily influenced by sensor data or user input, and are unknown and time-varying.
- Disturbances: aperiodic events, resource contention from subsystems, denial-of-service attacks.
- Examples: SCADA for power grid management, the total ship computing environment.


Challenge 2: System Failure

- Maintaining functional reliability alone is not sufficient; the system must also maintain robust real-time properties.
- Example scenario: (1) host Norbert fails; (2) its tasks are moved to other processors; (3) hosts hermione and harry become overloaded.


Challenge 3: System Upgrade

- Goal: applications portable across HW/OS platforms, so that the same application "works" on multiple platforms.
- Existing real-time middleware supports functional portability but lacks QoS portability: applications must be manually reconfigured for each platform to achieve the desired real-time properties.
- Offline, manual configuration: profile execution times; determine and implement task allocation and rates; test and analyze schedulability. Time-consuming and expensive!


Example: nORB Middleware

(Architecture figure: a client invokes CORBA objects on a server through nORB. Connection threads feed operation request lanes and priority queues on the server; a timer thread and worker threads execute the tasks, e.g., T1 at 2 Hz and T2 at 12 Hz.)
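The "test/analyze schedulability" step of the offline configuration can be illustrated with a short sketch. This is not part of nORB: it checks a periodic task set against the classical Liu-Layland utilization bound for rate-monotonic scheduling, with hypothetical execution times attached to the 2 Hz and 12 Hz example tasks.

```python
def rms_utilization_bound(n):
    """Liu-Layland least upper utilization bound for n periodic tasks
    under rate-monotonic scheduling: n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def schedulable(tasks):
    """tasks: list of (rate_hz, execution_time_s) pairs.
    Total utilization is sum of rate * execution time."""
    utilization = sum(rate * exec_time for rate, exec_time in tasks)
    return utilization <= rms_utilization_bound(len(tasks))

# Example task set: T1 at 2 Hz, T2 at 12 Hz (rates from the nORB slide),
# with assumed execution times of 100 ms and 30 ms.
tasks = [(2.0, 0.10), (12.0, 0.03)]
print(schedulable(tasks))  # True: U = 0.56 <= bound(2) ~= 0.828
```

Repeating this by hand for every platform is exactly the expense that the adaptive approach below is meant to eliminate.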

Challenge 4: Certification

- Uncertainties call for adaptive solutions. But adaptation can make things worse, and adaptive systems are difficult to test and certify.
- (Figure: CPU utilization of an unstable adaptive system oscillating over 300 sampling periods.)


Adaptive QoS Control Middleware

- Develop software feedback control in middleware to achieve robust real-time properties for many applications.
- Apply control theory to design and analyze the control algorithms.
- Facilitate certification of embedded software.
- The middleware layer sits between the applications and the drivers/OS/hardware, and maintains QoS guarantees without accurate knowledge about the workload or platform (sensor and human input? disturbances? available resources? hardware failures?) and without hand tuning.


Feedback Control Real-Time Scheduling (FCS) Service

- FCS/nORB: single-server control.
- FC-ORB: distributed systems with end-to-end tasks.
- Developers specify:
  - Performance specs, e.g., CPU utilization = 70%, deadline miss ratio = 1%.
  - Tunable parameters: ranges of task rates (digital control loops, video/data display), quality levels (image quality, filters), admission control.
- FCS guarantees the specs by tuning the parameters based on online feedback.
- Automatic (no need for hand tuning) and transparent to developers: performance portability!


A Feedback Control Loop

(Figure: a controller compares the performance specs against measurements from a monitor and drives an actuator, which adjusts parameters of the application, middleware, drivers/OS, and hardware; sensors and inputs act as disturbances on the loop.)


The FC-U Algorithm

- U_s: utilization reference (e.g., 70%); K_u: control parameter; R_i(0): initial rate of task i.
- In each sampling period k:
  1. Get the utilization U(k) from the Utilization Monitor.
  2. Utilization Controller (an integral controller): B(k+1) = B(k) + K_u * (U_s - U(k)).
  3. Rate Actuator adjusts the task rates: R_i(k+1) = (B(k+1) / B(0)) * R_i(0), within each task's range (e.g., R_1 in [1, 5] Hz, R_2 in [10, 20] Hz).
  4. Inform the clients of the new task rates.
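The four steps above can be sketched in a few lines of Python. This is an illustrative rendering, not the actual middleware code: the set point, initial rates, and rate ranges come from the slide, while the numeric value of K_u and the clamping of rates to their ranges are assumptions.

```python
# FC-U sketch. Assumed values are marked; the rest follows the slide.
U_S = 0.70                                  # utilization reference U_s
K_U = 0.5                                   # control parameter (hypothetical)
RATE_RANGES = [(1.0, 5.0), (10.0, 20.0)]    # allowed [min, max] Hz for R1, R2
R0 = [5.0, 20.0]                            # initial rates R_i(0) (assumed)
B0 = 1.0                                    # initial controller output B(0)

def fc_u_step(b_k, u_k):
    """One sampling period: integral control (step 2) + rate actuation (step 3)."""
    # Step 2: Utilization Controller, B(k+1) = B(k) + K_u * (U_s - U(k))
    b_next = b_k + K_U * (U_S - u_k)
    # Step 3: Rate Actuator, R_i(k+1) = (B(k+1) / B(0)) * R_i(0),
    # clamped to each task's allowed range
    rates = []
    for (lo, hi), r_init in zip(RATE_RANGES, R0):
        r = (b_next / B0) * r_init
        rates.append(min(max(r, lo), hi))
    return b_next, rates

# A measured utilization of 90% is above the 70% spec, so B decreases and
# the task rates are scaled down before the clients are informed (step 4).
b, rates = fc_u_step(B0, u_k=0.90)
```

Because the controller is integral, any persistent gap between U(k) and U_s keeps accumulating in B(k) until the utilization reaches the set point.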

The Family of FCS Algorithms

- FC-U controls utilization.
  - Performance spec: U(k) = U_s.
  - Meets all deadlines if U_s ≤ the schedulable utilization bound.
  - Relatively low utilization if the utilization bound is pessimistic.
- FC-M controls the miss ratio.
  - Performance spec: M(k) = M_s.
  - High utilization, and does not require the utilization bound to be known a priori.
  - Small but non-zero deadline miss ratio: M(k) > 0.
- FC-UM combines FC-U and FC-M.
  - Performance specs: U_s and M_s.
  - Allows higher utilization than FC-U; no deadline misses in the "nominal" case; performance bounded by FC-M.


Control Analysis

- Rigorously designed based on feedback control theory.
- Analytic guarantees on stability, steady-state performance, the transient state (settling time and overshoot), and robustness against variations in execution times.
- Does not assume accurate knowledge of execution times.
- Lu, Stankovic, Tao, and Son, "Feedback Control Real-Time Scheduling: Framework, Modeling, and Algorithms," Real-Time Systems, 23(1/2), July/September 2002.


FCS/nORB Architecture

(Figure: FCS/nORB adds a feedback lane to nORB. A deadline miss monitor and a utilization monitor feed the controller; the controller drives a rate assigner and rate modulator, which adjust the rates of the timer and worker threads serving the priority queues and operation request lanes.)


Dynamic Response

(Figure: the controlled variable over time against its reference. The transient state is characterized by the settling time, the steady state by the steady-state error; stability governs whether the response converges at all.)


Offline or Online?

- Offline:
  - FCS is executed in a testing phase on each new platform and turned off after entering steady state.
  - No run-time overhead.
  - Cannot deal with varying workloads.
- Online:
  - Only controls server delay.
  - Run-time overhead (actually small).
  - Robustness in the face of changing execution times.


Implementation

- Running on top of COTS Linux.
- Deadline Miss Monitor:
  - Instruments the operation request lanes.
  - Time-stamps the operation request and response on each lane.
- CPU Utilization Monitor:
  - Interfaces with the Linux /proc/stat file and counts idle time.
  - "Coarse" granularity: one jiffy (10 ms).
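A minimal sketch of such a /proc/stat-based utilization monitor, assuming Linux and the standard proc(5) field layout (the middleware's actual implementation is not shown in the slides):

```python
# Read the aggregate "cpu" line from /proc/stat twice, one sampling period
# apart, and derive utilization from the growth of the idle counter.
# Linux-only; counters advance in jiffies, hence the coarse granularity.
import time

def read_cpu_counters():
    """Return (idle_ticks, total_ticks) from the first line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]   # drop the "cpu" label
    ticks = [int(x) for x in fields]
    idle = ticks[3]                         # 4th field is idle time
    return idle, sum(ticks)

def sample_utilization(period_s=4.0):
    """CPU utilization over one sampling period, in [0.0, 1.0]."""
    idle0, total0 = read_cpu_counters()
    time.sleep(period_s)
    idle1, total1 = read_cpu_counters()
    busy = (total1 - total0) - (idle1 - idle0)
    return busy / (total1 - total0)
```

With the slides' 4 s sampling period, each sample spans about 400 jiffies, so the 10 ms granularity contributes well under 1% error per sample.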

Experimental Set-up

- OS: Red Hat Linux.
- Hardware platform:
  - Server A: 1.8 GHz Celeron, 512 MB RAM.
  - Server B: 1.99 GHz Pentium 4, 256 MB RAM.
  - The same client, connected via a 100 Mbps LAN.
- Experiments: (1) overhead; (2) steady execution times (offline case); (3) varying execution times (online case).


Server Overhead

- Overhead: FC-UM > FC-M > FC-U.
- FC-UM increases CPU utilization by less than 1% for a 4 s sampling period.
- (Figure: server overhead per sampling period, in ms, for FC-U, FC-M, and FC-UM; sampling period = 4 s.)


Performance Portability: Steady Execution Times

- Same CPU utilization (and no deadline misses) on different platforms without hand tuning.
- (Figure: U(k), B(k), and M(k) over 200 sampling periods for FC-U on Server A and Server B; U_s = 70%.)


Steady-State Deadline Miss Ratio

- FC-M enforces the miss ratio spec: average miss ratio 1.49% in steady state against M_s = 1.5%.
- FC-U and FC-UM cause no deadline misses.


Steady-State CPU Utilization

- FC-U and FC-UM enforce the utilization spec: average utilizations of 70.01% and 74.97% against set points U_s = 70% and U_s = 75%, respectively.
- FC-M achieves higher utilization: 98.93% on average.


Robust Guarantees: Varying Execution Times

- Same CPU utilization and no deadline misses in steady state despite changes in the execution times.
- (Figure: U(k), B(k), and M(k) over 400 sampling periods on Server A as execution times vary.)
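The robustness result can be reproduced qualitatively with a toy closed-loop simulation. The linear plant model U(k) = g * B(k) and all numeric values here are assumptions rather than the experimental setup; the point is that the integral controller re-converges to the set point after the plant gain (i.e., the execution times) changes.

```python
# Toy closed-loop simulation (assumed model, not the experimental code):
# plant U(k) = g * B(k), where the gain g is proportional to the tasks'
# execution times and is unknown to the controller.

U_S = 0.70   # utilization set point (from the slides)
K_U = 0.5    # controller gain (hypothetical)

def simulate(gains, b0=1.0):
    """Run the integral control loop; gains gives the plant gain per period."""
    b, trace = b0, []
    for g in gains:
        u = g * b                  # utilization produced at the current rates
        trace.append(u)
        b += K_U * (U_S - u)       # integral control step
    return trace

# Execution times double halfway through (gain 0.45 -> 0.9). In this linear
# model the utilization transiently overshoots, then settles back at the
# 70% set point without any retuning.
trace = simulate([0.45] * 100 + [0.9] * 100)
```

The loop is stable as long as 0 < g * K_u < 2, which is the kind of condition the control analysis establishes for the real algorithms.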
