Schedulability Analysis as Evidence? Björn Brandenburg, Max Planck Institute for Software Systems (MPI-SWS). PowerPoint presentation transcript.



SLIDE 1

Schedulability Analysis as Evidence?

Björn Brandenburg Max Planck Institute for Software Systems (MPI-SWS)

1

SLIDE 2

Schedulability Analysis as Evidence?

Can safety case rely on schedulability analysis? Safety Case ← Argument ← Evidence ← Methodology

2

SLIDE 3

Schedulability Analysis as Evidence?

Can safety case rely on schedulability analysis? Safety Case ← Argument ← Evidence ← Methodology “Is the methodology agreed as effective?” -- Philippa Ryan

  • Does it “work”?
  • Is it sound? Are all deadlines actually met if the analysis says ‘yes’?

3


SLIDE 6

Advanced Analysis for Mixed-Criticality Systems?

It’s been peer-reviewed — should it be deemed effective?

  • Difficult to predict the future.

Let’s take a look at the history of real-time scheduling…

4


SLIDE 8

A Look Back (1/4)

Liu & Layland (1973)

  • Raymond Devillers and Joël Goossens, “Liu and Layland’s schedulability test revisited”, Information Processing Letters, 73(5):157–161, 2000.

5
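For reference, the test revisited by Devillers and Goossens is Liu & Layland’s classic utilization-bound test for rate-monotonic scheduling of implicit-deadline periodic tasks. A minimal sketch in Python follows; the task utilizations are illustrative, not taken from the slides:

```python
# Liu & Layland (1973) utilization-bound test (sufficient, not necessary)
# for rate-monotonic scheduling of n implicit-deadline periodic tasks.

def ll_bound(n: int) -> float:
    """Liu & Layland utilization bound: n * (2^(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)

def ll_schedulable(utilizations: list) -> bool:
    """Deem the task set schedulable if total utilization <= bound."""
    n = len(utilizations)
    return sum(utilizations) <= ll_bound(n)

# Example: three tasks with utilizations C_i / T_i (illustrative values).
tasks = [0.2, 0.25, 0.3]            # total utilization 0.75
print(ll_bound(3))                  # ~0.7798
print(ll_schedulable(tasks))        # True: 0.75 <= ~0.7798
```

A task set that fails this test may still be schedulable; the bound is only sufficient, which is part of why “revisiting” such tests remained worthwhile decades later.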


SLIDE 10

A Look Back (2/4)

Response-Time Analysis (RTA) — deceptively simple

  • implicit/constrained deadlines vs. arbitrary deadlines
  • preemptive vs. non-preemptive scheduling [13 years]
  • jitter vs. general self-suspensions [20 years]
  • Jian-Jia Chen et al. Many suspensions, many problems: A review of self-suspending tasks in real-time systems. Technical Report 854 (rev. 2), Dept. of Computer Science, TU Dortmund, March 2017.

6
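The “deceptively simple” analysis referred to here is the classic fixed-point recurrence of uniprocessor preemptive fixed-priority RTA. A sketch of the basic (implicit/constrained-deadline, no jitter, no suspensions) case follows; the task set is invented for illustration:

```python
import math

def response_time(C, T, i, limit=10**6):
    """Iterate the classic RTA recurrence
        R_{k+1} = C_i + sum_{j in hp(i)} ceil(R_k / T_j) * C_j
    until a fixed point is reached (a response-time bound) or the
    limit is exceeded (no bound found). Index 0 = highest priority."""
    R = C[i]
    while True:
        R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:
            return R          # fixed point: sound response-time bound
        if R_next > limit:
            return None       # recurrence diverged
        R = R_next

C = [1, 2, 3]   # worst-case execution times (illustrative)
T = [4, 6, 12]  # periods, in rate-monotonic priority order
print([response_time(C, T, i) for i in range(3)])  # [1, 3, 10]
```

Each extension listed above (non-preemptive execution, arbitrary deadlines, self-suspensions) changes this recurrence in non-obvious ways, which is exactly where the subtle, long-lived errors crept in.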


SLIDE 15

A Look Back (3/4)

Multiprocessor Priority Ceiling Protocol (1990):

  • Maolin Yang, Jian-Jia Chen, and Wen-Hung Huang. A misconception in blocking time analyses under multiprocessor synchronization protocols. Real-Time Systems, 53(2):187–195, 2017. [MPCP analysis (and others) assumes the wrong critical instant]
  • Jian-Jia Chen and Björn Brandenburg. A note on the period enforcer algorithm for self-suspending tasks. Leibniz Transactions on Embedded Systems, 4(1), 2017. [Period enforcer incompatible with locking protocols / existing analyses]

7


SLIDE 18

A Look Back (4/4)

Scheduling with Arbitrary Processor Affinities

  • A. Gujarati, F. Cerqueira, and B. Brandenburg. Schedulability Analysis of the Linux Push and Pull Scheduler with Arbitrary Processor Affinities. Proc. ECRTS’13. [incorrect over-generalization: the proposed reduction technique works only for a few, not all, global schedulability tests]
  • Bug also present in the extended journal paper
    ➔ got past (at least) six reviewers!
    ➔ unrealistic and unreasonable to expect reviewers to determine correctness

8


SLIDE 21

Mixed-Criticality Analysis Is Not Any Easier than Classic Schedulability Analysis!

…so what can we do?

  • OPEN PROBLEM: how to make complex schedulability analysis trustworthy?
  • DESPITE
    ➔ increasing model fidelity & complexity
    ➔ increasing proof sophistication & non-obvious correctness criteria

9



SLIDE 25

PROSA: Formally Proven Schedulability Analysis

An open-source effort to formally verify the core of real-time scheduling theory, with the help of the Coq proof assistant.

  • Precise, unambiguous, uniform definitions
  • All assumptions explicit
  • Guaranteed-correct proofs: all claims in Prosa have machine-checked proofs, which rules out human error!

11


SLIDE 29

What can you do with Prosa?

  • 1. Know which results you can trust. ➔ schedulability analysis as evidence
  • 2. Understand proofs of classic (and new) claims in full detail.
  • 3. Adopt an accepted, precise model: don’t reinvent the wheel.
  • 4. Prepare your proofs so that they can be automatically checked ➔ avoid errors in new results.
  • 5. Make life easier for peer reviewers.

12


SLIDE 35

What Prosa is NOT

  • Prosa is not a model checker: no exhaustive exploration, no resource limits ➔ human intelligence is the bottleneck, not machine resources
  • Prosa will not find bugs in existing “proofs” ➔ but will also not allow you to construct false “proofs”
  • Prosa will not automatically find proofs for your claims ➔ Coq is a proof assistant, not an automatic theorem prover

13


SLIDE 39

What does Prosa consist of?

  • 1. Definitions: jobs, arrival sequences, schedules, tasks, scheduling policies, service, job completion, response-time bounds, … ➔ goal: as simple and tasteful as possible
  • 2. Algorithms: response-time analysis, schedule transformations, …
  • 3. Formal Claims: e.g., expressing “if there exists a positive fixed point R of the response-time recurrence, then R is a response-time bound.” ➔ focus: primarily soundness
  • 4. Mechanized Proofs: invocations of tactics (i.e., macros) that produce “proof assembly code” to construct a proof tree in basic logic. ➔ the payoff: machine-checked, thus guaranteed correct

14
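The formulas in the fixed-point claim above were lost in extraction. In the standard uniprocessor fixed-priority RTA form (an assumption on my part, since the slide’s exact statement is unrecoverable), such a claim reads:

```latex
\exists R > 0:\quad
R \;=\; C_i \;+\; \sum_{j \in hp(i)} \left\lceil \frac{R}{T_j} \right\rceil C_j
\;\;\Longrightarrow\;\;
R \text{ is a response-time bound for task } \tau_i,
```

where $C_i$ and $T_i$ are the worst-case execution time and period of task $\tau_i$, and $hp(i)$ is the set of higher-priority tasks.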


SLIDE 44

Example: Definitions (1/2)

Goal: Short, Obvious, and Natural Definitions

(* Consider any uniprocessor schedule. *)
Variable sched: schedule Job.

(* Let j be any job. *)
Variable j: Job.

(* First, we define whether a job j is scheduled at time t, ... *)
Definition scheduled_at (t: time) := sched t == Some j.

15

SLIDE 45

Example: Definitions (2/2)

(* ...which also yields the instantaneous service received by
   job j at time t (i.e., either 0 or 1). *)
Definition service_at (t: time) : time := scheduled_at t.

(* Based on the notion of instantaneous service, we define the
   cumulative service received by job j during any interval [t1, t2). *)
Definition service_during (t1 t2: time) :=
  \sum_(t1 <= t < t2) service_at t.

16
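For intuition, the two definition slides above can be transcribed into executable Python. This is an illustrative analogue, not part of Prosa; the schedule encoding (a function from time to the scheduled job, mirroring `sched t == Some j`) and the job names are invented:

```python
# Python analogue of Prosa's service definitions on a uniprocessor schedule.

def scheduled_at(sched, j, t):
    """Is job j the one scheduled at time t?"""
    return sched(t) == j

def service_at(sched, j, t):
    """Instantaneous service of job j at time t: either 0 or 1."""
    return 1 if scheduled_at(sched, j, t) else 0

def service_during(sched, j, t1, t2):
    """Cumulative service of job j over the interval [t1, t2)."""
    return sum(service_at(sched, j, t) for t in range(t1, t2))

# Illustrative schedule: job 'A' runs at times 0..2, job 'B' at times 3..5.
sched = lambda t: 'A' if t < 3 else ('B' if t < 6 else None)
print(service_during(sched, 'A', 0, 6))  # 3
```

The discrete-time, one-unit-of-service-per-tick model matches the slides’ uniprocessor setting.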

SLIDE 46

Example: Claims (1/2)

(* Consider any uniprocessor schedule. *)
Variable sched: schedule Job.

(* Let j be any job that is to be scheduled. *)
Variable j: Job.

Lemma service_at_most_one:
  ∀ t, service_at sched j t <= 1.

Goal: natural, explicit, readable, as short as sensible.

17

SLIDE 47

Example: Claims (2/2)

(* ...which implies that the cumulative service received by job j
   in any interval of length delta is at most delta. *)
Lemma cumulative_service_le_delta:
  ∀ t delta, service_during sched j t (t + delta) <= delta.

18
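As a sanity check (emphatically not a proof, which is Prosa’s whole point), the lemma above can be tested on a randomly generated schedule in Python; the schedule encoding and job names are invented for illustration:

```python
import random

# Empirically check the claim: cumulative service of a job in any
# interval of length delta is at most delta (on one sample schedule).

def service_during(sched, j, t1, t2):
    """Cumulative service of job j over [t1, t2) in a list-based schedule."""
    return sum(1 for t in range(t1, t2) if sched[t] == j)

random.seed(0)
sched = [random.choice(['A', 'B', None]) for _ in range(100)]

for t in range(90):
    for delta in range(10):
        assert service_during(sched, 'A', t, t + delta) <= delta
print("lemma holds on this sample schedule")
```

Such random testing can build confidence but can never rule out a counterexample; the machine-checked Coq proof covers all schedules at once.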

SLIDE 48

Example: Proof scripts

NOTE: proof scripts are not intended for human consumption.

Lemma cumulative_service_le_delta:
  ∀ t delta, service_during sched j t (t + delta) <= delta.
Proof.
  unfold service_during; intros t delta.
  apply leq_trans with (n := \sum_(t <= t0 < t + delta) 1);
    last by simpl_sum_const; rewrite addKn leqnn.
  by apply leq_sum; intros t0 _; apply leq_b1.
Qed.

19

SLIDE 49

Remaining Risks?

How could analysis still be unsound?

  • 1. A bug in the Coq proof assistant
    ➔ highly unlikely, but not impossible
    ➔ proof-checking kernel: small, simple, widely used
  • 2. Contradictory hypotheses: one can “prove” anything from a contradiction
    ➔ possible in principle, but…
    ➔ …ruled out in Prosa by the development process
  • 3. Nonsensical specification: e.g., what do “jitter”, “release”, “arrival” mean?
    ➔ readable specification + community review!

20


SLIDE 53

PROSA

Trustworthy Schedulability Analysis

Learn more at: prosa.mpi-sws.org

… or propose a better way to obtain bullet-proof analysis!

21