slide-1
SLIDE 1

>>> Confidence in real time >>> Lessons learned in critical embedded software Name: Ricardo Bedin França∗ Date: April 25, 2017

∗ricardo.franca@embraer.com.br

[~]$ _ [1/28]

slide-2
SLIDE 2

>>> Table of Contents

  • 1. Introduction
  • 2. How we develop aircraft control SW
  • 3. Our metric: Worst-case Execution Time (WCET)
  • 4. Dirty coding tricks
  • 5. Getting to know your compiler

[~]$ _ [2/28]

slide-3
SLIDE 3

Section 1 Introduction

[1. Introduction]$ _ [3/28]

slide-4
SLIDE 4

>>> About the presenter

2006-08  Formal methods for embedded SW (UFSC, IRIT)

[1. Introduction]$ _ [3/28]

slide-5
SLIDE 5

>>> About the presenter

2006-08  Formal methods for embedded SW (UFSC, IRIT)
2009-12  Optimize flight control SW (IRIT, Airbus)

[1. Introduction]$ _ [3/28]

slide-6
SLIDE 6

>>> About the presenter

2006-08  Formal methods for embedded SW (UFSC, IRIT)
2009-12  Optimize flight control SW (IRIT, Airbus)
2012-    Develop flight control SW (Embraer)

[1. Introduction]$ _ [3/28]

slide-7
SLIDE 7

>>> About the presenter

2006-08  Formal methods for embedded SW (UFSC, IRIT)
2009-12  Optimize flight control SW (IRIT, Airbus)
2012-    Develop flight control SW (Embraer)

* Bottom line: Living in Plato's cave

[1. Introduction]$ _ [3/28]

slide-8
SLIDE 8

>>> Lecture objectives

* Summarize performance-related challenges in airborne SW
* Devise possible solutions to these challenges
* (Hopefully) Employ new tricks in your own software

[1. Introduction]$ _ [4/28]

slide-9
SLIDE 9

Section 2 How we develop aircraft control SW

[2. How we develop aircraft control SW]$ _ [5/28]

slide-10
SLIDE 10

>>> One-slide summary

[spectrum diagram: Mobile, cloud → Embedded, soft RT → DO-178C, level A]

[2. How we develop aircraft control SW]$ _ [5/28]

slide-11
SLIDE 11

>>> What really changes in airborne SW?

* A bug may mean a disaster
  * Aircraft accidents are especially infamous

[2. How we develop aircraft control SW]$ _ [6/28]

slide-12
SLIDE 12

>>> What really changes in airborne SW?

* A bug may mean a disaster
  * Aircraft accidents are especially infamous
* Strict timing deadlines
  * Must be efficient and predictable

[2. How we develop aircraft control SW]$ _ [6/28]

slide-13
SLIDE 13

>>> What really changes in airborne SW?

* A bug may mean a disaster
  * Aircraft accidents are especially infamous
* Strict timing deadlines
  * Must be efficient and predictable
* Long, but small, production runs
  * 100's / 1000's of units flying for decades
  * HW makers care little about us

[2. How we develop aircraft control SW]$ _ [6/28]

slide-14
SLIDE 14

>>> What about that DO-178C thing?

* A document that guides the life cycle of airborne SW
  * It ~~strangles~~ guides us according to product criticality
* Aircraft certification authorities require us to follow it

[2. How we develop aircraft control SW]$ _ [7/28]

slide-15
SLIDE 15

>>> Scope of this lecture

* Bit-shaver point of view
  * Process is a swearword!
* Focus on good coding and smart verification
* However, the entire development is interdependent

[2. How we develop aircraft control SW]$ _ [8/28]

slide-16
SLIDE 16

>>> Scope of this lecture

* Emphasis on control software
  * No direct interaction with users
  * Little parallelism
  * May seem boring but commands very critical equipment

(elevCmd[k], ailCmd[k], rudCmd[k], stallWarn[k], ...) = F(α[k], β[k], γ[k], elevCmd[k − 1], ...)

[2. How we develop aircraft control SW]$ _ [9/28]
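The equation above describes one synchronous step: outputs at instant k are a function of current inputs and of memorized outputs from instant k − 1. A minimal sketch of such a step is below; the types, names, and gains are purely illustrative assumptions (only a subset of the slide's signals, and not a real control law):

```c
#include <assert.h>

/* Hypothetical output bundle -- the real interface is not in the slides. */
typedef struct {
    float elevCmd;
    float ailCmd;
    float rudCmd;
} Commands;

/* One synchronous step: outputs at instant k depend on the current
   inputs and on the commands memorized from instant k-1. */
static Commands controlStep(float alpha, float beta, float gamma,
                            const Commands *prev)
{
    Commands out;
    /* Placeholder gains, chosen only to show the feedback term. */
    out.elevCmd = 0.5f * alpha + 0.5f * prev->elevCmd;
    out.ailCmd  = beta;
    out.rudCmd  = gamma;
    return out;
}
```

The caller keeps the previous `Commands` as static state and feeds it back on every cycle, which is exactly why the "memories must be static or global" constraint appears later in the deck.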

slide-17
SLIDE 17

Section 3 Our metric: Worst-case Execution Time (WCET)

[3. Our metric: Worst-case Execution Time (WCET)]$ _ [10/28]

slide-18
SLIDE 18

>>> Our metric: Worst-case Execution Time (WCET)

What is WCET?
* Highest possible execution time for a program

Why is it important?
* Critical SW shall never miss a deadline
* You don't want a blue screen in a crosswind landing!

[3. Our metric: Worst-case Execution Time (WCET)]$ _ [10/28]

slide-19
SLIDE 19

>>> How we compute the WCET

* It is essentially NP-hard
* Ideally, we want it to be sound and tight

Static analysis tools:
* Sophisticated and sound
* Tests made by tool maker
* Quick computation
* Very coupled to HW

Measurement-based:
* Simpler but unsound
* Requires lots of tests
* Less coupled to HW
* "Is this representative?"

[3. Our metric: Worst-case Execution Time (WCET)]$ _ [11/28]

slide-20
SLIDE 20

>>> How we compute the WCET

Which method is better?
* Ferocious battles between rival researchers
* Personal experience: it depends on...
  * Which HW you use
  * How critical your SW is
  * How much you can tinker with your HW
* No proven solution for multi-cores!
* Both methods have common points
  * More than many researchers would admit...

[3. Our metric: Worst-case Execution Time (WCET)]$ _ [12/28]

slide-21
SLIDE 21

>>> How we compute the WCET

Common tips and tricks
* Both methods work easily for simple hardware
  * !(pipelines || caches || branch prediction)
* Avoid messy caches (random, round-robin, unified)
* Bound resource contention sources
* Bound recursion and loops
  * In Plato's cave, we have almost no loops

[3. Our metric: Worst-case Execution Time (WCET)]$ _ [13/28]
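The "bound recursion and loops" tip can be sketched in a few lines. Everything here is hypothetical (the name `boundedAverage`, the bound `MAX_SAMPLES`, and the clamping policy); the point is only that the loop's trip count has a compile-time ceiling a WCET analyzer or reviewer can rely on:

```c
#include <assert.h>

#define MAX_SAMPLES 8  /* hypothetical compile-time loop bound */

/* The loop body runs at most MAX_SAMPLES times regardless of the n the
   caller passes, so the worst-case path is statically bounded. The
   caller must supply an array of at least MAX_SAMPLES elements. */
static float boundedAverage(const float samples[MAX_SAMPLES], int n)
{
    float sum = 0.0f;
    int i;
    if (n > MAX_SAMPLES) {
        n = MAX_SAMPLES; /* clamp the trip count to the analyzed bound */
    }
    for (i = 0; i < n; i++) {
        sum += samples[i];
    }
    return (n > 0) ? (sum / (float)n) : 0.0f;
}
```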

slide-22
SLIDE 22

>>> Cache policy vs. timing behavior

* Unified caches (MPC5307, MPC5554) are disturbing
* Making them instruction-only helped predictability

[3. Our metric: Worst-case Execution Time (WCET)]$ _ [14/28]

slide-23
SLIDE 23

>>> Cache policy vs. timing behavior

So what? My app runs on someone else's device.
* OK, you really cannot fine-tune hardware
* Nor do you need to prove your timing upper bound
* And you can blame their obsolete phones!
* At least, bound your loops...
* And let's go to code optimizations!

[3. Our metric: Worst-case Execution Time (WCET)]$ _ [15/28]

slide-24
SLIDE 24

Section 4 Dirty coding tricks

[4. Dirty coding tricks]$ _ [16/28]

slide-25
SLIDE 25

>>> Control SW development chain

* Design models (Simulink or SCADE)
* Automatic code generation
  * Sometimes a blessing, sometimes a curse
* Code (C, Ada) what can't/shouldn't be done in models
  * Sequential, low-level, specially optimized

[4. Dirty coding tricks]$ _ [16/28]

slide-26
SLIDE 26

>>> Saving memory: Sliding debounce

What is a sliding debounce?
* "At least 3 C_TRUE in the last 5 time instants"
* "Memories" must be static or global
* Naive coding: 5 bool + 1 int per instance

[4. Dirty coding tricks]$ _ [17/28]
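The naive coding described above (one flag per time instant plus a running count) might look like this sketch. The names `NaiveSD5` and `naiveSD5Step` are assumptions for illustration; the window and threshold match the "at least 3 in the last 5" example:

```c
#include <assert.h>

#define SD_WINDOW 5
#define SD_MIN_ON 3

/* Naive "memories": 5 booleans plus 1 counter per instance. */
typedef struct {
    unsigned char history[SD_WINDOW]; /* last 5 inputs */
    unsigned int  counter;            /* how many of them are true */
} NaiveSD5;

/* One step: retire the oldest sample, push the newest, and report
   whether at least SD_MIN_ON of the last SD_WINDOW inputs were true. */
static unsigned char naiveSD5Step(NaiveSD5 *m, unsigned char in)
{
    int i;
    m->counter -= m->history[0];
    for (i = 0; i < SD_WINDOW - 1; i++) {
        m->history[i] = m->history[i + 1]; /* slide the window */
    }
    m->history[SD_WINDOW - 1] = in;
    m->counter += in;
    return (unsigned char)(m->counter >= SD_MIN_ON);
}
```

Multiply this struct by hundreds of debounce instances and the memory cost becomes visible, which is what motivates the bitfield trick on the next slide.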

slide-27
SLIDE 27

>>> Saving memory: Sliding debounce

Autocode:

    typedef struct {
        /*---initialization---*/
        unsigned char init;
        /*------memories------*/
        unsigned char fby_SD_5[5];
        unsigned int array_manager;
        unsigned int counter;
    } outC_SD_5;

"Imported" from C:

    typedef struct {
        unsigned int counter: 4;
        unsigned int my_bools: 5;
        unsigned int padding: 23;
    } outC_SD_5;

[4. Dirty coding tricks]$ _ [18/28]
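To see what the hand-written version buys, one can simply compare the two layouts' sizes. The structs below copy the slide's fields but carry the suffixes `_autocode` and `_packed` (an assumption, so both can live in one translation unit); exact sizes are ABI-dependent, but the packed version fits in a single 32-bit word on common ABIs:

```c
#include <assert.h>

/* Autocode-style memories, field-for-field as on the slide. */
typedef struct {
    unsigned char init;
    unsigned char fby_SD_5[5];
    unsigned int  array_manager;
    unsigned int  counter;
} outC_SD_5_autocode;

/* Hand-written replacement: all memories packed into one 32-bit word. */
typedef struct {
    unsigned int counter  : 4;  /* counts 0..5, fits in 4 bits */
    unsigned int my_bools : 5;  /* the 5-sample sliding window */
    unsigned int padding  : 23;
} outC_SD_5_packed;
```

On a typical 32-bit-`int` ABI the autocode struct occupies 16 bytes after alignment padding, while the packed one occupies 4; this 4x-per-instance saving is what makes "lots of sliding debounces" affordable in scarce internal RAM.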

slide-28
SLIDE 28

>>> Saving memory: Sliding debounce

Why pollute the code with bitfields?
* Microcontrollers have fast and scarce internal RAM
* Control SW may have lots of sliding debounces!
  * SD_250 appeared at the very end of a project
  * This alone could ruin the program!
* For devices with slow memory, this saves CPU
* Also useful for data exchange (buses, shared memory)
  * Communication up to 32x faster

[4. Dirty coding tricks]$ _ [19/28]

slide-29
SLIDE 29

>>> Saving CPU: macros

Even stackoverflow.com tells me not to!! This must be evil.
* ioccc.org rewards the best preprocessor abuse!
* Small and frequent functions are expensive
* Function inlining is not very controllable
* Code generator and macros: no need to worry
* Human and macros: you review and test, don't you?

[4. Dirty coding tricks]$ _ [20/28]

slide-30
SLIDE 30

>>> Saving CPU: macros

What we do in SCADE

ACG expectations:

    #ifndef FloatMin_BASIC
    extern float FloatMin_BASIC(float Value1, float Value2);
    #endif

What we code:

    #define FloatMin_BASIC(Value1, Value2) \
        ((Value1 < Value2) ? Value1 : Value2)

[4. Dirty coding tricks]$ _ [21/28]
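One concrete thing "review and test" has to catch: a function-like macro substitutes its arguments textually, so an argument with side effects is evaluated more than once. A small sketch of the pitfall, using the slide's macro; the `noisyValue` helper and its call counter are hypothetical, added only to make the double evaluation observable:

```c
#include <assert.h>

/* The macro from the slide: no call overhead, but Value1 appears twice
   in the expansion, so it is evaluated twice when it "wins". */
#define FloatMin_BASIC(Value1, Value2) \
    ((Value1 < Value2) ? Value1 : Value2)

static int g_calls = 0; /* counts evaluations of noisyValue() */

static float noisyValue(void)
{
    g_calls++;       /* side effect: each evaluation is recorded */
    return 1.0f;
}
```

An auto-coder never emits side-effecting arguments, which is why "code generator and macros: no need to worry"; a human caller writing `FloatMin_BASIC(x++, y)` would not be so lucky.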

slide-31
SLIDE 31

>>> Saving CPU: not-so-defensive programming

* In critical SW, we test with huge code coverage
* Not easy to have 100% reusable code
  * Lots of bureaucracy when officially reusing
  * More often, we reuse ideas
* "I am defending our program from what?"

[4. Dirty coding tricks]$ _ [22/28]

slide-32
SLIDE 32

>>> Case study: a simple filter

* Its standard equation has a division
* Should it be protected?

[4. Dirty coding tricks]$ _ [23/28]

slide-33
SLIDE 33

>>> Case study: a simple filter

* Its standard equation has a division
* Should it be protected?
* How can I insert a zero time constant?
  * Usually, a huge bug or a HW glitch
* What protection does the saturation offer?
  * Sometimes, it simply propagates improper data

[4. Dirty coding tricks]$ _ [23/28]

slide-34
SLIDE 34

>>> Case study: a simple filter

* Benefits depend on system architecture
* Take care with divisions in safety-critical SW!
* In this specific case: is it necessary?
  * Normally, filters are tuned offline
  * The time constant is then constant!
  * We can use its inverse
  * No polemics, faster code

[4. Dirty coding tricks]$ _ [24/28]
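The "use its inverse" trick can be sketched as a first-order low-pass step. The discretization below is a common textbook form assumed for illustration (the slides do not give the exact filter equation): instead of storing the time constant tau and dividing every cycle, we store the offline-precomputed gain alpha = dt / tau and multiply, so there is no division to protect and no zero-tau debate:

```c
#include <assert.h>

/* One first-order low-pass step: y <- y + alpha * (x - y),
   where alpha = dt / tau is precomputed offline from the tuned,
   constant time constant. No per-cycle division remains. */
static float filterStep(float state, float input, float alpha)
{
    return state + alpha * (input - state);
}
```

Iterating the step drives the state toward the input; with the divide gone, the code is both faster and free of the "should the division be protected?" polemic.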

slide-35
SLIDE 35

>>> Fun with pointers

* What about null pointer checking?
  * Only for "slightly" critical stuff
  * When ultra-critical, static allocation

    void iAmCritical(float *mostImportantSignal) {
        if (mostImportantSignal == 0) {
            trapAndDespair(); /* no decent recovery */
        } else {
            work();
        }
    }

[4. Dirty coding tricks]$ _ [25/28]
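For the "when ultra-critical, static allocation" bullet, a minimal sketch of the alternative: the signal lives in a statically allocated module variable behind accessor functions, so no pointer exists that could ever be null, and the `trapAndDespair()` path (with its test-coverage burden) disappears entirely. All names here are illustrative assumptions:

```c
#include <assert.h>

/* The signal is allocated at link time; there is no pointer to check
   and no untestable trap branch. */
static float mostImportantSignal;

static float readMostImportantSignal(void)
{
    return mostImportantSignal;
}

static void writeMostImportantSignal(float value)
{
    mostImportantSignal = value;
}
```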

slide-36
SLIDE 36

Section 5 Getting to know your compiler

[5. Getting to know your compiler]$ _ [26/28]

slide-37
SLIDE 37

>>> Finding the right optimization level

* How was our compiler developed?
  * We don't know their processes
  * And they are fantastically complex
* Plato's cave: representative sample of ASM-C traceability
* -O0 = extremely irritating code
* -O[wild] = impossible to ensure representativity
  * Interprocedural ⇒ one .c generating different .o's

[5. Getting to know your compiler]$ _ [26/28]

slide-38
SLIDE 38

>>> Finding the right optimization level

* How was our compiler developed?
  * We don't know their processes
  * And they are fantastically complex
* Plato's cave: representative sample of ASM-C traceability
* -O0 = extremely irritating code
* -O[wild] = impossible to ensure representativity
  * Interprocedural ⇒ one .c generating different .o's

[photos: RBF in 2012 vs. RBF years later]

[5. Getting to know your compiler]$ _ [26/28]

slide-39
SLIDE 39

>>> Feed the compiler appropriately

* Many C compilers cannot optimize Booleans
  * You (or some websites) should care about it
* Study performance-critical object code
  * Compiler behavior may be counter-intuitive

    void SD_5(char In_b, int MinOnFrames, char *Out_b, outC_SD_5 *outC)
    {
        outC_SD_5 L_outC; /* this can reduce loads! */
        L_outC = *outC;
        L_outC.ctr = (L_outC.ctr) + (In_b) - (L_outC.past & 1u);
        L_outC.past = (L_outC.past >> 1u) | (In_b << 4u);
        *outC = L_outC;
        *Out_b = (L_outC.ctr) >= MinOnFrames;
        return;
    }

[5. Getting to know your compiler]$ _ [27/28]

slide-40
SLIDE 40

>>> sys.exit("Thanks for your attention!")

    import smtplib
    import subprocess
    import sys

    shy = sys.argv[1]
    question = input("Any questions?\n")
    if shy:
        server = smtplib.SMTP('smtp.domain.com', 587)
        server.starttls()
        server.login("sender", "password")
        server.sendmail("sender", "ricardo.franca@embraer.com.br", question)
        server.quit()
    else:
        print(question)

[5. Getting to know your compiler]$ _ [28/28]