Achieving Qualities Software Architecture Jay Urbain, PhD - - PowerPoint PPT Presentation




Achieving Qualities

Software Architecture Jay Urbain, PhD urbain@msoe.edu

Credits:

  • Len Bass, Paul Clements, Rick Kazman, 3rd Ed.
  • http://en.wikipedia.org/wiki/Web_service
  • "Relationship to the World Wide Web and REST Architectures". Web Services Architecture. W3C. Retrieved 2014-04-22.


System Design

Quality attribute (QA): A measurable or testable property of a system that is used to indicate how well the system satisfies the needs of its stakeholders, over and above the basic functionality of the system. Think of a QA as measuring the "goodness" of a product along some dimension of interest to a stakeholder.


Types of Requirements

Functional:

  • What the system must do, or how the system must react to some stimuli.

Quality attributes (non-functional):

  • Qualifications of functional requirements or of the whole product.

Constraints:

  • A design decision with zero degrees of freedom.
  • Typically a decision that has already been made.
  • Are constraints a good thing?

System Design

A set of architectural decisions covering:

  • Achievement of the system's functionality
  • Control of the quality attribute responses

Architecture & Requirements

  • What is the response of the architecture to each of these kinds of requirements?
    – Functional requirements
    – QA requirements


Architecture & Requirements

  • What is the response of the architecture to each of these kinds of requirements?
    – Functional requirements are satisfied by assigning an appropriate sequence of responsibilities to software elements through the design.
    – QA requirements are satisfied by the various structures designed into the architecture, and by the behaviors and interactions of the elements that populate those structures.


Tactics

  • A tactic is a design decision that influences the control of a quality attribute response.
  • A collection of tactics is an architectural strategy.
  • We are interested in the tactics an architect uses to create a design, including architectural design patterns and architectural strategies.


Tactics

Example:

  • Introduce redundancy to increase the availability of the system.
  • Consequence: the need for synchronization.


Organizing Tactics

Ramifications:

  • Tactics can refine other tactics.
    – Organize the tactics for each quality attribute in a hierarchy.
  • Use patterns to package tactics.

Tactics

Review tactics for the following quality attributes:

  • Availability
  • Interoperability
  • Modifiability
  • Performance
  • Security
  • Testability
  • Scalability
  • Usability

Availability Tactics

  • A failure occurs when the system no longer delivers a service that is consistent with its specification.
  • Recovery/repair is an important aspect of availability.
  • All approaches to maintaining availability involve some type of redundancy.
  • Tactics are categorized as: fault detection, fault recovery, and fault prevention.


Availability Tactics


Detect Faults

  • Ping/echo – determine whether a pinged component is reachable and alive.
  • Monitor – a component monitors the health of the system.
  • Heartbeat – periodic message from the process being monitored to the system monitor; flips the responsibilities of ping/echo.
  • Time stamp – detect incorrect sequences of events (especially in distributed systems). Establish the proper sequence by assigning the state of a logical clock to each event.
  • Sanity checking – check the validity of operations or outputs, e.g., validate information flow.
  • Condition monitoring – check conditions within a component, e.g., checksums.
  • Voting – triple modular redundancy: have three components do the same thing and vote on the proper output.
  • Exception detection – exception capture.
  • Self-test – periodic self-test.
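The heartbeat tactic above can be sketched in a few lines: a monitor declares a component failed when no heartbeat arrives within a timeout. A minimal sketch in Python; the class and method names are illustrative, not from the slides.

```python
import threading
import time

class HeartbeatMonitor:
    """Declares the monitored process failed when no heartbeat
    arrives within `timeout` seconds (illustrative sketch)."""

    def __init__(self, timeout):
        self.timeout = timeout
        self._last_beat = time.monotonic()
        self._lock = threading.Lock()

    def beat(self):
        # Called periodically by the monitored process.
        with self._lock:
            self._last_beat = time.monotonic()

    def is_alive(self):
        # Called by the system monitor to check for a missed heartbeat.
        with self._lock:
            return (time.monotonic() - self._last_beat) < self.timeout

monitor = HeartbeatMonitor(timeout=0.2)
monitor.beat()
assert monitor.is_alive()      # heartbeat just received
time.sleep(0.3)                # heartbeats stop...
assert not monitor.is_alive()  # ...and the fault is detected
```

Flipping the direction of the messages (the monitor polls the component and waits for a reply) turns the same structure into ping/echo.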


Recover from Faults

  • Preparation and repair
  • Reintroduction


Recover from Faults - Preparation and repair

  • Active redundancy – hot spare
    – All nodes in a protection group receive and process identical inputs in parallel.
    – Redundant spares maintain synchronous state with the active nodes.
  • Passive redundancy – warm spare
    – Only active members of the protection group process input traffic; redundant spares receive periodic state updates, e.g., checkpointing.
    – Loose coupling.
  • Spare – cold spare
    – Redundant spares remain out of service until a failure occurs.

The level of "warmth" is determined by the required quality attribute response.


Recover from Faults – Preparation and repair

  • Exception handling
    – Tactics to manage an exception when it occurs.
  • Rollback
    – Revert to a previous known-good state.
    – Can then reinitiate.
  • Software upgrade
    – In-service software upgrades.
  • Retry
    – Assume the fault is transient, e.g., a dropped packet or a busy system.
  • Ignore faulty behavior
    – May simply ignore spurious messages, e.g., during a denial-of-service attack.
  • Degradation
    – Maintain the most critical system functions; fail gracefully.
  • Reconfiguration
    – Recover from component failures by reassigning responsibilities.
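The retry tactic assumes the fault is transient and simply re-invokes the operation, usually with a backoff between attempts. A minimal sketch; the function names and the simulated flaky service are hypothetical.

```python
import time

def retry(operation, max_attempts=4, base_delay=0.01):
    """Re-invoke `operation` on transient faults, backing off exponentially."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise                              # not transient after all
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying

# Simulated flaky service: fails twice with a transient fault, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("dropped packet")
    return "ok"

assert retry(flaky) == "ok"
assert calls["n"] == 3
```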


Recover from Faults – Reintroduction

  • Shadow
    – Operate a previously failed or in-service component in "shadow mode" for a predefined duration before it reassumes an active role.
  • State resynchronization
    – Partner tactic of active and passive redundancy.
    – Re-synchronize the states of the active and standby components.
  • Escalating restart
    – Support multiple levels of restart based on the severity of the fault, e.g., level 0 = restart process, level 1 = re-initialization, level n = restart system, etc.
  • Non-stop forwarding
    – From router design: functionality is split into two parts, a supervisory/control plane and a data plane.
    – If a failure occurs in a supervisor component, the router can still forward packets along known routes (but cannot determine new routes).


Recover from Faults – Prevent faults

  • Removal from service
    – Mitigate potential failures by taking a component out of service.
  • Transactions
    – ACID properties.
  • Predictive model
    – Combine a model with a monitor to predict the health of the system.
  • Exception prevention
    – Prevent system exceptions from happening.
  • Increase competence set
    – A component generates an exception when a condition is outside of its competence set, e.g., divide by zero. Extend the component to handle that case.


Interoperability Tactics

  • The degree to which two or more systems can usefully exchange meaningful information.
    – Ability to exchange data (syntax)
    – Ability to interpret data (semantics)
  • Need to identify with whom, with what, and under what circumstances (context) – and the quality of service!


Interoperability: Reasons

  • A system provides a service to be used by a collection of unknown systems.
    – E.g., Google Maps, IMDB, Yahoo Stock Quoter
  • Constructing capabilities from existing systems.
    – E.g., HealthCare.gov orchestrates insurance coverage using multiple component systems: FBI background check, IRS income verification, insurance provider EDI, etc.
    – E.g., one system may sense the environment while another system is responsible for processing the raw sensor data.
    – Software as a Service (SaaS) paradigm.
    – Promotes cohesion of service purpose; loose coupling of services facilitates flexibility.


Aspects of Interoperability

  • Discovery
    – The consumer of a service must discover (possibly at runtime, or prior to runtime) the location, identity, and interface of the service.
  • Handling of the response – distinct possibilities:
    – The service reports back to the requester with the response.
    – The service sends its response on to another system.
    – The service broadcasts its response to any interested parties.


Systems of Systems (SoS)

  • Directed
    – SoS objectives, centralized management, funding, and authority for the overall SoS are in place. Healthcare.gov?
    – Systems are subordinate to the SoS.
  • Acknowledged
    – Systems retain their own management, funding, and authority in parallel with the SoS. E.g., the Sabre reservation system provided to airlines.
    – Systems maintain a high degree of autonomy; service level agreements (SLAs).
  • Collaborative
    – There are no overall objectives, centralized management, authority, responsibility, or funding at the SoS level.
    – Systems voluntarily work together to address shared or common interests.
    – E.g., Google Maps: Google provides its own management and funding, and various organizations collaborate.
  • Virtual
    – Like collaborative, but the systems don't know about each other.
    – Large systems must interoperate, but there is no central management authority.
    – E.g., p2p file sharing, electric utilities.


General Interoperability Scenario

  • Source of stimulus:
    – A system initiates a request to interoperate with another system.
  • Stimulus:
    – A request to exchange information.
  • Artifacts:
    – The systems (and system components) that wish to interoperate.
  • Environment:
    – Systems that wish to interoperate are discovered at runtime or are known prior to runtime; the operating environment.
  • Response:
    – The request to interoperate results in the exchange of information. The information is understood, or is rejected.
  • Response measure:
    – % of information exchanges correctly processed, or % of information exchanges correctly rejected.

23

slide-24
SLIDE 24

Aspects of Interoperability

  • Discovery

– The consumer of a service must discover (possibly at runtime, or prior to runtime) the location, identity, and the interface of the service.

  • Handling of response:

– The service reports back to the requester with the response. – The service sends its response on to another system. – The service broadcasts its response to any interested parties

24

slide-25
SLIDE 25

Web Services Interoperability Standards

  • The W3C defines a Web service as a software system designed to support interoperable machine-to-machine interaction over a network.
  • SOAP – Simple Object Access Protocol
    – XML based
    – Service orchestration and composition
  • ReST – Representational State Transfer
    – CRUD (create, read, update, delete) via (POST, GET, PUT, DELETE)
    – Based on URIs
    – Typically associated with JSON; could be XML, etc.
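The ReST CRUD-to-verb mapping can be illustrated with a toy in-memory dispatcher. The `/orders` resource and the `handle` helper below are hypothetical, invented for this sketch, not part of any standard API.

```python
import json

store, next_id = {}, [1]

def handle(method, uri, body=None):
    """Dispatch an HTTP verb applied to a URI onto a CRUD operation."""
    _, _resource, *rest = uri.split("/")           # e.g. "/orders/1"
    if method == "POST":                           # Create
        oid = next_id[0]; next_id[0] += 1
        store[oid] = json.loads(body)
        return 201, json.dumps({"id": oid})
    oid = int(rest[0])
    if method == "GET":                            # Read
        return 200, json.dumps(store[oid])
    if method == "PUT":                            # Update
        store[oid] = json.loads(body)
        return 200, "{}"
    if method == "DELETE":                         # Delete
        del store[oid]
        return 204, ""

status, body = handle("POST", "/orders", '{"item": "book"}')
assert status == 201 and json.loads(body)["id"] == 1
assert handle("GET", "/orders/1")[1] == '{"item": "book"}'
```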


Web Services Acronym Soup

  • UDDI – Universal Description, Discovery and Integration
  • SOAP – Simple Object Access Protocol
  • WSDL – Web Services Description Language


Web Services Interoperability Standards

  • SOAP – Simple Object Access Protocol
    – XML based
    – Service orchestration and composition
  • ReST – Representational State Transfer
    – CRUD (create, read, update, delete) via (POST, GET, PUT, DELETE)
    – Based on URIs
    – Often used with JSON, etc.
    – Architectural properties:
      • Simplicity of interfaces
      • Modifiability of components to meet changing needs (even while the application is running)
      • Visibility of communication between components by service agents
      • Portability of component deployment
      • Reliability

Interoperability Tactics

  • Locate
    – Discover the service: its location may be defined a priori, found by searching a known directory of services, or discovered at runtime.
    – Client-server or peer-to-peer mechanisms.
  • Manage interfaces
    – Orchestrate:
      • SOAP – use a control mechanism to coordinate, manage, and sequence the invocation of particular services.
      • ReST – loosely coupled, stateless, client-initiated through well-defined layers of responsibility.
    – Tailor interface – add or remove capabilities of an interface, e.g., translation, buffering, data smoothing.


Interoperability Checklist

  • Allocation of responsibilities
  • Coordination model
  • Data model
  • Mapping among architectural elements
  • Resource management
  • Binding time
  • Choice of technology

Modifiability Tactics

  • Modifiability is about change:
    – Functions, qualities, new technology, etc.
  • What is the likelihood of change?
    – Cannot plan a system for all potential changes – the system would never be done!
  • Need to also consider:
    – When is the change made?
    – Who makes the change?
    – How much does the change cost?
  • Tactics?

Modifiability Tactics

Categories:

  • 1. Reduce size of module
  • 2. Increase cohesion
  • 3. Reduce coupling
  • 4. Defer binding

Basic approach:

  • Identify the aspects that vary and separate them from what stays the same.


Reduce size of module

  • Split module.
  • Reducing a module's size reduces its complexity.
  • Should reduce the average cost of future changes.


Increase Cohesion

  • Increase semantic cohesion
    – If responsibilities A and B in a module do not serve the same purpose, they should be placed in different modules.
    – Hypothesize likely changes to identify responsibilities.


Reduce Coupling

  • Encapsulate
    – Define an explicit module boundary through a well-defined interface.
  • Use an intermediary
    – Break a dependency.
    – Bridge, wrapper, mediator, factory, strategy patterns.
  • Restrict dependencies
    – Restrict the modules a given module can interact with – define layers of responsibility.
    – Avoid circular dependencies.
  • Refactor
    – Refactor when two modules are affected by the same change.
  • Abstract common services
    – When two modules contain similar, but not quite the same, services.


Defer binding

  • Compile time

– Component replacement, parameterization

  • Configuration time

– Files

  • Deployment

– Resource file, startup parameters

  • Runtime

– Runtime registration, dynamic lookup, interpret parameters, startup binding, name servers, plug-ins, publish service, shared repositories, polymorphism. – Use composition.
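The runtime-registration flavor of defer binding can be sketched as a registry that binds implementations by name while the program runs. The exporter names below are made up for illustration.

```python
registry = {}

def register(name):
    """Decorator: bind an implementation under `name` at runtime."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@register("csv")
def export_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)

@register("tsv")
def export_tsv(rows):
    return "\n".join("\t".join(map(str, r)) for r in rows)

def export(fmt, rows):
    # Dynamic lookup: the binding is deferred until this call.
    return registry[fmt](rows)

assert export("csv", [[1, 2], [3, 4]]) == "1,2\n3,4"
```

New formats can then be added, even by a plug-in loaded at runtime, without modifying `export` itself.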


Performance Tactics


Events

  • Responding to events requires resources (including time) to be consumed.
  • Concurrent events – operations occurring in parallel.
  • Arrival patterns for events:
    – Periodic – arrive at regular time intervals.
    – Stochastic – arrive according to a probabilistic distribution.
    – Sporadic – neither periodic nor stochastic. Might still be able to characterize, e.g., at least 200 ms between events, max 60 events/sec, etc.


Response

Response of the system to a stimulus is measured by:

  • Latency – time between the arrival of the stimulus and the system's response.
  • Deadlines in processing – systems that must run in fixed time intervals, e.g., monitors, factories, engines.
  • Throughput – number of transactions the system can process per unit time.
  • Jitter – allowable variability in latency (variance).
  • Number of events not processed – because the system was too busy.


Scenario for Performance

  • Source of stimulus – internal or external sources.
  • Stimulus – type of stimulus and arrival pattern: periodic, stochastic, or sporadic (use numeric parameters).
  • Artifact – the system's affected component(s).
  • Environment – operational modes, e.g., peak load, normal, emergency.
  • Response – the system processes the arriving events, possibly changing the system's state.
  • Response measure – time taken to process arriving events (latency or delay), jitter, throughput, miss rate.


Tactics

  • The goal of performance tactics is to generate a response to an arriving event within some time-based constraint.
  • The event can be a single event or a stream of events.
  • Performance tactics control the time within which a response is generated.
  • Tactic categories:
    – Control resource demand
    – Manage resources
  • Note: think through the consequences of selected tactics!


Performance Tactics


Control resource demand

  • Manage sampling rate

– Reduce sampling frequency

  • Limit event response

– Queuing, process up to a set maximum rate to maintain consistency

  • Prioritize events

– All events are not equally important

  • Reduce overhead (performance vs. modifiability)

– Reduce intermediaries, processing, data usage

  • Bound execution time (iterative, data-dependent algorithms)

– Limit number of iterations versus reduced accuracy

  • Increase resource efficiency

– Improve algorithm efficiency


Manage resources

  • Increase resources

– More memory, vertical scalability, horizontal scalability, faster networks

  • Introduce concurrency (scheduling policy)

– Threads

  • Eliminate concurrency

– Single thread model, eliminate contention

  • Maintain multiple copies of computations

– Load balancer, multiple compute instances

  • Maintain multiple copies of data

– Caching, distributed indexing

  • Bound queue sizes

– Need strategy for overload

  • Schedule resources

– Policy for contention

  • Improve scheduling, understand contention.
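The "bound queue sizes" tactic requires an explicit overload strategy. The sketch below sheds the oldest event when the bound is reached; the class name and shedding policy are illustrative choices, not from the slides.

```python
from collections import deque

class BoundedEventQueue:
    """Bounded event queue that drops the oldest event on overload."""

    def __init__(self, maxsize):
        self.q = deque()
        self.maxsize = maxsize
        self.dropped = 0

    def offer(self, event):
        if len(self.q) >= self.maxsize:  # overload strategy: shed the oldest
            self.q.popleft()
            self.dropped += 1
        self.q.append(event)

    def poll(self):
        return self.q.popleft() if self.q else None

q = BoundedEventQueue(maxsize=3)
for e in range(5):
    q.offer(e)
assert q.dropped == 2            # events 0 and 1 were shed
assert list(q.q) == [2, 3, 4]
```

Other policies (drop the newest, block the producer, degrade service) trade off differently against latency and fairness.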


Predictive Performance: Classifier Evaluation Metrics

  • Given m classes, an entry CM(i,j) in a confusion matrix indicates the number of tuples in class i that were labeled by the classifier as class j.
  • May have extra rows/columns to provide totals.

Confusion matrix (for a class C1):

  Actual class \ Predicted class | C1                   | ¬C1
  C1                             | True Positives (TP)  | False Negatives (FN)
  ¬C1                            | False Positives (FP) | True Negatives (TN)

Example confusion matrix (the diagonal entries are correct classifications):

  Actual class \ Predicted class | buy_computer = yes | buy_computer = no | Total
  buy_computer = yes             | 6954               | 46                | 7000
  buy_computer = no              | 412                | 2588              | 3000
  Total                          | 7366               | 2634              | 10000


Classifier Evaluation Metrics: Accuracy, Error Rate, Sensitivity and Specificity

  • Classifier accuracy: percentage of test-set tuples that are correctly classified.
    – Accuracy = (TP + TN) / All
  • Error rate: 1 – accuracy, or
    – Error rate = (FP + FN) / All
  • Class imbalance problem:
    – One class may be rare, e.g., fraud, or HIV-positive.
    – Significant majority of the negative class and minority of the positive class.
  • Sensitivity: true-positive recognition rate.
    – Sensitivity = TP / P
  • Specificity: true-negative recognition rate.
    – Specificity = TN / N

  Actual \ Predicted | C  | ¬C |
  C                  | TP | FN | P
  ¬C                 | FP | TN | N
  Total              | P' | N' | All


Classifier Evaluation Metrics: Precision and Recall, and F-measures

  • Precision: exactness – what % of the tuples that the classifier labeled as positive are actually positive?
    – Precision = TP / (TP + FP)
  • Recall: completeness – what % of the positive tuples did the classifier label as positive?
    – Recall = TP / (TP + FN)
  • A perfect score is 1.0.
  • There is an inverse relationship between precision and recall.
  • F measure (F1 or F-score): harmonic mean of precision and recall.
  • Fβ: weighted measure of precision and recall.
    – β = 0.5 weighs precision twice the weight of recall.


Classifier Evaluation Metrics: Example

  Actual class \ Predicted class | cancer = yes | cancer = no | Total | Recognition (%)
  cancer = yes                   | 90           | 210         | 300   | 30.00 (sensitivity, TP/P)
  cancer = no                    | 140          | 9560        | 9700  | 98.56 (specificity, TN/N)
  Total                          | 230          | 9770        | 10000 | 96.50 (accuracy, (TP+TN)/All)

  • Precision = 90/230 = 39.13%
  • Recall = 90/300 = 30.00%
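The example's numbers follow directly from the confusion matrix; note that (TP + TN)/All = 9650/10000 = 96.5%. A short computation:

```python
# Cancer example above: TP=90, FN=210, FP=140, TN=9560.
TP, FN, FP, TN = 90, 210, 140, 9560
P, N = TP + FN, FP + TN
All = P + N

accuracy    = (TP + TN) / All        # 0.965
error_rate  = (FP + FN) / All        # 0.035
sensitivity = TP / P                 # 90/300 = 0.30
specificity = TN / N                 # 9560/9700 ≈ 0.9856
precision   = TP / (TP + FP)         # 90/230 ≈ 0.3913
recall      = TP / (TP + FN)         # same as sensitivity here
f1 = 2 * precision * recall / (precision + recall)

assert round(precision, 4) == 0.3913
assert recall == 0.30
```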


Security Tactics


Characterizing Security

CIA:

  • Confidentiality – data or services are protected from unauthorized access.
  • Integrity – data or services are not subject to unauthorized manipulation.
  • Availability – the system will be available for legitimate use.

Other characteristics used to support CIA:

  • Authentication – verifies the identities of the parties to a transaction and checks that they are truly who they claim to be.
  • Nonrepudiation – guarantees that the sender of a message cannot later deny having sent the message, and that the recipient cannot deny having received it.
  • Authorization – grants a user the privileges to perform a task.


General Security Scenario

Threat modeling:

  • Source of stimulus – human or another system.
  • Stimulus – an attack: an unauthorized attempt to display, change, or delete data; access system services; reduce availability; or change system behavior.
  • Artifact – target of the attack: services of the system, data, or data produced or consumed by the system.
  • Environment – online/offline, connected/disconnected, behind a firewall/on an open network.
  • Response – transactions are carried out such that data/services are not manipulated; parties are identified; parties cannot repudiate their involvement; data, resources, and system services remain available for legitimate use.
  • Response measure – how much of the system is compromised, time elapsed before the attack is detected, number of attacks, recovery time, amount of data vulnerable to attack.


Security Tactics


Detect Attacks

  • Detect intrusion – compare network traffic or service request patterns within a system to a set of signatures or known patterns of malicious behavior (or to normal behavior). Examine protocol, TCP flags, payload sizes, applications, source or destination addresses, port numbers.
    – TCP flags indicate what the sending TCP entity wants the receiving entity to do, e.g., SYN.
  • Detect service denial – compare the pattern of network traffic coming into a system to historic profiles of known denial-of-service attacks (or to normal behavior).
  • Verify message integrity – checksums, CRCs, hash values.
  • Detect message delays – detect "man in the middle" attacks.
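"Verify message integrity" can be sketched with a keyed hash (HMAC): the receiver recomputes the tag and compares it in constant time. The key and messages below are illustrative.

```python
import hashlib
import hmac

key = b"shared-secret"

def tag(message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    # compare_digest avoids timing side channels.
    return hmac.compare_digest(tag(message), received_tag)

msg = b"transfer $10 to account 42"
t = tag(msg)
assert verify(msg, t)                            # intact message passes
assert not verify(b"transfer $9999 to 666", t)   # tampered message fails
```

A plain checksum or CRC detects accidental corruption; the keyed hash additionally resists deliberate tampering.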


Resist Attacks

  • Identify actors – identify the source of the external stimulus: user ID, IP addresses, protocols, ports, etc.
  • Authenticate actors – ensure an actor is who they say they are: passwords, digital certificates, biometrics, etc.
  • Authorize actors – ensure rights. Provide access control mechanisms, e.g., roles/classes of actors, need to know.
  • Limit access – network segmentation, DMZ.
  • Limit exposure – minimize the attack surface, e.g., limit available ports, users, network access, etc.
  • Encrypt data – protect data from unauthorized access (confidentiality).
  • Separate entities – physically, or by roles.
  • Change default settings – force users to change settings periodically.


React to Attacks

Single-firewall DMZ:

  • A physical or logical subnet that contains and exposes an organization's external-facing services to a larger, untrusted network, usually the Internet.
  • Adds an additional layer of security to an organization's local area network (LAN); an external attacker has direct access only to equipment in the DMZ, rather than to any other part of the network.


React to Attacks

  • Revoke access – limit access if under attack
  • Lock computer – repeated login failures
  • Inform actors – notify relevant actors


Recover from Attacks

  • Maintain audit trail
  • Restore system – see availability


Testability Tactics


Control and Observe

  • Specialized interfaces – for testing: unit/component test, module test, simulation.
  • Record/playback – faults can be difficult to re-create.
  • Localize state storage – store state locally.
    – To start a system, subsystem, or module in an arbitrary state for a test, it is most convenient if that state is stored in a single place.
  • Abstract data sources – control the input.
  • Sandbox – isolate an instance of the system from the real world to enable experimentation.
  • Executable assertions – assertions placed in the code fire when the program enters a faulty state.
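The "abstract data sources" tactic in code: the component reads through an abstract source, so a test can substitute a deterministic one. The class names are illustrative.

```python
class SensorSource:
    """Production source; would read real hardware (stubbed here)."""
    def read(self):
        raise NotImplementedError

class FakeSource(SensorSource):
    """Test double: deterministic input makes the component testable."""
    def __init__(self, values):
        self._values = iter(values)
    def read(self):
        return next(self._values)

def average_of(source, n):
    # Component under test: depends only on the abstract source.
    return sum(source.read() for _ in range(n)) / n

assert average_of(FakeSource([2.0, 4.0, 6.0]), 3) == 4.0
```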


Limit Complexity

  • Limit structural complexity – avoid cyclic dependencies, encapsulate dependencies, reduce dependencies, etc.
  • Limit non-determinism – limit behavioral complexity, e.g., uncontrolled parallelism in multithreaded systems; use record/playback.
    – When you're designing something, always ask "how am I going to test this?"


Usability

Characteristics:

  • Learning system features
  • Using the system efficiently
  • Minimizing the impact of errors
  • Adapting the system to user needs
  • Increasing confidence and satisfaction

Note: Need to define usability requirements with respect to user type.


Usability – General Scenario

  • Source – end user.
  • Stimulus – the user tries to use the system efficiently, learn to use the system, minimize the impact of errors, adapt the system, or configure the system.
  • Environment – runtime or configuration time.
  • Artifacts – the system, or the system components the user is interacting with (presentation).
  • Response – provide the user with features, or anticipate needs.
  • Response measure – task time, learning time, retention, number of steps, number of errors, number of tasks, user satisfaction, gain in user knowledge, ratio of successful to unsuccessful operations, time lost due to errors.


Usability Tactics


Tactics – Support User Initiative

  • Usability is enhanced by giving the user feedback about what the system is doing, and by allowing the user to make appropriate responses.
  • Provide the user with the locus of control.

Support user initiative:

  • Location – give a sense of location, which step the user is on, etc.
  • Cancel – terminate the operation, release resources, restore state.
  • Undo – maintain state history; provide recovery from errors/decisions.
  • Pause/resume – for long-running operations; may require temporarily freeing resources.
  • Aggregate – aggregate repetitive operations.


Tactics – Support System Initiative

  • When the system takes the initiative, it must rely on a model of the user, the user's task, and the state of the system.
  • Need to identify the types of models the system uses to predict either its own behavior or the user's intention.

Support system initiative:

  • Maintain task model – used to determine context so the system has a better idea of what the user is attempting to do and can provide assistance, e.g., capitalize the first word of a sentence.
  • Maintain user model – model the user's knowledge of the system: the user's behavior, preferences, history, e.g., user interface customization.
  • Maintain system model – the system maintains an explicit model of itself, used to determine expected system behavior so that appropriate feedback can be given to the user, e.g., a progress bar.


Usability – Performance Centered Design

  • Model the task as work that needs to be accomplished.
  • Support the user through each step of the task, providing just the right amount of support/information needed.
  • The user should maintain the locus of control, know what step they're in, and be able to undo, repeat, terminate, etc.
  • Minimize cognitive load – minimize the cognitive burden of completing the task, e.g., the user should not have to remember things.
  • Characterize the user.
  • Natural interfaces; common metaphors for natural mapping.


Other Quality Attributes

  • Variability – a form of modifiability: the ability of a system to support the production of a set of variants in a preplanned fashion.
  • Portability – a form of modifiability: the ability to run on other platforms.
  • Development distributability – support for distributed software development.
  • Scalability – horizontal and vertical scalability, elasticity in cloud environments. Scenarios deal with adding/removing resources.


Other Quality Attributes

  • Deployability – how an executable arrives at a host platform.
  • Mobility – support for mobile use: size, weight, display, battery life, bandwidth, etc.; mobile context.
  • Monitorability – the ability to monitor the system while it is executing.
  • Safety – avoid entering states that can result in damage.
  • Accuracy, recall, precision, error rate – effectiveness measures of performance.


Relating Tactics to Architectural Patterns

  • Select patterns to realize one or more tactics.
  • Each pattern implements multiple tactics, whether desired or not.
  • Example: Active Object design pattern
    – Decouples method execution from method invocation to enhance concurrency and simplify synchronized access to objects that reside in their own thread of control.


Active Object Design Pattern

  • Elements:
    – Proxy
    – Method Request
    – Activation List
    – Scheduler
    – Servant
    – Future
  • Tactics:
    – Information hiding (modifiability)
    – Intermediary (modifiability)
    – Binding time (modifiability)
    – Scheduling policy (performance)


Communication Gateway


Problem

1. Methods invoked concurrently on an object should not block the entire process, to avoid degrading the QoS of other methods.
2. Synchronized access to shared objects should be simple (not requiring mutexes or low-level programming).
3. Applications should be designed to transparently leverage the parallelism available on a HW/SW platform.


Solution

  • For each object that requires concurrent execution, decouple method invocation on the object from method execution.
    – Design the decoupling so the client thread appears to invoke an ordinary method.

Components

  • Proxy – represents the interface of the object.
    – Runs in the client thread.
  • Servant – provides the object implementation (cf. impl classes in J2EE).
    – Runs in a thread on the server.
  • The Proxy transforms the client's method invocation into a Method Request, which is stored in an Activation Queue by a Scheduler.
  • The Scheduler runs concurrently in the same thread as the Servant, dequeuing Method Requests from the Activation Queue when they become runnable and dispatching them on the Servant that implements the Active Object.
  • Clients can obtain the results of a method's execution via the Future returned by the Proxy.
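A minimal Active Object sketch, assuming Python threads: the proxy method queues a Method Request and returns a Future, while a scheduler thread dequeues requests and runs them on the servant. The class names follow the pattern's roles, but the code itself is illustrative, not a canonical implementation.

```python
import queue
import threading
from concurrent.futures import Future

class Servant:
    """The actual implementation, confined to the scheduler's thread."""
    def add(self, a, b):
        return a + b

class ActiveObject:
    """Proxy + Scheduler + Activation Queue in one small class."""
    def __init__(self, servant):
        self.servant = servant
        self.activation_queue = queue.Queue()
        threading.Thread(target=self._scheduler, daemon=True).start()

    def _scheduler(self):
        # Runs in the servant's thread, dispatching queued requests in order.
        while True:
            fut, method, args = self.activation_queue.get()
            fut.set_result(getattr(self.servant, method)(*args))

    def add(self, a, b):
        # Proxy method: returns immediately with a Future.
        fut = Future()
        self.activation_queue.put((fut, "add", (a, b)))
        return fut

obj = ActiveObject(Servant())
future = obj.add(2, 3)                 # invocation decoupled from execution
assert future.result(timeout=1) == 5   # client rendezvous via the Future
```

The client never touches a lock: synchronization is hidden inside the queue and the single scheduler thread, which is exactly the simplification the pattern promises.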


Active Object Pattern


Dynamics

  1. Method request construction and scheduling
  2. Method execution
  3. Completion


Implementation

  1. Servant
  2. Proxy and Method Requests
  3. Activation Queue
  4. Scheduler
  5. Determine rendezvous and return-value policies:
     1. Immediate evaluation
     2. Deferred evaluation

Variants

  • Integrated scheduler
    – The roles of Proxy and Servant can be integrated into the Scheduler component, though Servants still execute in separate threads.
  • Message passing
    – Remove the Proxy and Servant and use direct message passing between the client thread and the Scheduler thread.
  • Polymorphic futures
    – Write-once, read-many synchronization.
  • Distributed Active Object.
  • Thread pool
    – Support multiple Servants per Active Object.


Known Uses

  • CORBA ORBs
  • Marquette Medical Systems DO's
  • Enterprise Java Beans

Consequences

  • Enhances application concurrency.
  • Simplifies synchronization complexity.
  • Transparently leverages available parallelism.
  • Method execution order can differ from method invocation order.
  • Eventual consistency.
  • Performance overhead.
  • Complicated debugging.
