


Last updated 16.05.17

Avoiding distrust in e-government

Kjell Jørgen Hole Simula@UiB

We study e-government services and develop a trust model to illustrate how incidents affecting few users can cause pervasive distrust, and why it is hard to regain lost trust. We then discuss how to build and maintain trust when the high complexity of e-government infrastructures makes incidents inevitable.

Overview

❖ Introduction ❖ Defining trust ❖ Information infrastructures ❖ Explanatory trust model ❖ Trust is fragile ❖ Tipping points ❖ Distrust is robust ❖ Building and preserving trust 2

Introduction

3


E-government in Norway

❖ The Norwegian government is developing e-services for citizens and companies

❖ applications, invoicing, appointments, and various types of reports will be handled electronically

❖ sensitive information such as health and tax data is to be sent over the Internet to personal devices

4

Norwegian citizens now receive information from the government in personal digital “mail boxes.”

Trust modeling

❖ We’ll model a population of users that influence each other’s level of trust in e-government services

❖ The model explains why

❖ trust decreases rapidly when distrust starts to spread

❖ it is hard to determine which incidents will lead to widespread distrust

❖ it is difficult to create pervasive trust when there is much distrust

5

While the model provides one set of explanations, other explanations are also possible.

Defining trust

6


Trust as a computational construct

❖ Here, an individual’s trust in an entity is given by three mutually exclusive states

❖ trust ❖ mistrust ❖ distrust

7

The literature contains many different definitions of ‘trust’ because the concept is both context- and agent-dependent.

State of trust

❖ An individual that trusts an entity has a positive expectation of the entity’s future behavior

❖ The individual will cooperate with the entity even though there is a possibility that the entity will misbehave and inflict cost or damage

❖ The entity gains the individual’s trust over time through repeated actions benefiting the individual

8

State of mistrust

❖ An individual harboring mistrust believes the uncertainty is too large to expect a particular behavior from an entity

❖ a citizen may believe in a government’s desire to deliver secure services, but have no confidence in the government’s ability to deliver

9


State of distrust

❖ An individual distrusting an entity believes it will deliberately act against her in a given situation

❖ a distrusting citizen may think that the government uses collected information to spy on individuals

10

Trust varies over time

11

Figure: an individual’s development of trust over time, with trust level on the vertical axis, time on the horizontal axis, and cooperation and noncooperation thresholds marking areas A and B.

Trust is situational and varies over time. In area A, the individual trusts an entity enough to cooperate. In area B, the individual actively distrusts the entity and will take action against it, convinced that the entity will respond in turn. Between the cooperation and noncooperation thresholds lies mistrust, in which the individual believes in the entity’s intent to deliver a certain service quality but is not certain of the entity’s ability to do so.

Mutual influence

❖ Since most users do not fully understand the reasons for incidents in national computer systems, they will seek advice from others

❖ The users’ trust is particularly influenced by the opinions of family, friends, and co-workers

❖ especially when they all start to discuss incidents widely reported by the media

12


Population has degrees of trust

❖ Note that the whole population has different degrees of trust, mistrust, and distrust at the same time, measured by the fractions of individuals in each of the three states

13

The three fractions of trust, mistrust, and distrust sum to one.

Information infrastructures

14

Infrastructure definition

❖ A national information infrastructure is a socio-technological system that consists of

❖ stakeholders, ❖ networked computer systems, ❖ security and privacy policies, and ❖ threats such as equipment failure, extreme weather, hacking, and sabotage

15


LHR events

❖ Incidents occur in infrastructures all the time, but users do not detect most of the events because automated mechanisms and system operators limit their impacts

❖ From the users’ point of view, infrastructures tend to be stable over long periods, punctuated by large-impact, hard-to-predict, and rare (LHR) events

16

LHR events are outliers that are also referred to as black or gray swans. See lecture on antifragile ICT systems to learn more about LHR events.

Complex adaptive system

❖ We view a national information infrastructure as a complex adaptive system and LHR events as surprising and extreme global behavior

17

The complexity is due to the many interactions between the users and the infrastructure, the large amount of communication between the subsystems, the influence of changing policies and threats, and the adaptation of stakeholders and infrastructure to internal and external changes.

LHR incidents are inevitable

❖ Because national information infrastructures are complex adaptive systems, LHR incidents will occur no matter the quality of the risk management

❖ Next, we develop a model to study how LHR incidents affect a population’s trust in information infrastructures, especially e-government services

❖ we’ll focus on incidents leading to widespread distrust

18

While improved risk management can assess and mitigate more incidents, incidents will still occur because an infrastructure has too many dynamic interactions for humans to enumerate all possible rare and extreme behaviors of the system.


Explanatory trust model

19

Simple trust model

❖ Two-dimensional, discrete-time cellular automaton

❖ individuals are represented by 10,000 patches on a square that wraps around at the edges

❖ synchronous (deterministic) updates of trust states

Simple, generic trust model shaped as a doughnut.

States of trust

❖ An individual’s state of trust is given by the color of the patch

❖ trust is green ❖ mistrust is yellow ❖ distrust is red

21


Realization of mutual influence

❖ At each time step, the state of an individual is updated based on its own state and the states of its eight neighbors

22

The figure shows the so-called Moore neighborhood.

Set of update rules

(used in examples)

(1) A green patch changes to yellow when there are at most four green neighbors
(2) A yellow patch turns red when there are at most three green neighbors
(3) A red patch becomes yellow when at least seven neighbors are green
(4) A yellow patch turns green if at least six neighbors are green

23

Alternative sets of update rules

24

Change           Threshold               Rule sets 1–14
green → yellow   max green neighbors     3 3 3 3 3 3 3 3 4 4 4 4 4 4
yellow → red     max green neighbors     1 1 1 1 2 2 2 2 1 1 2 2 3 3
red → yellow     min green neighbors     6 6 7 7 6 6 7 7 6 7 6 7 6 7
yellow → green   min green neighbors     5 6 5 6 5 6 5 6 6 6 6 6 6 6

Each column defines a set of four update rules. The first two entries in a column give the maximum number of green neighbors causing changes towards distrust, while the last two entries give the minimum number of green neighbors needed to change away from distrust. The last column corresponds to the rule set from the previous page.
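The rule set used in the examples can be sketched as a small cellular-automaton simulation. This is a minimal illustration, not the author's implementation: the 100×100 grid side, the numeric state encoding, and all function names are assumptions; only the 10,000 patches, the wraparound edges, the Moore neighborhood, and rules (1)–(4) come from the slides.

```python
import random

N = 100  # a 100 x 100 grid gives the 10,000 patches from the slides
GREEN, YELLOW, RED = 0, 1, 2  # trust, mistrust, distrust

def init_grid(mistrust_fraction, rng):
    """Start a run: a random fraction of yellow (mistrust) patches, rest green."""
    return [[YELLOW if rng.random() < mistrust_fraction else GREEN
             for _ in range(N)] for _ in range(N)]

def green_neighbors(grid, i, j):
    """Count green patches in the Moore neighborhood (8 cells, wrapping edges)."""
    count = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0) and grid[(i + di) % N][(j + dj) % N] == GREEN:
                count += 1
    return count

def update(grid):
    """One synchronous, deterministic step applying rules (1)-(4)."""
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            g = green_neighbors(grid, i, j)
            s = grid[i][j]
            if s == GREEN and g <= 4:      # (1) green -> yellow
                new[i][j] = YELLOW
            elif s == YELLOW and g <= 3:   # (2) yellow -> red
                new[i][j] = RED
            elif s == RED and g >= 7:      # (3) red -> yellow
                new[i][j] = YELLOW
            elif s == YELLOW and g >= 6:   # (4) yellow -> green
                new[i][j] = GREEN
    return new
```

Iterating `update` from an initial grid reproduces the kind of dynamics discussed in the following slides: isolated yellow patches revert to green, while enough initial yellow lets red appear and spread.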


Properties of rule sets (1)

❖ An individual with trust goes through a period of mistrust before developing distrust

❖ An individual with distrust develops mistrust before trust

25

The rule sets all have the same properties. These properties were selected simply because they reflect reasonable assumptions about trust.

Properties of rule sets (2)

❖ Individuals that have trusted an entity for a long time are reluctant to mistrust or distrust the entity

❖ Distrusting individuals are even more reluctant to ever again trust an entity that has violated their trust and caused pain or damage

26

Properties of rule sets (3)

❖ Since mistrust is believed to be a less stable state than distrust, an individual harboring mistrust develops distrust when surrounded by much mistrust

27

Users harboring mistrust will (directly or indirectly) experience trust-reducing events in the future because incidents are inevitable in complex adaptive systems. Hence, it is reasonable to believe that users with mistrust will develop distrust when they are surrounded by enough mistrust.


Explanatory model

❖ The trust model is non-predictive in the sense that it cannot forecast a population’s trust in a real system

❖ However, it offers an explanation of how the degree of trust changes in a large community of users

28

See the lecture on antifragile ICT systems to better understand why it is, at best, very hard to predict extreme global behavior in complex adaptive systems.

Trust is fragile

29

Development of distrust

❖ We first study how a high degree of trust can turn into a high degree of distrust

❖ We concentrate on incidents reported in the media creating some percentage of initial mistrust

❖ At the start of a model run, a selectable percentage of the patches (chosen uniformly at random) are yellow and the rest are green

30

While most incidents go unnoticed by the media, a few incidents are widely reported. Not all reported events are very serious from a technical point of view, but extensive media coverage can still create mistrust among a significant fraction of users.


31

Time step t=0, 27% initial mistrust (no distrust)

Example of model run.

32

t=1, distrust starting to appear

33

t=3, distrust is spreading


34

t=20

35

t=40

36

t=60


37

t=80

38

t=100

39

t=165, 100% final distrust


Initial mistrust → final distrust

❖ Average fraction of final distrust as a function of the initial fraction of mistrust. A transition starts around 15%

40

Figure: resulting fraction of distrust (0–100%) as a function of the initial fraction of mistrust (10–34%).

Average fraction of distrust as a function of the initial fraction of mistrust in a population of 10,000 individuals. Each bar in the plot is an average over 100 runs with the same initial mistrust fraction. As long as the initial density of mistrust is less than 15 percent, the resulting distrust fraction is less than 1 percent on average. Above that, however, a transition to distrust occurs and grows rapidly, reaching 99 percent by a mistrust fraction of 28 percent. Experiments with the additional 13 rule sets revealed similar sharp transitions to massive distrust starting at fairly low percentages of initial mistrust (between 16 and 33 percent).

Model implication

❖ The users’ trust in an e-government system is fragile because an incident affecting few users can create massive distrust when extensive media reporting creates enough initial mistrust

❖ Of course, an incident affecting many users directly may create enough initial mistrust without any “help” from the media

41

Initial tipping points

42

See P. J. Lamberson and S. E. Page, “Tipping Points,” for a thorough discussion of different types of tipping points. We’ll concentrate on ‘initial tipping points.’


Tipping point defined

❖ Let Wt(x0) denote the value of a deterministic process at time t=0,1,…, where x0 is an input at t=0

❖ Let δ denote a small positive constant

❖ The process Wt(x0) has an (initial) tipping point at x0=a when the values of Wt(a–δ) and Wt(a+δ) are very different for t≥T, where T is a positive constant

If gradually adjusting the initial state causes a discontinuous jump in the state of a system at some future time, then the system has an initial tipping point. More generally, a tipping point (or critical point) is a narrow transition domain separating two well-defined phases.
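As a toy illustration of this definition (separate from the trust model itself), consider iterating a simple positive-feedback map; this example and its function name are assumptions added for illustration. The map has an initial tipping point at a = 0.5:

```python
def W(x0, t):
    """Value at time t of a deterministic process started at input x0 at t = 0."""
    x = x0
    for _ in range(t):
        x = x * x / (x * x + (1 - x) * (1 - x))  # positive feedback toward 0 or 1
    return x

# With delta = 0.02 and T = 20, the futures on either side of a = 0.5 rip apart:
# W(0.5 - delta, 20) ends near 0, while W(0.5 + delta, 20) ends near 1.
```

This matches the definition: for small δ, the values Wt(a–δ) and Wt(a+δ) are very different for all t ≥ T.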

Explanation

❖ What the definition means is that small changes in x0 near the threshold point cause the future values of Wt(x0) to rip apart

44


Trust model has tipping point

❖ The input to the model is the initial fraction of mistrust

❖ As this fraction goes from 14% to 28%, the expected fraction of distrust increases from nearly 0% to nearly 100%

❖ There is a tipping point at 21% initial mistrust

W(21%–7% mistrust) ≈ 0% distrust

W(21%+7% mistrust) ≈ 100% distrust

45

Small changes to the initial percentage of mistrust around 21% lead to large changes in the expected value of the final percentage of distrust.


Types of tips

❖ Direct tip—occurs when a gradual change in a variable results in a discontinuous jump in future values of that same variable

❖ Contextual tip—occurs when a gradual change in a variable x causes a discontinuous jump in future values of some other variable y

46

Direct tips change where a system will go. Contextual tips change the set of states where the system can go.

Trust model has a contextual tip

❖ The trust model’s variable of interest is the expected percentage of final distrust

❖ The expected distrust is determined by the percentage of initial mistrust

❖ the fraction of initial mistrust is again determined by the amount of media reporting

47

Positive feedback vs. tipping point

❖ Many apparent tipping points are not tips but rather inevitable upticks in adoption rates driven by positive feedbacks

❖ Initial tip in the trust model:

❖ the tipping point is given by a value of the initial percentage of mistrust

❖ the subsequent rise in the spread of distrust is merely the inevitable spread caused by positive feedback

48

“Visual tipping points” can be misleading.


Distrust is robust

49

Development of trust

❖ Next, we determine when a model run with an initial percentage of distrust stabilizes at a high percentage of trust

❖ At the start of a model run, a selectable percentage of the patches (chosen uniformly at random) are green and the rest are red

50 51

Time step t=0, 50% initial trust and 50% initial distrust


52

t=1, 18% trust, 34% mistrust, and 48% distrust

53

t=2, 6% trust, 14% mistrust, and 80% distrust

54

t=3, 2% trust, 4% mistrust, and 94% distrust


55

t=4, 1% trust, 1% mistrust, and 98% distrust

56

t=5, 99% distrust

57

t=6, 99.8% distrust


58

t=7, no trust!

59

t=8, 100% final distrust

Comment on model run

❖ Even with 50% initial trust, the model reverts to 100% distrust, indicating that it is hard to build pervasive trust when there is much distrust

60


61

Time step t=0, 84% initial trust and 16% initial distrust

62

t=1, mistrust occurs

63

t=2


64

t=6

65

t=10

66

t=20


67

t=30

68

t=40

69

t=60


70

t=69, model stabilizes

Comment on model run

❖ A very large initial percentage of trust is needed to prevent distrust from spreading

71

Initial trust → final trust

72

Figure: resulting fraction of trust (0–100%) as a function of the initial fraction of trust (76–100%).

❖ Average fraction of final trust as a function of the initial fraction of trust. A transition starts around 80%.

Average fraction of trust as a function of the initial fraction of trust in a population of 10,000 individuals. A transition from distrust (0 percent resulting trust) to trust does not start until the initial trust fraction is around 80 percent, which demonstrates the robustness of distrust.


Model implication

❖ The model indicates (see plot) that it is very hard to create widespread trust when there is much initial distrust

❖ Since individuals distrusting an entity are very reluctant to once more trust the same entity, it will take a sustained effort over a long period to rebuild trust

❖ there is no guarantee that such an effort will succeed

73

Different sets of update rules

❖ All fourteen sets of update rules (see earlier table) result in the same behaviors

❖ trust is fragile ❖ distrust is robust

74

A commercial entity whose customers develop a large degree of distrust is unlikely to recover. A governmental entity in the same situation may “force” citizens to use the systems. The long-term consequences of such behavior are outside the scope of this lecture.

Building and preserving trust

75


Why build and preserve trust?

❖ Since trust is fragile and distrust is robust, it is important to develop infrastructures that facilitate the creation of trust and avoid the formation of distrust

❖ We discuss how to build and maintain a population’s trust in an e-government infrastructure offering many services

76

Suggested methods

  • 1. Employ user-focused and iterative development
  • 2. Deploy cloud-based services
  • 3. Prepare alternative services
  • 4. Make digital services voluntary
  • 5. Build a good track record
  • 6. Learn from small failures

77

  • 1. Software development

❖ The UK Cabinet Office has a team, called the Government Digital Service (GDS), tasked with transforming government digital services

❖ GDS has moved the web presence of all UK government departments to www.gov.uk

❖ this web platform publishes government information and provides access to online transactional services

78

https://gds.blog.gov.uk


GDS insight—determine user needs

❖ To build trusted e-government services, GDS has found that developers first need to thoroughly understand the users’ needs

❖ Rather than making assumptions, developers must analyze real data from similar services and interview future users to determine their needs

❖ To maintain trust, developers need to revisit services and make alterations as users’ needs change over time

79

GDS insight—iterative development

❖ According to GDS, developers should start small and iterate often

❖ Frequent iterations reduce the risk of big failures and turn small failures into lessons

❖ Release Minimum Viable Products early, test them with real users, and move from Alpha to Beta releases while adding features and refinements based on user feedback

80

Interesting talk and accompanying slides: http://www.youtube.com/watch?v=eDBhUP3i5hM&t=0m7s http://www.slideshare.net/DigEngHMG/agile-21113-1

Why iterative development?

❖ Viewing e-government infrastructures as complex adaptive systems partly explains the success of GDS’ development methodology

81

Since it is very hard to predict the long-term global behavior of complex systems, iterative and test-driven approaches are needed to ensure sufficient scalability, stability, and robustness of new services.


GDS insight—keep it simple

❖ To further build and maintain trust, it is vital to make services simple to use for diverse groups of users with varying technological skills

❖ If an individual does not understand how to use a service because it is too complicated, then the individual is likely to lose trust in the service

❖ To keep services simple, developers need to prioritize which users’ needs they want to realize

82

  • 2. Traditional systems vs. clouds

❖ There are problems with tightly dependent e-government services running on the same platform:

❖ failure of one digital service tends to affect other services

❖ unplanned downtime during a year can be significant

83

Two services are tightly dependent if the functionality of one service is badly affected when the other service misbehaves or goes down.

Advantages of cloud (1)

❖ A cloud-computing platform facilitates loosely dependent service-oriented architectures with graceful degradation

❖ the platform can handle large spikes in the number of users accessing a service

❖ virtual servers can easily be replaced when there are problems

84

Two services are loosely dependent if most of the functionality of a service is preserved when the other service malfunctions or goes down.


Advantages of cloud (2)

❖ A cloud has multiple data centers in different physical locations connected by redundant networks

❖ If a whole data center goes down, then a service can be moved to another data center, assuming that the necessary data is stored in multiple locations

❖ While different services may depend on the same data, the services themselves should fail nearly independently of each other

85

Reduced risk of losing trust

❖ A government faces a large loss of trust when all e-government services fail simultaneously due to problems with the underlying platform

❖ A cloud-based solution is more robust to system failures because the consequences of local failures are unlikely to spread

86

A government may decide to run its own private cloud for political and legal reasons.

  • 3. Alternative services (1)

❖ Whether or not cloud computing is used, there is a small possibility of a rare, catastrophic incident taking down a whole platform and all its services for a long time

❖ If there are no alternatives to the services, then a long simultaneous failure of all services is intolerable to a government because distrust will spread to many citizens

87


Alternative services (2)

❖ It is a good idea to have alternative solutions to the most important services to further reduce the possibility of mistrust and distrust spreading in the population

❖ A government could, for example, run its services in a private cloud and use another cloud in an emergency

88

  • 4. Voluntary digital services

❖ A government can make it difficult for its citizens to continue using paper-based government services because the goal is to free up public sector resources

❖ It may even be tempting for a government to create a legal obligation to use e-government services to ensure large resource savings

89

Mandatory services create mistrust

❖ Unfortunately, mandatory use of digital services is likely to create mistrust, or even distrust, because

❖ citizens have little or no control over a government’s actions, ❖ some citizens lack the computer skills needed to use the services, and ❖ others have disabilities forcing them to depend on help from others

90


Voluntary services

❖ It should be possible to opt out of any e-government service without undue difficulty to avoid mistrust and distrust among citizens

91

  • 5. Track record

❖ It is counterproductive for a government to downplay an infrastructure’s high complexity, because this complexity makes incidents inevitable in the long run

❖ It is a particularly bad policy to rely on “spin control” after incidents have occurred

92

Build a good track record

❖ A government should instead gain trust by creating a good track record from the start of a new service

❖ Disseminating practical information to users via the web, social media, and the press is one way to build trust

93


Fix problems

❖ A government must demonstrate competence and quickly fix problems when a large incident occurs

❖ If the government has a good track record, then users are quite forgiving when they are convinced that an incident was caused by a technical problem

94

Clarify intentions

❖ Since the loss of trust can be huge when users suspect malicious intent, a government must clarify its intentions, especially how it will and will not use collected personal information, to prevent rapid deterioration of trust during an incident

95

  • 6. Learn from failures

❖ Netflix has tools that introduce small failures into its cloud-based subscription service to learn how to make it increasingly robust to large failures

❖ Hystrix ❖ Chaos Monkey ❖ Latency Monkey

96

techblog.netflix.com


Hystrix

❖ Hystrix utilizes a “circuit-breaker” method to shut down requests to system services if their latencies or numbers of failures become too large

❖ The tool supports graceful degradation by isolating failures to the affected services

97
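The circuit-breaker idea can be sketched as follows. This is a generic, minimal sketch of the pattern, not Hystrix’s actual API (Hystrix is a Java library); the class name, parameters, and thresholds are illustrative assumptions.

```python
import time

class CircuitBreaker:
    """Open the circuit after repeated failures, serve a fallback while open,
    and allow a trial request after a reset timeout (half-open state)."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, service, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return fallback()  # circuit open: degrade gracefully
            self.opened_at = None  # half-open: let one trial request through
            self.failures = 0
        try:
            result = service()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback()
        self.failures = 0
        return result
```

Wrapping a failing service behind a breaker like this isolates its problems instead of letting every caller hang on it, which is the graceful degradation the slide describes.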

Chaos Monkey

❖ The tool Chaos Monkey induces failures by disabling random production instances

❖ The tool ensures robustness to failures because absence of robustness will cause a service to fail early, thus forcing Netflix’s developers to improve the robustness

98
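The essence of such a failure drill can be sketched as follows; the function name, the `serve` health-check callback, and the kill fraction are illustrative assumptions, not Netflix’s implementation.

```python
import random

def chaos_drill(instances, serve, rng, kill_fraction=0.1):
    """Disable a random subset of production instances, then check that the
    surviving fleet can still serve requests (Chaos Monkey-style drill)."""
    n_kill = max(1, int(len(instances) * kill_fraction))
    victims = set(rng.sample(instances, n_kill))
    survivors = [inst for inst in instances if inst not in victims]
    # A failed check exposes missing redundancy early, while the blast
    # radius is still small.
    return serve(survivors), survivors
```

Running such drills regularly, as the slide argues, causes under-provisioned services to fail early and cheaply rather than late and catastrophically.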

Latency Monkey

❖ Latency Monkey introduces random latencies between services to simulate network degradation and to ensure that services are robust to latency spikes and other networking issues

99


Relevance to e-government

❖ Since it is better to learn from frequent small failures than to wait for a huge failure before improving robustness, similar tools should be developed to increase the robustness of e-government systems in the cloud

100

References

101

❖ L. H. Nestås and K. J. Hole, “Building and maintaining trust in Internet voting,” IEEE Computer, vol. 45, no. 5, 2012, pp. 74–80

❖ K. J. Hole, “Management of hidden risks,” IEEE Computer, vol. 46, no. 1, 2013, pp. 65–70

❖ P. J. Lamberson and S. E. Page, “Tipping points,” SFI working paper: 2012–002, 2012; www.santafe.edu/media/workingpapers/12-02-002.pdf

102