Complex collective choices, Luigi Marengo, Dept. of Management (PowerPoint PPT presentation)



SLIDE 1

Complex collective choices

Luigi Marengo

  • 1Dept. of Management, LUISS University, Roma, lmarengo@luiss.it

Based on joint work with G. Amendola, G.Dosi, C. Pasquali and S. Settepanella

SLIDE 2

Complex collective decisions

◮ we consider “complex” multidimensional decisions, in the sense that:
  ◮ they involve several items (features)
  ◮ there are non-separabilities and non-monotonicities (interdependencies) among such items

SLIDE 3

A simple example: “What shall we do tonight?”

◮ C = {movie, theater, restaurant, stay home, . . . }
◮ the object “going to the movies” is defined by:
  ◮ with whom
  ◮ which movie
  ◮ which theater
  ◮ what time
  ◮ . . .
◮ the object “stay home” is defined by:
  ◮ with whom
  ◮ to do what
    ◮ e.g. watch TV, or have a drink, put on a nice record and see what happens . . .
◮ likewise for the other objects: which show, which movie, what we eat, . . .

SLIDE 4

Some obvious non-standard properties

  • 1. objects typically do not partition the set of traits/features
  • 2. in general there are obvious non-separabilities and non-monotonicities (interdependencies) among traits

◮ e.g. I might prefer Françoise to Corrado as an instance of “with whom” if associated to “staying at home” and “tête-à-tête dinner”, but Corrado to Françoise as an instance of “with whom” if associated to “going to the football match” and “with ten more male friends”

SLIDE 5

The general question

◮ how does the aggregation of items/features into objects determine collective outcomes?

SLIDE 6

Two families of models

  • 1. a committee where a group of people choose (e.g. by pairwise majority voting) a value for all items according only to their preferences
  • 2. an organization where decision rights are divided and delegated to individual agents, and outcomes have some “objective” value

Notions of authority and power:

◮ in the committee model: power of object construction (putting items together to form an object of choice) and power of agenda
◮ in the organization model: power of allocating decisions (delegation) and power of vetoing and overruling decisions

SLIDE 7

The Committee Model I

◮ Choices are made over a set of N elements or features F = {f1, f2, . . . , fN}, each of which takes on a value out of a finite set of possibilities.
◮ Simplifying assumption: such a set is the same for all elements and contains two values labelled 0 and 1: fi ∈ {0, 1}.
◮ The space of possibilities is given by the 2^N possible choice configurations: X = {x1, x2, . . . , x_{2^N}}.
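As a small illustrative sketch (not the authors' code), the configuration space X can be enumerated directly; N is kept small here:

```python
from itertools import product

N = 3  # a small number of binary features, for illustration
# each configuration assigns 0 or 1 to every feature f_i
X = [bits for bits in product((0, 1), repeat=N)]

print(len(X))  # the space contains 2^N = 8 configurations
```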

SLIDE 8

The Model II

◮ There exist h individual agents A = {a1, a2, . . . , ah}, each characterized by a (weak) ordering on the set of choice configurations.
◮ We call this ranking agent k’s individual decision surface Ωk.

SLIDE 9

The Model III

◮ Given a status quo xi and an alternative xj, agents sincerely vote according to their preferences.
◮ A majority rule is used to aggregate their preferences: ℜ : (Ω1, Ω2, . . . , Ωh) → Ω.
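A minimal sketch of pairwise majority aggregation, assuming a hypothetical encoding of each decision surface Ωk as a best-first list of configurations (names are illustrative, not from the paper):

```python
def majority_prefers(xi, xj, rankings):
    """True if a strict majority of agents rank xi above xj.

    `rankings` holds one list per agent, most preferred
    configuration first (lower index = more preferred)."""
    votes = sum(1 for r in rankings if r.index(xi) < r.index(xj))
    return votes > len(rankings) / 2

# three agents voting over two configurations
rankings = [["01", "10"], ["01", "10"], ["10", "01"]]
print(majority_prefers("01", "10", rankings))  # True: 2 votes out of 3
```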

SLIDE 10

The Model IV

◮ Given an initial configuration and a social decision rule ℜ, this process defines a walk on the social decision surface which can either:

  • 1. end up on a social optimum, or
  • 2. cycle forever among a subset of alternatives.
SLIDE 11

Objects (Modules)

Let I = {1, 2, . . . , N} be the set of indexes. An object (decision module) is a subset Ci ⊆ I. The size of object Ci is its cardinality |Ci|. An object scheme is a set of modules C = {C1, C2, . . . , Ck} such that

C1 ∪ C2 ∪ . . . ∪ Ck = I

(. . . but not necessarily a partition)

SLIDE 12

Agendas

An agenda α = Cα1Cα2 . . . Cαk over the object set C is a permutation of the set of objects which states the order according to which objects are examined.

SLIDE 13

Voting procedure

We use the following algorithmic implementation of majority voting:

  • 1. repeat for all initial conditions x = x1, x2, . . . , x_{2^N}
  • 2. repeat for all objects Cαi = Cα1, Cα2, . . . , Cαk until a cycle or a local optimum is found;
  • 3. repeat for j = 1 to 2^|Cαi|:
    ◮ generate an object-configuration C^j_αi of object Cαi
    ◮ vote between x and x′ = C^j_αi ∨ x(C−αi)
    ◮ if x′ ℜ x then x′ becomes the new current configuration
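The inner loop (step 3) can be sketched as follows; this is an illustrative reimplementation under stated assumptions (rankings as best-first lists of tuples), not the paper's code:

```python
from itertools import product

def vote_on_object(x, module, rankings):
    """One pass of pairwise majority voting over a single object.

    x        -- current configuration (tuple of 0/1 values)
    module   -- indices of the features forming the object C_alpha_i
    rankings -- one best-first list of configurations per agent
    """
    current = x
    # generate every object-configuration, holding other features fixed
    for values in product((0, 1), repeat=len(module)):
        candidate = list(current)
        for idx, v in zip(sorted(module), values):
            candidate[idx] = v
        candidate = tuple(candidate)
        votes = sum(1 for r in rankings
                    if r.index(candidate) < r.index(current))
        if votes > len(rankings) / 2:  # x' R x: adopt the alternative
            current = candidate
    return current
```

Repeating this over all objects of the agenda, until no object yields a winning alternative or a previously visited configuration recurs, reproduces the walk described on the previous slides.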

SLIDE 14

Stopping rule

We consider two possibilities:

  • 1. objects which have already been settled cannot be re-examined
  • 2. objects which have already been settled can be re-examined if new social improvements have become possible

SLIDE 15

Walking on social decision surfaces

Given an object scheme C = {C1, C2, . . . , Ck}, we say that a configuration xi is a preferred neighbor of configuration xj with respect to an object Ch ∈ C if the following three conditions hold:

  • 1. xi ℜ xj
  • 2. x^i_ν = x^j_ν ∀ν ∉ Ch
  • 3. xi ≠ xj

We call H(x, Ci) the set of preferred neighbors of a configuration x for object Ci.

A path P(xi, C) from a configuration xi and for an object scheme C is a sequence, starting from xi, of preferred neighbors: P(xi, C) = xi, xi+1, xi+2, . . . with xi+m+1 ∈ H(xi+m, C). A configuration xj is reachable from another configuration xi for decomposition C if there exists a path P(xi, C) such that xj ∈ P(xi, C).

SLIDE 16

Social outcomes

◮ A configuration x is a local optimum for the decomposition scheme C if there does not exist a configuration y such that y ∈ H(x, C) and y ≻ℜ x.
◮ A cycle is a set X0 = {x^1_0, x^2_0, . . . , x^j_0} of configurations such that x^1_0 ≻ℜ x^2_0 ≻ℜ . . . ≻ℜ x^j_0 ≻ℜ x^1_0 and such that for all x ∈ X0, if x has a preferred neighbor y ∈ H(x, C) then necessarily y ∈ X0.
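The local-optimum definition can be checked mechanically; this sketch assumes the social relation ≻ℜ is supplied as a function `better(y, x)` (an illustrative stand-in, not part of the model):

```python
from itertools import product

def is_local_optimum(x, scheme, better):
    """True if no object in the scheme yields a neighbor y with
    y socially preferred to x (y differs from x only on features
    of one object and better(y, x) holds)."""
    for module in scheme:
        for values in product((0, 1), repeat=len(module)):
            y = list(x)
            for idx, v in zip(sorted(module), values):
                y[idx] = v
            y = tuple(y)
            if y != x and better(y, x):
                return False
    return True

# toy social relation: configurations with more 1s are preferred
better = lambda y, x: sum(y) > sum(x)
print(is_local_optimum((1, 1, 1), [[0], [1], [2]], better))  # True
print(is_local_optimum((0, 1, 1), [[0], [1], [2]], better))  # False
```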

SLIDE 17

The relevance of objects I

◮ Object construction mechanisms precede and constrain choices.
◮ Influence of the generative mechanism:
  • 1. it defines the sequence of voting;
  • 2. it defines which subset of alternatives undergoes examination.
SLIDE 18

The relevance of objects II

◮ Different sets of objects may generate different social outcomes.
◮ Social optima do – in general – change when objects are different, both because:
  • 1. the subset of generated alternatives is different (and some social optima may not belong to many of these subsets)
  • 2. the agenda is different (and this may determine different outcomes).
◮ Framing power therefore appears as a more general phenomenon than agenda power.

SLIDE 19

Results in a nutshell

◮ Under general conditions (notably if preferences are not fully separable) the answer to the previous question depends entirely upon the decision modules.
◮ We show algorithmically that, given a set of individual preferences:
  • 1. by appropriate modifications of the decision modules it is possible to obtain different social outcomes;
  • 2. cycles à la Condorcet-Arrow may also appear and disappear by appropriately modifying the decision modules;
  • 3. the median voter theorem is also dependent upon the set of alternatives (the median voter may be transformed into an outright loser).
◮ Trade-off decidability-manipulability: “finer” objects make cycles disappear and simplify the pairwise voting process, but generate many local optima (the social outcome will depend on the initial status quo).

SLIDE 20

Results I

◮ Social outcomes are, in general, dependent upon the objects scheme.
◮ Consider a very simple example in which 5 agents have a common most preferred choice.
◮ By appropriately modifying the objects scheme one can obtain different social outcomes or even the appearance/disappearance of intransitive limit cycles.

SLIDE 21

Results II

Rank  Agent1  Agent2  Agent3  Agent4  Agent5
1st   011     011     011     011     011
2nd   111     000     010     101     111
3rd   000     001     001     111     000
4th   010     110     101     110     010
5th   100     010     000     100     001
6th   110     111     110     001     101
7th   101     101     111     010     110
8th   001     100     100     000     100
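Encoding the table directly (a quick sanity check, not part of the model) confirms that 011 is every agent's top choice:

```python
# the five agents' orderings from the table, best first
rankings = {
    "Agent1": ["011", "111", "000", "010", "100", "110", "101", "001"],
    "Agent2": ["011", "000", "001", "110", "010", "111", "101", "100"],
    "Agent3": ["011", "010", "001", "101", "000", "110", "111", "100"],
    "Agent4": ["011", "101", "111", "110", "100", "001", "010", "000"],
    "Agent5": ["011", "111", "000", "010", "001", "101", "110", "100"],
}

# 011 is unanimously ranked first, hence it beats every
# alternative under pairwise majority voting
print(all(r[0] == "011" for r in rankings.values()))  # True
```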

SLIDE 22

Results III

◮ With C = {{f1, f2, f3}} the only local optimum is the global one, 011, whose basin of attraction is the entire set X.
◮ With C = {{f1}, {f2}, {f3}} we have the appearance of multiple local optima and agenda-dependence.
◮ With C = {{f1, f2}, {f3}} there are multiple local optima but agenda-independence.

SLIDE 23

Object-dependent cycles I

Redefining modules can make path dependence disappear.

◮ Consider the case of three agents and three objects with individual preferences expressed by:

Order  Agent 1  Agent 2  Agent 3
1st    x        y        z
2nd    y        z        x
3rd    z        x        y

SLIDE 24

Object-dependent cycles II

◮ Social preferences expressed through majority rule are intransitive and cycle among the three objects: x ≻ℜ y and y ≻ℜ z, but z ≻ℜ x.
◮ Imagine that x, y, z are three-feature objects which we encode according to the following mapping: x → 000, y → 100, z → 010.
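With this encoding, the majority cycle can be verified directly from the three agents' rankings over x, y, z (an illustrative check, assuming best-first lists):

```python
# each agent's ranking, best first, in the encoding
# x -> 000, y -> 100, z -> 010
rankings = [
    ["000", "100", "010"],   # Agent 1: x > y > z
    ["100", "010", "000"],   # Agent 2: y > z > x
    ["010", "000", "100"],   # Agent 3: z > x > y
]

def beats(a, b):
    """True if a strict majority ranks a above b."""
    return sum(r.index(a) < r.index(b) for r in rankings) > len(rankings) / 2

print(beats("000", "100"))  # True: x beats y
print(beats("100", "010"))  # True: y beats z
print(beats("010", "000"))  # True: z beats x -- a Condorcet cycle
```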

SLIDE 25

Object-dependent cycles III

◮ Suppose that individual preferences are given by:

Order  Agent 1  Agent 2  Agent 3
1st    000      100      010
2nd    100      010      000
3rd    010      000      100
4th    110      110      110
5th    001      001      001
6th    101      101      101
7th    011      011      011
8th    111      111      111

SLIDE 26

Object-dependent cycles IV

  • 1. With C = {{f1, f2, f3}} the voting process always ends up in the limit cycle among x, y and z.
  • 2. The same happens if each feature is a separate object: C = {{f1}, {f2}, {f3}}.
  • 3. However, with C = {{f1}, {f2, f3}} or with C = {{f1, f3}, {f2}}, voting always produces the unique global social optimum 010 in both cases.

SLIDE 27

Median voter I

Median voter theorem: an example

Order  Ag1  Ag2  Ag3  Ag4  Ag5  Ag6  Ag7
1st    1    2    3    4    5    6    7
2nd    2    3    4    5    6    7    6
3rd    0    1    2    3    4    5    5
4th    3    4    5    6    7    4    4
5th    4    0    1    2    3    3    3
6th    5    5    6    7    2    2    2
7th    6    6    0    1    1    1    1
8th    7    7    7    0    0    0    0

SLIDE 28

Median voter II

Order  Ag1  Ag2  Ag3  Ag4  Ag5  Ag6  Ag7
1st    001  010  011  100  101  110  111
2nd    010  011  100  101  110  111  110
3rd    000  001  010  011  100  101  101
4th    011  100  101  110  111  100  100
5th    100  000  001  010  011  011  011
6th    101  101  110  111  010  010  010
7th    110  110  000  001  001  001  001
8th    111  111  111  000  000  000  000

If C = {{f1, f2, f3}} there is a unique social optimum, 100 (the median voter’s most preferred).
If C = {{f1}, {f2}, {f3}} there are two local optima: 100 and 011 (the opposite of the median voter’s most preferred).
SLIDE 29

Simulation Results with random agents I

◮ For the objects scheme C1, i.e. a single decision module containing all the features, we almost always find intransitive cycles, and these cycles are rather long (on average almost 40 different choice configurations for N = 8, 120 for N = 12).
◮ At the other extreme, i.e. the set of finest objects, in most cases we do not observe cycles, but choice ends in a local optimum.
◮ The number of local optima increases exponentially: with N = 8 about 16 local optima, with N = 12 over 300 local optima.
SLIDE 30

Simulation Results II

◮ There is a very clear trade-off between the presence of cycles and the number of local optima.
◮ When large objects are employed, cycles almost certainly occur.
◮ The likelihood rapidly drops when finer and finer objects are employed, but in parallel the number of local optima increases.
◮ This implies that a social outcome becomes well defined, but which social outcome strongly depends upon the specific objects employed and the sequence in which they are examined.

SLIDE 31

The organization model

◮ decisions are allocated to different agents by a principal
◮ there are good and bad decisions (i.e. social outcomes are ranked by some objective performance evaluation) and the principal wants to get the best decisions
◮ however, principal and agents do not know what the good decisions are

SLIDE 32

Background

◮ when knowledge is distributed in organizations, how should decisions be allocated?
◮ co-location of knowledge and decision rights (Hayek 1945, Jensen-Meckling 1992)
◮ but delegation generates agency problems, to be solved by incentives and/or authority
◮ additional complication: delegation, incentives and authority may interact in unexpected ways (the problem of motivation)
◮ (. . . maybe agency problems have been overrated in the literature?)

SLIDE 33

Our contribution

  • 1. not only self-interest but incommensurable beliefs, i.e. “. . . the problem that arises when different individuals or groups hold sincere but differing beliefs about the nature of the problem and its solutions” (Rumelt, 1995)
  • 2. if knowledge is distributed, delegation is limited not only by agency problems but also by complexity and uncertainty:
    ◮ interdependencies (externalities) among pieces of knowledge
    ◮ the principal may not know where knowledge is actually located

SLIDE 34

Incommensurable beliefs

◮ agency models assume that conflict in organizations arises because individuals have diverging objectives and information is asymmetric
◮ but agents may have different cognitions, views, ideas, visions on how to achieve a common objective (especially when facing non-routine situations)
◮ this is a source of cognitive conflict: diverging ideas about the appropriate course of action
◮ and a source of political conflict: the actions of one agent produce externalities on the principal and on the other agents

SLIDE 35

Some likely properties

◮ conflicting views and conflicting interests are often intertwined
◮ conflicting views may be harder to reconcile, and symmetric information may not help
◮ one may not want to fully reconcile them if there is uncertainty about what should be done
◮ mis-aligned views may be a fundamental driver of learning
◮ thus the principal faces a trade-off between:
  ◮ having her views implemented as closely as possible
  ◮ using the agents’ different views to learn and discover better policies

SLIDE 36

The Model: policy landscape

◮ a set of n (binary) features (or policies) F = {f1, f2, . . . , fn}
◮ X is the set of 2^n policy vectors, and xi = [f^i_1, f^i_2, . . . , f^i_n] a generic element
◮ an objective and exogenously determined ranking of policy vectors according to performance (a complete and transitive order): x ≻N y
SLIDE 37

The Model: principal, agents and organization

◮ a principal Π and h agents A = {a1, a2, . . . , ah} with 1 ≤ h ≤ n
◮ all of them with a complete and transitive preference ordering over policy vectors: ≻Π and ≻ai
◮ a decomposition of decision rights D = {d1, d2, . . . , dk} such that

d1 ∪ d2 ∪ . . . ∪ dk = F and di ∩ dj = ∅ ∀i ≠ j

(for simplicity the principal does not directly take any decision)
◮ the organizational structure is a mapping of the set D onto the set A of agents, plus an agenda (a permutation of the set of agents) giving the sequence of decisions (if any)
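The coverage and disjointness conditions on D amount to requiring a partition of the policy items; a minimal check (illustrative, with items indexed 0 . . . n-1):

```python
def is_valid_decomposition(D, n):
    """True if the decision-right sets d_i cover all n policy items
    and are pairwise disjoint, i.e. they partition {0, ..., n-1}."""
    items = [i for d in D for i in d]
    return sorted(items) == list(range(n))

print(is_valid_decomposition([{0, 1}, {2, 3}], 4))  # True
print(is_valid_decomposition([{0}, {0, 1}], 2))     # False: item 0 appears twice
```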

SLIDE 38

Examples of organizational structure

assuming four policy items:

◮ {a1 ← {p1, p2, p3, p4}}, i.e. one agent has control over all four policies
◮ {a1 ← {p1}, a2 ← {p2}, a3 ← {p3}, a4 ← {p4}}, i.e. four agents each have control over one policy
◮ {a1 ← {p1, p2}, a2 ← {p3, p4}}, i.e. two agents each have control over two policies
◮ {a1 ← {p1}, a2 ← {p2, p3, p4}}, i.e. two agents with “asymmetric” responsibilities: one has control over the first policy item and the other over the remaining three

SLIDE 39

Agents’ decisions

◮ when asked to choose between xi and xj, an agent selects the vector which ranks higher in his preference ordering
◮ unless the principal uses authority:
  • 1. veto: the principal can impose the status quo if she prefers it to the agent’s choice
  • 2. fiat: the principal can impose her preferred substring on the agent

SLIDE 40

Organizational decisions

◮ an initial status quo policy is (randomly) given
◮ following the agenda, each agent in turn sets the policy items assigned to him so that, given the current value of the policies not in his control, the resulting policy vector ranks highest in his ordering
◮ unless the principal forces him to choose a different vector
◮ the process is repeated for all agents (according to the agenda) until an organizational equilibrium or a cycle is reached, with or without agenda repetition
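One agent's move can be sketched as a constrained best response; this assumes the agent's preference ordering is encoded as a best-first list of policy vectors (illustrative names, not the paper's code):

```python
from itertools import product

def agent_move(x, items, ranking):
    """The agent controlling `items` sets those policies to the values
    that, given the rest of the vector x, yield the policy vector
    ranked highest in his best-first list `ranking`."""
    best = x
    for values in product((0, 1), repeat=len(items)):
        y = list(x)
        for idx, v in zip(sorted(items), values):
            y[idx] = v
        y = tuple(y)
        if ranking.index(y) < ranking.index(best):
            best = y
    return best

# the agent controls policy 0; given p2 = 1 he moves to (1, 1)
ranking = [(1, 1), (1, 0), (0, 1), (0, 0)]
print(agent_move((0, 1), [0], ranking))  # (1, 1)
```

Iterating this move over the agents in agenda order, with the principal's veto or fiat possibly overriding each step, yields the organizational dynamics described above.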

SLIDE 41

Two models

  • 1. getting what the principal wants when she knows what she wants: control
  • 2. getting what the principal wants when she does not know what she wants: the principal does not only want to control agents, but also to learn from them and test whether their rankings (or parts of them) are better (closer to the “true” one) than her own

SLIDE 42

Summary of results

Both problems are better solved by a finer delegation structure: decisions must be partitioned as much as possible.

◮ a finer delegation structure generates control: the principal can get very close to her preferred decision even without exercising power (divide and conquer)
◮ if the principal does not know what she wants, a finer delegation structure induces more experimentation and learning (divide and learn)
◮ the use of authority of course increases control, and has an inverted U-shape effect on learning (with veto more effective than fiat)

SLIDE 43

Getting what you want when you know what you want

Rank  Agent1  Agent2  Agent3  Principal
1st   011     011     011     000
2nd   111     000     010     101
3rd   000     001     100     111
4th   010     110     101     110
5th   100     010     000     100
6th   110     111     110     001
7th   101     101     111     010
8th   001     100     001     011

SLIDE 44

Getting what you want when you know what you want

(Preferences as in the previous table.) Example I: how to get different equilibria. With the organizational structure {a1 ← {p1}, a2 ← {p2}, a3 ← {p3}}, agenda (a1, a2, a3) and the initial status quo [0, 1, 1], the vector [0, 0, 0] is an equilibrium.

SLIDE 45

Different Global Optima

Order  Agent1  Agent2  Agent3
1st    001     000     001
2nd    110     111     110
3rd    000     001     000
4th    010     010     010
5th    100     100     100
6th    011     011     011
7th    111     101     111
8th    101     110     101

Example II: cycles or different unique equilibria

SLIDE 46

◮ Structure {a1 ← {p1, p2}, a2 ← {p3}} always generates the cycle [001] → [000] → [110] → [111] → [001]. It is therefore a structure in which intra-organizational conflict never settles into an equilibrium.
◮ Structure {a1 ← {p1}, a2 ← {p2}, a3 ← {p3}} has the unique equilibrium [001], which is reached from every initial condition.
◮ Structure {a1 ← {p1}, a2 ← {p2, p3}} also produces a unique equilibrium, but a different one, i.e. the vector [000].

SLIDE 47

The role of organizational structure

We simulate random problems with 8 policies, random principals and agents, and the following organizational structures:

◮ O1: a1 ← {1, 2, 3, 4, 5, 6, 7, 8}
◮ O2: a1 ← {1, 2, 3, 4}, a2 ← {5, 6, 7, 8}
◮ O4: a1 ← {1, 2}, a2 ← {3, 4}, a3 ← {5, 6}, a4 ← {7, 8}
◮ O8: a1 ← {1}, a2 ← {2}, a3 ← {3}, a4 ← {4}, a5 ← {5}, a6 ← {6}, a7 ← {7}, a8 ← {8}

SLIDE 48

Organizational structure, equilibria and cycles I

With agenda repetition. Average number of equilibria and cycles over 1000 randomly generated problems:

Org. Structure  No. of equilibria  Share of cycles
O8              2.78 (1.22)        0.78
O4              1.89 (0.98)        0.74
O2              1.03 (0.45)        0.58
O1              1.00 (0.00)        0.00

Organizational equilibria and cycles for different organizations (n = 8, 1000 repetitions, standard deviations in brackets)

SLIDE 49

Organizational structure, equilibria and cycles II

Without agenda repetition:

Org. Structure  No. of different final policy vectors
O8              41.93 (3.14)
O4              27.73 (2.45)
O2              10.30 (1.22)
O1              1 (0.0)

Table 5: Number of different outcome vectors without agenda reiteration and without overruling (n = 8, 1000 repetitions, standard deviations in brackets). Divide and conquer!!

SLIDE 50

Veto power

It increases decidability by sharply reducing the number of cycles:

P(veto)  N. optima  N. cycles  Control loss  Perform. loss
0.0      0.94       202.94     161.20        159.51
0.3      13.88      146.99     71.88         14.45
0.5      27.60      86.45      65.82         6.90
0.8      46.67      14.46      65.74         3.93
1.0      56.65      0.00       64.61         3.16

The effect of veto in O8

SLIDE 51

Fiat power

Similar to veto, but fewer local optima, therefore more control but less performance:

P(fiat)  N. optima  N. cycles  Control loss  Perform. loss
0.0      0.94       202.94     161.20        159.51
0.3      15.48      192.81     2.36          13.91
0.5      29.63      138.20     0.59          7.27
0.8      35.13      36.55      0.03          6.04
1.0      28.82      0.00       0.00          7.78

The effect of fiat in O8

SLIDE 52

Fiat power with coarser partitions

In O2 fiat produces worse results than in O8:

P(fiat)  N. optima  N. cycles  Control loss  Perform. loss
0.0      0.99       154.08     156.86        161.69
0.3      8.32       164.39     0.20          28.65
0.5      9.84       119.49     0.01          24.99
0.8      9.47       25.64      0.00          26.16
1.0      8.29       0.00       0.00          31.14

The effect of fiat in O2

SLIDE 53

Learning

◮ principal and agents may adaptively learn by trial and error
◮ when a new organizational equilibrium is tried against a status quo, principal and agents observe which of the two ranks better
◮ and modify their rankings if these differ from the observed ones
◮ agents may also learn and adapt to the principal’s preferences (persuasion, docility)

SLIDE 54

The fundamental trade-off

◮ increased use of veto, fiat and more docile agents increases control
◮ up to a certain point it also increases experimentation and learning (because of fewer cycles and more local optima)
◮ but above that level experimentation and learning also get curbed by control

SLIDE 55

Veto power and principal’s learning

Figure: The effect of veto power on principal’s learning in O8 and O2

SLIDE 56

Fiat power and principal’s learning

Figure: The effect of fiat power on principal’s learning in O8 and O2

SLIDE 57

The effect of fiat and principal’s learning on performance and control

Figure: The effect of fiat power and principal’s learning on performance and control in O8

SLIDE 58

The effect of veto power on control with agents’ docility

Figure: The effect of veto power on control with agents’ docility in O8

SLIDE 59

Principal’s learning with high or low agents’ docility

Figure: Principal’s learning with high or low agents’ docility in O8 for different probabilities of veto

SLIDE 60

Performance with high or low agents’ docility

Figure: Average and best performance with high or low agents’ docility in O8 for different probabilities of veto