SLIDE 1 Some Afterthoughts on Knowledge-based Obligation
Rahul Bendre (Graduate Center of CUNY; IIT Kanpur), Rohit Parikh (Graduate Center of CUNY; Brooklyn College of CUNY), Andreas Witzel (Graduate Center of CUNY; ILLC, University of Amsterdam)
Jan 9, 2009
SLIDE 2
Outline
◮ Introduction
  ◮ The Easy Cases
  ◮ The Kitty Genovese case
◮ Issues with the Original Model
◮ Ideas and Suggestions
  ◮ Action Tuples as Events
  ◮ Adding Probabilities and Expected Values
  ◮ Putting it all Together
◮ Applying the Framework
  ◮ Previous Examples Revisited
  ◮ Diffusion of Obligation
  ◮ Moral Obligation: a Co-ordination Game
◮ Summary
◮ References
SLIDE 4 Introduction
In [KBO] the authors developed a framework for dealing with situations where an agent’s moral obligation depends on her knowledge. Two contrasting cases discussed in that paper were:
◮ Example 1:
Uma is a physician whose neighbour is ill. Uma does not know and has not been informed. Uma has no obligation (as yet) to treat the neighbour.
◮ Example 2:
Uma is a physician whose neighbour Sam is ill. The neighbour’s daughter Ann comes to Uma’s house and tells her. Now Uma does have an obligation to treat Sam, or perhaps call in an ambulance or a specialist.
The difference in obligation arises because in the second example Uma has knowledge which she does not have in the first.
SLIDE 5 Knowledge Framework
The [KBO] paper uses the history-based semantics of [PR].
◮ Events happen sequentially to form the Global History.
◮ The Local History h of an agent is a homomorphic image of a finite prefix Ht of the actual Global History.
◮ The Local View function λi is used to derive the local history from the global history: λi(Ht) = h.
◮ Each Local History may be compatible with many Global Histories: [h]i = {H ∈ H | λi(Ht) = h}.
◮ There is an equivalence relation ∼i given by: Ht ∼i H′t iff λi(Ht) = λi(H′t).
◮ An agent knows a formula φ about the global situation if φ is true of all global histories compatible with her local history.
The resulting framework, together with the assignment of social utilities to the Global Histories, is used to define the obligation of an agent to do the action a which maximizes social utility.
SLIDE 6
Knowledge-based Obligation
◮ An agent i has a knowledge-based obligation to perform
action a iff a is an action that (only) i can perform and i knows that it is good to perform a.
◮ For an agent i to know that it is good to perform an action a,
all maximum valued extensions of all the histories considered possible by i must contain the next action as a.
◮ An underlying assumption for this framework is that all the
agents act in a turn-based manner. We shall see how this assumption affects the application of the framework to certain examples.
SLIDE 7 Sam and Uma
Example 1: Uma is a physician whose neighbour Sam is ill. Uma does not know and has not been informed. Uma has no obligation (as yet) to treat the neighbour.
◮ Histories in the protocol are H = {H1 = (¬v¬h), H2 = (vh), H3 = (¬vh), H4 = (v¬h)}, in decreasing order of social utility.
◮ Uma cannot observe Sam’s action (the first event), so her local history at this time consists of a non-informative clock tick c.
◮ At this time, her local history is compatible with all histories in the protocol, and she has a Default Obligation1 to do action ¬h.
Thus, Uma has no obligation to offer her help to Sam due to lack of knowledge.
1A system of [Grove Spheres] is defined on the protocol; a default obligation is an obligation in only the most plausible histories considered by the agent.
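The footnote's default obligation can be sketched as follows. The encoding is our own toy: histories are (Sam's state, Uma's action) pairs, and the Grove-style plausibility spheres (innermost = most plausible) are invented to match the story.

```python
# Invented plausibility ranking: it is most plausible that Sam is well and
# Uma does not help; everything else sits in an outer, less plausible sphere.
spheres = [
    {("nv", "nh")},                              # innermost: Sam is well
    {("v", "h"), ("nv", "h"), ("v", "nh")},      # less plausible alternatives
]

def most_plausible(compatible):
    """Innermost sphere that intersects the histories compatible with the
    agent's local history; a default obligation need only hold there."""
    for sphere in spheres:
        hits = sphere & compatible
        if hits:
            return hits
    return set()

# Uma's uninformative local history (a clock tick) is compatible with all
# four histories, but only (nv, nh) is maximally plausible, so her default
# obligation is the action nh (do not help).
all_hists = {("nv", "nh"), ("v", "h"), ("nv", "h"), ("v", "nh")}
assert most_plausible(all_hists) == {("nv", "nh")}
```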
SLIDE 8 Sam, Ann and Uma
Example 2: Uma is a physician whose neighbour Sam is ill. The neighbour’s daughter Ann comes to Uma’s house and tells her. Now Uma does have an obligation to treat Sam, or perhaps call an ambulance or a specialist.
◮ H = {(¬v¬m¬h), (vmh), (¬v¬mh), (vm¬h)}. Thus, the
protocol prohibits Ann from lying.
◮ After Uma receives the message, at time t2, her local history
looks like this: (cm).
◮ This knowledge makes histories (¬v¬m¬h) and (¬v¬mh)
incompatible with her local history.
◮ Of the remaining histories (vmh) and (vm¬h), Uma now knows that (only) she has the action h which will guarantee the maximum-valued history, making it her Knowledge-Based Obligation.
The [KBO] framework works perfectly with these cases to capture the notion of a moral obligation and the influence of ‘knowledge’.
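The filtering described on this slide can be sketched concretely, assuming a flat encoding of the protocol's histories as (Sam's state, Ann's message, Uma's action) triples, with invented utility numbers that respect the slide's ordering.

```python
# Histories of slide 8 with invented utilities matching the stated ordering
# (best first): (nv nm nh) > (v m h) > (nv nm h) > (v m nh).
histories = {
    ("nv", "nm", "nh"): 4,   # Sam well, no message, no help
    ("v",  "m",  "h"):  3,   # Sam ill, message sent, Uma helps
    ("nv", "nm", "h"):  2,   # Sam well, Uma helps pointlessly
    ("v",  "m",  "nh"): 1,   # Sam ill, Uma ignores the message
}

# At time t2 Uma's local history is (c, m): she has received the message,
# so only histories containing the message event remain compatible.
compatible = {H: u for H, u in histories.items() if H[1] == "m"}
best = max(compatible, key=compatible.get)

# In the unique maximum-valued compatible history Uma's action is "h",
# so helping is her knowledge-based obligation.
assert len(compatible) == 2
assert best == ("v", "m", "h") and best[2] == "h"
```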
SLIDE 9
The story
The K.G. case is a classic murder case posing challenges to attempts at explaining when feelings of obligation arise.
◮ In the early morning hours of March 13, 1964 Catherine
Genovese was brutally attacked and killed in the Kew Gardens section of Queens, New York City.
◮ Many neighbours saw what was happening, but no one called
the police.
◮ People reasoned that there must have been many calls already
and that there was no point in their calling and adding to the commotion.
◮ When the cops finished polling the immediate neighbourhood, they discovered at least 38 people who had heard or observed some part of the fatal assault on Kitty Genovese.
Some 35 minutes passed between Kitty being attacked and someone calling the police. This runs counter to the intuition that the more people are present, the more likely it is that one of them will intervene and help.
SLIDE 10 Bystander Effect
◮ The Kitty Genovese case sparked off a line of research in
psychology to examine the so called bystander effect.
◮ Diffusion of Responsibility is offered as one explanation of this effect: each bystander assumes that someone else is going to intervene and so refrains from acting himself.
◮ It is intriguing to try and formalize the underlying issues of
knowledge and obligation in this case and diffusion of responsibility seems to be the explanation which most naturally lends itself to mathematical modeling.
◮ Some technical assumptions in the [KBO] framework create
problems with explaining the diffusion of responsibility in the K.G. case.
◮ The modifications to the [KBO] framework will be directed at capturing this idea of diffusion of responsibility.
In the subsequent sections we will look at the issues with the original model and some suggestions to overcome them.
SLIDE 11 The Original Model
The following are the salient points of the original [KBO] model:
◮ The original framework consists of the following:
  ◮ A set of agents;
  ◮ Sets of (disjoint) actions, or events, for each agent;
  ◮ A set of possible histories, which are basically (infinite) sequences of events that take place;
  ◮ An associated value representing each history’s utility to society.
◮ All possible extensions of any finite prefix of a possible history start with some action of one same agent. That is, at any point in time, there is exactly one agent whose turn it is to take the next action or effect the next event.
SLIDE 12
The Original Model
◮ Agents can only observe certain events (including their own
actions and possibly those of other agents), and can only distinguish histories which differ in events observable by them (given by λi and ∼i).
◮ An agent knows everything which holds of all the histories
that he cannot distinguish from the actual one.
◮ Some agent’s action a is obligatory at some finite prefix of a history if all maximum-valued extensions of that prefix start with a, and she knows this.
Now, we would like to use this formal framework to explain why no single bystander felt an obligation to act in the K.G. case.
SLIDE 13
The Model applied to the KG case
In [KBO], the formal details are not spelled out; trying to do that, we encounter some complications.
◮ First, assume that the values of histories decrease with time
(in this emergency case).
◮ So, due to the (commonly known) turns, the agent whose
turn it is immediately after the event does have the obligation to act because there is exactly one history with maximum value and that agent knows this.
◮ On the other hand, assume that all histories in which anyone
helps at just any time have the same utility. Then, indeed no single agent has the obligation to help.
SLIDE 14
Issues with the Model
◮ So, the only way to explain the K.G. case using the framework
from [KBO] is by making the unintuitive assumption that it does not matter how much time passes before someone calls the police.
◮ And, in a way, that removes the whole twist of the case, since
if it indeed did not matter how much time passed, then it would not be a scandal that it took 35 minutes for the first call to come in.
◮ The assumption of turn-based actions is the main reason the framework fails, at a technical level, to explain the behaviour observed in the K.G. case.
SLIDE 15
Overview
In the course of the following section, we will consider the following modifications and additions to the framework.
◮ We need some way to remove the turn-based assumption, so instead of single actions, action tuples will be considered as events.
◮ A Probability Distribution will be specified over the Protocol,
i.e. the set of all Global Histories H.
◮ Using the probability distribution, social utility for histories
will be specified as Expected Values.
◮ A threshold δ will be specified to be able to define obligation
as the maximization of expected social utility in the modified framework.
SLIDE 16 Action Tuples as Events
To do away with turn-based actions, we need to incorporate concurrent actions.
◮ We will consider action-tuples, where the ith entry will
correspond to the action chosen by the ith agent.
◮ Each agent chooses an action from his set of valid actions at
that time.
◮ The set of valid actions can include the action of ‘doing
nothing’ for each agent.
◮ The ordered tuple will then become the event that takes place
at that instant of time.
◮ Histories will, as before, be (infinite) sequences of events; in
this case — (infinite) sequences of action-tuples.
Note: The idea of action tuples as events may lead one to feel that all events need to be performed. This would leave no room for events like ‘began raining’. We could introduce a new agent nature to take the responsibility of ‘performing’ these random events.
SLIDE 17
Uncertainty . . .
◮ Suppose an event (a1, a2, . . . , an) takes place where ai’s are
the actions chosen by the corresponding agents.
◮ Assume that agent i is able to see the actions of all other
agents except agent j, then agent i will see the following event — (a1, a2, . . . , ?, . . . , an), where ? is the no-information event at the jth position.
◮ And, due to this uncertainty, the event witnessed by agent i
can be compatible with many events (actually, with all possible agent-j-actions at the jth position).
◮ So, we will re-define the local view function (λi) to take the finite prefix of the Global History (i.e. Ht) and produce the local history of agent i by replacing each global event with an uncertain local event: λi((a1, a2, . . . , aj, . . . , an)) = (a1, a2, . . . , ?, . . . , an).
◮ The equivalence relation ∼i will be retained to mean the same
thing as in the original framework.
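The re-defined local view can be sketched directly, assuming events are plain tuples and "?" is a literal no-information marker.

```python
# Sketch of the masked local view for action tuples: agent i replaces the
# entries of agents she cannot observe with the no-information marker "?".

def local_event(event, visible_agents):
    """Mask one event componentwise for an agent who sees visible_agents."""
    return tuple(a if j in visible_agents else "?" for j, a in enumerate(event))

def local_history(global_prefix, visible_agents):
    """lambda_i over a finite prefix Ht: mask every event in it."""
    return tuple(local_event(e, visible_agents) for e in global_prefix)

# Agent 0 sees everyone except agent 1:
e = ("a1", "a2", "a3")
assert local_event(e, {0, 2}) == ("a1", "?", "a3")

# Two global prefixes are ~i-equivalent iff their local histories coincide:
H1 = (("a1", "a2", "a3"),)
H2 = (("a1", "b2", "a3"),)
assert local_history(H1, {0, 2}) == local_history(H2, {0, 2})
```

The equivalence relation ∼i then falls out for free: two prefixes are indistinguishable exactly when their masked histories are equal.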
SLIDE 18
. . . and Obligation
The notion of obligation will have to be modified to mean the following:
◮ An agent i has obligation to do an action a at some finite
prefix of a history, if all maximum-valued extensions of that prefix (or other finite prefixes considered possible by that agent) start with the event where the ith action in the tuple is the action a and that she knows this.
◮ In other words, an obligation arises for an agent if she knows
that she has an action by which she can ensure that, all the so-called good (actually, best) ways in which the world may evolve, will be preserved irrespective of what the other agents choose to do.
SLIDE 19
Adding Probabilities and Expected Values
We assume that the agents start with a common prior over the global histories. This assumption may seem troubling, but Aumann [Au] discusses this very issue, and we will treat his discussion as an adequate defense of our assumption.
To describe the common prior over the set of global histories, consider P, a probability distribution over H, the (infinite) set of all histories. The probability of an event W (a subset of H) is written P(W). Similarly, the social value/payoff of an event W is written E[W].
In this framework, an agent may need to consider conditional probabilities and conditional expected values. These are written P(W1|W2) and E[W1|W2] respectively, with the usual meanings.
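These definitions can be made concrete for a finite set of histories (the infinite case would need measure-theoretic machinery the slides elide). The distribution and values below are invented for illustration.

```python
# P(W) and E[value | W] over a finite set of histories with a common prior.

def prob(W, P):
    """P(W): total probability of the event W (a set of histories)."""
    return sum(P[H] for H in W)

def cond_expect(values, P, W):
    """E[value | W] = sum over H in W of values[H] * P[H] / P(W)."""
    return sum(values[H] * P[H] for H in W) / prob(W, P)

P = {"H1": 0.5, "H2": 0.3, "H3": 0.2}       # invented common prior
values = {"H1": 10, "H2": 4, "H3": 0}       # invented social utilities
W = {"H2", "H3"}

assert abs(prob(W, P) - 0.5) < 1e-9
assert abs(cond_expect(values, P, W) - 2.4) < 1e-9   # (4*0.3 + 0*0.2) / 0.5
```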
SLIDE 20
Putting it all Together
We have the following in our framework:
◮ A set of agents {1, 2, . . . , n}.
◮ Sets of agents’ performable actions A1, A2, . . .
◮ Sets of agents’ observable actions A∗1, A∗2, . . .
◮ The set of events E whose elements are of the form (a1, a2, . . . , an).
◮ Histories, which are sequences (finite or infinite) of events.
◮ The protocol H.
◮ The probability distribution and expected values defined over H.
◮ The threshold parameter δ.
SLIDE 21 Probabilistic Knowledge-based Obligation
Now, we define the probabilistic knowledge-based obligation as follows:
An agent at time t has the probabilistic knowledge-based obligation to do action a if the expected value of doing a is at least δ greater than the expected value of not doing it.
Let us try to apply this framework to some examples.
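The δ-threshold test can be sketched as follows. The histories, prior, and values are invented, and we assume for simplicity that the histories the agent considers possible are partitioned by whether she performs a in them.

```python
# Probabilistic knowledge-based obligation as a threshold on conditional
# expected social value. All numbers below are invented for illustration.

def expected(values, P, W):
    """Conditional expected social value given the event W."""
    pw = sum(P[H] for H in W)
    return sum(values[H] * P[H] for H in W) / pw

def obligatory(values, P, with_a, without_a, delta):
    """Action a is obligatory iff E[value | a] - E[value | not a] > delta."""
    return expected(values, P, with_a) - expected(values, P, without_a) > delta

P      = {"Ha1": 0.25, "Ha2": 0.25, "Hb1": 0.25, "Hb2": 0.25}
values = {"Ha1": 10, "Ha2": 8, "Hb1": 3, "Hb2": 1}
with_a, without_a = {"Ha1", "Ha2"}, {"Hb1", "Hb2"}

assert obligatory(values, P, with_a, without_a, delta=5)      # 9 - 2 = 7 > 5
assert not obligatory(values, P, with_a, without_a, delta=8)  # 7 < 8
```

Note how the threshold does real work: the same expectations yield an obligation under δ = 5 but not under δ = 8.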
SLIDE 22 Sam, Ann and Uma
◮ In Example 1, the addition of probabilities and expected values allows Uma, in the absence of precise knowledge, to reason based on the probability of Sam falling ill.
◮ Similarly, in Example 2, with the use of action tuples, we
would be in a position to represent the event of sending and receiving the message more clearly.
◮ If Ann sent a letter; (c, sm, c) and (c, c, rm) would be two
distinct events.
◮ If instead, Ann made a phone call to Uma, the event would be
(c, sm, rm).
◮ Also, the protocol could be extended to include cases of Ann’s lying, and the probabilities would be useful to indicate Ann’s tendency to lie, which in turn can be used in considering Uma’s obligation to help.
SLIDE 23
Combined Obligation
As defined, an obligation for an agent to do a particular action results from a considerably large difference in the expected social utility of doing that action as against not doing it. Consider the following scenario:
◮ Two agents A1 and A2 with possible action sets {a, b} and
{c, d} respectively.
◮ The events (a, c) and (b, d) have a higher social utility as
compared to events (b, c) and (a, d).
◮ The agents get different (partial) information about reality
and compute expected values. In the above scenario, it could be the case that action a becomes an obligation for agent A1, while action d becomes an obligation for agent A2. But, the combination of the two actions is the event (a, d) with a low social utility.
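This miscoordination can be reproduced numerically. The utilities respect the slide's ordering (high for (a, c) and (b, d), low for (b, c) and (a, d)); the agents' beliefs about each other are invented.

```python
# Combined obligation gone wrong: each agent's individually obligatory
# action combines into a low-valued event. Beliefs are invented.

utility = {("a", "c"): 10, ("b", "d"): 10, ("b", "c"): 1, ("a", "d"): 1}

# A1's partial information makes her believe A2 plays c with prob 0.9:
def expect_A1(own, p_c=0.9):
    return p_c * utility[(own, "c")] + (1 - p_c) * utility[(own, "d")]

# A2's partial information makes him believe A1 plays b with prob 0.9:
def expect_A2(own, p_b=0.9):
    return p_b * utility[("b", own)] + (1 - p_b) * utility[("a", own)]

best_A1 = max(("a", "b"), key=expect_A1)   # "a": 0.9*10 + 0.1*1 = 9.1
best_A2 = max(("c", "d"), key=expect_A2)   # "d": 0.9*10 + 0.1*1 = 9.1

# Each choice looks obligatory in expectation, yet the combined event is bad:
assert (best_A1, best_A2) == ("a", "d") and utility[(best_A1, best_A2)] == 1
```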
SLIDE 24 Kitty Genovese Case Explored
After the agents (n in number) view the assault, they reason as follows:
◮ There are 2^n possible events (each agent decides either to call the police or not to call).
◮ Only n events are favourable to society (the events when
exactly one agent calls).
◮ Also, for each agent, the other n − 1 agents cannot see this agent’s action of calling, and may still choose to call at a later time.
◮ These histories are also not favourable as they add redundant
information.
◮ The expected payoff of calling is far too low. Hence, all
agents decide not to call.
◮ This continues, but the payoff for waiting grows smaller and smaller as time passes.
◮ At some instant, the expected value of calling exceeds the threshold and calling becomes an obligation.
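A numerical sketch of this reasoning: the decay rate, the belief that someone else has called, and the threshold are all invented; the point is only the shape of the argument, namely that the advantage of calling grows as the silence continues, and crosses the threshold later when there are more bystanders.

```python
import math

def advantage_of_calling(t, n, rate=0.1):
    """Expected advantage of calling now over waiting, at time t with n
    bystanders. All functional forms and constants are invented."""
    # Belief that some other bystander has called, decaying as the silence
    # continues (no sirens yet):
    p_other = (1 - 1.0 / n) * math.exp(-rate * t)
    value_of_help, redundancy_cost = 10.0, 1.0
    e_call = value_of_help - p_other * redundancy_cost   # help is guaranteed
    e_wait = p_other * value_of_help                     # help only if others call
    return e_call - e_wait

def first_obliged_time(n, delta=5.0):
    """First instant at which calling clears the obligation threshold."""
    t = 0
    while advantage_of_calling(t, n) <= delta:
        t += 1
    return t

# Diffusion of obligation: with 38 bystanders the obligation arises strictly
# later than with 2, and initially calling even looks disadvantageous.
assert advantage_of_calling(0, 38) < 0
assert first_obliged_time(2) < first_obliged_time(38)
```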
SLIDE 25 Coordination Games and Obligation
Comparison of moral obligation with action-tuples and coordination games.
◮ All agents are playing for a common goal, which makes moral obligation similar to coordination games.
◮ In coordination games, each individual agent has a personal
payoff (possibly slightly different from that of the other agents) but in obligation there is only one (society) payoff.
◮ The typical dilemma in stag hunt or choosing sides
coordination games can be considered as a diffused obligation for the agents.
SLIDE 26
Stag Hunt
Consider a tribal society where two tribesmen are to go hunting for food for the tribe. The payoffs for the society are: a Stag for dinner is worth 10, two Hares are worth 6, and a single Hare is worth just 3.
Before leaving, the first hunter overhears the second hunter’s wish to eat Stag meat for dinner, and this ‘knowledge’ together with the probabilities makes going for the Stag an obligation for the first hunter.
On the other hand, the second hunter looks at the probability distribution over the first hunter’s tendencies and realizes that the expected value of going for the Hare is higher.
Net result: Hunter 1 goes for the Stag and Hunter 2 goes for the Hare, with a payoff of 3 (the lowest possible) for the society.
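The story above can be computed as a toy. The payoffs come from the slide (stag 10, two hares 6, one hare 3); the hunters' beliefs about each other are invented.

```python
# Stag hunt as diffused obligation: each hunter maximizes expected social
# payoff under his own (invented) beliefs, and society gets the worst catch.

def social_payoff(h1, h2):
    if h1 == "stag" and h2 == "stag":
        return 10
    # A lone stag hunter catches nothing; hares always count.
    return {2: 6, 1: 3, 0: 0}[[h1, h2].count("hare")]

# Hunter 1 overheard hunter 2's wish for stag: believes P(h2 = stag) = 0.9.
e1_stag = 0.9 * social_payoff("stag", "stag") + 0.1 * social_payoff("stag", "hare")
e1_hare = 0.9 * social_payoff("hare", "stag") + 0.1 * social_payoff("hare", "hare")

# Hunter 2, from hunter 1's past tendencies, believes P(h1 = stag) = 0.2.
e2_stag = 0.2 * social_payoff("stag", "stag") + 0.8 * social_payoff("hare", "stag")
e2_hare = 0.2 * social_payoff("stag", "hare") + 0.8 * social_payoff("hare", "hare")

assert e1_stag > e1_hare                    # hunter 1 goes for the stag ...
assert e2_hare > e2_stag                    # ... hunter 2 goes for the hare ...
assert social_payoff("stag", "hare") == 3   # ... and society ends up with 3
```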
SLIDE 27
Summary
◮ The [KBO] framework has the right intuitions but fails technically to explain the observations of the Kitty Genovese case due to the presence of turn-based actions.
◮ This difficulty is overcome by introducing action-tuples as
events.
◮ Diffusion of responsibility or obligation as a phenomenon can
be captured with this framework.
◮ The addition of probabilities and expected values makes the
framework suitable for numerical analysis if necessary.
◮ The concept of moral obligation with action-tuples and
strategy-profiles is similar in a lot of respects to coordination games.
SLIDE 28
References
◮ Aumann, Robert, “Correlated Equilibrium as an Expression of Bayesian Rationality”, Econometrica, 55:1 (1987), 1–18.
◮ Latane, Bibb, and John M. Darley, “Group inhibition of bystander intervention in emergencies”, Journal of Personality and Social Psychology, 10(3) (1968), 215–221.
◮ Pacuit, Eric, Rohit Parikh and Eva Cogan, “The logic of knowledge based obligation”, Synthese, 149 (2006), 311–341.
◮ Manning, Rachel, Mark Levine, and Alan Collins, “The Kitty Genovese murder and the social psychology of helping: The parable of the 38 witnesses”, American Psychologist, 62(6) (2007), 555–562.
◮ Parikh, R., and R. Ramanujam, “A Knowledge based Semantics of Messages”, Journal of Logic, Language and Information, 12 (2003), 453–467.
◮ Grove, Adam, “Two Modellings for Theory Change”, Journal of Philosophical Logic, 17 (1988), 157–170.