

PHILOSOPHY 2018-2019, JELLE DE BOER

Lecture 1. This lecture, today:
■ Practical matters
■ Introduction
■ Values
■ Wellbeing, happiness
■ Subjectivism
■ Relativism

Grade components:
■ Multiple Choice Exam: 60%
■ Duo Essay: 40%


  1. Relativism ■ Cultural relativism: different cultures vary in their systems of moral norms. ■ Does it follow that there is no culture-independent, universal morality? ■ No, not necessarily: – Perhaps there is one, and no culture has yet discovered this system of universal norms. – Or the varying cultures and their systems of norms are somehow rooted in a more fundamental system of universal norms.

  2. Moral relativism ■ Variant of cognitivism. ■ Moral statements have truth values, they are true or false. ■ They are true or false relative to a specific culture.

  3. Moral relativism - objections ■ Certain values and norms are common to all cultures. ■ There is no objective standpoint from which to criticize the morality of a specific culture, or to settle a moral dispute between members of different cultures. ■ Is the idea of moral progress still possible?

  4. Normative relativism? ■ “Each culture should have its own morality.” ■ “One should be tolerant of different cultures.”  These are universal claims.  And they do not follow from moral relativism. A moral relativist can also say that one should not be tolerant.

  5. DECISION THEORY & GAME THEORY Lecture 2

  6. Decision theory - branches ■ Individual decision theory: studies decision making when actors are confronted with various ‘states of nature’ (sometimes ‘decision theory’ in a narrower sense). ■ Game theory: studies decision making when actors interact with each other. ■ Social choice theory: studies how to derive a collective decision from individual preferences (not addressed in this course).

  7. Rational actor Mental states, two basic categories: ■ Beliefs: mind-to-world direction of fit → Mental content must mirror the world ■ Desires: world-to-mind direction of fit → World must mirror the mental content

  8. Example: mental content Belief [glass of beer]: representation of a glass of beer in the world.  Mind-to-world direction of fit. Desire [glass of beer]: bring about a change in the world (e.g. I ask the bartender for a glass of beer) so that the world comes to match this mental state.  World-to-mind direction of fit. Elizabeth Anscombe: a desire is like a shopping list, a belief is like an inventory list.

  9. Actors have  desires  beliefs  + rationality Formalised in decision theory:  preferences over outcomes  assign probabilities to outcomes  + these satisfy consistency requirements (axioms of the theory)

  10. Descriptive - Normative Decision making can be studied:  Descriptively: psychology, behavioral economics → study how people actually make choices. In the lab or in the field.  Normatively: decision theory → studies how people should make decisions.

  11. ■ Conception of rationality: means-ends rationality  Not about the ends or goals that a person sets himself (substantive rationality) → external to the analysis  But, given these goals, what would be the rational thing to do?

  12. Formalize decision problem 1. Acts 2. States 3. Outcomes An act is a function from states to outcomes: act(state) = outcome.  Can be represented in a matrix (or table), a tree, or a vector.

  13. What is the decision table? You contemplate studying medicine or going to a dance academy. You reason that going to a dance academy may result in an exciting life, but only if the economy is not in a recession, because in a recession budgets for culture will be cut and you will end up poor. Becoming a doctor in a growing economy gets you a good life, and in a recession it will still offer you a reasonably good life.

  14.
                  Recession          No recession
  Dance academy   poor               exciting
  Medicine        reasonably good    good

  15. Decision making under ignorance Various rules: ■ Dominance ■ Maximin (we will only look at this one) – leximin ■ Maximax ■ Minimax regret ■ Insufficient reason ■ Optimism-pessimism

  16. Maximin – avoid the worst case scenario

        S1    S2    S3    S4    worst case
  A1     1    -3     5     6    A1: -3
  A2     2     2     3     3    A2: 2
  A3     4     6   -10     5    A3: -10

  2 is the highest of the worst cases  select A2
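A minimal sketch of the maximin rule in Python, applied to the table above (the payoffs are from the slide; the code itself is illustrative, not part of the lecture):

```python
# Maximin: for each act, find its worst-case payoff across states,
# then choose the act whose worst case is best.
payoffs = {
    "A1": [1, -3, 5, 6],    # payoffs in states S1..S4
    "A2": [2, 2, 3, 3],
    "A3": [4, 6, -10, 5],
}

worst_cases = {act: min(row) for act, row in payoffs.items()}
best_act = max(worst_cases, key=worst_cases.get)

print(worst_cases)  # {'A1': -3, 'A2': 2, 'A3': -10}
print(best_act)     # A2
```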

  17. Decision making under risk Knowledge about the probabilities → standard rule: maximize expected utility, i.e. choose the act with the highest sum of probability × utility. Can also be done with e.g. money (or time, or..), if utility is a linear function of this factor. (But for many people money has decreasing marginal utility.)
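A sketch of this rule, reusing the dance academy / medicine example; note that the probabilities and utility numbers below are invented for illustration, not from the lecture:

```python
# Decision making under risk: choose the act with the highest
# expected utility, i.e. the sum of probability x utility.
probabilities = [0.3, 0.7]        # assumed P(recession), P(no recession)

acts = {                          # assumed utilities per state
    "dance academy": [0, 90],
    "medicine":      [60, 80],
}

def expected_utility(utilities, probs):
    return sum(p * u for p, u in zip(probs, utilities))

for act, utils in acts.items():
    print(act, expected_utility(utils, probabilities))
# dance academy 63.0, medicine 74.0

best = max(acts, key=lambda a: expected_utility(acts[a], probabilities))
print("choose:", best)            # choose: medicine
```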

  18. Relation Income – Happiness, countries

  19. Utility scales and axioms Ordinal utility function & interval utility function. For an ordinal utility function, preferences must satisfy 3 axioms: - asymmetry - completeness - transitivity For an interval utility function, 2 extra axioms: - independence - continuity

  20. Von Neumann and Morgenstern interval scale Construct a scale by taking two extremes – say, a top item and a lousy item – and compare the choice alternatives with lotteries over these extremes. Example: first, rank the alternatives: Porsche > Volkswagen > Skoda. Now choose a top item & a lousy item to construct the scale, e.g. Ferrari & Honda.

  21. Ask the actor what lottery over the Ferrari (F) and the Honda (H) would leave him/her indifferent to a Porsche / Volkswagen / Skoda for certain. A says: Porsche ~ [0.8 F, 0.2 H] Volkswagen ~ [0.5 F, 0.5 H] Skoda ~ [0.2 F, 0.8 H]

  22. Porsche ~ [0.8 F, 0.2 H] Volkswagen ~ [0.5 F, 0.5 H] Skoda ~ [0.2 F, 0.8 H] Assume U(Ferrari) = 100, U(Honda) = 0. Then:
  U(Porsche) = 0.8 × 100 + 0.2 × 0 = 80
  U(Volkswagen) = 0.5 × 100 + 0.5 × 0 = 50
  U(Skoda) = 0.2 × 100 + 0.8 × 0 = 20
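The same computation as a small sketch, using the indifference probabilities from the slide above (only the code is added):

```python
# VNM interval scale: fix U at the two extremes, then each alternative's
# utility is the expected utility of the lottery that leaves the actor
# indifferent: U(x) = p * U(top) + (1 - p) * U(bottom).
U_TOP, U_BOTTOM = 100, 0          # U(Ferrari), U(Honda)

indifference_probs = {"Porsche": 0.8, "Volkswagen": 0.5, "Skoda": 0.2}

for car, p in indifference_probs.items():
    print(car, p * U_TOP + (1 - p) * U_BOTTOM)
# Porsche 80.0, Volkswagen 50.0, Skoda 20.0
```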

  23. So when an actor’s preferences over a set of alternatives satisfy:  Asymmetry  Completeness  Transitivity  Independence  Continuity then one can derive a cardinal (interval-scale) VNM utility function: one can assign numbers on an interval scale to the alternatives.

  24. Application of VNM: health utilities Policy makers in health care need a measure for the quality of health states from the perspective of patients. - For example for the QALY: Quality-Adjusted Life Year = life expectancy × quality of remaining years
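A one-line sketch of the QALY formula from this slide; the numbers are made up for illustration:

```python
# QALY = remaining life expectancy weighted by the quality of those
# years (quality on a 0-1 scale, e.g. elicited via a VNM lottery).
def qaly(years_remaining, quality_weight):
    return years_remaining * quality_weight

print(qaly(10, 0.7))  # 10 years at quality 0.7 -> 7.0 QALYs
```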

  25. Other method to measure this quality:  Rating scale, e.g. a visual analogue scale running from Death to 100% Healthy, on which illness P and illness Q are placed. • Validity relatively weak • Sensitive to end-of-scale bias (people tend to avoid the extremes of the scale) • Sensitive to spreading bias (people tend to spread outcomes evenly over the scale)

  26. Game theory Analyses the structure of interaction between individuals, and solution concepts. Instead of states of nature: other individuals.

  27. Prisoner’s Dilemma

               Cooperate   Defect
  Cooperate    2, 2        0, 3
  Defect       3, 0        1, 1

  28. Sequential: first one actor chooses, then the other Game tree

  29. Repeated game Repeating the game alters its strategic nature. A one-shot PD leads to mutual defection and a collectively suboptimal equilibrium. A repeated PD offers cooperative possibilities (when indefinitely repeated).

  30. Rational strategies in a repeated PD Always cooperate? → susceptible to exploitation by a defecting actor. Always defect?  Equilibrium strategy, but does not reap the cooperative benefits.

  31. Tit for tat, direct reciprocity  Start with cooperation.  In each subsequent round, mirror what the other player did in the previous round. Axelrod (1984): Tit for Tat was the most successful strategy in his computer tournament.
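A small sketch in the spirit of Axelrod's tournament, using the payoff matrix from the Prisoner's Dilemma slide above (the ten-round length and the Always Defect opponent are illustrative assumptions):

```python
# Repeated PD: Tit for Tat vs. Always Defect. Payoffs per round:
# (C,C)=(2,2), (C,D)=(0,3), (D,C)=(3,0), (D,D)=(1,1).
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_last):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if opponent_last is None else opponent_last

def always_defect(opponent_last):
    return "D"

def play(strategy1, strategy2, rounds=10):
    last1 = last2 = None
    score1 = score2 = 0
    for _ in range(rounds):
        move1, move2 = strategy1(last2), strategy2(last1)
        p1, p2 = PAYOFF[(move1, move2)]
        score1, score2 = score1 + p1, score2 + p2
        last1, last2 = move1, move2
    return score1, score2

print(play(tit_for_tat, always_defect))  # (9, 12): exploited once, then mutual D
print(play(tit_for_tat, tit_for_tat))    # (20, 20): mutual cooperation throughout
```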

  32. - Reputation plays a role. - Information can also come from third parties (indirect reciprocity). Multiple strategies possible:  Win-stay, lose-shift: – both C in previous round → C – both D in previous round → C with prob. β – other C, I D in previous round → D – other D, I C in previous round → D  Defect  Tit for tat  50% cooperate, 50% defect  Etc. Which strategies succeed is often tested by evolutionary simulations.

  33. Stag Hunt game Two hunters: hunt stag or hunt hare.

          Stag    Hare
  Stag    3, 3    0, 2
  Hare    2, 0    1, 1

  What to do?

  34. 2 rational considerations in a Stag Hunt: 1. Maximize payoff 2. Risk avoidance. Cooperation requires trust. - The Stag Hunt game is a.k.a. the Assurance Game. In evolutionary simulations with random pairing, hare hunters take over the population and stag hunters go extinct (see the sketch below).
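A sketch of that claim using discrete replicator dynamics on the Stag Hunt payoffs above. Caveat: the starting share of stag hunters (here 0.4, below the 0.5 tipping point of this matrix) is an assumption; starting above it, stag hunters would take over instead:

```python
# Discrete replicator dynamics for the Stag Hunt with random pairing.
# x = share of stag hunters; fitness = expected payoff against a
# randomly drawn opponent from the population.
def step(x):
    f_stag = 3 * x + 0 * (1 - x)   # stag meets stag: 3, meets hare: 0
    f_hare = 2 * x + 1 * (1 - x)   # hare meets stag: 2, meets hare: 1
    avg = x * f_stag + (1 - x) * f_hare
    return x * f_stag / avg        # shares grow with relative fitness

x = 0.4                            # assumed initial share of stag hunters
for _ in range(50):
    x = step(x)
print(round(x, 4))                 # ~0.0: stag hunters go extinct
```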

  35. Nash equilibrium A combination of strategies is a Nash Equilibrium (NE) if neither party has a reason to unilaterally change its strategy. Stag Hunt: [Stag, Stag] & [Hare, Hare] are both Nash Equilibria.
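A sketch that checks the definition mechanically for small games: a cell is a pure-strategy Nash equilibrium if neither player can gain by a unilateral deviation. It can also be run on the exercise games on the next slide:

```python
# Find pure-strategy Nash equilibria: game[r][c] = (row payoff, col payoff).
def pure_nash(game):
    equilibria = []
    for r in range(len(game)):
        for c in range(len(game[0])):
            row_u, col_u = game[r][c]
            # No profitable unilateral deviation for either player?
            row_ok = all(game[r2][c][0] <= row_u for r2 in range(len(game)))
            col_ok = all(game[r][c2][1] <= col_u for c2 in range(len(game[0])))
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

# Stag Hunt from the earlier slide (index 0 = Stag, 1 = Hare).
stag_hunt = [[(3, 3), (0, 2)],
             [(2, 0), (1, 1)]]
print(pure_nash(stag_hunt))  # [(0, 0), (1, 1)]: [Stag, Stag] and [Hare, Hare]
```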

  36. What are the Nash equilibria (in pure strategies)?

  Game 1:
        C1      C2
  R1    2, 2    1, 3
  R2    3, 1    0, 0

  Game 2:
        C1      C2
  R1    2, 1    0, 0
  R2    0, 0    1, 2

  Game 3:
        C1      C2
  R1    2, 1    1, 0
  R2    3, 0    0, 1

  37. Evolutionary game theory Payoff = number of offspring (reproduction). Individuals do not make choices but follow fixed strategies. After each round there is reproduction and a new generation (older generations die). Evolutionarily stable strategy (ESS): a population whose members follow this strategy cannot be invaded by individuals following another strategy.

  38. An ESS is always also a Nash equilibrium. However, not every Nash equilibrium is an ESS.

  Hi-Lo game:
     1, 1   0, 0
     0, 0   2, 2

  → a way to reduce the number of Nash equilibria (and get a unique solution). Evolutionary game theory can also be used for players who are boundedly rational and act on the basis of conditioning (stimulus-response) and trial & error learning  gradually moving towards equilibrium.

  39. CONSEQUENTIALISM AND UTILITARIANISM Lecture 3

  40. Case: Data-driven innovation: Big Data for Growth and Well-being “Data-driven innovation has become a key pillar of 21st-century growth, with the potential to significantly enhance productivity, resource efficiency, economic competitiveness, and social well-being.” Source: The Organisation for Economic Co-operation and Development (OECD) report “Data-driven innovation”.

  41. Normative ethical theories 1. Consequentialism, Utilitarianism 2. Deontology 3. Social contract theory 4. Virtue ethics

  42. Person → Action → Consequences
        ↑           ↑            ↑
     Virtue     Deontology   Consequentialism,
     ethics                  Utilitarianism

  Interdependency of actors (as in game theory) → Social contract theory

  43. Consequentialism Consequentialism: moral worth lies in the consequences of an action.  That is, in the value(s) that are realized (e.g. freedom, wellbeing/happiness/utility, knowledge, beauty, etc.)  An action is morally good if it has good consequences, given the possible actions.  Can be monistic or pluralistic in terms of values.  Value(s) can be maximized, but not necessarily (another possibility would be e.g. an egalitarian distribution).

  44. Utilitarianism ■ Subset of consequentialism. ■ Monistic: only utility (= wellbeing) counts. ■ Maximizes / promotes utility. ■ What is utility or wellbeing? → lecture 1 – Hedonism – Preference satisfaction (as in decision theory, lecture 2) – Objective list

  45. Prominent utilitarians Jeremy Bentham (1748-1832), John Stuart Mill (1806-1873), Henry Sidgwick (1838-1900), Derek Parfit (1942-2017), Peter Singer (1946-) Bentham: “this fundamental axiom, it is the greatest happiness of the greatest number that is the measure of right and wrong.” Mill: “Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.”

  46. Procedure 1. Define the concept of utility. • Hedonism / preference satisfaction / objective list 2. What are the possible alternative actions? 3. Determine for each alternative action its total expected utility. • Expected utility = probability × utility • Total = aggregate over all those who are involved 4. The action that max [total utility] = the morally good action = one’s obligation to perform.
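A sketch of steps 3 and 4 of this procedure; the two actions and all probability/utility numbers are invented purely for illustration:

```python
# Utilitarian calculus: for each action, aggregate expected utility
# (probability x utility) over everyone involved; the action with the
# maximum total is the morally required one on this view.
actions = {
    # action: list of (probability, utility) pairs, one per person involved
    "keep promise":  [(1.0, 5), (1.0, 3)],
    "break promise": [(0.5, 10), (0.5, -4)],
}

def total_expected_utility(effects):
    return sum(p * u for p, u in effects)

for action, effects in actions.items():
    print(action, total_expected_utility(effects))
# keep promise 8.0, break promise 3.0 -> keep promise is obligatory
```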

  47. Characteristics  Impartiality: no one is privileged. – Anyone who can be more or less happy (can suffer) belongs to the moral community. – Argumentative basis for: women’s right to vote, the abolition of slavery, animal welfare/rights.  Traditional moral rules (thou shalt not steal, thou shalt not lie) are not absolute. • Those rules must be interpreted flexibly, e.g. lying is good if it increases total utility.

  48.  Forward looking: the consequences lie ahead, in the future. The past is irrelevant. – E.g., punish somebody because that increases future utility in society, not because he deserves punishment.  Insatiable: any further increase of utility is better, and morally required. – Variant that drops this: satisficing utilitarianism.

  49. Case: Big data for growth and wellbeing Some facts from: https://worldhappiness.report/ed/2019/big-data-and-well-being/ ■ Number of likes on Facebook  correlates with individual Life Satisfaction (i.L.S.) (but not strongly) ■ Sentiment analysis of Twitter (positive and negative emotion terms)  correlates with i.L.S. (but not strongly) ■ Drug prescriptions from administrative datasets of a population  correlate with i.L.S. (more strongly)

  50. ■ Google Trends data on the frequency of positive terms to do with work, health, family  correlate with i.L.S. (more strongly) ■ Sentiment analysis of Twitter in Mexico  correlates with events (more strongly) ■ Aggregate sentiment data  correlate with variation between countries / groups (more strongly)

  51. How can these data be used? ■ They reduce the reliance on expensive large surveys. ■ Governments and companies can target low mood / low life satisfaction areas with specific policies.

  52. Philosophical issues ■ Which concept of well-being? For which use? ■ How to interpret the low correlation of mood/sentiment measures with life satisfaction? ■ Targeting low mood / life satisfaction areas with specific policies seems to presuppose a utilitarian calculus: is that justified? ■ Most data are retrieved without consent.

  53. ■ The ability to measure some proxies well may (unintentionally) push other important things to the background. ■ How to deal with those other important things, e.g. freedom? Possible answers: – No need! Everything is already incorporated in the well-being measure. – It can be measured, e.g. in terms of opportunity sets, but cannot be compared with well-being (e.g. it must have a threshold value, or must be prioritized). – It can be measured and compared (to a sufficient extent): a utility function can be constructed.

  54. General criticism and discussion 1. Is utility all that matters? Aren’t there other intrinsic values?  Is the completeness axiom correct? 2. Rules like ‘thou shalt not steal’ are inflexible. They concern fundamental rights that cannot be traded off against considerations of utility/wellbeing. E.g. it is wrong to sacrifice innocent people in order to max [utility]. No exploitation of minorities. 3. Heavy information processing: for each situation, calculate expected utility. 4. Integrity (and separateness) of persons: individuals are more than carriers of utility.

  55. 5. Backward-looking reasons are important. E.g. one deserves punishment for what one has done. 6. Special relations are important: family and friends have a higher priority than strangers.

  56. Utilitarian responses  Bite the bullet: e.g. Peter Singer: most of this criticism is an irrational product of our evolutionary and cultural past.  Modifications: e.g. indirect / rule utilitarianism. The utilitarian argument: • total utility when everybody calculates in utilitarian fashion < total utility when everybody follows rules • A system of rules that apply to all in a society.

  57. Indirect / rule utilitarianism - discussion ■ Problem for indirect/rule utilitarianism: rule fetishism: must a rule always be followed, no matter the circumstances? Even when it is obvious that it does not yield max [U]? – Response: the rules are rules of thumb, plans for the future. Utilitarian calculus → design a system of global rules to max [U]. Follow these rules as long as there is no reason to reconsider (and to recalculate and redesign).

  58. ■ Another problem: what to do in an actual situation is derived from a hypothetical situation. ■ Yet another problem: does it provide the appropriate moral justification? Example: I save my own child instead of two other children. Why? Well, because this rule is an element of a system that max [U]… …. Isn’t that one thought too many? (Bernard Williams)

  59. Contemporary utilitarian Peter Singer https://www.ted.com/talks/peter_singer_the_why_and_how_of_effec tive_altruism?language=nl

  60. DEONTOLOGY AND SOCIAL CONTRACT THEORY Lecture 4

  61. Thought-experiment in ethics: trolley problem Are you going to throw the switch?

  62. Trolley problem part 2 Are you going to push the fat man?

  63. Deontology Founding father: Immanuel Kant (1724-1804) Kant: moral worth is not to be found in the consequences of an action. E.g. lying or stealing or killing is not bad because of the bad consequences that these actions may happen to have but because they are bad actions, period. How to understand this?

  64. Example X helps Y to cross the street. Is moral worth to be found in the consequences? Suppose X does it because he: - actually wants to gain approval from Y and bystanders? - actually sympathizes with Y? - actually expects something in return from Y? - actually experiences pleasure from doing this?

  65. In such cases, the consequences are the same but the action is not good: the person does not act out of duty but merely dutifully, in accordance with duty. What makes an action good then, if not the consequences? “I helped her cross the street because that is the right thing to do.” “But this is circular!” Patience… Moral worth shows itself most clearly when other motives are (somehow) absent, e.g. when someone’s mood is clouded – and one still does the right thing.

  66. Doing the right thing looks pretty formal now. Kant: that is exactly right! Principle underlying the right intention = lawlike, like a natural law. Only this law is a law that humans impose on themselves.

  67. Difference between humans and animals Kant: the rational nature of human creatures. Animals are driven by inclinations and impulses → subject to natural laws But humans can also impose laws on themselves, and follow them. (This gives us freedom)

  68. Kant and Newton Newton: everything in the universe is subject to natural laws. Kant: morality has universal scope and necessity → just like Newton’s laws. Only: humans impose the laws on themselves.

  69. Categorical imperative (1) Universal law formulation: Act only according to that maxim by which you can at the same time will that it should become a universal law. Categorical: not contingent on one’s own desires (such imperatives Kant calls ‘hypothetical’) and not on the circumstances. Kant’s idea: moral reasons are universally binding, irrespective of time, place, person.

  70. Example: lying, breaking a promise Can this be action-guiding for you & can you at the same time will that everybody acts like this? That would be self-defeating.
