07.05.2020
On the Ethics of Algorithmic Decision-Making in Healthcare
- Dr. Thomas Grote
Ethics and Philosophy Lab, Cluster of Excellence “Machine Learning: New Perspectives for Science”; thomas.grote@uni-tuebingen.de
My aims in this talk
(i) Machine learning in medicine – between hype and hope
(ii) Ethical issues at the individual level
(iii) Ethical issues at the institutional level
(iv) The ethics of medical machine learning – a glimpse into the future
Recently, there has been much excitement about the application of machine learning within clinical medicine.
Several studies report machine learning algorithms exceeding the abilities of medical professionals in the detection of disease or the prediction of critical health conditions.
Machine learning models also show promise in the prediction of acute health threats, from cardiovascular disease to kidney deterioration or circulatory failure.
Potentially, machine learning could:
(i) improve the reliability of medical diagnosis,
(ii) act as a monitoring system in intensive care,
(iii) help in the discovery of new biomarkers,
(iv) and, at the hospital level, render the allocation of resources more efficient.
However, the test conditions of these studies often do not mirror realistic settings.
The benefit of machine learning for clinical medicine still needs to be established.
Ethical issues arise at three levels:
(i) Individual
(ii) Institutional
(iii) Population-level/public health
Even when an algorithm achieves a higher diagnostic accuracy than a clinician, the complexity of its model makes it difficult to interpret why it made a given diagnostic decision.
(i) A clinician will observe a bundle of symptoms.
(ii) She makes various conjectures.
(iii) She makes an inference to the best explanation.
(iv) She decides on a given medical intervention.
(v) Follow-up.
Peer Disagreement
Epistemic peers are competent experts who are in a similar epistemic position.
Example: a clinician examines a patient with pain in her back and suspects that she has disease x, with a credence of 0.8.
The algorithm, however, diagnoses disease y, with a credence of 0.8.
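To make the disagreement concrete: one prominent (and contested) response in the epistemology literature, the equal-weight view, says that peers should split the difference between their credences. A minimal sketch in Python (the function name is mine, not from the talk):

```python
def conciliate(credences):
    """Equal-weight view: each peer adopts the average of all credences."""
    return sum(credences) / len(credences)

# The clinician assigns credence 0.8 to disease x (hence 0.2 to y);
# the algorithm assigns credence 0.8 to disease y (hence 0.2 to x).
credence_x = conciliate([0.8, 0.2])
credence_y = conciliate([0.2, 0.8])
print(credence_x, credence_y)  # both 0.5: full conciliation ends in suspension
```

On this view the clinician ends up suspending judgment between x and y, which illustrates why peer disagreement with an algorithm is practically troubling in time-critical clinical settings.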
The Problem of Automated Deference
By deferring to the algorithm's output, the clinician avoids being held accountable.
Patient Autonomy
Overcoming Opacity
Ethical Concerns at the Institutional Level
Accountability Gap(s)
How to Close the Gap?
Fairness I
Fairness II
A widely used algorithm in the United States predicts the health care needs of patients.
The algorithm turned out to be miscalibrated for Black patients.
The main source of the miscalibration has been that the health costs of individual patients were used as the main predictor.
As Black patients, on average, incur lower health costs for the same level of need, access to additional care was effectively restricted for Black patients.
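The mechanism can be illustrated with a toy simulation. All numbers below are invented for illustration and do not come from the study under discussion; the sketch assumes both groups have identical distributions of true health need, but one group's restricted access to care deflates its observed costs, which serve as the training label:

```python
import random

random.seed(0)

# Hypothetical access-to-care factor: group B incurs lower costs
# for the same underlying need (an assumption for illustration).
ACCESS = {"A": 1.0, "B": 0.6}

def simulate(group, n=10_000):
    """Return (average true need, average observed cost) for a group."""
    needs, costs = [], []
    for _ in range(n):
        need = random.uniform(0, 1)      # true health need
        cost = need * ACCESS[group]      # observed health costs (the proxy label)
        needs.append(need)
        costs.append(cost)
    return sum(needs) / n, sum(costs) / n

for group in ("A", "B"):
    avg_need, avg_cost = simulate(group)
    print(group, round(avg_need, 2), round(avg_cost, 2))
# Average need is the same in both groups, but group B's average cost
# is lower, so a model trained to predict cost systematically
# understates group B's health needs.
```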
Fairness III
In medical datasets, certain populations will almost inevitably be underrepresented. An academic hospital somewhere in Central Europe will most likely have fewer Black patients than White ones.
Collecting more data on underrepresented groups, however, may expose members of said groups to higher risks (as the medical data contain sensitive information on the individual patients).
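A small simulation can illustrate, under invented distributions, why underrepresentation matters: a decision threshold tuned on the majority group yields a markedly higher error rate on a group whose disease-marker distribution differs:

```python
import random

random.seed(1)

# Toy setup (all distributions invented): healthy patients have a
# marker ~ N(0, 1); diseased patients are shifted upward, by 2.0 in
# the majority group but only 1.0 in the underrepresented group.
def error_rate(disease_shift, threshold, n=20_000):
    errors = 0
    for _ in range(n):
        diseased = random.random() < 0.5
        marker = random.gauss(disease_shift if diseased else 0.0, 1.0)
        errors += (marker > threshold) != diseased
    return errors / n

threshold = 1.0  # midpoint threshold, optimal for the majority group
majority_error = error_rate(2.0, threshold)
minority_error = error_rate(1.0, threshold)
print(round(majority_error, 2), round(minority_error, 2))
# The underrepresented group's error rate is markedly higher under
# the majority-tuned threshold.
```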
Normative Alignment I
The worry is that clinicians defer normative decisions to algorithms.
Yet such normative decisions are pervasive in clinical medicine.
Normative Alignment II
What makes for the best treatment?
(i) The highest cure rate
(ii) The lowest risks
(iii) The least suffering
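A toy example (the treatment names and scores below are purely illustrative inventions) shows that the three criteria can each select a different option, which is why the choice of optimisation target is itself a normative decision:

```python
# Three hypothetical treatments scored on the three criteria above
# (all numbers invented for illustration).
treatments = {
    "surgery":    {"cure_rate": 0.90, "risk": 0.30, "suffering": 0.60},
    "radiation":  {"cure_rate": 0.70, "risk": 0.10, "suffering": 0.80},
    "palliative": {"cure_rate": 0.20, "risk": 0.15, "suffering": 0.05},
}

best_cure       = max(treatments, key=lambda t: treatments[t]["cure_rate"])
lowest_risk     = min(treatments, key=lambda t: treatments[t]["risk"])
least_suffering = min(treatments, key=lambda t: treatments[t]["suffering"])

print(best_cure, lowest_risk, least_suffering)
# prints: surgery radiation palliative -- each criterion selects a
# different treatment, so the objective the algorithm optimises
# encodes a normative choice.
```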
The Political Economy of Medical AI/ML
The development of medical machine learning is largely driven by the big tech companies.
What are the resulting dependencies and risks that we should be worried about, if any?
Diagnostic and treatment decisions should not simply be outsourced to black boxes.
Close ethical scrutiny needs to accompany the integration of machine learning into clinical practice.
Funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy (Cluster of Excellence “Machine Learning—New Perspectives for Science”, EXC 2064, project number 390727645).
If you want to get in touch: thomas.grote@uni-tuebingen.de