Explaining Recommendations: Fidelity versus Interpretability
Derek Bridge, Insight Centre for Data Analytics, University College Cork, Ireland
Overview
- Recommender Systems
- Explaining Recommendations
- Case Studies
- Concluding Remarks
RECOMMENDER SYSTEMS
What is a Recommender System?
- Software that helps users discover
– new music and other media
– cultural artefacts such as works of art and architecture
– products and services
– travel experiences
– …
- Recommendations must typically be
– relevant to the user (‘personalized’) and the context-of-use (‘contextualized’)
– diverse
– serendipitous
– …
Photo by Nickolai Kashirin (CC by 2.0)
A Scenario
A hungry academic receives a recommendation for a place-to-eat, but it is not within walking distance (high cost) and serves a fusion-style cuisine with which she is unfamiliar (high uncertainty). Her confidence in the recommendation might be improved by an explanation.
Types of Recommender System
- Content-based
- Collaborative
– User-based nearest-neighbours
– Item-based nearest-neighbours
– Matrix factorization
[Diagram: Training set → Build Model → Recommendation, given the user & context-of-use]
Content-Based
[Figure: example movies with their genres — crime/drama; adventure/drama/fantasy; western; action/sci-fi; comedy/drama/romance]
User-Based Nearest-Neighbours
[Figure: a sparse user-item ratings matrix, annotated with user-user similarity]
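The neighbourhood idea can be sketched in a few lines. This is a toy illustration with made-up ratings, not the system described in the talk:

```python
import numpy as np

# Hypothetical toy ratings matrix (0 = unrated); rows = users, cols = movies.
R = np.array([
    [5, 4, 0, 4, 3],
    [4, 5, 2, 0, 5],
    [3, 3, 0, 2, 5],
    [4, 0, 1, 5, 0],
], dtype=float)

def cosine_sim(u, v):
    """Cosine similarity over the items both users have rated."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    u, v = u[mask], v[mask]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict(R, user, item, k=2):
    """Similarity-weighted average of the k nearest neighbours' ratings."""
    sims = [(cosine_sim(R[user], R[other]), other)
            for other in range(len(R))
            if other != user and R[other, item] > 0]
    sims.sort(reverse=True)
    top = sims[:k]
    if not top:
        return 0.0
    num = sum(s * R[o, item] for s, o in top)
    den = sum(abs(s) for s, o in top)
    return num / den

print(round(predict(R, user=0, item=2), 2))
```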
Item-Based Nearest-Neighbours
[Figure: the same sparse user-item ratings matrix, annotated with item-item similarity]
Matrix Factorization
The n × o ratings matrix (n users, o movies) is approximated by the product of an n × g matrix of user factors and a g × o matrix of movie factors, where g is the number of latent factors: R ≈ P × Q.
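A minimal sketch of this factorization, fitting the observed entries of a made-up ratings matrix by stochastic gradient descent (the matrix, hyperparameters and iteration count are illustrative assumptions):

```python
import numpy as np

# Factorize a small ratings matrix R ≈ P @ Q using SGD on observed entries.
rng = np.random.default_rng(0)
R = np.array([
    [5, 4, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)
n, o = R.shape
g = 2                                    # number of latent factors
P = 0.1 * rng.standard_normal((n, g))    # user factors
Q = 0.1 * rng.standard_normal((g, o))    # movie factors

lr, reg = 0.05, 0.01
for _ in range(2000):
    for u, i in zip(*R.nonzero()):       # observed ratings only
        err = R[u, i] - P[u] @ Q[:, i]
        P[u] += lr * (err * Q[:, i] - reg * P[u])
        Q[:, i] += lr * (err * P[u] - reg * Q[:, i])

# Reconstructed matrix, including predictions for the unobserved cells.
print(np.round(P @ Q, 1))
```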
Ever More Complex Models
- Hybrids and ensembles
- Latent feature spaces
- Multi-objective systems
- Deep models
Interpretable Models
- Intelligible global descriptions of systems
– e.g. decision trees
– e.g. linear models (esp. sparse linear models)
- Challenges
– preserving accuracy
– intelligibility, e.g. when there are many features or highly-engineered features
– protecting Intellectual Property
- Interpretable deep models
– learn to associate semantic features with nodes in hidden layers
[Figure: the classic play-tennis decision tree — split on Outlook (Sunny/Overcast/Rain), then on Humidity (High/Normal) or Wind (Strong/Weak)]
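To make the point concrete, the play-tennis tree in the figure can be written directly as readable code; for an interpretable model like this, the model and its global explanation coincide:

```python
# A hand-coded version of the play-tennis decision tree from the slide.
def play_tennis(outlook, humidity, wind):
    """Return True if the tree predicts 'play'."""
    if outlook == "Sunny":
        return humidity == "Normal"    # High humidity -> No
    if outlook == "Overcast":
        return True                    # always Yes
    if outlook == "Rain":
        return wind == "Weak"          # Strong wind -> No
    raise ValueError(f"unknown outlook: {outlook}")

print(play_tennis("Sunny", "High", "Weak"))   # → False
```

Every prediction can be explained by reading off the root-to-leaf path that produced it.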
DARPA’s XAI Initiative
https://www.darpa.mil/program/explainable-artificial-intelligence
EXPLAINING RECOMMENDATIONS
Explanations are Relational
- Recommendation only
– “You might like Never Let Me Go”
- Recommendation plus description
– “You might like Never Let Me Go, a 2010 dystopian drama based on the 2005 novel of the same name…”
- Recommendation plus explanation
– “You liked Atonement, so you might also like Never Let Me Go”
Intermediaries in Explanations
An explanation can relate the recommendation to the user via intermediaries [Vig et al., 2009]: items the user likes that are similar to the recommendation; users who are similar to the user and who like the recommendation; and features that are present in the recommendation and that the user likes.
Explanation Dimensions
- Interpretable
- Actionable
- Cheap-to-compute
- Sound and complete (fidelity)
- Ethical
Fidelity
- Soundness: how truthful each element in an explanation is with respect to the underlying system
- Completeness: the extent to which an explanation describes all of the underlying system
Greater fidelity brings increasing trust, fewer requests for clarification, and better understanding [Kulesza et al., 2013].
White-Box Explanations
[Diagram: Training set → Build Model → Recommendation + “trace” data → Explanation Generation → Recommendation + Explanation, driven by the user & context-of-use.] Because they are generated from the model’s own trace data, white-box explanations are sound.
Black-Box Explanations
[Diagram: Training set → Build Model → Recommendation → Explanation Generation → Recommendation + Explanation. The explanation generator may draw on other data and may probe the model with queries.] Black-box explanations are model-agnostic, but probably not sound.
Why Explain?
- Scrutability
- Trust
- Persuasion
- Decision-support
CASE STUDIES
CASE STUDY A
White-Box Explanations of User-Based Nearest-Neighbours Recommendations
Explaining User-Based Nearest Neighbours Recommendations
- Difficulties
– Often 50+ neighbours
– Userids are meaningless: strangers!
– Profiles are large and private
- Good at persuading [Herlocker et al., 2000], but less good for trust and decision-support [Bilgic & Mooney, 2005]
[Diagram: User is similar to Users who like the Recommendation]
Explaining User-Based Nearest Neighbours Recommendations
[Diagram: Training set → Build Model → Recommendation + neighbours → Explanation Generation → Recommendation + Explanation, driven by the user & context-of-use]
CASE STUDY B
Item-Based Explanations for User-Based Nearest-Neighbours Recommendations
Item-Based Explanations
- Good for trust and decision-support [Bilgic & Mooney, 2005]
- Familiar, e.g. Amazon:
– “Customers who bought Atonement also bought Never Let Me Go”
[Diagram: User likes Items that are similar to the Recommendation]
Item-Based Explanations for User-Based Recommendations
- Explanation partners: the user’s neighbours
- Candidate items: the movies the user has in common with her partners
- Association rules: rules that link candidates to the recommended item
[Bridge & Dunleavy, 2014; Kaminskas, Durão & Bridge, 2017]
Item-Based Explanations for User-Based Recommendations
[Diagram: Training set → Build Model → Recommendation + neighbours → Mine association rules → Explanation Generation → Recommendation + Explanation]
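A hypothetical sketch of the rule-mining step: estimate the confidence of single-antecedent rules over the neighbours' profiles, then pick the strongest candidate as the explanation (profiles and titles are invented for illustration, not the published method's data):

```python
# Mine rules of the form "users who like X also like the recommended item"
# from the neighbours' profiles, then explain via the strongest candidate.
profiles = [
    {"Atonement", "Never Let Me Go", "The Hours"},
    {"Atonement", "Never Let Me Go"},
    {"The Hours", "Amelie"},
    {"Atonement", "Amelie", "Never Let Me Go"},
]
recommended = "Never Let Me Go"
candidates = {"Atonement", "The Hours"}   # items shared with the partners

def confidence(x, y, profiles):
    """P(y in profile | x in profile), estimated over neighbour profiles."""
    with_x = [p for p in profiles if x in p]
    if not with_x:
        return 0.0
    return sum(y in p for p in with_x) / len(with_x)

best = max(candidates, key=lambda x: confidence(x, recommended, profiles))
print(f"You liked {best}, so you might also like {recommended} "
      f"(confidence {confidence(best, recommended, profiles):.2f})")
```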
CASE STUDIES C & D
Using Queries in Black-Box Explanations
(but these two case studies are not recommender systems!)
Queries in Black-Box Explanations
[Diagram: Training set → Build Model → Recommendation → Explanation Generation → Recommendation + Explanation, with the explanation generator probing the model via queries]
Are You Safe To Drive?
[Figure: cases plotted by weight and units consumed, showing the decision boundary] [Doyle et al., 2004; Bridge & Cummins, 2006]
LIME: Explanations for any Classifier
[Diagram: Training set → Build Model → Classification; LIME then builds a local surrogate model by querying the black box.] The surrogate g is chosen as argmin over g ∈ H of ℒ(f, g, Π_x) + Ω(g), where f is the black-box model, H is a class of interpretable models, Π_x is a proximity kernel around the instance x being explained, and Ω penalises surrogate complexity. The output is the classification plus an explanation [Ribeiro et al., 2016].
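A minimal LIME-style sketch, assuming a toy black box and a weighted linear surrogate (the real LIME library does much more, e.g. interpretable representations and feature selection):

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    """An opaque classifier score we want to explain locally."""
    return 1.0 / (1.0 + np.exp(-(X[:, 0] ** 2 - X[:, 1])))

x = np.array([1.0, 0.5])                       # instance to explain
Z = x + 0.3 * rng.standard_normal((500, 2))    # perturbed neighbours (queries)
y = black_box(Z)                               # black-box answers
w = np.exp(-np.sum((Z - x) ** 2, axis=1) / 0.25)   # proximity kernel Pi_x

# Weighted least squares: fit an interpretable linear surrogate g near x.
A = np.c_[Z, np.ones(len(Z))]
Aw = A * np.sqrt(w)[:, None]
yw = y * np.sqrt(w)
coef, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
print("local feature weights:", np.round(coef[:2], 2))
```

The signs of the local weights explain the prediction at x: the first feature pushes the score up, the second pushes it down.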
CASE STUDY E
Opinionated Recommendation
Opinionated Recommendation
- Reviews as a source of features & sentiment
- Recommendations that balance similarity & sentiment
- Novel explanation generation
- Re-ranking of recommendations
[Muhammad et al., 2016]
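A hypothetical sketch of the balance between similarity and sentiment (items and scores are invented, and the linear weighting is an illustrative assumption, not Muhammad et al.'s exact formula):

```python
# Re-rank candidates by blending feature similarity to the user's profile
# with the sentiment that reviews express for those features.
candidates = {
    # item: (similarity to user's preferred features, mean review sentiment)
    "Hotel A": (0.9, 0.4),
    "Hotel B": (0.7, 0.9),
    "Hotel C": (0.5, 0.6),
}

def score(sim, sent, w=0.5):
    """Blend similarity and sentiment; w trades one off against the other."""
    return (1 - w) * sim + w * sent

ranking = sorted(candidates, key=lambda c: score(*candidates[c]), reverse=True)
print(ranking)   # → ['Hotel B', 'Hotel A', 'Hotel C']
```

With w = 0.5, a slightly less similar hotel with much better-reviewed features overtakes the most similar one.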
CASE STUDY F
Recommendation-By-Explanation
Recommendation-By-Explanation
[Rana & Bridge, 2017]
Recommendation-By-Explanation
An explanation chain links the candidate back through the user’s past preferences [Rana & Bridge, 2017]:
- The Notebook (candidate): star-crossed-lovers, secret-love, broken-engagement, volunteer, u.s.-army, romantic-rivalry, self-discovery, …
- The Illusionist: fiancé-fiancée relationship, shooting, secret-love, broken-engagement, star-crossed-lovers, …
- Pearl Harbour: fiancé-fiancée relationship, shooting, secret-mission, volunteer, parachute, …
- Big Fish: romantic-rivalry, carnival, secret-mission, parachute, …
Each movie in the chain shares features with its neighbour: The Notebook with The Illusionist (secret-love, broken-engagement, star-crossed-lovers), The Illusionist with Pearl Harbour (fiancé-fiancée relationship, shooting), and Pearl Harbour with Big Fish (secret-mission, parachute).
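The chain construction can be sketched greedily, using the keyword sets from the slide (the greedy heuristic is an illustrative assumption; Rana & Bridge's published method differs in detail):

```python
# Build an explanation chain: greedily link the candidate back through
# the user's past preferences, requiring each step to share keywords.
movies = {
    "The Notebook":    {"star-crossed-lovers", "secret-love", "broken-engagement",
                        "volunteer", "u.s.-army", "romantic-rivalry", "self-discovery"},
    "The Illusionist": {"fiance-fiancee-relationship", "shooting", "secret-love",
                        "broken-engagement", "star-crossed-lovers"},
    "Pearl Harbour":   {"fiance-fiancee-relationship", "shooting", "secret-mission",
                        "volunteer", "parachute"},
    "Big Fish":        {"romantic-rivalry", "carnival", "secret-mission", "parachute"},
}

def build_chain(candidate, past, movies):
    """Greedily append the past movie sharing most keywords with the last link."""
    chain, current, remaining = [candidate], candidate, set(past)
    while remaining:
        nxt = max(remaining, key=lambda m: len(movies[m] & movies[current]))
        if not movies[nxt] & movies[current]:
            break                      # no overlap left: stop the chain
        chain.append(nxt)
        current, remaining = nxt, remaining - {nxt}
    return chain

print(build_chain("The Notebook",
                  ["The Illusionist", "Pearl Harbour", "Big Fish"], movies))
# → ['The Notebook', 'The Illusionist', 'Pearl Harbour', 'Big Fish']
```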
Recommendation-By-Explanation
[Figure: bar charts comparing R-by-E with a content-based recommender (CB) on Precision, Diversity and Surprise]
Comparison
- Classical approaches: generate & rank recommendations, then generate explanations
- Opinionated Recommendation: generate recommendations, generate explanations, then re-rank the recommendations
- Recommendation-by-Explanation: generate reasons to recommend (explanations), then recommend the items with the best reasons
CONCLUDING REMARKS
The Future
- Drive explanation design & evaluation by explanation goals
- Bring explanations into the heart of recommenders
- Design explanations that go beyond relevance to the user
– context-awareness
– diversity
– serendipity
The Future
- Design ways to present and visualise recommendations and explanations
- Design explanations for systems that are more conversational
– why are you recommending this?
– why aren’t you recommending that?
– why this ahead of that?
– what would you recommend if things were different? (counterfactuals)
Acknowledgements
Lisa Cummins, Kevin Dunleavy, Fred Durão, Arpit Rana, Marius Kaminskas, Dónal Doyle, Padraig Cunningham
and all other members of the Recommender Systems Group, Insight Centre for Data Analytics
References
- Bilgic, Mustafa and Mooney, Raymond: Explaining Recommendations: Satisfaction vs. Promotion, Proceedings of
Beyond Personalization 2005: A Workshop on the Next Stage of Recommender Systems Research at the 2005 International Conference on Intelligent User Interfaces, 2005
- Bridge, Derek and Dunleavy, Kevin: If you liked Herlocker et al.'s explanations paper, then you might like this
paper too, Proceedings of the Workshop on Interfaces and Human Decision Making for Recommender Systems (Workshop Programme of the Eighth ACM Conference on Recommender Systems), pp.22-27, 2014
- Cummins, Lisa and Bridge, Derek: KLEOR: A Knowledge Lite Approach to Explanation Oriented Retrieval, Computing and Informatics, vol.25(2-3), pp.173-193, 2006
- Doyle, Dónal and Cunningham, Pádraig and Bridge, Derek and Rahman, Yusof: Explanation Oriented Retrieval, Proceedings of the Seventh European Conference on Case-Based Reasoning (ECCBR 2004), 2004
- Herlocker, Jonathan L. and Konstan, Joseph A. and Riedl, John: Explaining Collaborative Filtering
Recommendations, Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, pp.241-250, 2000
- Kaminskas, Marius and Durao, Fred and Bridge, Derek: Item-Based Explanations for User-Based
Recommendations, Proceedings of eKNOW 2017, The Ninth International Conference on Information, Process, and Knowledge Management, pp.65-70, 2017
- Kulesza, Todd and Stumpf, Simone and Burnett, Margaret and Yang, Sherry and Kwan, Irwin and Wong, Weng-
Keen: Too Much, Too Little, or Just Right? Ways Explanations Impact End Users’ Mental Models, Proceedings of the IEEE Symposium on Visual Languages and Human-Centric Computing, 2013
- Muhammad, Khalil and Lawlor, Aonghus and Smyth, Barry: On the Use of Opinionated Explanations to Rank and Justify Recommendations, Proceedings of the FLAIRS Conference, pp.554-559, 2016
- Rana, Arpit and Bridge, Derek: Explanation Chains: Recommendations by Explanation, Proceedings of the Poster
Track of the 11th ACM Conference on Recommender Systems, CEUR Workshop Proceedings, vol-1905, 2017
- Ribeiro, Marco and Singh, Sameer and Guestrin, Carlos: "Why Should I Trust You?": Explaining the Predictions of
Any Classifier, http://arxiv.org/abs/1602.04938, 2016
- Vig, Jesse and Sen, Shilad and Riedl, John: Tagsplanations: Explaining Recommendations Using Tags, Proceedings of the 14th International Conference on Intelligent User Interfaces, pp.47-56, 2009