
Controlling Fairness and Bias in Dynamic Learning-to-Rank (ACM SIGIR 2020) - Presentation



  1. Controlling Fairness and Bias in Dynamic Learning-to-Rank. ACM SIGIR 2020. Marco Morik*†, Ashudeep Singh*‡, Jessica Hong‡, Thorsten Joachims‡. †TU Berlin, ‡Cornell University

  2. Dynamic Learning-to-Rank: for each incoming user (User 1, User 2, User 3, ...), present a ranking of the candidate set for query x, observe which results receive clicks (+1), and update the ranking before serving the next user.

  3. Problem 1: Selection bias due to position • The number of clicks is a biased estimator of relevance. • Lower positions get less attention. • Less attention means fewer clicks. • Rich-get-richer dynamic: what starts at the bottom has little opportunity to rise in the ranking (illustrated by the toy simulation below).
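To make the rich-get-richer dynamic concrete, here is a toy simulation (not from the paper): two equally relevant items are ranked by raw click counts under a position-biased examination model, and the item that happens to start on top keeps accumulating clicks. The item names, the base-2 log in the examination probability, and all numbers are illustrative assumptions.

```python
import math
import random

def examine_prob(position):
    # Probability that a user looks at the item in this 1-indexed position
    # (a DCG-style discount; the base-2 log is an assumption, not from the slides).
    return 1.0 / math.log2(position + 1)

def simulate(num_users=5000, seed=0):
    random.seed(seed)
    relevance = {"A": 0.5, "B": 0.5}   # both items are equally relevant
    clicks = {"A": 0, "B": 1}          # item B starts with a one-click head start
    for _ in range(num_users):
        # Naive dynamic ranking: sort by raw click counts.
        ranking = sorted(clicks, key=clicks.get, reverse=True)
        for pos, item in enumerate(ranking, start=1):
            # A click requires examining the position AND finding the item relevant.
            if random.random() < examine_prob(pos) and random.random() < relevance[item]:
                clicks[item] += 1
    return clicks

print(simulate())  # B's head start snowballs even though A is equally relevant
```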

  4. Problem 2: Unfair Exposure. Probability Ranking Principle [Robertson, 1977]: rank documents by their probability of relevance, σ* := argmax_σ U(σ|x); this maximizes utility for virtually any measure U of ranking quality. Example user distribution: 51% of users prefer G_right news articles and 49% prefer G_left news articles. Ranking by true average relevance leads to unfair rankings.

  5. Position-Based Exposure Model. Definition: the exposure e_j is the probability that a user observes the item at position j; here Exposure(j) = 1/log(1+j). Exposure of a group: Exp(G|x) = Σ_{d ∈ G} e_rank(d). How to estimate the position bias? Eye tracking [Joachims et al. 2007], intervention studies [Joachims et al. 2017], intervention harvesting [Agarwal et al. 2019, Fang et al. 2019]. In the news example, G_right (average relevance 0.51) receives exposure 0.71 while G_left (average relevance 0.49) receives only 0.39: a 0.02 difference in expected relevance leads to a 0.32 difference in expected exposure. Disparate exposure allocation: a small difference in average relevance leads to a large difference in average exposure! (See the sketch below.)
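A minimal sketch of the exposure bookkeeping described on this slide, assuming a base-2 logarithm in the position-bias curve (the slide only writes 1/log(1+j)); the document names and group sizes are made up for illustration.

```python
import math

def position_exposure(j):
    # Exposure e_j of rank position j (1-indexed): the probability a user examines it.
    # The slide gives Exposure(j) = 1/log(1+j); a base-2 log is assumed here.
    return 1.0 / math.log2(1 + j)

def group_exposure(ranking, group):
    # Exp(G|x): total exposure allocated to the documents of group G by this ranking.
    return sum(position_exposure(pos)
               for pos, doc in enumerate(ranking, start=1)
               if doc in group)

# Hypothetical 6-item ranking with the right-leaning articles on top.
ranking = ["r1", "r2", "r3", "l1", "l2", "l3"]
G_right, G_left = {"r1", "r2", "r3"}, {"l1", "l2", "l3"}
print(group_exposure(ranking, G_right), group_exposure(ranking, G_left))
```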

  6. Outline • Exposure Model • Fairness Notions • FairCo Algorithm • Unbiased Average Relevance estimation • Unbiased Relevance estimation for Personalization

  7. Fairness Notions: Exposure Fairness and Impact Fairness
  Exposure Fairness
  • Goal: exposure is allocated based on the relevance of the group: Exp(G|x) = f(Rel(G|x)).
  • Constraint: make exposure proportional to relevance (per group): Exp(G_0|x) / Rel(G_0|x) = Exp(G_1|x) / Rel(G_1|x).
  • Disparity measure (see the sketch below): D_E(G_0, G_1) = Exp(G_0|x) / Rel(G_0|x) - Exp(G_1|x) / Rel(G_1|x).
  Impact Fairness
  • Goal: the expected impact (e.g. clickthrough rate) is allocated based on merit: Imp(G|x) = f(Rel(G|x)). For the position-bias model, Imp(d|x) = Exp(d|x) · Rel(d|x).
  • Constraint: make the expected impact proportional to relevance (per group): Imp(G_0|x) / Rel(G_0|x) = Imp(G_1|x) / Rel(G_1|x).
  • Disparity measure: D_I(G_0, G_1) = Imp(G_0|x) / Rel(G_0|x) - Imp(G_1|x) / Rel(G_1|x).
  [Singh & Joachims. Fairness of Exposure in Rankings. KDD 2018]
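The disparity measures above reduce to a one-line proportionality check per pair of groups. A small sketch using the exposure and relevance numbers from the news example; the function names are illustrative, not from the paper's code.

```python
def exposure_disparity(exp_g0, rel_g0, exp_g1, rel_g1):
    # D_E(G0, G1): difference in exposure per unit of relevance between the two groups.
    # It is zero exactly when exposure is proportional to relevance, as the constraint asks.
    return exp_g0 / rel_g0 - exp_g1 / rel_g1

def impact_disparity(imp_g0, rel_g0, imp_g1, rel_g1):
    # D_I(G0, G1): the same proportionality check for expected impact (e.g. clickthrough rate).
    return imp_g0 / rel_g0 - imp_g1 / rel_g1

# Numbers from the news example in the slides; under the position-bias model
# the impact of a document is Imp(d|x) = Exp(d|x) * Rel(d|x).
exp_right, rel_right = 0.71, 0.51
exp_left, rel_left = 0.39, 0.49
print(exposure_disparity(exp_right, rel_right, exp_left, rel_left))  # > 0: G_right is over-exposed
```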

  8. The same example under the exposure model Exposure(j) = 1/log(1+j): G_right (relevance 0.51) receives exposure 0.71 and G_left (relevance 0.49) receives exposure 0.39. This allocation satisfies neither Fairness of Exposure nor Fairness of Impact.

  9. Dynamic Learning-to-Rank: sequentially present rankings to users that • maximize the expected user utility 𝔼[U(x)] • ensure that the unfairness D_τ goes to 0 as the number of users τ grows.

  10. Fairness Controller (FairCo) LTR Algorithm
  Proportional controller: a linear feedback control system where the correction is proportional to the error.
  FairCo: the ranking at time τ is σ_τ = argsort_{d ∈ D}( R̂(d|x) + λ · err_τ(d) ), with λ > 0 and
  err_τ(d) = (τ - 1) · max_{G_i} D̂(G_i, G(d)),
  where R̂(d|x) is the estimated conditional relevance and D̂ is the disparity measure from the previous slides (a sketch in code follows below).
  • Theorem: when the problem is well posed, FairCo ensures that D_τ → 0 as τ → ∞ at a rate of O(1/τ).
  • Requirements: estimating average relevances R̂(d), and estimating unbiased conditional relevances R̂(d|x) for personalization.
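A hedged sketch of the FairCo ranking step as stated on this slide: each document's estimated relevance is adjusted by λ times an error term that grows with the accumulated exposure disparity of its group, then documents are sorted. The value of λ, the toy data, and the use of exposure averaged over the users served so far inside D̂ are assumptions made for illustration.

```python
import math

def position_exposure(j):
    # Same position-bias model as in the earlier sketch (base-2 log assumed).
    return 1.0 / math.log2(1 + j)

def fairco_rank(docs, est_rel, group_of, group_exp, group_rel, tau, lam=0.01):
    """One FairCo ranking step: sigma_tau = argsort_d( R_hat(d|x) + lam * err_tau(d) )
    with err_tau(d) = (tau - 1) * max_{G_i} D_hat(G_i, G(d))."""
    groups = list(group_exp)

    def disparity(gi, gj):
        # Exposure disparity D_hat(G_i, G_j): exposure per unit of relevance, group i minus group j.
        return group_exp[gi] / group_rel[gi] - group_exp[gj] / group_rel[gj]

    def err(d):
        # Positive when d's group is under-exposed relative to some other group, so its
        # documents get pushed up; the boost grows with tau until fairness is restored.
        return (tau - 1) * max(disparity(g, group_of[d]) for g in groups)

    return sorted(docs, key=lambda d: est_rel[d] + lam * err(d), reverse=True)

# Toy usage: two groups whose estimated relevances slightly favour G_right.
docs = ["r1", "r2", "l1", "l2"]
est_rel = {"r1": 0.52, "r2": 0.51, "l1": 0.50, "l2": 0.49}
group_of = {"r1": "right", "r2": "right", "l1": "left", "l2": "left"}
group_rel = {"right": 0.515, "left": 0.495}
cum_exp = {"right": 0.0, "left": 0.0}

for tau in range(1, 2001):
    # Assumption: D_hat compares exposure averaged over the users served so far.
    avg_exp = {g: e / max(tau - 1, 1) for g, e in cum_exp.items()}
    ranking = fairco_rank(docs, est_rel, group_of, avg_exp, group_rel, tau)
    for pos, d in enumerate(ranking, start=1):
        cum_exp[group_of[d]] += position_exposure(pos)

# The long-run exposure ratio between the groups roughly matches their relevance ratio.
print({g: e / 2000 for g, e in cum_exp.items()})
```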

  11. Estimating Average Relevances
  • The average number of clicks is not a consistent estimator of relevance.
  • IPS-weighted clicks [Joachims et al., 2017]: R̂_IPS(d) = (1/τ) Σ_t c_t(d) / p_t(d), where c_t(d) is the click on d at time t and p_t(d) is the position bias at the position where d was shown.
  • R̂_IPS(d) is an unbiased estimator of the document's relevance (sketched below).
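A sketch of the IPS-weighted click estimate, assuming the 1/τ averaging written above; the log format and the function name are illustrative.

```python
def ips_relevance_estimate(click_log):
    """R_hat_IPS(d): inverse-propensity-weighted average of clicks, a sketch of the idea above.

    click_log holds one (clicked, position_bias) pair per user t for document d, where
    clicked is c_t(d) in {0, 1} and position_bias is p_t(d), the examination probability
    at the position d was shown.  The 1/T normalization is an assumption.
    """
    T = len(click_log)
    return sum(clicked / p_bias for clicked, p_bias in click_log) / T

# A document shown low on the page (small p_t) gets its few clicks up-weighted,
# which is what removes the position bias from the estimate.
print(ips_relevance_estimate([(1, 0.5), (0, 0.35), (1, 0.5), (0, 0.35)]))  # 1.0
```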

  12. Experimental Evaluation: simulation on the Ad Fontes Media Bias dataset
  • Each news source in the dataset has an assigned polarity ρ(d) ∈ [-1, 1]; G_left contains the sources with ρ(d) < 0 and G_right those with ρ(d) ≥ 0.
  • Each user u_t is sampled with a polarity parameter ρ(u_t) ∈ [-1, 1] and an openness parameter o_t ∈ (0.05, 0.55).
  • A user's relevance for a news article is a function of the user's polarity, the article's polarity, and the user's openness.
  • Goal: present rankings to a sequence of users that maximize their utility while giving the news articles fair exposure relative to their average relevance over the user population.

  13. Can FairCo break the rich-get-richer dynamic? Effect of the initial ranking after 3,000 users: a click-count-based ranking converges to unfair rankings because of the initial bias, while FairCo keeps the unfairness low for any amount of head start.

  14. Can FairCo ensure fairness for minority user groups? FairCo trades off some utility for fairness when the user distribution is imbalanced, and it converges to a fair ranking for all user distributions.

  15. Outline • Exposure Model • Fairness Notions • FairCo Algorithm (addresses fairness) • Unbiased Average Relevance estimation (addresses selection bias) • Unbiased Relevance estimation for Personalization (addresses selection bias)

  16. D-ULTR: Relevance Estimation for Personalized Ranking
  • R̂_w: output of a neural network with weights w; c_t(d): click on d at time t; p_t(d): position bias at the position of d.
  • To estimate: R̂_w(d|x_t), the relevance of document d for query x_t.
  • Train the neural network by minimizing the click loss ℒ_c(w).
  • ℒ_c(w) is unbiased, i.e., in expectation it equals a full-information squared loss (with no position bias); a sketch follows below.
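Since the slide does not spell out ℒ_c(w), here is a sketch of one standard IPS-debiased squared loss with the stated property: dividing the click by the position bias makes the loss equal, in expectation over which positions users examine, to the full-information squared loss. Treat the exact form as an assumption rather than the paper's implementation.

```python
import numpy as np

def debiased_squared_loss(preds, clicks, prop):
    """IPS-debiased pointwise squared loss (a sketch; not necessarily the paper's exact L_c(w)).

    preds  : R_hat_w(d | x_t), model outputs in [0, 1]
    clicks : c_t(d), observed clicks in {0, 1}
    prop   : p_t(d), position bias (examination probability) where d was shown

    In expectation over user examinations, this equals the full-information squared loss
    r * (1 - R_hat)^2 + (1 - r) * R_hat^2, which has no position bias.
    """
    w = clicks / prop  # inverse-propensity-weighted click
    return float(np.mean(w * (1.0 - preds) ** 2 + (1.0 - w) * preds ** 2))

# Toy batch; in practice preds would come from the neural network and this loss
# would be minimized with any autodiff framework.
preds = np.array([0.8, 0.2, 0.6])
clicks = np.array([1.0, 0.0, 0.0])
prop = np.array([0.9, 0.4, 0.6])
print(debiased_squared_loss(preds, clicks, prop))
```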

  17. Evaluation on the MovieLens dataset
  • Completed a subset of the MovieLens dataset (a 10k × 100 ratings matrix) using matrix factorization.
  • Selected 100 movies from the top-5 production companies in the ML-20M dataset; groups: MGM, Warner Bros, Paramount, 20th Century Fox, Columbia.
  • Selected the 10k most active users.
  • User features x_t come from this matrix factorization.
  • Goal: present a ranking to each user u_τ that maximizes NDCG while making sure the production companies receive a fair share of exposure relative to the average relevance of their movies.

  18. Does FairCo ensure fairness with effective personalization? [Plots: exposure unfairness and impact unfairness over time.] Personalized rankings achieve high utility (NDCG) while reducing unfairness to 0 as τ grows.

  19. Conclusions • Identified how biased feedback leads to unfairness and suboptimal rankings in dynamic LTR. • Proposed FairCo to adaptively enforce amortized fairness constraints while relevances are still being learned. • FairCo is easy to implement and computationally efficient at serving time. • The algorithm breaks the rich-get-richer effect in dynamic LTR.

  20. Thank you! Controlling Fairness and Bias in Dynamic Learning-to-Rank. Marco Morik*†, Ashudeep Singh*‡, Jessica Hong‡, Thorsten Joachims‡. †TU Berlin, ‡Cornell University. ACM SIGIR 2020
