Optimal Targeting of Customers for a Last-Minute Sale


  1. Optimal Targeting of Customers for a Last-Minute Sale
     R. Cominetti, J. Correa, J. San Martín (Universidad de Chile)
     Journées Franco-Chiliennes d'Optimisation — Université de Toulon — Mai 2008

  2. Want to sell business class upgrades. Who should get the offer?

  3. The Process — (1) Set of customers N

  4. (2) Address the offer to S ⊆ N

  5. (3) Customers accept/reject A ⊆ S ⊆ N

  6. (4) Winner is chosen at random

  7. The Single Item Case
     Setting                                      Implication
     set of clients i ∈ N                         select S ⊆ N to receive the offer
     different revenue v_i for each client        want high-revenue clients
     different acceptance probability p_i         want high-probability clients
     until sold out                               first respondent wins
     last minute                                  no time for regret
     Here revenue = discount price − normal price × P[client buys anyway].
     Goal: balance probabilities and revenues so that the selected S ⊆ N maximizes the expected revenue

  8. The Problem — Discrete Version
     • Expected revenue for S ⊆ N is
           V_S = Σ_{A ⊆ S} [v(A)/|A|] · Π_{i ∈ A} p_i · Π_{i ∈ S\A} (1 − p_i)
       where v(A)/|A| is the revenue if A accepts, Π_{i ∈ A} p_i the probability that A accepts,
       and Π_{i ∈ S\A} (1 − p_i) the probability that S \ A rejects.
     • Problem: find V* = max_{S ⊆ N} V_S
     • V_S can be computed in O(n³) using convolutions
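A minimal sketch, in Python, of one way the O(n³) convolution computation could go; it uses the rewriting V_S = Σ_{i ∈ S} v_i p_i E[1/(1+S_i)] that appears later in the talk (slide 13), and the helper name is mine, not the authors':

```python
def expected_revenue(S, p, v):
    """Expected revenue V_S for the offered set S, in O(n^3) time.

    Uses V_S = sum_{i in S} v_i * p_i * E[1/(1+S_i)], where S_i counts the
    other clients in S that accept; its (Poisson-binomial) distribution is
    obtained by convolving the Bernoulli(p_j) factors one at a time.
    """
    S = list(S)
    total = 0.0
    for i in S:
        dist = [1.0]                     # dist[k] = P[exactly k competitors of i accept]
        for j in S:
            if j == i:
                continue
            new = [0.0] * (len(dist) + 1)
            for k, q in enumerate(dist):
                new[k]     += q * (1.0 - p[j])   # j rejects
                new[k + 1] += q * p[j]           # j accepts
            dist = new
        # i must accept (prob p[i]) and then win the uniform draw among 1+k acceptors
        total += v[i] * p[i] * sum(q / (k + 1) for k, q in enumerate(dist))
    return total
```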

  9. The Problem — Continuous Version
     • The offer is made to each client i with probability x_i, so
           P[i accepts] = P[Y_i = 1] = x_i · p_i = y_i
     • Expected revenue:
           V(y) = Σ_{A ⊆ N} [v(A)/|A|] · Π_{i ∈ A} y_i · Π_{i ∈ N\A} (1 − y_i)
       where v(A)/|A| is the revenue if A accepts, the first product is the probability that A accepts,
       and the second the probability that N \ A rejects.
     • Problem: find V* = max_{0 ≤ y_i ≤ p_i} V(y)
     • Both problems are equivalent since V(y) is linear in each variable y_i

  10. Threshold Strategies
      • If the offer is made to a customer reporting v_i, shouldn't we also consider the customers with higher values?
      • Find a threshold value V and offer to all clients with v_i ≥ V. We assume v_1 ≥ v_2 ≥ ⋯ ≥ v_n
      • The optimal threshold is found in O(n⁴): max_{1 ≤ i ≤ n} V_{{1,...,i}}
      • Typical in revenue management... but...
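With an O(n³) evaluator for V_S, the optimal threshold is just the best prefix of the clients sorted by decreasing value; a short sketch, reusing the hypothetical expected_revenue helper from slide 8:

```python
def best_threshold(p, v):
    """Best prefix {1,...,i} after sorting clients by decreasing v_i (O(n^4) overall)."""
    order = sorted(range(len(v)), key=lambda i: -v[i])
    best_set, best_val = [], 0.0
    for i in range(1, len(order) + 1):
        val = expected_revenue(order[:i], p, v)
        if val > best_val:
            best_set, best_val = order[:i], val
    return best_set, best_val
```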

  11. Threshold Strategies are not optimal!
      Example (three customers):
          i    p_i    v_i
          1    0.5    2
          2    0.5    1
          3    1      0.9
      V({1})     = (1/2)·2 = 1
      V({1,2})   = (1/4)·2 + (1/4)·1 + (1/4)·(3/2) + (1/4)·0 = 1.125
      V({1,2,3}) = (1/4)·(2.9/2) + (1/4)·(1.9/2) + (1/4)·(3.9/3) + (1/4)·0.9 = 1.15
      V({1,3})   = (1/2)·(2.9/2) + (1/2)·0.9 = 1.175
      • Every subset can be optimal
      • Sorting by probability or by expected value is also sub-optimal
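These values, and the optimality of {1, 3} in this instance, can be checked by brute force over all subsets, e.g. with the expected_revenue sketch from slide 8 (indices are 0-based here):

```python
from itertools import combinations

p = [0.5, 0.5, 1.0]          # clients 1, 2, 3 of the example
v = [2.0, 1.0, 0.9]

best = max(
    (subset
     for r in range(1, 4)
     for subset in combinations(range(3), r)),
    key=lambda s: expected_revenue(s, p, v),
)
print(best, expected_revenue(best, p, v))   # (0, 2), i.e. {1, 3}, with value 1.175
```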

  12. Heuristics: 10 customers, 200 instances
      Algorithm    % opt    Min ratio    Avg ratio    Time
      optimal      100.0    1.0000       1.0000       1611.2
      threshold     93.0    0.9916       0.9998         23.5
      lp-relax      47.5    0.8168       0.9771          0.4
      lp2-relax     74.0    0.9775       0.9988         21.5
      in-out        99.0    0.9918       0.9999         15.4
      The complexity of the problem is open.

  13. Threshold is 1/2-optimal: LP relaxation
      Rewrite the objective function as V(y) = Σ_{i ∈ N} v_i π_i, where
          π_i = P[i accepts and wins] = y_i · E[1/(1 + S_i)]
          S_i = Σ_{j ≠ i} Y_j   (number of competitors)
      ⟹ 0 ≤ π_i ≤ p_i and Σ_{i ∈ N} π_i ≤ 1

  14. Threshold is 1/2-optimal: LP relaxation (continued)
      Consider the relaxation (upper bound)
          V* ≤ V_LP = max { Σ_{i ∈ N} v_i y_i : Σ_{i ∈ N} y_i ≤ 1, 0 ≤ y_i ≤ p_i }
      and use it to get a 1/2-optimal threshold strategy

  15. Polynomial approximation algorithm alg_LP
      • LP solution in O(n): find the largest k with p_1 + ⋯ + p_k ≤ 1 and set
            y_i^LP = p_i                     if i ≤ k
            y_i^LP = 1 − (p_1 + ⋯ + p_k)     if i = k+1
            y_i^LP = 0                       otherwise

  16. Polynomial approximation algorithm alg_LP (continued)
      • y^LP is a randomized strategy, equivalent to:
            select {1,...,k}   with probability [p_1 + ⋯ + p_{k+1} − 1] / p_{k+1}
            select {1,...,k+1} with probability [1 − (p_1 + ⋯ + p_k)] / p_{k+1}

  17. Polynomial approximation algorithm alg_LP (continued)
      • De-randomize in O(n³): take the maximum of V_{{1,...,k}} and V_{{1,...,k+1}}
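Putting slides 15 to 17 together, a possible sketch of alg_LP, assuming clients are indexed 0..n−1 in decreasing order of v_i and that the expected_revenue helper sketched after slide 8 is available:

```python
def alg_lp(p, v):
    """Greedy LP solution plus de-randomization: return the better of the
    prefixes {1,...,k} and {1,...,k+1}, where k is largest with p_1+...+p_k <= 1."""
    n = len(p)
    total, k = 0.0, 0
    while k < n and total + p[k] <= 1.0:
        total += p[k]
        k += 1
    candidates = [list(range(k))]
    if k < n:
        candidates.append(list(range(k + 1)))
    return max(candidates, key=lambda s: expected_revenue(s, p, v))
```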

  18. In the example:
          i    p_i    v_i    y_i^LP
          1    0.5    2      1/2
          2    0.5    1      1/2
          3    1      0.9    0
      V_LP    = (1/2)·2 + (1/2)·1 = 1.5
      V(y^LP) = (1/4)·2 + (1/4)·1 + (1/4)·(3/2) + (1/4)·0 = 1.125

  19. Theorem: alg_LP is a 1/2-approximation algorithm

  20. Theorem: alg_LP is a 1/2-approximation algorithm
      Proof: By Jensen's inequality,
          E[1/(1 + S_i)] ≥ 1/(1 + E[S_i]) = 1/(1 + Σ_{j ≠ i} y_j^LP) ≥ 1/(1 + 1) = 1/2,
      hence
          V(y^LP) = Σ_{i ∈ N} v_i y_i^LP E[1/(1 + S_i)] ≥ (1/2) Σ_{i ∈ N} v_i y_i^LP,
      so that V* ≥ V(y^LP) ≥ (1/2) V_LP ≥ (1/2) V*.  □

  21. Alternative: Hyperbolic relaxation
          π_i ≥ y_i / (1 + Σ_{j ≠ i} y_j) ≥ y_i / (1 + Σ_{j ∈ N} y_j)
          V* ≥ max_{0 ≤ y_i ≤ p_i} Σ_{i ∈ N} v_i y_i / (1 + Σ_{i ∈ N} y_i)
      • Common-lines problem in transit equilibrium (Chriqui & Robillard '75)
      • The optimum is a threshold strategy
      • Linear-time algorithm: max_k [v_1 p_1 + ⋯ + v_k p_k] / [1 + p_1 + ⋯ + p_k]
      • Also a 1/2-approximation algorithm
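A minimal sketch of the linear-time rule in the last bullet, again assuming clients are already sorted by decreasing v_i (the function name is mine):

```python
def hyperbolic_threshold(p, v):
    """max over k of (v_1 p_1 + ... + v_k p_k) / (1 + p_1 + ... + p_k),
    assuming v is sorted in decreasing order; returns (best k, bound value)."""
    num, den = 0.0, 1.0
    best_k, best_val = 0, 0.0
    for k in range(len(p)):
        num += v[k] * p[k]
        den += p[k]
        if num / den > best_val:
            best_k, best_val = k + 1, num / den
    return best_k, best_val
```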

  22. Improved 2/3-approximation
      Let x = P[S = 0], where S = Σ_{j ∈ N} Y_j is the total number of acceptances, so that x + Σ_{i ∈ N} π_i = 1.
      Moreover π_i = y_i · E[1/(1 + S_i)] with
          E[1/(1 + S_i)] ≤ 1 · P[S_i = 0] + (1/2) · P[S_i > 0]
                         = (1/2) (1 + P[S_i = 0])
                         = (1/2) (1 + x/(1 − y_i))        (since x = (1 − y_i) P[S_i = 0])
      and then y_i ≤ p_i implies π_i ≤ (p_i/2) (1 + x/(1 − p_i))

  23. Hence we get the alternative LP relaxation LP2:
          V* ≤ V_LP2 = max  Σ_{i ∈ N} v_i z_i
                       s.t. z_i ≤ (p_i/2) (1 + x/(1 − p_i))
                            x + Σ_{i ∈ N} z_i = 1
                            x, z_i ≥ 0
      Algorithm alg_LP2
      • Find a basic optimal solution (z*, x*) of LP2
      • Set y_i^LP2 = 2 z_i* / (1 + x*/(1 − p_i))   ...either 0 or p_i, except for one value!
      • De-randomize y^LP2 to get a set of the form {1,...,k}
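One possible way to solve LP2 numerically, sketched with scipy's HiGHS dual-simplex solver (a simplex method, so it returns a vertex, playing the role of the basic solution above). The variable ordering [z_1, ..., z_n, x] and the helper name are my own, and p_i < 1 is assumed so the constraint coefficients are finite:

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp2(p, v):
    """Solve LP2: max sum v_i z_i  s.t.  z_i <= (p_i/2)(1 + x/(1-p_i)),
    x + sum z_i = 1, x, z >= 0.  Variables are [z_1, ..., z_n, x]."""
    n = len(p)
    c = np.concatenate([-np.asarray(v, dtype=float), [0.0]])    # maximize => minimize -v.z
    # z_i - (p_i / (2(1-p_i))) x <= p_i / 2
    A_ub = np.hstack([np.eye(n), -np.array([[pi / (2 * (1 - pi))] for pi in p])])
    b_ub = np.asarray(p) / 2.0
    A_eq = np.ones((1, n + 1))                                    # x + sum z_i = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1), method="highs-ds")
    z, x = res.x[:n], res.x[n]
    # recover the offer probabilities of alg_LP2; de-randomize afterwards as on the slide
    y = [2 * z[i] / (1 + x / (1 - p[i])) for i in range(n)]
    return z, x, y
```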

  24. Theorem: alg_LP2 is a 2/3-approximation

  25. Theorem: alg_LP2 is a 2/3-approximation
      Proof: From
          V(y) = Σ_{i ∈ N} v_i y_i E[1/(1 + S_i)] ≥ Σ_{i ∈ N} v_i y_i / (1 + Σ_{j ≠ i} y_j),
      replacing y_i^LP2 we get V(y^LP2) ≥ Σ_{i ∈ N} v_i z_i* γ_i with
          γ_i = 2 (1 + x*) / [(3 − x* − 2 z_i*) (1 + x*/(1 − p_i))].
      Since V* ≤ Σ_{i ∈ N} v_i z_i*, we need γ_i ≥ 2/3. This is obvious if x* = 0.
      Otherwise, since (z*, x*) is a basic solution, exactly one of the two inequalities
      involving z_i* is tight: if z_i* > 0 then z_i* = (p_i/2) (1 + x*/(1 − p_i)), so that
          γ_i = 2 (1 + x*) / [(3 − p_i − x*/(1 − p_i)) (1 + x*/(1 − p_i))] ≥ 2/3.  □
