Feature-Based Dynamic Pricing

Feature-Based Dynamic Pricing, by Maxime Cohen, Ilan Lobel, and Renato Paes Leme (PowerPoint PPT presentation).


  1. Feature-Based Dynamic Pricing. Maxime Cohen (1,2), Ilan Lobel (1,2), Renato Paes Leme (2). 1: NYU Stern, 2: Google Research.

  2-8. Real estate agent problem. In each timestep the real estate agent receives a house to sell and needs to decide what price to put it on the market at. Setup, in each timestep:
     1. Receive an item with feature vector x_t ∈ R^d, e.g. x_t = (2 bedrooms, 1 bathroom, no fireplace, Brooklyn, ...), encoded numerically as x_t = (2, 1, 0, 1, ...).
     2. Choose a price p_t for the house.
     3. Observe whether the house was sold:
        - If p_t ≤ v(x_t), we sell and make profit p_t.
        - If p_t > v(x_t), we don't sell and make zero profit.
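The setup above can be sketched as a tiny simulation of one round of the interaction. The linear market-value model and the numbers below are made-up placeholders, not values from the talk.

```python
# Sketch of one timestep of the protocol above (not the paper's algorithm).
# The value model `value` and the weights `theta` are hypothetical.

def sell_round(x_t, p_t, true_value):
    """One timestep: post price p_t for item x_t; return realized profit."""
    if p_t <= true_value(x_t):   # the buyer accepts any price up to v(x_t)
        return p_t               # sold: profit equals the posted price
    return 0.0                   # not sold: zero profit

# Example with a linear market value v(x) = theta . x (made-up theta).
theta = [50.0, 20.0, 10.0, 30.0]
value = lambda x: sum(t * xi for t, xi in zip(theta, x))

x = [2, 1, 0, 1]                      # 2 bed, 1 bath, no fireplace, Brooklyn
print(sell_round(x, 140.0, value))    # 140.0 <= v(x) = 150, so it sells
print(sell_round(x, 160.0, value))    # 160.0 >  v(x) = 150, so no sale
```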

  9-13. Challenges and Assumptions. Learn/Earn, i.e. Explore/Exploit: we don't know the market value v(x_t). Contextual problem: the product is different in each round and adversarially chosen. Assumptions:
     1. Linear model: v(x_t) = θ^⊤ x_t for some θ ∈ R^d.
     2. The parameter θ is unknown but fixed.
     3. Normalization: ‖x_t‖ ≤ 1 for all t, and ‖θ‖ ≤ R.

  14. Goal and Applications. Goal: minimize the worst-case regret
        Regret = Σ_{t=1}^{T} ( θ^⊤ x_t − p_t · 1{ p_t ≤ θ^⊤ x_t } ).
      Applications: online advertisement, real estate, domain pricing, ...
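The regret definition can be transcribed directly: each round loses the full value θ^⊤ x_t minus whatever profit the posted price realized. The data below (theta, items, prices) is invented purely for illustration.

```python
# Direct transcription of the regret formula: sum over rounds of
# theta^T x_t - p_t * 1{p_t <= theta^T x_t}.

def regret(theta, xs, ps):
    total = 0.0
    for x, p in zip(xs, ps):
        v = sum(ti * xi for ti, xi in zip(theta, x))  # market value theta^T x_t
        profit = p if p <= v else 0.0                 # sold only if p_t <= v(x_t)
        total += v - profit                           # loss vs. pricing exactly at v
    return total

theta = [1.0, 0.5]
xs = [[1, 0], [0, 1], [1, 1]]      # values: 1.0, 0.5, 1.5
ps = [0.8, 0.6, 1.5]               # sells low, overprices, sells at full value
print(regret(theta, xs, ps))       # (1.0-0.8) + (0.5-0) + (1.5-1.5), about 0.7
```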

  15-22. Non-contextual setting. Simple setting: one-dimensional (d = 1) with no context, x_t = 1 for all t, so Regret = θT − Σ_t p_t · 1{p_t ≤ θ}, with θ ∈ [0, 1]. Binary search: keep an interval known to contain θ and price at its midpoint; a sale rules out the lower half, a no-sale rules out the upper half. [Figure: prices p_1, p_2 on [0, 1] with shrinking knowledge sets K_0, K_1, K_2 between the 'don't sell' and 'sell' regions.]
     - After log(1/ε) rounds we know θ ∈ [θ̂, θ̂ + ε].
     - Pricing at θ̂ then always sells, so Regret ≤ log(1/ε) + (T − log(1/ε)) · ε = O(log T) for ε = O(log T / T).
     - Kleinberg & Leighton: the optimal regret is O(log log T).
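The one-dimensional strategy above can be sketched directly: binary-search θ in [0, 1] for about log(1/ε) rounds, then exploit by pricing at the lower end of the remaining interval, which is guaranteed to sell. The horizon and θ below are arbitrary test values.

```python
# Minimal sketch of the 1-d explore-then-exploit binary search described above.
import math

def binary_search_pricing(theta, T, eps):
    lo, hi = 0.0, 1.0                          # interval known to contain theta
    profit = 0.0
    explore_rounds = math.ceil(math.log2(1.0 / eps))
    for t in range(T):
        if t < explore_rounds and hi - lo > eps:
            p = (lo + hi) / 2.0                # explore: midpoint price
            if p <= theta:
                profit += p
                lo = p                         # sold: theta is at least p
            else:
                hi = p                         # not sold: theta is below p
        else:
            profit += lo                       # exploit: lo <= theta always sells
    return profit, (lo, hi)

theta = 0.73
profit, (lo, hi) = binary_search_pricing(theta, T=1000, eps=0.01)
print(lo <= theta <= hi, hi - lo <= 0.01)      # interval brackets theta and is narrow
```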

  23-29. Contextual Setting: Knowledge Sets. Knowledge set K_t: all θ compatible with the observations so far. Each round, the hyperplane θ^⊤ x_t = p_t (the 'p_t line', normal to x_t) cuts K_t: if the item doesn't sell, K_{t+1} is the half with θ^⊤ x_t < p_t; if it sells, K_{t+1} is the half with θ^⊤ x_t ≥ p_t. [Figure: the p_t line splitting K_t into the 'don't sell' and 'sell' halves.]
     Price ranges: p_t ∈ [p̲_t, p̄_t], where p̲_t = min_{θ ∈ K_t} θ^⊤ x_t (the exploit price, which always sells) and p̄_t = max_{θ ∈ K_t} θ^⊤ x_t (which never sells).
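The knowledge-set update can be sketched with one simplification so it runs without an LP solver: in the talk K_t is a polytope and p̲_t, p̄_t are linear programs over it, but here K_t is kept as a finite set of candidate θ vectors that get filtered by each observation. The grid of candidates is hypothetical.

```python
# Knowledge-set bookkeeping, with K_t simplified to a finite candidate set.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def price_range(K, x):
    vals = [dot(theta, x) for theta in K]
    return min(vals), max(vals)      # (exploit price p_low, never-sells p_high)

def update(K, x, p, sold):
    # Keep exactly the thetas consistent with the outcome observed at price p.
    if sold:
        return [th for th in K if dot(th, x) >= p]
    return [th for th in K if dot(th, x) < p]

# Toy run in d = 2 over a small grid of candidate thetas in [0, 1]^2.
K = [(a / 4, b / 4) for a in range(5) for b in range(5)]
x = (1.0, 0.0)
p_low, p_high = price_range(K, x)
K = update(K, x, p=0.5, sold=True)              # observe a sale at price 0.5
print(p_low, p_high)                            # 0.0 1.0
print(all(dot(th, x) >= 0.5 for th in K))       # True: survivors value x at >= 0.5
```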

  30-39. Game: multi-dimensional binary search. [Figure: a sequence of cuts shrinking the knowledge set around θ̂.] Our goal: find θ̂ such that ‖θ − θ̂‖ ≤ ε, since then |θ^⊤ x_t − θ̂^⊤ x_t| ≤ ε for all contexts x_t.
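The last claim on the slide follows from the Cauchy-Schwarz inequality together with the normalization ‖x_t‖ ≤ 1 from the assumptions:

```latex
% Why ||theta - theta_hat|| <= eps bounds the pricing error on every context:
\[
  |\theta^\top x_t - \hat\theta^\top x_t|
    = |(\theta - \hat\theta)^\top x_t|
    \le \|\theta - \hat\theta\| \, \|x_t\|
    \le \epsilon \cdot 1
    = \epsilon .
\]
```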

  40. Idea #1. Plan: narrow down K_t to B(θ̂, ε) and exploit from then on. Issues with this approach:
     - You may never see a certain feature.
     - Some features might be correlated.
     - It is often not good to wait to profit.

  41-42. Idea #2. Plan: explore only if there is enough uncertainty about θ^⊤ x_t. Compute p̄_t = max_{θ ∈ K_t} θ^⊤ x_t and p̲_t = min_{θ ∈ K_t} θ^⊤ x_t, and exploit if p̄_t − p̲_t ≤ ε. Which price to use in exploration? From one-dimensional binary search, we can try the midpoint p_t = (p̲_t + p̄_t) / 2.
     Thm: the regret of this approach is exponential in d. Intuition: in higher dimensions, midpoint cuts may only shave corners off the polytope K_t, so its volume shrinks very slowly.
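The Idea #2 decision rule can be sketched as follows, again simplifying the knowledge set to a finite candidate list (in the talk the min and max are taken over the polytope K_t). The candidate values are invented for the example.

```python
# Sketch of the Idea #2 policy: exploit at p_low when the uncertainty
# p_high - p_low is small, otherwise explore with the midpoint price.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def choose_price(K, x, eps):
    vals = [dot(th, x) for th in K]
    p_low, p_high = min(vals), max(vals)
    if p_high - p_low <= eps:
        return p_low, "exploit"               # guaranteed sale, regret at most eps
    return (p_low + p_high) / 2.0, "explore"  # midpoint, as in 1-d binary search

K = [(0.2,), (0.5,), (0.9,)]                  # hypothetical 1-d candidates
print(choose_price(K, (1.0,), eps=0.05))      # wide range, so explore at the midpoint
```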

  43. Idea #3. Plan: choose the price that splits K_t into two halves of equal volume. Issues with this approach:
     - Not easily computable.
