  1. Tight Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance Blair Bilodeau 1,2,3 , Dylan J. Foster 4 , and Daniel M. Roy 1,2,3 Presented at the 2020 International Conference on Machine Learning 1 Department of Statistical Sciences, University of Toronto 2 Vector Institute 3 Institute for Advanced Study 4 Institute for Foundations of Data Science, Massachusetts Institute of Technology

  14. Contextual Online Learning with Log Loss
Example: Image Identification
For rounds t = 1, …, n:
• Receive an image. Context x_t ∈ X
• Assign a probability to whether the image is adversarially generated. Prediction p̂_t ∈ [0, 1]
• Observe the true label. Observation y_t ∈ {0, 1}
• Incur a penalty based on the prediction and observation. Loss ℓ_log(p̂_t, y_t) = −y_t log(p̂_t) − (1 − y_t) log(1 − p̂_t)
Notice that ℓ_log equals the negative log-likelihood of y_t under the model p̂_t.
Challenges
• We do not rely on data-generating assumptions.
• ℓ_log is neither bounded nor Lipschitz.
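The protocol above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper; the `predictor` callable and the context stream are hypothetical stand-ins for whatever the player actually uses.

```python
import math

def log_loss(p_hat, y):
    """Negative log-likelihood of the binary label y under the model p_hat."""
    return -y * math.log(p_hat) - (1 - y) * math.log(1 - p_hat)

def play(contexts, labels, predictor):
    """Run the online protocol: predict, observe the label, incur log loss."""
    total = 0.0
    for x_t, y_t in zip(contexts, labels):
        p_hat_t = predictor(x_t)         # prediction in [0, 1] from the context
        total += log_loss(p_hat_t, y_t)  # true label revealed, penalty incurred
    return total
```

The sketch also makes the second challenge concrete: as `p_hat` approaches 0 with `y = 1`, `log_loss` diverges, so the loss is neither bounded nor Lipschitz in the prediction.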

  19. Measuring Performance with Regret
Without model assumptions, guaranteeing small loss on predictions is impossible.
If I can’t make promises about the future, can I say something about the past?
Consider a relative notion of performance in hindsight.
• Relative to a class F ⊆ {f : X → [0, 1]}, consisting of experts f ∈ F.
• Compete against the optimal f ∈ F on the actual sequence of observations.
Regret: R_n(p̂; F, x, y) = Σ_{t=1}^{n} ℓ_log(p̂_t, y_t) − inf_{f∈F} Σ_{t=1}^{n} ℓ_log(f(x_t), y_t).
This quantity depends on
• p̂: player predictions,
• F: expert class,
• x: observed contexts,
• y: observed data points.
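The regret definition is directly computable for a finite expert class. The sketch below is a hypothetical worked example, assuming constant experts (functions that ignore the context); it is not tied to any class studied in the paper.

```python
import math

def log_loss(p_hat, y):
    """Negative log-likelihood of the binary label y under the model p_hat."""
    return -y * math.log(p_hat) - (1 - y) * math.log(1 - p_hat)

def regret(predictions, contexts, labels, experts):
    """Player's cumulative log loss minus that of the best expert in hindsight."""
    player_loss = sum(log_loss(p, y) for p, y in zip(predictions, labels))
    best_expert_loss = min(
        sum(log_loss(f(x), y) for x, y in zip(contexts, labels))
        for f in experts
    )
    return player_loss - best_expert_loss
```

For example, a player who always predicts 0.5 against the constant experts {0.25, 0.5, 0.75} on labels (1, 1, 0, 1) incurs positive regret, since the expert 0.75 fits that sequence better in hindsight.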

  26. Summary of Results
We control the minimax regret using the sequential entropy of the expert class F.
• Minimax regret: the smallest possible regret under worst-case observations.
• Sequential entropy: a data-dependent complexity measure for F.
Contributions
• An improved upper bound for expert classes with polynomial sequential entropy.
• A novel proof technique that exploits the curvature of the log loss to avoid a key “truncation step” used in previous works.
• A resolution of the minimax regret with log loss for Lipschitz experts on [0, 1]^p, with matching lower bounds.
• A conclusion that the minimax regret with log loss cannot be completely characterized by sequential entropy.

  27. Minimax Regret
Regret: R_n(p̂; F, x, y) = Σ_{t=1}^{n} ℓ_log(p̂_t, y_t) − inf_{f∈F} Σ_{t=1}^{n} ℓ_log(f(x_t), y_t).
Minimax regret: an algorithm-free quantity on worst-case observations.
R_n(F) = sup_{x_1} inf_{p̂_1} sup_{y_1} sup_{x_2} inf_{p̂_2} sup_{y_2} ··· sup_{x_n} inf_{p̂_n} sup_{y_n} R_n(p̂; F, x, y).
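For a tiny instance the alternating sup/inf can be evaluated by brute force. The sketch below makes the (hypothetical, purely illustrative) simplifying assumptions of a single round n = 1, no contexts, and two constant experts, and approximates the inner infimum by a grid search over p̂.

```python
import math

def log_loss(p_hat, y):
    """Negative log-likelihood of the binary label y under the model p_hat."""
    return -y * math.log(p_hat) - (1 - y) * math.log(1 - p_hat)

def minimax_regret_one_round(experts, grid_size=10001):
    """Brute-force R_1(F) for constant experts: inf over p_hat (via a grid)
    of the sup over y in {0, 1} of the single-round regret."""
    best = float("inf")
    for i in range(1, grid_size):
        p_hat = i / grid_size
        worst_case = max(
            log_loss(p_hat, y) - min(log_loss(f, y) for f in experts)
            for y in (0, 1)
        )
        best = min(best, worst_case)
    return best
```

With the experts {0.25, 0.75}, the best expert's loss is log(4/3) whichever label occurs, the minimax prediction is p̂ = 1/2, and the computed value approaches log(2) − log(4/3) = log(3/2).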
