  1. Liege University: Francqui Chair 2011-2012. Lecture 5: Algorithmic models of human behavior. Yurii Nesterov, CORE/INMA (UCL). March 23, 2012.

  2. Main problem with the Rational Choice
     The rational choice assumption is introduced for better understanding and prediction of human behavior. It forms the basis of Neoclassical Economics (1900).
     The player (Homo Economicus ≡ HE) wants to maximize his utility function by an appropriate adjustment of the consumption pattern. As a consequence, we can speak about equilibria in economic systems.
     The existing literature is immense. It also concentrates on the ethical, moral, religious, social, and other consequences of rationality. (HE = a super-powerful, aggressively selfish, immoral individualist.)
     NB: The only missing topic is the Algorithmic Aspects of rationality.

  3. What do we know now?
     Starting from 1977 (Complexity Theory, Nemirovski & Yudin), we know that optimization problems in general are unsolvable. They are very difficult (and will always be difficult) for computers, independently of their speed.
     How can they be solved by us, given our natural weakness in arithmetic?
     NB: Mathematical consequences of unreasonable assumptions can be disastrous.
     Perron paradox: The maximal integer is equal to one.
     Proof: Denote by N the maximal integer. Then 1 ≤ N and N ≤ N² ≤ N (the last inequality by maximality of N, since N² is also an integer). Hence N² = N, and therefore N = 1.

  4. What we do not know
     In which sense can human beings solve optimization problems? What is the accuracy of the solution? What is the convergence rate?
     Main question: What are the optimization methods?
     NB: Forget about the Simplex Algorithm and Interior-Point Methods! Be careful with gradients (dimension, non-smoothness).

  5. Outline
     1 Intuitive optimization (Random Search)
     2 Rational activity in a stochastic environment (Stochastic Optimization)
     3 Models and algorithms of rational behavior

  6. Intuitive Optimization
     Problem: $\min_{x \in \mathbb{R}^n} f(x)$, where x is the consumption pattern.
     Main difficulties: high dimension of x (difficult to evaluate/observe); possible non-smoothness of f(x).
     Theoretical advice: apply the gradient method $x_{k+1} = x_k - h f'(x_k)$. (In the space of all available products!)
     Hint: we live in an uncertain world.
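
For contrast with the gradient-free schemes that follow, here is a minimal sketch of this textbook advice (names are illustrative); it presumes the full gradient f'(x) is computable at every step, which is exactly what the slide doubts:

```python
import numpy as np

def gradient_method(grad, x0, h, n_steps):
    """Textbook gradient method: x_{k+1} = x_k - h * f'(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - h * grad(x)   # needs the full gradient (all coordinates) at every step
    return x
```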

  7. Gaussian smoothing
     Let f : E → R be differentiable along any direction at any x ∈ E. Let us form its Gaussian approximation
     $f_\mu(x) = \frac{1}{\kappa} \int_E f(x + \mu u)\, e^{-\frac{1}{2}\|u\|^2}\, du$, where $\kappa \stackrel{\mathrm{def}}{=} \int_E e^{-\frac{1}{2}\|u\|^2}\, du = (2\pi)^{n/2}$.
     In this definition, µ ≥ 0 plays the role of a smoothing parameter.
     Why is this interesting? Define y = x + µu. Then
     $f_\mu(x) = \frac{1}{\mu^n \kappa} \int_E f(y)\, e^{-\frac{1}{2\mu^2}\|y - x\|^2}\, dy$. Hence,
     $\nabla f_\mu(x) = \frac{1}{\mu^{n+2} \kappa} \int_E f(y)\, e^{-\frac{1}{2\mu^2}\|y - x\|^2} (y - x)\, dy = \frac{1}{\mu \kappa} \int_E f(x + \mu u)\, e^{-\frac{1}{2}\|u\|^2} u\, du = \frac{1}{\mu \kappa} \int_E [f(x + \mu u) - f(x)]\, e^{-\frac{1}{2}\|u\|^2} u\, du$,
     where the last equality holds since $\int_E e^{-\frac{1}{2}\|u\|^2} u\, du = 0$.

  8. Properties of Gaussian smoothing
     If f is convex, then $f_\mu$ is convex and $f_\mu(x) \ge f(x)$.
     If $f \in C^{0,0}$, then $f_\mu \in C^{0,0}$ and $L_0(f_\mu) \le L_0(f)$.
     If $f \in C^{0,0}(E)$, then $|f_\mu(x) - f(x)| \le \mu L_0(f) n^{1/2}$.
     Random gradient-free oracle: generate random u ∈ E and return $g_\mu(x) = \frac{f(x + \mu u) - f(x)}{\mu} \cdot u$.
     If $f \in C^{0,0}(E)$, then $E_u(\|g_\mu(x)\|_*^2) \le L_0^2(f)(n + 4)^2$.
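
A minimal NumPy sketch of this oracle, assuming E = R^n with the standard Euclidean norm (function names are illustrative):

```python
import numpy as np

def gradient_free_oracle(f, x, mu, rng):
    """Random gradient-free oracle: g_mu(x) = [f(x + mu*u) - f(x)] / mu * u,
    with u drawn from the standard Gaussian distribution on R^n."""
    u = rng.standard_normal(x.shape)            # random direction u ~ N(0, I)
    return (f(x + mu * u) - f(x)) / mu * u      # scaled finite difference along u

# Sanity check: the average of g_mu(x) approximates grad f_mu(x); for the
# nonsmooth f(x) = ||x||_1, away from the kinks this is close to sign(x).
rng = np.random.default_rng(0)
f = lambda x: np.abs(x).sum()
x = np.array([1.0, -2.0, 0.5])
g = np.mean([gradient_free_oracle(f, x, 1e-6, rng) for _ in range(20000)], axis=0)
print(g)   # approximately [1, -1, 1]
```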

  9. Random intuitive optimization
     Problem: $f^* \stackrel{\mathrm{def}}{=} \min_{x \in Q} f(x)$, where Q ⊆ E is a closed convex set and f is a nonsmooth convex function. Let us choose a sequence of positive steps $\{h_k\}_{k \ge 0}$.
     Method RS_µ: Choose $x_0 \in Q$. For k ≥ 0:
     a) Generate $u_k$.
     b) Compute $\Delta_k = \frac{1}{\mu}[f(x_k + \mu u_k) - f(x_k)]$.
     c) Compute $x_{k+1} = \pi_Q(x_k - h_k \Delta_k u_k)$.
     NB: µ can be arbitrarily small.
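
A hedged NumPy sketch of RS_µ (assuming Gaussian directions u_k and a box constraint whose projection is a simple clip; the constants in the example are assumptions):

```python
import numpy as np

def random_search(f, project, x0, steps, mu=1e-6, seed=0):
    """Method RS_mu: gradient-free random search for min_{x in Q} f(x).
    `project` is the projection pi_Q onto Q; `steps` is the sequence {h_k}."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for h in steps:
        u = rng.standard_normal(x.shape)        # a) random Gaussian direction u_k
        delta = (f(x + mu * u) - f(x)) / mu     # b) Delta_k: directional finite difference
        x = project(x - h * delta * u)          # c) projected step
    return x

# Example: minimize the nonsmooth f(x) = ||x - c||_1 over the box Q = [0, 1]^3.
c = np.array([0.2, 0.9, -0.5])
f = lambda x: np.abs(x - c).sum()
project = lambda x: np.clip(x, 0.0, 1.0)
N, L0, R = 20000, 1.0, 1.0                      # horizon and (assumed) constants
h = R / ((3 + 4) * np.sqrt(N + 1) * L0)         # h_k from the next slide's theorem, n = 3
print(random_search(f, project, np.zeros(3), [h] * N))  # lands near [0.2, 0.9, 0.0]
```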

  10. Convergence results
      This method generates a random sequence $\{x_k\}_{k \ge 0}$. Denote $S_N = \sum_{k=0}^N h_k$, $U_k = (u_0, \ldots, u_k)$, $\phi_0 = f(x_0)$, and $\phi_k = E_{U_{k-1}}(f(x_k))$ for k ≥ 1.
      Theorem: Let $\{x_k\}_{k \ge 0}$ be generated by RS_µ with µ > 0. Then
      $\frac{1}{S_N} \sum_{k=0}^N h_k (\phi_k - f^*) \le \mu L_0(f) n^{1/2} + \frac{1}{2 S_N} \left[ \|x_0 - x^*\|^2 + (n + 4)^2 L_0^2(f) \sum_{k=0}^N h_k^2 \right]$.
      In order to guarantee $E_{U_{N-1}}(f(\hat x_N)) - f^* \le \epsilon$, we choose
      $\mu = \frac{\epsilon}{2 L_0(f) n^{1/2}}, \quad h_k = \frac{R}{(n + 4)(N + 1)^{1/2} L_0(f)}, \quad N = \frac{4 (n + 4)^2 L_0^2(f) R^2}{\epsilon^2}$.
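
Plugging numbers into these choices shows how the dimension enters (a sketch; L_0(f) = R = 1 are assumed constants, the helper name is illustrative):

```python
def rs_mu_parameters(n, eps, L0=1.0, R=1.0):
    """Parameter choices mu, h_k, N from the theorem above."""
    N = 4 * (n + 4) ** 2 * L0 ** 2 * R ** 2 / eps ** 2   # iterations grow like n^2 / eps^2
    mu = eps / (2 * L0 * n ** 0.5)                        # smoothing parameter
    h = R / ((n + 4) * (N + 1) ** 0.5 * L0)               # constant step h_k
    return N, mu, h

print(rs_mu_parameters(n=100, eps=0.01))   # N ≈ 4.3e8: high dimension is costly
```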

  11. Interpretation
      The disturbance $\mu u_k$ may be caused by external random factors. For small µ, the sign and the value of $\Delta_k$ can be treated as an intuition: we use random experience accumulated by a very small shift along a random direction.
      The reaction steps $h_k$ are big. (Emotions?)
      The dimension of x slows down the convergence.
      Main ability: to fulfil a completely opposite action as compared with the proposed one. (Needs training.)
      NB: The optimization method has the form of an emotional reaction. It is efficient in the absence of a stable coordinate system.

  12. Optimization in a Stochastic Environment
      Problem: $\min_{x \in Q} \left[ \phi(x) = E(f(x, \xi)) \equiv \int_\Omega f(x, \xi)\, p(\xi)\, d\xi \right]$, where $f(x, \xi)$ is convex in x for any $\xi \in \Omega \subseteq \mathbb{R}^m$, Q is a closed convex set in $\mathbb{R}^n$, and $p(\xi)$ is the density of the random variable $\xi \in \Omega$.
      Assumption: We can generate a sequence of random events $\{\xi_i\}$ such that $\frac{1}{N} \sum_{i=1}^N f(x, \xi_i) \xrightarrow{N \to \infty} E(f(x, \xi))$ for $x \in Q$.
      Goal: For $\epsilon > 0$ and $\phi^* = \min_{x \in Q} \phi(x)$, find $\bar x \in Q$ with $\phi(\bar x) - \phi^* \le \epsilon$.
      Main trouble: For finding a δ-approximation of $\phi(x)$, we need $O\left(\left(\frac{1}{\delta}\right)^m\right)$ computations of $f(x, \xi)$.
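
To see the scale of this trouble (an illustrative count): with m = 10 random dimensions and δ = 0.1, such a grid-style approximation needs on the order of (1/0.1)^10 = 10^10 evaluations of f(x, ξ) for a single point x.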

  13. Stochastic subgradients (Ermoliev, Wets, 70's)
      Method: Fix some $x_0 \in Q$ and h > 0. For k ≥ 0, repeat: generate $\xi_k$ and update $x_{k+1} = \pi_Q(x_k - h \cdot f'(x_k, \xi_k))$.
      Output: $\bar x = \frac{1}{N+1} \sum_{k=0}^N x_k$.
      Interpretation: a learning process in a stochastic environment.
      Theorem: For $h = \frac{R}{L \sqrt{N+1}}$ we get $E(\phi(\bar x)) - \phi^* \le \frac{LR}{\sqrt{N+1}}$.
      NB: This is an estimate of the average performance.
      Hint: For us, it is enough to ensure a confidence level $\beta \in (0, 1)$: $\mathrm{Prob}\left[\phi(\bar x) \ge \phi^* + \epsilon V_\phi\right] \le 1 - \beta$, where $V_\phi = \max_{x \in Q} \phi(x) - \phi^*$. In the real world we always apply solutions with β < 1.
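
A minimal NumPy sketch of this learning process (illustrative names; the subgradient f'(x, ξ) and the constants in the example are supplied as assumptions):

```python
import numpy as np

def stochastic_subgradient(subgrad, project, x0, h, xis):
    """Stochastic subgradient method with averaging:
    x_{k+1} = pi_Q(x_k - h * f'(x_k, xi_k)); output is the average of the iterates."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for k, xi in enumerate(xis):
        avg += (x - avg) / (k + 1)             # running average of x_0, ..., x_k
        x = project(x - h * subgrad(x, xi))    # projected stochastic step
    return avg

# Example: min_x E|x - xi| with xi ~ N(0.7, 1); the minimizer is the median 0.7.
rng = np.random.default_rng(1)
subgrad = lambda x, xi: np.sign(x - xi)        # a subgradient of |x - xi| in x
project = lambda x: np.clip(x, -5.0, 5.0)      # Q = [-5, 5]
N, R, L = 100000, 8.0, 1.0                     # R bounds ||x_0 - x*|| by the width of Q
print(stochastic_subgradient(subgrad, project, np.array([3.0]),
                             h=R / (L * np.sqrt(N + 1)), xis=rng.normal(0.7, 1.0, N)))
```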

  14. What do we have now?
      After N steps we observe a single realization of the random variable $\bar x$ with $E(\phi(\bar x)) - \phi^* \le \frac{LR}{\sqrt{N+1}}$. What about the level of confidence?
      1. For a random variable ψ ≥ 0 and T > 0 we have $E(\psi) = \int_{\psi \ge T} \psi + \int_{\psi < T} \psi \ge T \cdot \mathrm{Prob}[\psi \ge T]$.
      2. With $\psi = \phi(\bar x) - \phi^*$ and $T = \epsilon V_\phi$ we need $\frac{1}{\epsilon V_\phi}\left[E(\phi(\bar x)) - \phi^*\right] \le \frac{LR}{\epsilon V_\phi \sqrt{N+1}} \le 1 - \beta$. Thus, we can take $N + 1 = \left(\frac{LR}{\epsilon V_\phi}\right)^2 \frac{1}{(1 - \beta)^2}$.
      NB: 1. For personal needs, this may be OK. But what about β → 1?
      2. How do we increase the confidence level in our life? Ask as many persons as we can for advice!
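
As a quick illustration of the NB above (the numbers are hypothetical): with $LR/(\epsilon V_\phi) = 10$, a confidence level β = 0.99 already requires $N + 1 = 10^2 / (0.01)^2 = 10^6$ iterations, and each extra 9 appended to β multiplies this count by 100. This $(1 - \beta)^{-2}$ blow-up is what the pooling construction on the next slides avoids.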

  15. Pooling the experience
      Individual learning process (forms the opinion of one expert): Choose $x_0 \in Q$ and h > 0. For k = 0, …, N repeat: generate $\xi_k$ and set $x_{k+1} = \pi_Q(x_k - h f'(x_k, \xi_k))$. Compute $\bar x = \frac{1}{N+1} \sum_{k=0}^N x_k$.
      Pool the experience: For j = 1, …, K compute $\bar x_j$. Generate the output $\hat x = \frac{1}{K} \sum_{j=1}^K \bar x_j$.
      Note: All learning processes start from the same $x_0$.
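
Pooling is then a short wrapper around the individual process sketched after slide 13 (same illustrative names; each expert consumes an independent sample stream but starts from the same x_0):

```python
def pool_experience(subgrad, project, x0, h, sample_stream, K):
    """Average the outputs of K independent learning processes ('experts')."""
    return sum(stochastic_subgradient(subgrad, project, x0, h, sample_stream(j))
               for j in range(K)) / K

# Reusing the example above: K = 20 experts, N = 5000 samples each.
N = 5000
sample_stream = lambda j: np.random.default_rng(j).normal(0.7, 1.0, N)
print(pool_experience(subgrad, project, np.array([3.0]),
                      h=8.0 / np.sqrt(N + 1), sample_stream=sample_stream, K=20))
```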

  16. Probabilistic analysis
      Theorem: Let $Z_j \in [0, V]$, j = 1, …, K, be independent random variables with the same average µ, and let $\hat Z_K = \frac{1}{K} \sum_{j=1}^K Z_j$. Then
      $\mathrm{Prob}\left[\hat Z_K \ge \mu + \hat\epsilon\right] \le \exp\left(-\frac{2 \hat\epsilon^2 K}{V^2}\right)$.
      Corollary: Let us choose $K = \frac{2}{\epsilon^2} \ln \frac{1}{1 - \beta}$, $N = \frac{4}{\epsilon^2} \left(\frac{LR}{V_\phi}\right)^2$, and $h = \frac{R}{L \sqrt{N+1}}$. Then the pooling process implements an (ε, β)-solution.
      Note: Each 9 in β = 0.9⋯9 costs $\frac{4.6}{\epsilon^2}$ experts.
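
A small helper making the corollary concrete (a sketch; the function name is illustrative and the constants L, R, V_φ are assumed known):

```python
import math

def pooling_parameters(eps, beta, L=1.0, R=1.0, V_phi=1.0):
    """Expert count K, per-expert horizon N, and step h from the corollary above."""
    K = math.ceil(2 / eps ** 2 * math.log(1 / (1 - beta)))  # grows only like ln(1/(1-beta))
    N = math.ceil(4 / eps ** 2 * (L * R / V_phi) ** 2)      # steps per expert
    h = R / (L * math.sqrt(N + 1))
    return K, N, h

for beta in (0.9, 0.99, 0.999):
    print(beta, pooling_parameters(eps=0.1, beta=beta))
# Each extra 9 in beta adds about 2*ln(10)/eps^2 ≈ 4.6/eps^2 = 460 experts,
# as the note says -- logarithmic in 1/(1-beta), unlike the bound on slide 14.
```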
