ON LEARNING AND INFORMATION ACQUISITION WITH RESPECT TO FUTURE AVAILABILITY OF ALTERNATIVES∗

KAZUTOSHI YAMAZAKI†
Department of Operations Research and Financial Engineering, Princeton University

Abstract. Most bandit frameworks applied to economic problems such as market learning and job matching are based on the unrealistic assumption that decision makers are fully confident about the future availability of alternatives. In this paper, we study two generalizations of the classical bandit problem in which arms may become unavailable temporarily or permanently, and in which arms may break down and the decision maker has the option to fix them. It is shown that an optimal index policy does not exist for either problem. Nevertheless, there exists a near-optimal index policy in the class of Whittle index policies that cannot be dominated uniformly by any other index policy over all instances of either problem. The index strikes a balance between exploration and exploitation with respect to the availability of alternatives: it converges to the Gittins index as the probability of availability approaches one and to the immediate one-time reward as it approaches zero. Whittle indices are evaluated for Bernoulli arms with unknown success probabilities.

1. Introduction

The multi-armed bandit problem has received much attention in economics since it was used in the seminal paper by Rothschild (1974). At each stage, the decision maker must choose among several arms of a slot machine in order to maximize his expected total discounted reward over an infinite time horizon. A trade-off is made between exploitation and exploration, that is, between actions that maximize the immediate reward and actions that acquire information which may help increase the total future reward.

Date: December 1, 2008.
∗ This paper was presented at the INFORMS Annual Conference 2007, the SIAM Conference on Optimization, the IBM Thomas J. Watson Research Center, and Princeton University. I am grateful to Savas Dayanik and Warren Powell for their advice and support. I am also indebted to Erhan Çınlar, Ronnie Sircar and Mark Squillante. I thank Dirk Bergemann, Faruk Gul, Ricardo Reis and Yosuke Yasuda for helpful suggestions and remarks. All errors are mine.
† Email: kyamazak@princeton.edu.
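To make the exploitation-exploration trade-off concrete, here is a minimal sketch, not taken from the paper, of Bernoulli arms with unknown success probabilities (the setting for which Whittle indices are evaluated in the abstract). Each arm keeps a Beta posterior over its success probability; a play yields an immediate reward and also refines the posterior, so it carries informational value. The uniform priors, the discount factor, and the purely myopic rule below are illustrative assumptions.

```python
import random

class BernoulliArm:
    """Arm with an unknown success probability, tracked by a Beta(a, b) posterior."""

    def __init__(self, true_p, a=1.0, b=1.0):
        self.true_p = true_p      # unknown to the decision maker
        self.a, self.b = a, b     # Beta posterior parameters (uniform prior assumed)

    def posterior_mean(self):
        return self.a / (self.a + self.b)

    def play(self):
        reward = 1 if random.random() < self.true_p else 0
        # Bayesian update: a counts observed successes, b counts failures.
        self.a += reward
        self.b += 1 - reward
        return reward

# Myopic (pure exploitation) rule: always play the arm with the highest
# posterior mean.  It ignores the informational value of a play and may
# lock onto an inferior arm -- the trade-off described in the text.
arms = [BernoulliArm(0.3), BernoulliArm(0.6)]
beta = 0.95   # discount factor (illustrative)
total = 0.0
for t in range(200):
    arm = max(arms, key=lambda a: a.posterior_mean())
    total += (beta ** t) * arm.play()
print("discounted reward under the myopic rule:", total)
```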

In economics, bandit formulations are typically used to model rational decision makers facing this trade-off. This framework has been used, for example, to explain market learning, matching and job search, and mechanism design. Rothschild (1974) developed a bandit model for a firm facing a market with unknown demand and showed that the firm may settle on prices that would be suboptimal if the true demand distribution were known ex ante. Jovanovic (1979) and Miller (1984) proposed job matching models where a worker wants to choose among several firms. Applications in mechanism design, such as auction design, have been discussed by Bergemann & Valimaki (2006). The multi-agent version of the bandit problem has been studied by Bolton & Harris (1999) and Keller et al. (2005), where several players face the same bandit problem and outcomes in each period are shared between them.¹

Optimal solutions to the classical bandit problem can be characterized by the so-called Gittins index policies (Gittins 1979), where each arm is associated with an index that is a function of the state of the arm, and the expected total discounted reward over an infinite time horizon is maximized if an arm with the largest index is always played. The Gittins index policy reduces the problem dimension considerably. Given N arms, the optimization problem can be split into N independent smaller subproblems. Moreover, at each stage only one arm changes its state, and so at most one index has to be re-evaluated. The proof (see Whittle (1980), Weber (1992) and Tsitsiklis (1994)), however, relies on the condition that the state of an arm changes only when it is played, and the optimality result does not hold when this condition is relaxed. Owing to this limitation, the range of economic problems where some index policies are guaranteed to be optimal is small.

The bandit problem with switching costs (see Banks & Sundaram (1994) and Jun (2004)) is an important generalization of the classical bandit. According to Banks & Sundaram (1994), it is difficult to imagine an economic problem where the agent can switch between alternatives without incurring a cost. They also showed that, in the presence of switching costs, there does not exist an index policy that is optimal for every multi-armed bandit problem. For extensions of the classical bandit problem that admit optimal index policies, see Bergemann & Valimaki (2001) and Whittle (1981).

¹ For other related economic models, see Bergemann & Hege (1998), Bergemann & Hege (2005), Hong & Rady (2002) (finance), Felli & Harris (1996) (job matching), Bergemann & Valimaki (2000), Bergemann & Valimaki (2006) (pricing), McLennan (1984), Rustichini & Wolinsky (1995), Keller & Rady (1999) (market learning) and Weitzman (1979), Roberts & Weitzman (1981) (R&D).
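As an illustration of how an index policy decomposes the problem, the following sketch plays the arm with the largest index and updates only that arm's state. The index function is a placeholder: computing the actual Gittins index requires solving a one-armed optimal stopping (retirement) problem and is not shown here, so this is only the shape of such a policy, not the paper's construction.

```python
def index_policy_step(states, index_fn, transition_fn):
    """One stage of an index policy for N independent arms.

    states:        list of per-arm states
    index_fn:      maps a single arm's state to its index (placeholder for
                   the Gittins index; any state-dependent index fits here)
    transition_fn: maps the played arm's state to its next state; passive
                   arms are frozen, as in the classical bandit
    Returns the index of the chosen arm.  Only that arm's state, and hence
    only one index, changes before the next stage.
    """
    chosen = max(range(len(states)), key=lambda n: index_fn(states[n]))
    states[chosen] = transition_fn(states[chosen])
    return chosen
```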

In this paper, we study bandit problems where arms may become unavailable temporarily or permanently, whether or not they are played. These are not classical multi-armed bandit problems, and the Gittins index policy is not optimal in general.

For example, considering the job matching models in Jovanovic (1979) and Miller (1984), a more natural assumption is that jobs are not always available and their availability is stochastic. Indeed, firms adjust their workforce planning according to their financial conditions and workforce demands. Intuitively, the more pessimistic a decision maker is about the future availability of the alternatives, the more he focuses on immediate payoffs. Therefore, decision makers cannot be expected to follow the Gittins index policy in these situations.

In a variation of the above-mentioned problem, we assume that arms may break down, but the decision maker has the option to fix them. Consider, for example, an energy company that loses its access to oil because of an unexpected international conflict. It must decide whether it is better to reestablish the access or to turn to an alternative energy source, e.g., natural gas or coal. The bandit problem with switching costs is a special case: arms break down immediately if they are not engaged, and if a broken arm is engaged, the switching cost is incurred to pay for repair.

We generalize the classical multi-armed bandit problem as follows. There are N arms, and each arm is available with some state/action-dependent probability. At each stage, the decision maker chooses M arms to play simultaneously and collects rewards from each played arm. We call an arm active if it is played and passive otherwise. The reward from a particular arm n depends on a stochastic process X_n = (X_n(t))_{t ≥ 0}, whose state changes only when the arm is active. The process X_n may represent, for example, the state of the knowledge about the reward obtainable from arm n. At every stage, only a subset of the arms is available. We denote by Y_n(t) the availability of arm n at time t; it is 1 if the arm is available at time t, and 0 otherwise. Unlike X_n, the stochastic process Y_n changes even when the arm is not played. The objective is to find an optimal policy that chooses M arms so as to maximize the expected total discounted reward collected over the infinite time horizon.
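As a hypothetical illustration of these dynamics (only X_n and Y_n come from the text; the function names and the exact form of the availability probability are assumptions), one stage of the model could be simulated as follows: the state X of an arm advances only when the arm is played, while its availability Y advances either way.

```python
import random

def advance_one_stage(X, Y, played, reward_fn, x_transition, avail_prob):
    """Advance all arms by one stage.

    X[n]:    reward-relevant state of arm n (changes only when played)
    Y[n]:    availability of arm n this stage (1 available, 0 not)
    played:  set of arm indices chosen this stage (must all be available)
    avail_prob(x, active): P(Y(t+1) = 1 | X(t) = x, arm active or passive)
    Returns the total reward collected this stage.
    """
    total = 0.0
    for n in range(len(X)):
        active = n in played
        x_now = X[n]
        if active:
            assert Y[n] == 1, "an unavailable arm cannot be played"
            total += reward_fn(x_now)
            X[n] = x_transition(x_now)   # only active arms change X
        # Availability evolves whether or not the arm is played,
        # conditionally on the current state and the action taken.
        Y[n] = 1 if random.random() < avail_prob(x_now, active) else 0
    return total
```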

We study the following two problems:

Problem 1. Each arm is intermittently available. Its availability at time t is unobservable before time t. The conditional probability that an arm is available at time t + 1, given (i) the state X(t) and availability Y(t) of the arm, and (ii) whether or not the arm is played at time t, is known at time t. An arm cannot be played when it is unavailable. This problem will not be well defined unless there are at least M available arms to play at each stage. We can, however, let the decision maker pull fewer than M arms at a time by introducing a sufficient number of arms that are always available and always give zero reward.

Problem 2. The arms are subject to failure, and the decision maker has the option to repair a broken arm. Irrespective of whether an arm is played at time t, it may break down and become unavailable at time t + 1 with some probability that depends on (i) the state X(t) of the arm at time t, and (ii) whether or not the arm is played at time t. If an arm is broken, the decision maker has the option to repair it at some cost (or negative reward) that depends on X(t). Repairing an arm is equivalent to playing the arm when it is broken. If a broken arm is repaired at time t, then it becomes available at time t + 1 with some conditional probability that depends only on the state X(t) of the arm at time t. On the other hand, if it is not repaired, then the arm remains broken at time t + 1.

We show that there does not exist a single index policy that is optimal for every instance of either problem. We propose a competitive index policy in the class of Whittle index policies for restless bandit problems, and show that no other index policy is better than the Whittle index policy for every instance of either problem. We evaluate the performance of the Whittle index policy for each type of problem, both analytically and numerically.
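A hedged sketch of how an index policy would operate in Problem 1: at each stage, pick the M available arms with the largest indices. The index function below is only a stand-in for the Whittle index derived in the paper, whose precise form is not reproduced here.

```python
def choose_arms(states, available, M, whittle_index):
    """Pick the M available arms with the largest indices.

    states:        per-arm states X_n(t)
    available:     per-arm availability indicators Y_n(t) (1 or 0)
    whittle_index: stand-in for the paper's Whittle index; any function of
                   an arm's state may be plugged in here
    Assumes at least M arms are available (e.g. after padding with
    always-available, zero-reward arms, as described above).
    """
    candidates = [n for n, y in enumerate(available) if y == 1]
    candidates.sort(key=lambda n: whittle_index(states[n]), reverse=True)
    return candidates[:M]
```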
