Constructing optimal designs on finite experimental domains using methods of mathematical programming
Radoslav Harman, Tomáš Jurík, Mária Trnovská
Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava


  1. Constructing optimal designs on finite experimental domains using methods of mathematical programming
Radoslav Harman, Tomáš Jurík, Mária Trnovská, Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava. mODa 8, Almagro, Spain, 4th-8th June 2007.

  2. Overview of the talk
A very brief introduction to the optimal design problem on a finite experimental domain. The formulation of the problem of c-optimality as a special linear programming problem. Specification of the simplex algorithm of linear programming for constructing c-optimal designs. Example: optimal designs for estimating individual coefficients of Fourier regression on a partial circle. The formulation of the problem of optimal design with respect to various other criteria as a special problem of semidefinite programming or the so-called maxdet programming.

  3. The model and basic assumptions
The regression model for a single observation y: E(y) = f′(x)β, x ∈ X, where f is the vector of regression functions, f: X → R^m, linearly independent on X; β ∈ R^m is the vector of parameters; and X = {x_1, ..., x_k} is the experimental domain. The observations are homoscedastic (Var(y) ≡ σ²) and uncorrelated. As is usual, an (asymptotic) design is a probability ξ on X and its information matrix is M(ξ) = Σ_{x ∈ X} ξ(x) f(x) f′(x). A design ξ is an exact design of size n iff it can be realized by n observations, i.e., iff ξ(x) = n_x/n for all x ∈ X and some n_x ∈ N_0.
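For illustration, here is a minimal NumPy sketch (not from the slides) of the information matrix of a design on a finite experimental domain; the quadratic model and the 21-point grid are arbitrary choices.

```python
import numpy as np

def information_matrix(weights, F):
    # M(xi) = sum_x xi(x) f(x) f'(x); F has shape (k, m) with rows f(x_i).
    return F.T @ (weights[:, None] * F)

# Illustrative model: quadratic regression f(x) = (1, x, x^2)' on a 21-point grid.
X = np.linspace(-1.0, 1.0, 21)
F = np.column_stack([np.ones_like(X), X, X**2])
xi = np.full(len(X), 1.0 / len(X))   # uniform design, just as an example
M = information_matrix(xi, F)        # 3 x 3 information matrix
```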

  4. Definition and properties of c-optimal designs
Let c ∈ R^m. Let Ξ_c be the set of all designs ξ under which c′β is estimable, i.e., such that c belongs to the range of M(ξ). If we perform n measurements according to an exact design ξ ∈ Ξ_c of size n, then the variance of the Gauss-Markov estimator of c′β is Var(c′β̂_ξ) = σ² n⁻¹ c′ M⁻(ξ) c.
Definition (c-optimality): A design ξ*_c ∈ Ξ_c is said to be c-optimal iff it minimizes c′M⁻(ξ)c among all designs ξ ∈ Ξ_c. The value c′M⁻(ξ*_c)c is then called the c-optimal variance and M(ξ*_c) is a c-optimal information matrix.
There exists a c-optimal design supported on m points or fewer. For some models and some choices of c, the c-optimal information matrix is not unique and it can be singular.
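Continuing the sketch above (again illustrative, not from the slides), the quantity c′M⁻(ξ)c can be computed with a generalized inverse; the Moore-Penrose pseudoinverse is one admissible choice whenever c lies in the range of M(ξ).

```python
import numpy as np

def c_variance(c, M):
    # c' M^-(xi) c with the Moore-Penrose pseudoinverse as the g-inverse;
    # meaningful only when c is in the range of M, i.e., c'beta is estimable.
    return float(c @ np.linalg.pinv(M) @ c)
```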

  5. Special cases of c-optimality and its relations
Examples of specific choices of c: If c is the i-th unit vector (c = e_i), then ξ*_c is optimal for estimating β_i. If c = f(x), then ξ*_c is optimal for estimating the mean value of the response at x. If c = (c_1, ..., c_m)′, where c_i = ∫_A f_i(x) dx, then ξ*_c is optimal for estimating ∫_A f′(x)β dx.
Relations of c-optimality to other criteria: The c-optimal variance for c = e_i is used in the so-called standardized optimality criteria (Dette 1997). The c-optimal values for c ranging over the unit sphere have implications for the E-optimal value. For every Loewner isotonic criterion Φ, there exists an optimal design supported only on those design points x such that the singular design in x is c-optimal for c = f(x).
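For concreteness, a small sketch of these three choices of c, with quadratic regression f(x) = (1, x, x²)′, a prediction point x_0 = 0.5 and A = [0, 1] picked purely for illustration:

```python
import numpy as np

# Quadratic regression f(x) = (1, x, x^2)'; the three vectors c correspond to
# the three special cases listed above (x0 and A are illustrative choices).
c_coefficient = np.array([0.0, 1.0, 0.0])     # c = e_2: estimating beta_2
x0 = 0.5
c_prediction = np.array([1.0, x0, x0**2])     # c = f(x0): mean response at x0
c_integral = np.array([1.0, 1/2, 1/3])        # c_i = int_0^1 f_i(x) dx with A = [0, 1]
```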

  6. The Elfving set and the Elfving theorem
The Elfving set of the model (f, X) is E = conv{f(x_1), ..., f(x_k), −f(x_1), ..., −f(x_k)}, which is a compact polytope symmetric around 0_m.
Theorem (Elfving): The design ξ is c-optimal for the model (f, X) and 1/h² is the c-optimal variance if and only if hc = Σ_{x ∈ X} ε(x) f(x) ξ(x) ∈ ∂E for some selection of signs ε(x) ∈ {−1, 1}.
The Elfving theorem motivated an algorithm for c-optimality in López-Fidalgo and Rodríguez-Díaz (2004), which requires nonlinear optimization routines and is practical for m ≤ 4. Our approach is based on the observation that 1/h² is the optimal variance iff h is the maximum possible scalar such that hc is a convex combination of the 2k vectors ±f(x_i), which is a linear programming problem.

  7. Linear programming approach
Theorem: Consider the linear programming (LP) problem

maximize h subject to Fα − hc = 0_m, 1′_{2k} α = 1, (α′, h)′ ≥ 0,   (1)

where F = (f(x_1), ..., f(x_k), −f(x_1), ..., −f(x_k)). Then a design ξ is c-optimal for the model (f, X) and h⁻² is the c-optimal variance if and only if ξ(x_i) = α_i + α_{i+k} for a solution (α′, h)′ of problem (1).
Consequences: The theory of LP specializes to results about c-optimality on finite experimental domains (e.g., the duality theorem of LP corresponds to the "equivalence theorem" for c-optimality). Algorithms (such as the simplex method) for solving LP problems can be applied to construct c-optimal designs.
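A hedged sketch of problem (1) solved with an off-the-shelf LP solver (scipy.optimize.linprog with the HiGHS method) instead of the tailored simplex rules of the next slide; the function and variable names below are mine, not the authors'.

```python
import numpy as np
from scipy.optimize import linprog

def c_optimal_design_lp(F_points, c):
    # F_points: (k, m) array with rows f(x_i). Returns design weights and c-optimal variance.
    k, m = F_points.shape
    F = np.hstack([F_points.T, -F_points.T])   # m x 2k matrix (f(x_1), ..., -f(x_k))
    # Variables z = (alpha_1, ..., alpha_2k, h); maximize h <=> minimize -h.
    cost = np.zeros(2 * k + 1)
    cost[-1] = -1.0
    A_eq = np.zeros((m + 1, 2 * k + 1))
    A_eq[:m, :2 * k] = F
    A_eq[:m, -1] = -c                          # F alpha - h c = 0_m
    A_eq[m, :2 * k] = 1.0                      # 1' alpha = 1
    b_eq = np.append(np.zeros(m), 1.0)
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    alpha, h = res.x[:2 * k], res.x[-1]
    weights = alpha[:k] + alpha[k:]            # xi(x_i) = alpha_i + alpha_{i+k}
    return weights, 1.0 / h**2                 # c-optimal variance h^{-2}
```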

  8. Simplex algorithm for c-optimal designs (SAC)
Input: the design points x_1, ..., x_k, the vector of regression functions f, and a sequence L = (l_1, ..., l_m) of indices from {1, ..., k} such that f(x_{l_1}), ..., f(x_{l_m}) are linearly independent vectors.
1. Set F ← (f(x_1), ..., f(x_k), −f(x_1), ..., −f(x_k)) and B_{m+1} ← 2k + 1.
2. For all i = 1, ..., m: if (F_L⁻¹ c)_i ≥ 0 then B_i ← l_i, else B_i ← l_i + k.
3. Set h ← (1′_m F_B⁻¹ c)⁻¹, α̃ ← h F_B⁻¹ c, and s_j ← F_B⁻¹ F_(j) for all j ∉ B, where F_(j) denotes the j-th column of F.
4. Set j* ← min argmax_{j ∉ B} 1′_m s_j. If 1′_m s_{j*} ≤ 1, go to Step 7.
5. Set i* ← min argmin_{i ∈ {1, ..., m}, d_i > 0} (α̃_i / d_i), where d = s_{j*} − α̃ r_{j*}.
6. Update B_{i*} ← j* and return to Step 3.
7. For all i = 1, ..., m: if B_i ≤ k then x*_i ← x_{B_i}, else x*_i ← x_{B_i − k}.
Output: the c-optimal design ξ assigning weights α̃_1, ..., α̃_m to the design points x*_1, ..., x*_m, and the c-optimal variance h⁻².
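Below is a rough Python transcription of the SAC steps (my own sketch, not the authors' code), under the assumption that r_{j*} in Step 5 stands for 1′_m s_{j*} − 1, which is what the simplex direction for problem (1) works out to; tolerances and 0-based indexing are mine.

```python
import numpy as np

def sac(F_points, c, L):
    # F_points: (k, m) rows f(x_i); L: m (0-based) indices with f(x_{l_1}), ..., f(x_{l_m}) independent.
    k, m = F_points.shape
    F = np.hstack([F_points.T, -F_points.T])          # m x 2k; columns f(x_i), then -f(x_i)
    L = np.asarray(L)
    # Step 2: initial feasible basis (use -f(x_{l_i}) where (F_L^{-1} c)_i < 0).
    B = np.where(np.linalg.solve(F[:, L], c) >= 0, L, L + k)
    while True:
        # Step 3: basic solution and reduced columns s_j = F_B^{-1} F_(j).
        FBinv_c = np.linalg.solve(F[:, B], c)
        h = 1.0 / FBinv_c.sum()
        alpha = h * FBinv_c
        nonbasic = np.setdiff1d(np.arange(2 * k), B)
        S = np.linalg.solve(F[:, B], F[:, nonbasic])
        col_sums = S.sum(axis=0)                      # 1'_m s_j for each nonbasic j
        # Step 4: entering column; stop when no column can increase h.
        p = col_sums.argmax()
        if col_sums[p] <= 1.0 + 1e-12:
            break
        # Step 5: ratio test, assuming d = s_{j*} - (1'_m s_{j*} - 1) alpha.
        d = S[:, p] - (col_sums[p] - 1.0) * alpha
        ratios = np.where(d > 1e-12, alpha / d, np.inf)
        # Step 6: exchange the leaving basis index for the entering one.
        B[ratios.argmin()] = nonbasic[p]
    # Step 7: map basis indices back to design points.
    support = np.where(B < k, B, B - k)               # indices of x*_1, ..., x*_m
    return support, alpha, 1.0 / h**2                 # support, weights, c-optimal variance
```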

  9. Simplex algorithm for c-optimal designs (SAC)
The simplex algorithm for c-optimality is an exchange algorithm in which the "pivot rules" for the entering and leaving variables correspond to rules for the choice of design points to be exchanged. At each step, the algorithm calculates optimal weights on the current set of independent support points (e.g., Pukelsheim and Torsney 1991); the optimal weights correspond to the values of the "basic variables". The general mechanism of the simplex algorithm for c-optimality is analogous to the so-called Remez procedure published by Studden and Tsay (1976). However, the Remez procedure requires strong nonsingularity assumptions (for instance, it cannot deal with singular c-optimal designs, which are very common). On the other hand, the simplex algorithm for c-optimality can be applied to any model and any vector c, was observed to converge in all test problems, and can be modified to a version with guaranteed convergence using an "anticycling" pivot rule.

  10. Example: Fourier regression on a partial circle
Fourier (trigonometric) regression of degree d:

E(y) = β_1 + Σ_{k=1}^{d} β_{2k} sin(kx) + Σ_{k=1}^{d} β_{2k+1} cos(kx),   x ∈ X = [−a, a].   (2)

If a < π, we say that the experimental domain X is a "partial circle". Optimal designs for (2) on a partial circle were studied in a series of recent papers by Dette, Melas and Pepelyshev (D-, E-, and c-optimality). The paper Dette and Melas (2003) describes methods of constructing e_s-optimal designs, either by an explicit formula or by means of a Taylor expansion, for a ≤ L_{d,s} and a ≥ U_{d,s}, where L_{d,s} and U_{d,s} are certain critical constants; for many d and s, L_{d,s} is strictly less than U_{d,s}.
Example: numerical results obtained by SAC for the cubic (d = 3) model and "all" values of a on a dense discretization of X.
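To reproduce an experiment of this kind, one can discretize X = [−a, a], build the trigonometric regression vectors and feed them to the LP sketch given after slide 7; the grid size, the value a = 0.7π and the support threshold below are arbitrary illustrative choices.

```python
import numpy as np

def fourier_f(x, d=3):
    # Regression vectors (1, sin x, cos x, ..., sin dx, cos dx)' of model (2).
    terms = [np.ones_like(x)]
    for k in range(1, d + 1):
        terms += [np.sin(k * x), np.cos(k * x)]
    return np.column_stack(terms)                     # shape (len(x), 2d + 1)

a = 0.7 * np.pi                                       # arbitrary half-length of the partial circle
grid = np.linspace(-a, a, 501)                        # dense discretization of X = [-a, a]
F_points = fourier_f(grid, d=3)                       # cubic (d = 3) model
c = np.zeros(F_points.shape[1]); c[0] = 1.0           # e_1: estimating beta_1
weights, variance = c_optimal_design_lp(F_points, c)  # LP sketch from slide 7
support = grid[weights > 1e-6]                        # approximate support points
```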

  11. Example: Fourier regression on a partial circle
Figure: Support points of e_1-optimal designs (vertical axis) for the cubic trigonometric model on a partial circle with the half-length a of the experimental domain (horizontal axis).
