  1. METTI V – Thermal Measurements and Inverse Technique
Benjamin REMY, Stéphane ANDRE & Denis MAILLET
Benjamin.remy@ensem.inpl-nancy.fr
L.E.M.T.A., U.M.R. C.N.R.S. 7563 / E.N.S.E.M., 02 avenue de la Forêt de Haye, B.P. 160, 54504 Vandoeuvre-lès-Nancy Cedex, FRANCE
ROSCOFF (France) – June 13-18, 2011

  2. OUTLINE
I. Definition and Vocabulary
II. Useful Tools to investigate NLPE Problems
III. Enhancing the Performances of Estimation
• Natural Parameters & Dimensional Analysis
• Reducing the PEP to make it well-conditioned (Case of the Contrast Method)
• Over-Parameterized Models (Case of the "Hot-Wire" technique)
• Estimations with models without degrees of freedom (Case of the Liquid Flash Experiment)
• Taking the bias into account to reduce the variances on estimated parameters (Case of the classical "Flash" method)

  3.

  4. Measuring a physical quantity β_j requires a specific experiment allowing this quantity to "express itself as much as possible" (notion of sensitivity). This experiment requires a system onto which inputs u(t) are applied (stimuli) and whose outputs y(t) are collected (observations). t is the explanatory variable: it corresponds to time for a pure dynamical experiment.
A model M is required to mathematically express the dependence of the system's response with respect to the quantity β_j and to the other additional parameters β_k (k ≠ j):
y_mo = η(t, β, u)
Many candidates may exist for function η, depending on the degree of complexity reached in modelling the physical process, and they may exhibit different mathematical structures, depending for example on the type of method used to solve the model equations. Once this model is established, the physical quantities in vector β acquire the status of model parameters. This model (called a knowledge model if it is derived from physical laws and/or conservation principles) is initially established in a direct formulation: knowing the inputs u(t) and the value taken by the parameters β, the output(s) can be predicted.

  5. The linear or non-linear character of the model has to be determined:
• A model that is Linear with respect to its Inputs (LI structure) is such that:
y_mo(t, β, α_1 u_1 + α_2 u_2) = α_1 y_mo(t, β, u_1) + α_2 y_mo(t, β, u_2)    (1)
• A model that is Linear with respect to its Parameters (LP structure) is such that:
y_mo(t, α_1 β_1 + α_2 β_2, u) = α_1 y_mo(t, β_1, u) + α_2 y_mo(t, β_2, u)    (2)
The inverse problem consists in making the direct problem work backwards, with the objective of getting (extracting) β from y_mo(t, β, u) for given inputs u and observations y. This is an identification process. The difficulty stems here from two points:
(i) Measurements y are subject to random perturbations (intrinsic noise ε), which in turn generate perturbed estimated values β̂ of β, even if the model is perfect: this constitutes an estimation problem.
(ii) The mathematical model may not correspond exactly to the reality of the experiment. Measuring the value of β in such a condition leads to a biased estimation, E(β̂) − β_true = Bias: this corresponds to an identification problem (which model η to use?) associated with an estimation problem (how to estimate β for a given model?).
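To make the LP definition (2) concrete, here is a minimal numerical check, in Python, that a model written as a linear combination of known functions of t satisfies superposition in its parameters; the model, the time grid and the coefficients below are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

# Hypothetical LP model: y_mo(t, beta) = beta[0]*f1(t) + beta[1]*f2(t),
# i.e. linear in the parameters although nonlinear in t.
def y_mo(t, beta):
    return beta[0] * np.sqrt(t) + beta[1] * (1.0 - np.exp(-t))

t = np.linspace(0.1, 5.0, 50)
beta1 = np.array([2.0, -1.0])
beta2 = np.array([0.5, 3.0])
a1, a2 = 1.7, -0.3

lhs = y_mo(t, a1 * beta1 + a2 * beta2)           # y_mo(t, a1*b1 + a2*b2)
rhs = a1 * y_mo(t, beta1) + a2 * y_mo(t, beta2)  # a1*y_mo(t,b1) + a2*y_mo(t,b2)
print(np.allclose(lhs, rhs))                     # True -> LP structure, eq. (2)
```

A model that is nonlinear in β (for instance through the exponential of a time constant) would fail the same test, which is precisely what makes the estimation problem a NLPE one.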

  6. The estimation/identification process basically tends to make the model match the data (or the contrary). This is done by using some mathematical "machinery" aiming at reducing some gap (distance or norm)
r(β) = y − y_mo(t, β, u)    (3)
One of the obvious goals of NLPE studies is then to be able to assess the performed estimation through the production of numerical values for the variances V(β̂) of the estimators (set of estimated parameter values). This allows one to give the order of magnitude of the confidence bounds for the estimate. NLPE problems require the use of non-linear statistics for studying such properties of the estimates.
Because of the two above-mentioned drawbacks of MBM, the estimated or measured value of a parameter β_j will be considered as "good" if it is not biased and if its variance is minimum. Quantifying the bias and the variance is also helpful to determine which one of two rival experiments is the most appropriate for measuring the searched parameter (optimal design). In the case of multiple parameters (vector β) and NLPE problems, it is also helpful to determine which components of vector β are correctly estimated in a given experiment.

  7.

  8. Sensitivities
In the case of a single output signal y with m sampling points of the explanatory variable t and for a model involving n parameters, the (m × n) sensitivity matrix is defined as
S_ij = ∂y_mo(t_i; β^nom)/∂β_j, at fixed t and β_k for k ≠ j.
As the problem is NL, the sensitivity matrix has only a local meaning: it is calculated for a given nominal parameter vector β^nom. If the model has a LP structure, the sensitivity matrix is independent of β and the model can be expressed as (Lecture 2)
y_mo(t, β) = Σ_{j=1..n} S_j(t) β_j
The sensitivity coefficient S_j(t) corresponds to the j-th column of matrix S, and β_j to the j-th parameter.
The primary way of getting information about the identifiability of the different parameters is to analyse the sensitivity coefficients through graphical observation. This is possible only when considering the reduced sensitivity coefficients S*_j, because the parameters of a model do not in general have the same units:
S*_j = β_j ∂y_mo(t; β^nom)/∂β_j = ∂y_mo(t; β^nom)/∂(ln β_j), at fixed t and β_k for k ≠ j.
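As a sketch of how the reduced sensitivity coefficients S*_j = β_j ∂y_mo/∂β_j can be evaluated in practice, the code below uses centred finite differences around a nominal parameter vector; the two-parameter model y_mo(t, β) = β_1 (1 − exp(−t/β_2)) and the nominal values are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical model: first-order thermal response (amplitude, time constant).
def y_mo(t, beta):
    Tmax, tau = beta
    return Tmax * (1.0 - np.exp(-t / tau))

def reduced_sensitivities(t, beta_nom, rel_step=1e-6):
    """S*_j(t) = beta_j * d y_mo / d beta_j, by centred finite differences."""
    beta_nom = np.asarray(beta_nom, dtype=float)
    S_star = np.zeros((t.size, beta_nom.size))
    for j, bj in enumerate(beta_nom):
        h = rel_step * abs(bj)
        bp, bm = beta_nom.copy(), beta_nom.copy()
        bp[j] += h
        bm[j] -= h
        S_star[:, j] = bj * (y_mo(t, bp) - y_mo(t, bm)) / (2.0 * h)
    return S_star

t = np.linspace(0.0, 10.0, 200)
S_star = reduced_sensitivities(t, beta_nom=[1.0, 2.0])
print(S_star.shape)   # (200, 2): one column per parameter, same unit as y
```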

  9. TOOL Nr1: A superimposed plot of the reduced sensitivity coefficients S*_j(t) gives a first idea about the most influential parameters of a problem (largest magnitude) and about possible correlations (sensitivity coefficients following the same evolution).
Example: Measurement of the thermophysical properties of coatings through the Flash method using the thermal contrast principle (case n = 2).
[Figure: two-layer sample (thicknesses e_1 and e_2, properties a, λ, ρC_p, flux φ_0, face temperatures T(1) and T(2), experiments A and B) and plots of the reduced sensitivity coefficients for λ, ρc, e and a of the two layers, for K_1 = 0.1 and K_2 = 1.36.]
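Beyond the visual inspection advocated by TOOL Nr1, the "same evolution" of two reduced sensitivity curves can be quantified by their normalized scalar product: a value close to ±1 signals parameters that will be difficult to estimate separately. The sketch below uses the analytical reduced sensitivities of the same hypothetical first-order response as above (amplitude 1, time constant 2), not the coating example of the slide.

```python
import numpy as np

# Analytical reduced sensitivities of the hypothetical response
# y_mo(t, beta) = beta1 * (1 - exp(-t/beta2)), at beta1 = 1, beta2 = 2.
t = np.linspace(0.0, 10.0, 200)
S1 = 1.0 - np.exp(-t / 2.0)            # S*_1(t) = beta1 * d y_mo / d beta1
S2 = -(t / 2.0) * np.exp(-t / 2.0)     # S*_2(t) = beta2 * d y_mo / d beta2

# Normalized scalar product of the two curves: |value| near 1 means the two
# coefficients follow almost the same evolution, i.e. a likely correlation.
cos_12 = np.dot(S1, S2) / (np.linalg.norm(S1) * np.linalg.norm(S2))
print(cos_12)
```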

  10. Variance/Covariance Matrix – INVERSE ANALYSIS
The model: T(t_i, β)
The observable: Y_i = T(t_i, β) + ε_i
The experimental noise corrupts the data: σ_ε = 0.005
E(ε_i) = 0, var(ε_i) = σ², cov(ε) = σ² Id_n
Minimizing S(β) = Σ_{i=1..n} (Y_i − T(t_i, β))² allows one to get an estimate β̂ of β.
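The quantities of this slide can be reproduced on synthetic data: a hypothetical direct model T(t, β), observations corrupted by i.i.d. Gaussian noise of standard deviation σ = 0.005 as on the slide, and the ordinary least-squares sum S(β) whose minimization yields β̂. The model and the "true" parameter values are assumptions made only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical direct model T(t, beta) (not the lecture's specific case).
def T_model(t, beta):
    Tmax, tau = beta
    return Tmax * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 10.0, 200)
beta_true = np.array([1.0, 2.0])
sigma = 0.005                                    # noise level used on the slide

# Observable: Y_i = T(t_i, beta) + eps_i, with eps_i ~ N(0, sigma^2).
Y = T_model(t, beta_true) + rng.normal(0.0, sigma, size=t.size)

def S(beta):
    """Ordinary least-squares sum S(beta) = sum_i (Y_i - T(t_i, beta))^2."""
    r = Y - T_model(t, beta)
    return np.dot(r, r)

print(S(beta_true), S([0.9, 2.5]))   # the true parameters give the smaller sum
```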

  11. INVERSE ANALYSIS: The minimization process
β̂^(k+1) = β̂^(k) + (X^(k)ᵀ X^(k))⁻¹ X^(k)ᵀ (Y − T(β^(k)))
This iteration indicates the basic tools for inverse analysis: matrix analysis and the sensitivities to the parameters.
X = ∇_β T(t, β), the sensitivity matrix that collects the partial derivatives ∂T(t_i, β)/∂β_j of the model output at every time t_i (i = 1, …, n) with respect to every parameter β_j (j = 1, …, p).
Variance-covariance matrix: cov(β̂) = σ² (Xᵀ X)⁻¹
• Minimum
• Noise assumptions dependent
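A minimal sketch of the iteration above, assuming an ordinary least-squares criterion and a sensitivity matrix X built by centred finite differences; the two-parameter model and the starting point are hypothetical, and no damping or line search is included, so this is an illustration of the update formula rather than a robust solver.

```python
import numpy as np

rng = np.random.default_rng(1)

def T_model(t, beta):
    # Hypothetical two-parameter thermal response (illustrative assumption).
    Tmax, tau = beta
    return Tmax * (1.0 - np.exp(-t / tau))

def sensitivity_matrix(t, beta, rel_step=1e-6):
    """X_ij = dT(t_i, beta)/d beta_j, by centred finite differences."""
    X = np.zeros((t.size, beta.size))
    for j, bj in enumerate(beta):
        h = rel_step * abs(bj)
        bp, bm = beta.copy(), beta.copy()
        bp[j] += h
        bm[j] -= h
        X[:, j] = (T_model(t, bp) - T_model(t, bm)) / (2.0 * h)
    return X

# Synthetic data: "true" parameters plus Gaussian noise of std 0.005.
t = np.linspace(0.0, 10.0, 200)
beta_true = np.array([1.0, 2.0])
Y = T_model(t, beta_true) + rng.normal(0.0, 0.005, size=t.size)

# Gauss-Newton iterations: beta <- beta + (X^T X)^-1 X^T (Y - T(beta)).
beta = np.array([0.5, 1.0])          # initial guess
for _ in range(20):
    X = sensitivity_matrix(t, beta)
    step = np.linalg.solve(X.T @ X, X.T @ (Y - T_model(t, beta)))
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

print(beta)   # close to beta_true, up to the noise-induced error
```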

  12. Variance-covariance matrix
cov(β̂) = σ² (Xᵀ X)⁻¹
Correlation coefficients: ρ_ij = cov(β̂_i, β̂_j) / (σ_β̂_i σ_β̂_j), with σ_β̂_i = sqrt(var(β̂_i))
cor(β̂): matrix with 1 on its diagonal and the correlation coefficients ρ_ij off-diagonal.
Vcor(β̂): matrix whose diagonal terms are the relative errors sqrt(var(β̂_i)) / β̂_i and whose off-diagonal terms are the correlation coefficients ρ_ij.
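Once the sensitivity matrix X at the estimate and the noise standard deviation σ are known, cov(β̂) = σ²(XᵀX)⁻¹, the relative errors and the correlation coefficients ρ_ij follow in a few lines; the X, σ and β̂ used below are hypothetical values consistent with the earlier sketches, not the lecture's case.

```python
import numpy as np

# Hypothetical (m x n) sensitivity matrix X evaluated at the estimate, and noise std.
t = np.linspace(0.0, 10.0, 200)
X = np.column_stack([1.0 - np.exp(-t / 2.0),             # dT/dTmax at Tmax=1, tau=2
                     -(t / 4.0) * np.exp(-t / 2.0)])     # dT/dtau  at Tmax=1, tau=2
sigma = 0.005

cov_beta = sigma**2 * np.linalg.inv(X.T @ X)             # cov(beta_hat) = s^2 (X^T X)^-1
std_beta = np.sqrt(np.diag(cov_beta))                    # standard deviations
corr = cov_beta / np.outer(std_beta, std_beta)           # rho_ij in [-1, 1]

beta_hat = np.array([1.0, 2.0])                          # hypothetical estimates
print(std_beta / np.abs(beta_hat))    # relative errors (diagonal of Vcor)
print(corr)                           # off-diagonal: correlation coefficients
```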

  13. TOOL Nr2: The matrix Vcor(β̂) gives a quantitative point of view on the identifiability of the parameters. Its diagonal gives a kind of measurement (a minimal bound!) of the error made on the estimated parameters, due to the sole stochastic character of the noise, supposed unbiased. The off-diagonal terms (correlation coefficients) are generally of limited interest because of their too global character; values very close to ±1 may explain very large variances (errors) on the parameters through a correlation effect.
