
Normalization-Invariant Fuzzy Logic Operations Explain Empirical Success of Student Distributions in Describing Measurement Uncertainty

Hamza Alkhatib¹, Boris Kargoll¹, Ingo Neumann¹, and Vladik Kreinovich²

¹ Geodätisches Institut, Leibniz Universität Hannover, Nienburger Strasse 1, 30167 Hannover, Germany
  alkhatib@gih.uni-hannover.de, kargoll@gih.uni-hannover.de, neumann@gih.uni-hannover.de

² Department of Computer Science, University of Texas at El Paso, El Paso, TX 79968, USA
  vladik@utep.edu

1. Traditional Engineering Approach to Measurement Uncertainty

• Traditionally, in engineering applications, it is assumed that the measurement error is normally distributed.
• This assumption makes perfect sense from the practical viewpoint.
• For the majority of measuring instruments, the measurement error is indeed normally distributed.
• It also makes sense from the theoretical viewpoint:
  – the measurement error often comes from a joint effect of many independent small components,
  – so, according to the Central Limit Theorem, the resulting distribution is indeed close to Gaussian.
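To illustrate the Central Limit Theorem argument above, here is a minimal numerical sketch; the number of components, their uniform distribution, and the sample size are illustrative assumptions, not taken from the slides.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Illustrative assumption: each measurement error is the sum of 50 small
    # independent components, each uniform on [-0.01, 0.01].
    n_components, n_samples = 50, 100_000
    total_error = rng.uniform(-0.01, 0.01, size=(n_samples, n_components)).sum(axis=1)

    # Compare the empirical distribution of the total error with a fitted normal.
    mu, sigma = total_error.mean(), total_error.std()
    ks_stat, _ = stats.kstest(total_error, "norm", args=(mu, sigma))
    print(f"Kolmogorov-Smirnov distance to fitted normal: {ks_stat:.4f}")  # small value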

2. Traditional Engineering Approach (cont-d)

• Another explanation: we only have partial information about the distribution.
• Often, we only know the first and the second moments.
• The first moment (the mean) represents a bias.
• If we know the bias, we can always subtract it from the measurement result.
• The thus re-calibrated measuring instrument will have 0 mean.
• Thus, we can always safely assume that the mean is 0.
• Then, the 2nd moment is simply the variance V = σ².
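The re-calibration described above amounts to subtracting the estimated bias and keeping the variance. A minimal sketch, assuming simulated errors with a known bias (the numbers are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative assumption: raw measurement errors with bias 0.3 and sigma 0.05.
    raw_errors = rng.normal(loc=0.3, scale=0.05, size=10_000)

    bias = raw_errors.mean()            # first moment: the bias
    calibrated = raw_errors - bias      # re-calibrated errors: mean becomes (almost) 0
    variance = calibrated.var()         # second moment: V = sigma**2

    print(f"estimated bias ~ {bias:.3f}, mean after re-calibration ~ {calibrated.mean():.1e}")
    print(f"variance V = sigma^2 ~ {variance:.5f}")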

3. Traditional Engineering Approach (cont-d)

• There are many distributions with 0 mean and given σ.
• For example, we can have a distribution in which the values σ and −σ each occur with probability 1/2.
• However, such a distribution creates a false certainty: that no other values of x are possible.
• Out of all such distributions, it makes sense to select the one which maximally preserves the uncertainty.
• Uncertainty can be gauged by the average number of binary questions needed to determine x with accuracy ε.
• It is described by the entropy S = −∫ ρ(x)·log₂(ρ(x)) dx.
• Out of all distributions ρ(x) with mean 0 and given σ, the entropy is the largest for the normal ρ(x).
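As a numerical illustration of the last bullet, one can compare the differential entropy of several zero-mean distributions with the same σ. The alternatives (Laplace, uniform) and σ = 1 are chosen here only for illustration; scipy reports entropy in nats rather than the bits used above, but the ordering is the same.

    import numpy as np
    from scipy import stats

    sigma = 1.0  # common standard deviation (illustrative value)

    # Three zero-mean distributions with the same standard deviation sigma.
    normal  = stats.norm(loc=0.0, scale=sigma)
    laplace = stats.laplace(loc=0.0, scale=sigma / np.sqrt(2))                      # var = 2*b**2
    uniform = stats.uniform(loc=-sigma * np.sqrt(3), scale=2 * sigma * np.sqrt(3))  # var = w**2/12

    # Differential entropy (in nats); the normal distribution gives the largest value.
    for name, dist in [("normal", normal), ("laplace", laplace), ("uniform", uniform)]:
        print(f"{name:8s} entropy = {dist.entropy():.4f}")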

4. Need for Heavy-Tailed Distributions

• For the normal distribution, ρ(x) = 1/(√(2π)·σ) · exp(−x²/(2σ²)).
• The "tails" (values corresponding to large |x|) are very light, practically negligible.
• Often, ρ(x) decreases much more slowly, as ρ(x) ∼ c·x^(−α).
• We cannot have ρ(x) = c·x^(−α), since ∫₀^∞ x^(−α) dx = +∞, and we want ∫ ρ(x) dx = 1.
• Often, the measurement error is well represented by a Student distribution ρ_S(x) = (a + b·x²)^(−ν); a tail comparison is sketched below.
• Our experience is from geodesy, but the Student distribution is effective in other applications as well.
• This distribution is even recommended by the International Organization for Standardization (ISO).
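The difference between light and heavy tails can be made concrete by comparing the tail probabilities P(|x| > k) of the standard normal distribution and of a Student distribution; the chosen number of degrees of freedom is an illustrative assumption.

    from scipy import stats

    nu = 3                      # degrees of freedom of the Student distribution (illustrative)
    student = stats.t(df=nu)
    normal = stats.norm()

    # Two-sided tail probabilities P(|x| > k): polynomial vs. exponential decay.
    for k in (2, 4, 6):
        p_norm = 2 * normal.sf(k)
        p_stud = 2 * student.sf(k)
        print(f"k={k}: normal tail = {p_norm:.2e}, Student(nu={nu}) tail = {p_stud:.2e}")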

5. What We Do

• How to explain the empirical success of Student's distribution ρ_S(x)?
• We show that a fuzzy formalization of commonsense requirements leads to ρ_S(x).
• Our idea: uncertainty means that the first value is possible, and the second value is possible, etc.
• Let's select ρ(x) with the largest degree to which all the values are possible.
• It is reasonable to use fuzzy logic to describe degrees of possibility.
• An expert marks his/her degree by selecting a number from the interval [0, 1].

6. Need for Normalization

• For "small", we are absolutely sure that 0 is small: µ_small(0) = 1 and max_x µ_small(x) = 1.
• For "medium", there is no x with µ_med(x) = 1, so max_x µ_med(x) < 1.
• A usual way to deal with such situations is to normalize µ(x) into µ′(x) = µ(x)/max_y µ(y).
• Normalization is also needed when we get additional information.
• Example: we knew that x is small, and then we learn that x ≥ 5.
• Then, µ_new(x) = µ_small(x) for x ≥ 5 and µ_new(x) = 0 for x < 5, and max_x µ_new(x) < 1.
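A minimal sketch of this normalization; the particular membership function for "small" and the grid are illustrative assumptions.

    import numpy as np

    x = np.linspace(0.0, 10.0, 1001)

    # Illustrative membership function for "small": 1 at 0, decreasing linearly to 0 at 10.
    mu_small = np.clip(1.0 - x / 10.0, 0.0, None)

    # New information: x >= 5.  The restricted membership function is no longer normalized.
    mu_new = np.where(x >= 5.0, mu_small, 0.0)
    print("max of restricted membership:", mu_new.max())      # < 1

    # Normalization: divide by the maximum so that the largest degree is 1 again.
    mu_normalized = mu_new / mu_new.max()
    print("max after normalization:", mu_normalized.max())    # = 1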

7. Need for Normalization (cont-d)

• Normalization is also needed when experts use probabilities to come up with the degrees.
• Indeed, the larger ρ(x), the more probable it is to observe a value close to x.
• Thus, it is reasonable to take the degrees µ(x) proportional to ρ(x): µ(x) = c·ρ(x).
• Normalization leads to µ(x) = ρ(x)/max_y ρ(y).
• Vice versa, if we have the result µ(x) of normalizing a pdf, we can reconstruct ρ(x) as ρ(x) = µ(x)/∫ µ(y) dy.
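The correspondence between a pdf and the normalized membership function can be checked numerically; a sketch assuming a standard normal pdf on a finite grid.

    import numpy as np
    from scipy import stats

    x = np.linspace(-6.0, 6.0, 2001)
    dx = x[1] - x[0]

    rho = stats.norm.pdf(x)        # original pdf rho(x)

    # Degrees proportional to the pdf, normalized so that the maximum is 1.
    mu = rho / rho.max()

    # Reconstruct the pdf: rho(x) = mu(x) / integral of mu(y) dy.
    rho_back = mu / (mu.sum() * dx)

    print("max reconstruction error:", np.abs(rho_back - rho).max())  # ~ 0 up to grid error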

8. How to Combine Degrees

• For each x, we thus get a degree to which x is possible.
• We want to compute the degree to which x₁ is possible and x₂ is possible, etc.
• So, we need to apply an "and"-operation (t-norm) to the corresponding degrees.
• Natural idea: use normalization-invariant t-norms.
• We can compute the normalized degree of confidence in a statement A & B in two different ways:
  – we can normalize f&(a, b) to λ·f&(a, b);
  – or, we can first normalize a and b and then apply an "and"-operation: f&(λ·a, λ·b).
• It is reasonable to require that we get the same estimate: f&(λ·a, λ·b) = λ·f&(a, b).

9. How to Combine Degrees (cont-d)

• It is known that Archimedean t-norms f&(a, b) = f⁻¹(f(a) + f(b)) are universal approximators.
• So, we can safely assume that f& is Archimedean: c = f&(a, b) ⇔ f(c) = f(a) + f(b).
• Thus, invariance means that f(c) = f(a) + f(b) implies f(λ·c) = f(λ·a) + f(λ·b).
• So, for every λ, the transformation T: f(a) → f(λ·a) is additive: T(A + B) = T(A) + T(B).
• Known: every monotonic additive function is linear.
• Thus, f(λ·a) = c(λ)·f(a) for all a and λ.
• For monotonic f(a), this implies f(a) = C·a^(−α).
• So, f(c) = f(a) + f(b) implies C·c^(−α) = C·a^(−α) + C·b^(−α), and c = f&(a, b) = (a^(−α) + b^(−α))^(−1/α).
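A quick numerical check of the derived combination operation and of its normalization invariance; the value of α and the test degrees are illustrative.

    import numpy as np

    def combine(a, b, alpha=2.0):
        """Normalization-invariant 'and'-operation f&(a, b) = (a^(-alpha) + b^(-alpha))^(-1/alpha)."""
        return (a ** (-alpha) + b ** (-alpha)) ** (-1.0 / alpha)

    a, b, lam = 0.7, 0.4, 0.6   # illustrative degrees and normalization factor lambda

    lhs = combine(lam * a, lam * b)    # normalize first, then combine
    rhs = lam * combine(a, b)          # combine first, then normalize
    print(lhs, rhs, np.isclose(lhs, rhs))   # the two ways give the same result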

10. Deriving Student Distribution

• We want to maximize the degree f&(µ(x₁), µ(x₂), ...) = ((µ(x₁))^(−α) + (µ(x₂))^(−α) + ...)^(−1/α).
• The function f(a) is decreasing.
• So, maximizing f&(µ(x₁), ...) is equivalent to minimizing the sum (µ(x₁))^(−α) + (µ(x₂))^(−α) + ...
• In the limit, this sum tends to the integral I = ∫ (µ(x))^(−α) dx.
• So, we minimize I under the constraints ∫ x·ρ(x) dx = 0 and ∫ x²·ρ(x) dx = σ², where ρ(x) = µ(x)/∫ µ(y) dy.
• Thus, we minimize ∫ (µ(x))^(−α) dx under the constraints ∫ x²·µ(x) dx − σ²·∫ µ(x) dx = 0 and ∫ x·µ(x) dx = 0.
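The concluding step is not shown in full on this slide; a plausible reconstruction via Lagrange multipliers is sketched below in LaTeX notation (the multipliers λ₁, λ₂ and the constants A, B, C are introduced only for this sketch).

    % Lagrangian for minimizing \int (\mu(x))^{-\alpha}\,dx under the two linear constraints:
    L[\mu] = \int (\mu(x))^{-\alpha}\,dx
           + \lambda_1 \int x\,\mu(x)\,dx
           + \lambda_2 \Bigl( \int x^2\,\mu(x)\,dx - \sigma^2 \int \mu(x)\,dx \Bigr)

    % Setting the variational derivative to zero:
    \frac{\delta L}{\delta \mu(x)}
      = -\alpha\,(\mu(x))^{-\alpha-1} + \lambda_1 x + \lambda_2 (x^2 - \sigma^2) = 0
    \;\Longrightarrow\;
    \mu(x) = \bigl(A + B\,x + C\,x^2\bigr)^{-1/(\alpha+1)}   % constants absorb \lambda_1, \lambda_2, \alpha

    % Symmetry of the problem under x -> -x forces B = 0; since \rho(x) \propto \mu(x),
    % this is the Student form \rho_S(x) = (a + b\,x^2)^{-\nu} with \nu = 1/(\alpha+1).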
