SLIDE 1

Projection-based Chemometrics and Deep Reconstruction

  • Dr. Uwe Kruger

Department of Biomedical Engineering Jonsson Engineering Center Rensselaer Polytechnic Institute

SLIDE 2
  • Dr. Uwe Kruger

Projection-Based Data Chemometrics and Deep Reconstruction, Troy, November 19, 2017

Presentation Outline

  • Motivation for kernel-based methods (kernel density

estimation)

  • Principal Component Analysis (PCA) and Kernel

principal component analysis (KPCA)

  • Partial Least Squares (PLS) and Kernel partial least

squares (KPLS)

  • Some ideas on how to integrate nonlinear projection-

based methods for network pruning and detecting/diagnosing anomalies.

SLIDE 3

Motivation for Kernel-Based Methods

  • Let’s examine a very simple approach to motivate Cover’s theorem and

the idea behind reproducing kernels:

  • How can we estimate the cumulative distribution function of a random

variable X using a set of n observations drawn from the distribution of X?

  • Let’s try the following naïve estimator:

\hat{F}(x) = \frac{\#\{x_i \le x\}}{n} = \frac{S(x)}{n}
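As a quick numerical illustration (a minimal Python sketch; the function name F_hat is ours, not from the slides), the naive estimator is just the fraction of observations at or below x:

```python
import numpy as np

def F_hat(x, sample):
    """Naive CDF estimator: the fraction of observations at or below x."""
    sample = np.asarray(sample)
    return np.count_nonzero(sample <= x) / sample.size

# n draws from a standard normal; the true value F(0) is 0.5.
rng = np.random.default_rng(0)
sample = rng.standard_normal(10_000)
print(F_hat(0.0, sample))   # close to 0.5
```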

SLIDE 4

Motivation for Kernel-Based Methods

  • OK, the n observations, if assumed to be drawn independently, can be

used to formulate a total of n Bernoulli trials (like flipping a coin)

  • two outcomes, the value can be larger or smaller than x ;
  • the probability to be smaller than x (success) is equal to the

cumulative probability distribution function for x, i.e. F(x); and

  • for the ith draw (drawing the ith value of the random variable X), the

probability that x_i is smaller than or equal to x is F(x) for 1 ≤ i ≤ n.

  • Under these assumptions, S(x) has a binomial distribution with n

trials and success probability F(x):

S(x) \sim B\big(n, p\big), \quad p = F(x), \qquad f(x') = \binom{n}{x'}\, p^{x'} (1-p)^{n-x'}

E[S(x)] = np = nF(x), \qquad V[S(x)] = np\,(1-p) = nF(x)\,\big(1 - F(x)\big)

SLIDE 5

Motivation for Kernel-Based Methods

  • OK, this implies that the naïve estimator is unbiased:
  • This follows from simple asymptotics!
  • We can develop this one step further by utilizing the fact that the

binomial distribution can be approximated by a normal distribution with a reasonable degree of accuracy, provided the sample size is large enough: np > 5 and n(1 − p) > 5!

E\big[\hat{F}(x)\big] = E\!\left[\frac{S(x)}{n}\right] = \frac{nF(x)}{n} = F(x)

V\big[\hat{F}(x)\big] = V\!\left[\frac{S(x)}{n}\right] = \frac{nF(x)\big(1-F(x)\big)}{n^2} = \frac{F(x)\big(1-F(x)\big)}{n}

\lim_{n\to\infty} E\big[\hat{F}(x)\big] = F(x), \qquad \lim_{n\to\infty} V\big[\hat{F}(x)\big] = 0

SLIDE 6

Motivation for Kernel-Based Methods

  • Let’s define a new random variable first:
  • The above confidence interval is computed for a significance level of

α = 0.05!

  • OK, let’s move on and convert this into an integral equation, one

second…

Z(x) = \frac{S(x) - nF(x)}{\sqrt{nF(x)\big(1 - F(x)\big)}} \sim N(0, 1)

-1.96 \;\le\; \frac{\#\{x_i \le x\} - nF(x)}{\sqrt{nF(x)\big(1-F(x)\big)}} \;\le\; 1.96

nF(x) - 1.96\sqrt{nF(x)\big(1-F(x)\big)} \;\le\; \#\{x_i \le x\} \;\le\; nF(x) + 1.96\sqrt{nF(x)\big(1-F(x)\big)}
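The binomial model and its 95% band can be checked empirically (an illustrative Python sketch with arbitrary values n = 500 and F(x) = 0.3, not taken from the slides):

```python
import numpy as np

# About 95% of realizations of the count S(x) ~ B(n, F(x)) should fall
# inside n F(x) ± 1.96 sqrt(n F(x) (1 - F(x))).
rng = np.random.default_rng(1)
n, F_x = 500, 0.3                                 # illustrative values
counts = rng.binomial(n, F_x, size=20_000)        # many realizations of S(x)
half = 1.96 * np.sqrt(n * F_x * (1 - F_x))
coverage = np.mean(np.abs(counts - n * F_x) <= half)
print(coverage)   # close to 0.95
```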

SLIDE 7

Motivation for Kernel-Based Methods

Dividing the confidence interval by n and writing the count as an integral:

F(x) - \frac{1.96}{n}\sqrt{nF(x)\big(1-F(x)\big)} \;\le\; \frac{\#\{x_i \le x\}}{n} \;\le\; F(x) + \frac{1.96}{n}\sqrt{nF(x)\big(1-F(x)\big)}

\#\{x_i \le x\} \;=\; \sum_{i=1}^{n} \int_{-\infty}^{x} \delta(x' - x_i)\,\mathrm{d}x' \;\approx\; \sum_{i=1}^{n} \int_{-\infty}^{x} K(x' - x_i)\,\mathrm{d}x' \quad \text{(}K\text{: a slightly less ``spiky'' Dirac delta function)}

\frac{1}{n}\sum_{i=1}^{n} \int_{-\infty}^{x} K(x' - x_i)\,\mathrm{d}x' \;\approx\; \int_{-\infty}^{x} f(x')\,\mathrm{d}x' \;\pm\; \frac{1.96}{\sqrt{n}}\sqrt{F(x)\big(1-F(x)\big)}

SLIDE 8

Motivation for Kernel-Based Methods

  • So what have we got?
  • All we said about the slightly less spiky Dirac delta function is that its

integral must be equal to one, so how about defining it as follows:

\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\int_{-\infty}^{x} K(x'-x_i)\,\mathrm{d}x' \;=\; \int_{-\infty}^{x} f(x')\,\mathrm{d}x' \;\;\Rightarrow\;\; \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n} K(x-x_i) \;=\; f(x)

K(x-x_i) \;=\; \lim_{\sigma\to 0}\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x-x_i)^2}{2\sigma^2}} \;=\; \delta(x-x_i)

SLIDE 9

Kernel Density Estimation

  • The function K(x − x_i) is referred to as a kernel function, and the

derivation shows that, asymptotically, the estimate converges to the true probability density function for any value of x. This estimator is defined as a kernel density estimator:

  • Along the same lines, we can also develop nonlinear

counterparts of data-driven chemometric modeling techniques, such as principal component analysis (PCA) and partial least squares (PLS).

  • Essentially, an artificial neural network can be seen as a kernel-based nonlinear

modeling technique, i.e. the neurons are, effectively, small kernels.

  • Let’s start with PCA first, after some more discussions on kernels.

\hat{f}(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} K(x - x_i)

SLIDE 10

Kernel Density Estimation

  • Theoretically, kernel functions other than the Gaussian kernel:

can be considered if their area is equal to one; these include the Epanechnikov, the triangular and the uniform kernel, among others.

  • Theoretically, the derivation showed that the shape of the kernel

function does not influence the estimate in an asymptotic sense.

  • Practically, however, the shape of the kernel function does influence

the accuracy of the estimate. This yields the following general form of the kernel density estimator:

K(x - x_i) \;=\; \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(x - x_i)^2}

\hat{f}(x) \;=\; \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right), \qquad K\!\left(\frac{x - x_i}{h}\right) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x - x_i}{h}\right)^2}, \qquad h:\ \text{bandwidth}
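The bandwidth form of the estimator can be sketched in Python (a minimal sketch; the bandwidth choice below is Silverman's rule of thumb, an assumption not made on the slide):

```python
import numpy as np

def kde(x, sample, h):
    """Gaussian kernel density estimate f_hat(x) = (1/(n h)) sum K((x - x_i)/h)."""
    u = (np.asarray(x)[..., None] - sample) / h
    return np.exp(-0.5 * u**2).sum(axis=-1) / (np.sqrt(2 * np.pi) * h * sample.size)

rng = np.random.default_rng(0)
sample = rng.standard_normal(5_000)
h = 1.06 * sample.std() * sample.size ** (-1 / 5)   # Silverman's rule of thumb
print(kde(0.0, sample, h))   # near the true density 1/sqrt(2*pi) ~ 0.3989
```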

SLIDE 11

Kernel Principal Component Analysis - Introduction

  • Kernel PCA is a generic nonlinear extension to linear PCA (Kruger et

al., 2008).

  • Let’s look at some basics before we go into the kernel stuff.

→ singular value decomposition

  • Next, let’s define the following two matrices:

z = A\,s, \qquad A = U L P^T \;\text{(singular value decomposition)}

Z = \begin{bmatrix} z_1 & z_2 & \cdots & z_n \end{bmatrix}^T, \qquad S = \begin{bmatrix} s_1 & s_2 & \cdots & s_n \end{bmatrix}^T

\Sigma_z = \tfrac{1}{n-1}\,Z^T Z = P\,L_z^2\,P^T \;\text{(data covariance matrix and its eigendecomposition)}

\Phi_z = Z\,Z^T = U\,L_z^2\,U^T \;\text{(Gram matrix and its eigendecomposition)}

SLIDE 12

Kernel Principal Component Analysis - Introduction

  • Let’s see how we can determine the unknown source variables (up to a

similarity transformation) – which are the principal components:

  • Let’s make the relationship between the source variables and the

measured variables nonlinear, i.e.: which we assume to be bijective!

\Phi_z\,U = Z Z^T U = U L^2 \;\;\Rightarrow\;\; P = Z^T U L^{-1}, \qquad t = P^T z = L^{-1} U^T Z\,z

z = \theta(s), \qquad s = f\big(\psi(z)\big), \qquad t = P^T \psi(z), \qquad F = \begin{bmatrix} \psi(z_1) & \psi(z_2) & \cdots & \psi(z_n) \end{bmatrix}^T

SLIDE 13

Kernel Principal Component Analysis - Introduction

  • Let’s define the Gram matrix

\Phi_z = F F^T = \begin{bmatrix} \psi(z_1)^T\psi(z_1) & \psi(z_1)^T\psi(z_2) & \cdots & \psi(z_1)^T\psi(z_n) \\ \psi(z_2)^T\psi(z_1) & \psi(z_2)^T\psi(z_2) & \cdots & \psi(z_2)^T\psi(z_n) \\ \vdots & \vdots & \ddots & \vdots \\ \psi(z_n)^T\psi(z_1) & \psi(z_n)^T\psi(z_2) & \cdots & \psi(z_n)^T\psi(z_n) \end{bmatrix} = \begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1n} \\ k_{21} & k_{22} & \cdots & k_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ k_{n1} & k_{n2} & \cdots & k_{nn} \end{bmatrix} = K(Z, Z) \;\text{(the kernel matrix)}

Incorporating mean centering:

\Phi_z = \left(I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^T\right) K(Z,Z) \left(I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^T\right)

SLIDE 14

Kernel Principal Component Analysis - Introduction

  • Let’s repeat the “trick” we used when estimating the probability

density function with the kernel density estimator:

k_{ij} = \psi(z_i)^T \psi(z_j) = K(z_i, z_j) = e^{-\frac{(z_i - z_j)^T (z_i - z_j)}{2\sigma^2}} = e^{-\frac{z_i^T z_i}{2\sigma^2}}\; e^{\frac{z_i^T z_j}{\sigma^2}}\; e^{-\frac{z_j^T z_j}{2\sigma^2}}

SLIDE 15

Kernel Principal Component Analysis - Introduction

  • Let’s finalize the definition of the Gram matrix:
  • Next, we carry out the eigendecomposition of the Gram matrix:
  • In a similar fashion to PCA, we can now determine the principal

components:

\Phi_z = K(Z,Z) - \tfrac{1}{n}\,\mathbf{1}\mathbf{1}^T K(Z,Z) - \tfrac{1}{n}\, K(Z,Z)\,\mathbf{1}\mathbf{1}^T + \tfrac{1}{n^2}\,\mathbf{1}\mathbf{1}^T K(Z,Z)\,\mathbf{1}\mathbf{1}^T

\Phi_z = U L U^T

t = L^{-1} U^T \left(I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^T\right) F\,\psi(z) = A^T k(Z, z), \qquad k(Z, z) = F\,\psi(z) = \begin{bmatrix} K(z_1, z) \\ \vdots \\ K(z_n, z) \end{bmatrix}

— like a neural network, this is a weighted sum of basis functions K(z_i, z).
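The steps above – kernel matrix, mean centering, eigendecomposition, scores – can be sketched in Python (a minimal sketch with a Gaussian kernel; the function name and test data are illustrative):

```python
import numpy as np

def kpca_scores(Z, sigma, n_comp):
    """Kernel PCA scores via the centred Gram matrix (Gaussian kernel)."""
    n = Z.shape[0]
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise ||z_i - z_j||^2
    K = np.exp(-sq / (2 * sigma**2))                      # kernel (Gram) matrix
    C = np.eye(n) - np.ones((n, n)) / n                   # centring matrix
    Kc = C @ K @ C                                        # mean-centred Gram matrix
    lam, U = np.linalg.eigh(Kc)                           # eigendecomposition
    lam, U = lam[::-1], U[:, ::-1]                        # descending eigenvalue order
    # training scores T = U_d L_d (one row of scores per observation)
    return U[:, :n_comp] * np.sqrt(np.maximum(lam[:n_comp], 0.0))

rng = np.random.default_rng(0)
Z = rng.standard_normal((50, 3))
T = kpca_scores(Z, sigma=2.0, n_comp=2)
print(T.shape)   # (50, 2)
```

Because the Gram matrix is centred, each score column sums to (numerically) zero.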
SLIDE 16

Kernel Principal Component Analysis - Introduction

  • Asymptotically, n → ∞, the shape of the kernel function is not

important.

  • Theoretically, and this follows from the properties of reproducing

kernels, any function can be constructed in the feature space that maps the nonlinear surface in the data space to become a plane (subspace) in the feature space.

  • The projection in the feature space then yields linear principal

components in the feature space that are related to the source variables in the original variable space – connected through the following mappings:

z = \theta(s), \qquad s = f\big(\psi(z)\big), \qquad t = A^T k(Z, z), \qquad \tilde z = \theta(t)

SLIDE 17

Partial Least Squares – Introduction

  • Let’s examine the geometric framework that underpins the partial least

squares concept: orthogonally projecting the data points onto directions.

For the predictor space: \cos\beta = \frac{y^T x}{\|y\|\,\|x\|}; with \|x\| = 1 we get \|y\|\cos\beta = y^T x = u,

and for the response space: \|z\|\cos\gamma = z^T w = v \;\text{ if } \|w\| = 1.

SLIDE 18

Partial Least Squares – Introduction

  • Let’s examine the random vectors – these can be of a considerable

dimension – y and z describing the predictor and response sets, which are related as follows: z = C\,y + f, with f being a random vector describing uncertainty.

  • We could use ordinary least squares to determine the parameter matrix C:

C = S_{ZY}\,S_{YY}^{-1}

  • The problem is that if y has a very large dimension, the inverse of the

covariance matrix S_{YY} may not exist or is badly conditioned!

  • Here is where PLS comes in! Using the projections we discussed before:

u = y^T x \quad\text{and}\quad v = z^T w

  • Now, we select the random variables u and v such that they maximize

their covariance!

SLIDE 19

Partial Least Squares – Introduction

  • This yields the following objective function:

J = E[u\,v] - \mu_1\left(x^T x - 1\right) - \mu_2\left(w^T w - 1\right)

J = x^T E\!\left[y\,z^T\right] w - \mu_1\left(x^T x - 1\right) - \mu_2\left(w^T w - 1\right)

J = x^T S_{YZ}\,w - \mu_1\left(x^T x - 1\right) - \mu_2\left(w^T w - 1\right)

\frac{\partial J}{\partial x} = S_{YZ}\,w - 2\mu_1 x = \mathbf{0}, \qquad \frac{\partial J}{\partial w} = S_{ZY}\,x - 2\mu_2 w = \mathbf{0}

S_{YZ} S_{ZY}\,x = 4\mu_1\mu_2\,x, \qquad S_{ZY} S_{YZ}\,w = 4\mu_1\mu_2\,w

  • So, we now have the direction vectors x and w in both spaces!
  • That also means that we have the random variables u and v!
  • Whilst y predicts z, PLS utilizes u – instead of y – to predict z!

SLIDE 20

Partial Least Squares – Introduction

  • With u = y^T x and v = z^T w, we get:

g = y - u\,q and f = z - u\,r – these being the residual vectors for y and z, respectively.

  • The parameter vectors q and r can be obtained by solving two least

squares regression problems – minimizing the length of the residual vectors:

q = \frac{E[u\,y]}{E[u^2]}, \qquad r = \frac{E[u\,z]}{E[u^2]}

  • After that, the PLS algorithm can be repeated using the residual

vectors g and f instead of the original random vectors y and z.

  • This gives rise to the following iterative algorithm, which is referred to

as the standard PLS algorithm and detailed on the next slide. This algorithm was first published by Herman Wold in 1966.

SLIDE 21

Partial Least Squares – Algorithm

  • Let X and Y be two data matrices that store a total of n data points drawn from the

random vectors X and Y, respectively:

  • The first step is to normalize the data matrices, i.e. the observations in each

column are mean centered and scaled to have a unit variance:

X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1N} \\ x_{21} & x_{22} & \cdots & x_{2N} \\ \vdots & \vdots & & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nN} \end{bmatrix}, \qquad Y = \begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1N} \\ y_{21} & y_{22} & \cdots & y_{2N} \\ \vdots & \vdots & & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nN} \end{bmatrix}

Sample mean vectors: \bar x = \tfrac{1}{n}\,X^T \mathbf{1}, \qquad \bar y = \tfrac{1}{n}\,Y^T \mathbf{1}

Sample variance vectors: \sigma_X^2 = \tfrac{1}{n-1}\,\mathrm{diag}\!\left[(X - \mathbf{1}\bar x^T)^T (X - \mathbf{1}\bar x^T)\right], \qquad \sigma_Y^2 = \tfrac{1}{n-1}\,\mathrm{diag}\!\left[(Y - \mathbf{1}\bar y^T)^T (Y - \mathbf{1}\bar y^T)\right]

Normalizing both matrices: X := (X - \mathbf{1}\bar x^T)\,\mathrm{diag}(\sigma_X)^{-1}, \qquad Y := (Y - \mathbf{1}\bar y^T)\,\mathrm{diag}(\sigma_Y)^{-1}
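The normalization step can be sketched as (an illustrative Python snippet; ddof=1 gives the sample standard deviation):

```python
import numpy as np

def normalize(X):
    """Mean-centre each column and scale it to unit sample variance."""
    xbar = X.mean(axis=0)
    s = X.std(axis=0, ddof=1)        # sample standard deviation per column
    return (X - xbar) / s

rng = np.random.default_rng(0)
X = rng.normal(5.0, 3.0, size=(100, 4))
X0 = normalize(X)
print(X0.mean(axis=0).round(12))     # all (numerically) zero
```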

SLIDE 22

Partial Least Squares – Algorithm

  • Next, define the covariance and cross-covariance matrices:
  • Set up the PLS iteration and determine the regression vectors:

S_{XX} = \tfrac{1}{n-1}\,X^T X =: M_{XX}^{(1)} \;\text{(sample covariance matrix)}, \qquad S_{XY} = \tfrac{1}{n-1}\,X^T Y =: M_{XY}^{(1)} \;\text{(sample cross-covariance matrix)}

for i := 1, 2, …, m
    w ← first column of M_XY^(i),  w ← w / ‖w‖
    while e > 10^(−10)
        v ← M_XY^(i) M_YX^(i) w;  v ← v / ‖v‖;  e ← ‖v − w‖;  w ← v
    end
    p_i ← M_XX^(i) w / (wᵀ M_XX^(i) w);   q_i ← M_YX^(i) w / (wᵀ M_XX^(i) w)
    M_XX^(i+1) ← M_XX^(i) − p_i wᵀ M_XX^(i);   M_XY^(i+1) ← M_XY^(i) − p_i wᵀ M_XY^(i)
    W(:, i) ← w;  P(:, i) ← p_i;  Q(:, i) ← q_i
end
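A minimal Python sketch of the covariance-based iteration (the dominant eigenvector of S_XY S_YX is obtained here with an eigendecomposition instead of the slide's power iteration; all names and the sanity-check data are illustrative):

```python
import numpy as np

def pls_cov(X, Y, n_comp):
    """Standard PLS on the (cross-)covariance matrices.

    Returns weight, loading and response-loading matrices W, P, Q; the
    regression matrix is then B = W (P^T W)^{-1} Q^T.
    """
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1)          # sample covariance matrix
    Sxy = X.T @ Y / (n - 1)          # sample cross-covariance matrix
    W, P, Q = [], [], []
    for _ in range(n_comp):
        # dominant eigenvector of S_XY S_YX (the slide uses a power iteration)
        lam, V = np.linalg.eigh(Sxy @ Sxy.T)
        w = V[:, -1]                 # unit-length weight vector
        denom = w @ Sxx @ w
        p = Sxx @ w / denom          # predictor loading vector
        q = Sxy.T @ w / denom        # response loading vector
        # deflate both covariance matrices
        Sxx, Sxy = Sxx - np.outer(p, Sxx @ w), Sxy - np.outer(p, Sxy.T @ w)
        W.append(w); P.append(p); Q.append(q)
    return np.column_stack(W), np.column_stack(P), np.column_stack(Q)

# Noise-free sanity check: with all components retained, PLS reduces to
# ordinary least squares and recovers the true regression matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
X -= X.mean(axis=0)
B_true = np.array([[1.0, 0.0, 0.5], [0.5, 2.0, 0.0], [0.0, -1.0, 1.0]])
Y = X @ B_true
W, P, Q = pls_cov(X, Y, 3)
B = W @ np.linalg.inv(P.T @ W) @ Q.T
print(np.round(B, 6))
```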

SLIDE 23

Kernel Partial Least Squares – Algorithm

  • The regression matrix can now be estimated as follows:
  • To establish a nonlinear extension of the standard PLS algorithm, let’s

look at the standard algorithm again:

  • We can “kernelize” the above Gram matrix by using the following

nonlinear transformation involving the random vectors X, Y and E:

  • Based on the data matrix X, we get:

B = W\,(P^T W)^{-1}\,Q^T

\lambda\,v = Y^T\,X X^T\,Y\,v \qquad\text{— } X X^T \text{ is a Gram matrix!}

y = B^T \psi(x) + e \qquad\text{(random-vector form)}

G = \psi(X), \qquad Y = G\,B + E

SLIDE 24

Kernel Partial Least Squares – Algorithm

  • Using the Gram matrix based on ψ(X), we can compute the projection

vector v as follows:

  • Once we have v, it is easy to compute the vector u:
  • The next step is to compute the vector t. For linear PLS, we can derive

the following relationship:

\lambda\,v = Y^T \left(I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^T\right) G\,G^T \left(I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^T\right) Y\,v \qquad\text{— } \left(I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^T\right) G G^T \left(I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^T\right) \text{ is the Gram matrix } \Phi(X, X) \text{ for } \psi(X)

u = Y\,v

t = X\,w, \quad w \propto X^T Y\,v \;\;\Rightarrow\;\; t \propto X X^T\,Y\,v = X X^T u

SLIDE 25

Kernel Partial Least Squares – Algorithm

  • Instead of the linear Gram matrix X₀X₀ᵀ, we can also use the nonlinear

Gram matrix Φ(X₀, X₀), which gives rise to:

  • To address the scaling problem, as we cannot compute the projection

vector w, we can scale the vector t to unit length:

  • Now, we can deflate the Gram matrix, G₀G₀ᵀ = Φ(X₀, X₀):

t = \Phi(X_0, X_0)\,u, \qquad t := \frac{t}{\|t\|}

p_i = G_i^T\,t_i, \qquad G_{i+1} = G_i - t_i\,p_i^T = \left(I - t_i t_i^T\right) G_i, \qquad G_{i+1} G_{i+1}^T = \left(I - t_i t_i^T\right) G_i G_i^T \left(I - t_i t_i^T\right)

SLIDE 26

Kernel Partial Least Squares – Algorithm

and the response matrix Y₀:

  • The last step is to compute the regression matrix. Again, let’s look

again at the linear PLS algorithm first:

  • Using this regression matrix for predicting a new observation yields:

q_i = Y_i^T\,t_i, \qquad Y_{i+1} = Y_i - t_i\,q_i^T = \left(I - t_i t_i^T\right) Y_i

B = W\,(P^T W)^{-1}\,Q^T, \qquad W = X^T U, \quad P = X^T T\,(T^T T)^{-1}, \quad Q = Y^T T\,(T^T T)^{-1}

\Rightarrow\; B = X^T U \left(T^T X X^T U\right)^{-1} T^T Y

\hat y^T = x^T B = x^T X^T U \left(T^T X X^T U\right)^{-1} T^T Y

SLIDE 27

Kernel Partial Least Squares – Algorithm

  • Finally, replacing the linear Gram matrix and vector by their nonlinear

counterparts gives rise to:

  • Let’s put this all together and define the KPLS algorithm.
  • Besides the construction of the nonlinear transformation, and with it its

Gram matrix, the rest of the algorithm is related to the linear PLS algorithm.

  • Compared to artificial neural networks, which have many network

weights, the “only” parameter that needs to be specified is the kernel parameter. The remaining parameters are obtained from a linear regression problem and solved using the robust PLS algorithm!

\hat y^T = \psi(x)^T B = \psi(x)^T\,\psi(X)^T\,U \left(T^T \psi(X)\,\psi(X)^T\,U\right)^{-1} T^T Y
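Putting the KPLS slides together, a minimal Python sketch with a Gaussian kernel (the fit/predict split, function names and test data are illustrative assumptions; the inner loop follows the unit-length-t and deflation steps described above):

```python
import numpy as np

def gram(A, B, sigma):
    """Gaussian kernel matrix K_ij = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def kpls(X, Y, sigma, n_comp):
    """Kernel PLS: t = Phi u, t scaled to unit length, then deflation."""
    n = X.shape[0]
    C = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    K = C @ gram(X, X, sigma) @ C                # centred Gram matrix
    Kd, Yd = K.copy(), Y.copy()
    T, U = [], []
    for _ in range(n_comp):
        u = Yd[:, [0]] / np.linalg.norm(Yd[:, 0])
        for _ in range(100):                     # NIPALS-style inner loop
            t = Kd @ u
            t /= np.linalg.norm(t)               # scale t to unit length
            u_new = Yd @ (Yd.T @ t)
            u_new /= np.linalg.norm(u_new)
            if np.linalg.norm(u_new - u) < 1e-12:
                break
            u = u_new
        t = Kd @ u
        t /= np.linalg.norm(t)
        D = np.eye(n) - t @ t.T
        Kd, Yd = D @ Kd @ D, D @ Yd              # deflate Gram and response
        T.append(t); U.append(u)
    return K, np.hstack(T), np.hstack(U)

def kpls_predict(Xnew, X, Y, K, T, U, sigma):
    """y_hat = k(x)^T U (T^T K U)^{-1} T^T Y with a centred test kernel."""
    n = X.shape[0]
    C = np.eye(n) - np.ones((n, n)) / n
    Kt = (gram(Xnew, X, sigma) - np.ones((Xnew.shape[0], n)) / n @ gram(X, X, sigma)) @ C
    return Kt @ U @ np.linalg.inv(T.T @ K @ U) @ T.T @ Y

# Fit a smooth nonlinear function and check the training reconstruction.
X = np.linspace(-3, 3, 60)[:, None]
y = np.sin(X)
y0 = y - y.mean(axis=0)                          # centred response
K, T, U = kpls(X, y0, sigma=1.0, n_comp=8)
yhat = kpls_predict(X, X, y0, K, T, U, sigma=1.0) + y.mean(axis=0)
print(float(np.mean((yhat - y) ** 2)))           # small training error
```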

SLIDE 28

How could chemometric techniques assist in dealing with very large network architectures?

  • Disclaimer: KPLS is not a substitute to deep learning architectures!
  • KPLS does run into problems if the number of data points increases,

say beyond 10,000 (remember, the size of the Gram matrix is equal to the number of data points squared).

  • A KPLS model has the potential to outperform competitive artificial

neural network models when the number of variables x or y is large and/or the number of data points is small.

  • To see how KPCA and KPLS could be useful tools, let’s examine the

structure of large network topologies on the next slide in more detail.

SLIDE 29

How could chemometric techniques assist in dealing with very large network architectures?

  • Starting from a “small” (trained) network:
  • 1. how do we know whether a neuron is important or could be discarded because it contributes negligibly to

the accuracy of the network prediction – e.g. for specific tasks (a set of lung images)?

  • 2. how can we statistically examine significant differences in the individual layers/layer

combinations if we have two sets of images (one set that is labeled normal, whilst the other set is labeled as containing anomalies)?

SLIDE 30

Abnormality Detection (Basic Multivariate Approach)

  • Hotelling’s T² Statistic
  • Q Statistic

[Figure: two data points and the measurement error shown relative to the first and second components]
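For a PCA model with loading matrix P and retained eigenvalues λ_k, the two statistics are T² = Σ t_k²/λ_k on the scores (distance within the model plane) and Q = ‖z − P t‖² (the squared residual off the plane). A minimal Python sketch (data and names are illustrative):

```python
import numpy as np

def t2_q(X_train, x_new, n_comp):
    """Hotelling's T^2 and Q (squared prediction error) from a PCA model."""
    mu = X_train.mean(axis=0)
    X0 = X_train - mu
    S = X0.T @ X0 / (X0.shape[0] - 1)                     # covariance matrix
    lam, P = np.linalg.eigh(S)
    lam, P = lam[::-1][:n_comp], P[:, ::-1][:, :n_comp]   # retained subspace
    z = x_new - mu
    t = P.T @ z                                           # scores
    T2 = float(np.sum(t**2 / lam))                        # within the model plane
    e = z - P @ t                                         # residual off the plane
    return T2, float(e @ e)

# A 2-variable data set that lies close to a single principal direction:
rng = np.random.default_rng(0)
x1 = rng.standard_normal(500)
X = np.column_stack([x1, x1 + 0.05 * rng.standard_normal(500)])
T2_in, Q_in = t2_q(X, np.array([1.0, 1.0]), n_comp=1)     # on the model plane
T2_out, Q_out = t2_q(X, np.array([1.0, -1.0]), n_comp=1)  # off the model plane
print(Q_in, Q_out)   # the off-plane point has a far larger Q
```

This separation is what the fault-detection slides below exploit: T² flags unusual variation inside the model plane, Q flags departures from the correlation structure itself.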
SLIDE 31

Application to Fault Diagnosis (Internal Combustion Engine)

  • Analysis of data from a Volkswagen 1.9L TDI diesel engine.
  • Various fault conditions were recorded and diagnosed.

[Figure: engine test-bed schematic – dynamometer, compressor (turbocharger), turbine, plenum chamber, inlet and exhaust manifolds; recorded signals include inlet manifold pressure/temperature and turbine inlet/exit pressure and temperature; Fault 1: injector pump fuel meter (sensor), Fault 2: intercooler blockage (process)]

SLIDE 32

Application to Fault Diagnosis (Internal Combustion Engine)

Variables analysed:

  No  Engine Variable             Unit  Note
  1   Fuel Flow                   kg/h  Output
  2   Air Flow                    kg/h
  3   Inlet Manifold Pressure     bar
  4   Inlet Manifold Temperature  °C
  5   Turbine Inlet Pressure      bar
  6   Turbine Inlet Temperature   °C

Modelling results:

  Principal Component  Variance Captured (%)  Variance Total (%)
  1                    79.5998                79.5998
  2                    16.4492                96.0490
  3                    2.4169                 98.4659
  4                    1.0745                 99.5404
  5                    0.4010                 99.9414
  6                    0.0586                 100.000

  Number of Bottleneck Nodes  Variance Captured (%)  Note
  1                           97.8160                Important variation
  2                           99.4212
  3                           99.8336
  4                           99.8725                Negligible
  5                           99.9401
  6                           99.9414

Operating points (Pedal Position at each RPM):

  RPM:  1500  2500  3500  4500
        30%   49%   57%   62%
        40%   59%   64%   65%
        54%   74%   74%   76%
        62%   78%   80%   83%
        100%  100%  100%  100%

SLIDE 33

Application to Fault Diagnosis (Internal Combustion Engine)

Air leak of 2 mm in the manifold plenum chamber

SLIDE 34

Application to Fault Diagnosis (Internal Combustion Engine)

  • An incipient hole in the air intake system

could be successfully detected.

  • However, a detailed diagnosis as to which

recorded engine variable is affected by this event could not be obtained.

  • Traditional techniques fail to detect or

diagnose this event.

  • Model-based fault detection and

diagnosis is expensive, whilst data-driven techniques are a cost-effective, viable alternative.

SLIDE 35

Application to Fault Diagnosis (Internal Combustion Engine)

Air leak of 6 mm in the manifold plenum chamber

SLIDE 36

Application to Fault Diagnosis (Internal Combustion Engine)

(i) The fault could clearly be detected; (ii) the diagnosis provides the engine management system with sufficient information to trace this event to an air leak.

SLIDE 37

Application to Chemistry (Raman Spectroscopy) – Variable Selection

  • A bond will absorb radiation of a frequency similar to its vibration(s)
  • normal vibration
  • vibration having absorbed energy

SLIDE 38

Application to Chemistry (Raman Spectroscopy)

Variable Selection