OPTIMUM ADAPTIVE ALGORITHMS
for
SYSTEM IDENTIFICATION

George V. Moustakides
Dept. of Computer Engineering & Informatics
University of Patras, GREECE

Definition of the problem
[Diagram: a Physical System driven by the input xn produces rn; measurement noise wn is added to give the observed output yn. A Model with coefficients h0, h1, ..., hN-1, driven by the same input, produces r'n.]

Common model, transversal filter:
  r'n = h0 xn + h1 xn-1 + ... + hN-1 xn-N+1

Application examples:
[Diagram: echo cancellation between Room-1 and Room-2, with signals xn, yn and noise wn.]
[Diagram: multipath channel with input xn, output yn and noise wn; a Parametric Model produces the error en.]
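The transversal-filter output above can be sketched in a few lines of Python; this is a minimal illustration (the function name is ours, and samples of x before index 0 are taken as zero by assumption):

```python
def transversal_output(h, x, n):
    """r'_n = h0*x_n + h1*x_{n-1} + ... + h_{N-1}*x_{n-N+1}.

    h : filter coefficients [h0, ..., h_{N-1}]
    x : input sequence, x[k] = x_k; samples before index 0 are taken as zero
    n : time index at which to evaluate the output
    """
    return sum(h[i] * (x[n - i] if n - i >= 0 else 0.0) for i in range(len(h)))
```

For example, with h = [1.0, 0.5] and x = [1, 2, 3], the output at n = 2 is 1*3 + 0.5*2 = 4.0.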
We are given sequentially two sets of data:
  xn: input sequence
  yn: measured sequence
We would like to express yn as yn ≈ h0 xn + h1 xn-1 + ... + hN-1 xn-N+1 and identify the filter coefficients hi adaptively using algorithms of the form

  en = yn - Hn-1^T Xn
  Hn = Hn-1 + μ Zn en

where Hn = [h0 h1 ... hN-1]^T, Xn = [xn xn-1 ... xn-N+1]^T, and Zn is the regression vector, a function of the input data xn, xn-1, ...

Well known algorithms in the class:
  LMS:  Zn = Xn
  RLS:  Zn = Qn^-1 Xn, with Qn = (1 - μ) Qn-1 + μ Xn Xn^T (exponentially windowed sample covariance matrix)
  FNTF: Zn = Q̃n^-1 Xn, where Q̃n estimates the covariance of the input sequence assuming that it is AR(M).
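The LMS member of this class (Zn = Xn) can be sketched as follows; a minimal simulation, where the function name and the zero initialization of H are our assumptions:

```python
import numpy as np

def lms_identify(x, y, N, mu):
    """Adaptive identification with regression vector Zn = Xn (LMS):
    en = yn - Hn-1^T Xn,  Hn = Hn-1 + mu * Xn * en."""
    H = np.zeros(N)                           # assumed zero initialization
    x_pad = np.concatenate([np.zeros(N - 1), x])
    for n in range(len(y)):
        X = x_pad[n:n + N][::-1]              # Xn = [xn, xn-1, ..., xn-N+1]
        e = y[n] - H @ X
        H = H + mu * X * e
    return H
```

RLS differs only in the regression vector: Zn = Qn^-1 Xn instead of Zn = Xn, at the price of maintaining Qn.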
The class extends to p regression vectors Zn,i, i = 1, ..., p, each a vector function of the input data xn, xn-1, ...:

  en,i = yn-i+1 - Xn-i+1^T Hn-1,  i = 1, ..., p
  Hn = Hn-1 + μ Σi=1..p Zn,i en,i

Well known algorithms in the class:
  SWRLS: Zn,i = Qn^-1 Xn-i+1, with Qn = Xn Xn^T + Xn-1 Xn-1^T + ... + Xn-p+1 Xn-p+1^T (sliding window sample covariance matrix)
  UDRLS: …
By selecting different regression vectors Zn,i we obtain different adaptive algorithms. We need to compare them in order to select the optimum!

For simplicity we assume an EXACT MODEL:
  yn = H*^T Xn + wn
with wn white noise.
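Under the exact-model assumption, simulation data can be generated as follows; a sketch in which the Gaussian choice for xn and wn and the function name are our assumptions:

```python
import numpy as np

def generate_exact_model(h_star, n_samples, noise_std, seed=0):
    """Generate (xn, yn) from the exact model yn = H*^T Xn + wn,
    with xn and wn white (Gaussian here, by assumption)."""
    rng = np.random.default_rng(seed)
    h_star = np.asarray(h_star)
    N = len(h_star)
    x = rng.standard_normal(n_samples)
    w = noise_std * rng.standard_normal(n_samples)
    x_pad = np.concatenate([np.zeros(N - 1), x])
    y = np.array([h_star @ x_pad[n:n + N][::-1] for n in range(n_samples)]) + w
    return x, y
```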
Classically, the transient phase refers to stationary data. Let us assume that the noise wn has power 20 dB and that we have two competing algorithms A1 and A2. We observe their prediction errors en:
[Figure: Prediction Error Power (dB) versus Number of Iterations (0 to 5000) for algorithms A1 and A2.]

[Figure: Excess Error Power (dB) versus Number of Iterations (0 to 5000) for algorithms A1 and A2.]
To fairly compare adaptive algorithms we require that the algorithms under comparison have the same steady-state excess error power (Excess Mean Square Error, EMSE). The algorithm that then converges faster is "best".

Can we select the step size μ analytically in order to achieve a predefined value for the EMSE at steady state? Can we characterize the speed of convergence analytically during the transient phase?
We further assume that the noise wn is independent of the process {Xn}.
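A predefined steady-state EMSE can at least be checked by simulation. Below is a sketch that measures the excess error (H* - Hn-1)^T Xn of LMS after the transient; the function name, the zero initialization, and the discard length are our assumptions:

```python
import numpy as np

def empirical_emse(x, y, h_star, N, mu, discard=2000):
    """Average of the squared excess error ((H* - Hn-1)^T Xn)^2 after
    `discard` iterations: a simulation estimate of the steady-state EMSE
    of LMS (Zn = Xn)."""
    h_star = np.asarray(h_star)
    H = np.zeros(N)
    x_pad = np.concatenate([np.zeros(N - 1), x])
    excess = []
    for n in range(len(y)):
        X = x_pad[n:n + N][::-1]
        if n >= discard:
            excess.append(((h_star - H) @ X) ** 2)
        H = H + mu * X * (y[n] - H @ X)
    return float(np.mean(excess))
```

For small μ the measured EMSE is much smaller than the noise power and shrinks with μ, which is the knob one would tune to hit a predefined EMSE.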
RLS: Zn = Qn^-1 Xn

At steady state (n → ∞) the excess error power of an algorithm in the class can be computed analytically; it is proportional to the step size μ and to the noise power σw², and depends on the regression vectors Zn,i. Minimizing this expression over the class yields the minimum attainable steady-state EMSE, σ²min.
An algorithm A1 is better than an algorithm A2 if EFF1 > EFF2.

Goal: maximization of the Efficacy with respect to the regression vectors Zn,i.
RLS: Zn = Qn^-1 Xn, with Qn = (1 - μ) Qn-1 + μ Xn Xn^T.

At steady state Qn is a good approximation to Q = E{Xn Xn^T}.

So RLS is expected to match the optimum performance.
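The claim that the exponentially windowed Qn approximates Q at steady state can be illustrated numerically; a sketch under the assumption of i.i.d. standard Gaussian regression vectors, for which Q = I:

```python
import numpy as np

def exp_window_cov(X_seq, mu):
    """Exponentially windowed sample covariance:
    Qn = (1 - mu) Qn-1 + mu * Xn Xn^T, started from Q0 = 0 (assumed)."""
    N = X_seq.shape[1]
    Q = np.zeros((N, N))
    for X in X_seq:
        Q = (1 - mu) * Q + mu * np.outer(X, X)
    return Q
```

For small μ and many iterations, Qn fluctuates closely around Q = E{Xn Xn^T}; note that real tap-delay vectors Xn overlap in time and are therefore correlated, which this i.i.d. illustration ignores.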
SWRLS: Zn,i = Qn^-1 Xn-i+1, with Qn = Xn Xn^T + Xn-1 Xn-1^T + ... + Xn-p+1 Xn-p+1^T.

If the window p is small then Qn does not approximate Q well. If however p is large then, due to the Law of Large Numbers, the approximation can be good.
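The window-length effect can be checked numerically; a sketch, again assuming i.i.d. standard Gaussian regression vectors so that Q = I. Note that for p < N the sum of p rank-one terms cannot even be full rank, while for large p the normalized sum Qn/p approaches Q:

```python
import numpy as np

def sliding_window_cov(X_seq, p):
    """Sliding-window sample covariance over the last p regression vectors:
    Qn = Xn Xn^T + Xn-1 Xn-1^T + ... + Xn-p+1 Xn-p+1^T (unnormalized)."""
    return sum(np.outer(X, X) for X in X_seq[-p:])
```

Dividing by p makes the large-p estimate directly comparable with Q = E{Xn Xn^T}.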