  1. L4DC 2020. Robust Online Model Adaptation by Extended Kalman Filter with Exponential Moving Average and Dynamic Multi-Epoch Strategy. Abulikemu Abuduweili (1,2) and Changliu Liu (2). (1) School of EECS, Peking University, P.R. China; (2) Robotics Institute, Carnegie Mellon University, USA. June 5, 2020.

  2. Introduction - Behavior Prediction
  • Y_t = f(θ, X_t)
  • X_t = [x_t; x_{t−1}; ···; x_{t−n}] denotes the n-step past measurements. Y_t = [y_{t+1}; y_{t+2}; ···; y_{t+m}] denotes the m-step future behavior. θ is the parameter of the model.
  Figures: trajectory prediction during autonomous driving; human motion prediction in human-robot collaboration.
  Abuduweili A, et al. "Adaptable Human Intention and Trajectory Prediction for Human-Robot Collaboration". arXiv preprint arXiv:1909.05089, 2019; Yujiao Cheng, et al. "Human motion prediction using semi-adaptable neural networks". ACC, pages 4884-4890. IEEE, 2019.
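As a concrete toy instance of this setup (a hedged sketch: the linear map below merely stands in for the neural network f, and the dimensions n, m, d are illustrative, not from the paper):

```python
import numpy as np

# Hedged sketch of the prediction setup Y_t = f(theta, X_t): n past
# measurements are stacked into X_t and mapped to the m-step future Y_t.
# The single linear layer is only a stand-in for the neural network f.
n, m, d = 3, 2, 2                         # past steps, future steps, state dim

def build_X(history, t):
    """Stack the n-step past measurements [x_t; x_{t-1}; ...; x_{t-n+1}]."""
    return np.concatenate([history[t - i] for i in range(n)])

def f(theta, X):
    """Toy prediction model (stand-in for the RNN predictor)."""
    return theta @ X

history = np.arange(20.0).reshape(10, d)  # measured trajectory x_0 .. x_9
theta = np.ones((m * d, n * d))           # illustrative model parameters
X_t = build_X(history, t=5)
Y_hat = f(theta, X_t)                     # predicted m-step future behavior
print(Y_hat.shape)                        # (4,) = the m future steps, flattened
```

The point is only the interface: the adaptation algorithms later in the talk update theta online, while build_X and f stay fixed.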

  3. Introduction - Why Adaptation?
  • Performance of a trained model can drop significantly under a slightly different data distribution. For tasks without annotated corpora from the test domain, adaptation techniques are required to deal with the lack of domain-specific data.
  Performance comparison between train set and test set (NGSIM):
      Dataset    | Train Set | Test Set
      MSE (m²)   | 1.492     | 2.559
  Figure: distribution difference between train set and test set.
  Wenwen Si, et al. "AGen: Adaptable generative prediction networks for autonomous driving". In 2019 IEEE Intelligent Vehicles Symposium (IV), 2019.

  4. Introduction - Online Adaptation Framework
  • Online adaptation exploits local overfitting to minimize the prediction error, which corresponds to a nonlinear least squares (NLS) problem:
      min_{θ_t} ‖ Y_t − f(θ_t, X_t) ‖^p
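A minimal concrete instance of this objective (assumed notation, with p = 2, a linear stand-in for the network, and a plain gradient step as the simplest possible optimizer; this is not the paper's adaptation algorithm):

```python
import numpy as np

# Hedged sketch of the online NLS objective min ||Y_t - f(theta, X_t)||^2
# (p = 2) on a single sample, with a linear stand-in for the network f.

def f(theta, X):
    return theta @ X                      # linear stand-in for the network

def nls_loss(theta, X, Y):
    """Squared prediction error ||Y - f(theta, X)||^2 on the current sample."""
    r = Y - f(theta, X)
    return float(r @ r)

def gradient_step(theta, X, Y, lr=0.1):
    """One descent step; gradient of ||Y - theta @ X||^2 w.r.t. theta."""
    r = Y - f(theta, X)
    return theta + lr * 2.0 * np.outer(r, X)

X = np.array([1.0, 0.5, -0.2])
theta_true = np.array([[0.5, -1.0, 2.0], [1.0, 0.0, -0.5]])
Y = theta_true @ X                        # noiseless target for the demo
theta = np.zeros((2, 3))
for _ in range(200):
    theta = gradient_step(theta, X, Y)
print(nls_loss(theta, X, Y) < 1e-9)       # loss driven to ~0: True
```

Repeatedly descending on the newest sample is exactly the "local overfitting" the slide refers to; the next slides compare optimizers for doing this efficiently.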

  5. Introduction - Related Works
  Previous methods for online adaptation:
      Previous Methods | Problems
      SGD-based        | slow convergence; sub-optimal
      RLS-PAA          | only applies to linear models
  Figure: adapting the last layer of a GRU model with RLS-PAA.
  Bhasin S, et al. "Robust identification-based state derivative estimation for nonlinear systems". IEEE Transactions on Automatic Control, 58(1), 2012; Wenwen Si, et al. "AGen: Adaptable generative prediction networks for autonomous driving". In 2019 IEEE Intelligent Vehicles Symposium (IV), 2019.

  6. Robust Online Model Adaptation
  • Our adaptation algorithm is based on the recursive EKF method. Why?
      Adaptation algorithms | Convergence rate | Applicable to nonlinear systems
      SGD-based             | Slower           | Yes
      RLS-based             | Faster           | No
      EKF-based (Ours)      | Faster           | Yes
  • By assuming that the ground truth changes very slowly, we can pose the parameter adaptation problem as a static state estimation problem: the parameters are the state with dynamics θ_{t+1} = θ_t, and the observed behavior y_t = f(θ_t, X_{t−1}) + v_t is the measurement.
  Angelo Alessandri et al. "A recursive algorithm for nonlinear least-squares problems". Computational Optimization and Applications, 38(2), 2007.
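A minimal sketch of this idea (not the authors' exact MEKF update: a scalar-output linear model keeps the Jacobian trivial, and the noise covariances R, Q are illustrative; for a neural network the Jacobian H would come from backpropagation):

```python
import numpy as np

# Hedged EKF sketch for parameter adaptation: theta is a static state
# (theta_{t+1} = theta_t) and the prediction y_t = f(theta, X_t) + v_t
# is the measurement.

def ekf_step(theta, P, X, y, R=0.1, Q=1e-6):
    """One EKF update of theta and its uncertainty matrix P."""
    H = X[None, :]                        # Jacobian of f(theta, X) = theta @ X
    y_hat = float(theta @ X)
    S = float(H @ P @ H.T) + R            # innovation covariance (scalar here)
    K = (P @ H.T) / S                     # Kalman gain, shape (d, 1)
    theta = theta + (K * (y - y_hat)).ravel()
    P = P - K @ H @ P + Q * np.eye(len(theta))
    return theta, P

theta_true = np.array([0.5, -1.0, 2.0])
theta, P = np.zeros(3), np.eye(3)
rng = np.random.default_rng(1)
for _ in range(50):
    X = rng.standard_normal(3)
    y = float(theta_true @ X)             # noiseless measurements for the demo
    theta, P = ekf_step(theta, P, X, y)
print(np.allclose(theta, theta_true, atol=0.05))
```

For a linear model this recursion reduces to recursive least squares, which is why the table groups RLS and EKF together on convergence rate; the EKF additionally handles the nonlinear case through the Jacobian.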

  7. Robust Online Model Adaptation
  • Modified EKF with Exponential Moving Average (EMA) filtering. Our extensions:
  • Forgetting factor. Why: data in the distant past is no longer relevant for modeling the current behavior. Method: we consider a nonlinear recursive least squares (NLS) problem with forgetting factor λ:
      min_{θ_t} (1/2) Σ_{i=1}^{t} λ^{t−i} ‖ y_i − f(θ_{i−1}, X_{i−1}) ‖², 0 < λ ≤ 1
  • EMA filtering. Why: EMA is typically applied to the parameter update in practice, which can reduce the variance of the convergence curve (e.g., Polyak averaging and momentum for SGD). Method: EMA-V calculates the step size of the parameter update by exponentially decaying the older step sizes; EMA-P smooths the inner state of the optimizer by EMA pre-filtering of P_t, the uncertainty matrix of the parameter estimates in the EKF.
  Angelo Alessandri et al. "A recursive algorithm for nonlinear least-squares problems". Computational Optimization and Applications, 38(2), 2007.
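The EMA operation itself is simple; a hedged sketch of EMA-V-style smoothing of update steps (the decay rate gamma and the constant raw steps are illustrative, and this is not the authors' full MEKF_EMA update):

```python
import numpy as np

# Hedged sketch of EMA filtering: older values decay geometrically with
# rate gamma, smoothing noisy per-step quantities. EMA-V applies this to
# the parameter update steps; EMA-P applies the same idea to the EKF
# uncertainty matrix P_t before it enters the update.

def ema(prev, new, gamma=0.9):
    """One exponential-moving-average step."""
    return gamma * prev + (1.0 - gamma) * new

raw_step = np.array([1.0, 0.0, 0.0])      # constant raw update, for the demo
v = np.zeros(3)                           # EMA-smoothed step (EMA-V state)
theta = np.zeros(3)
for _ in range(20):
    v = ema(v, raw_step)                  # smooth the step ...
    theta = theta + v                     # ... then apply it
print(round(float(v[0]), 4))              # 1 - 0.9**20 = 0.8784
```

With noisy raw steps the same recursion averages out the noise, which is the variance reduction in the convergence curve that the slide describes.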

  8. Robust Online Model Adaptation
  • Modified EKF with Exponential Moving Average (EMA) filtering

  9. Robust Online Model Adaptation
  • Dynamic multi-epoch update strategy (DME). Sampling methods:
  • Previous method: all data are equally considered. We run the adaptation algorithm chronologically from the first data point to the last.
  • Dynamic multi-epoch update strategy (Ours): define a criterion C to determine the number of epochs κ_t = C(X_{t−1}, y_t, ŷ_t, θ*), and reuse the input-output pair (X_{t−1}, y_t) κ_t times to adapt the parameter θ*.
  • DME increases the sample efficiency. It is practically useful to differentiate "easy" samples from "hard" samples.
  • Simple criterion: two thresholds ε_1 and ε_2 discriminate "easy", "hard", and "anomaly" samples by error:
  • Easy sample: single-epoch update
  • Hard sample: two-epoch update
  • Anomaly sample: skip the update
  Yoshua Bengio, et al. "Curriculum learning". ICML, pages 41-48. ACM, 2009.
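The two-threshold criterion can be sketched as follows (the threshold values eps1 and eps2 are illustrative, not taken from the paper):

```python
import numpy as np

# Hedged sketch of the simple DME criterion: the prediction error decides
# how many adaptation epochs the sample (X_{t-1}, y_t) receives.

def num_epochs(y, y_hat, eps1=0.5, eps2=5.0):
    """Return the number of adaptation epochs for the current sample."""
    err = np.linalg.norm(y - y_hat)
    if err < eps1:
        return 1                          # easy sample: single-epoch update
    elif err < eps2:
        return 2                          # hard sample: two-epoch update
    return 0                              # anomaly: skip the update

print(num_epochs(np.array([0.1]), np.array([0.0])))   # 1 (easy)
print(num_epochs(np.array([2.0]), np.array([0.0])))   # 2 (hard)
print(num_epochs(np.array([9.0]), np.array([0.0])))   # 0 (anomaly)
```

Skipping the anomaly case is what makes the adaptation "robust" to outliers, while the extra epoch on hard samples buys sample efficiency.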

  10. Robust Online Model Adaptation
  • Dynamic multi-epoch update strategy

  11. Numerical Experiments: Multi-Task Prediction
  • In the experiments, we consider a multi-task prediction problem for simultaneous intention and trajectory prediction.
  • In the online adaptation of the multi-task model, the adaptation algorithm updates the prediction model considering only the error measured between the predicted trajectory and the ground-truth trajectory.
  Figure: online adaptation framework for a multi-task model.

  12. Numerical Experiments: Design
  • Neural network architecture: RNN-based encoder-decoder-classifier structure.
  • Datasets: Mocap dataset (human motion) and NGSIM dataset (vehicle).
  • Online adaptation is applied to the hidden weights of the encoder of the offline-trained models.
  Figure: neural network architecture in the experiments.

  13. Numerical Experiments: Results
  • Table 1 shows the prediction performance of the online-adapted models using different optimizers. Compared to the SGD-based algorithms, the EKF-based methods perform better. In addition, MEKF_EMA-DME achieves the best performance of all.

  14. Numerical Experiments: Results
  • Table 2 shows the effectiveness of the proposed extensions. EMA-P slightly improves the performance, while DME improves it noticeably.

  15. Demo
  • Human-motion trajectory prediction with the proposed online adaptation.

  16. Demo
  • Human-motion trajectory and intention prediction with the proposed online adaptation.

  17. Conclusions
  • This work studied online adaptation of neural-network-based prediction models for behavior prediction.
  • To improve the performance and convergence rate, EMA filtering was investigated, including EMA-V and EMA-P.
  • This paper introduced a dynamic multi-epoch update strategy, which is compatible with any optimizer.
  • By combining all extensions with the EKF-based algorithm, we obtained the robust online adaptation algorithm MEKF_EMA-DME.
  • The source code is available at https://github.com/intelligent-control-lab/MEKF_MAME .

  18. Thank you for your attention. Questions? Abulikemu Abuduweili: abduwali@pku.edu.cn, Changliu Liu: cliu6@andrew.cmu.edu.
