

SLIDE 1

Time Series Forecasting With a Learning Algorithm: An Approximate Dynamic Programming Approach

Ricardo Collado¹    Germán Creamer¹

¹School of Business, Stevens Institute of Technology, Hoboken, New Jersey

September 10, 2019

SLIDE 2

Introduction

Time Series Drivers:

  • Historical data:
      – Incorporated in classical time series forecasting methods
      – Works best when the underlying model is fixed
  • Exogenous processes:
      – Not included in historical observations
      – Difficult to incorporate via classical methods
      – Could indicate changes in the underlying model
SLIDE 3

Introduction

Techniques to handle changes due to external forces:

  • Jump Diffusion Models
  • Regime Switching Methods
  • System of Weighted Experts
  • Others . . .

These methods do not directly integrate the alternative data sources available to us.

SLIDE 4

Introduction

Alternative data sources:

  • Text & News Analysis
  • Social Networks Data
  • Sentiment Analysis
SLIDE 7

Introduction

We study time series forecasting methods that are:

  • Dynamic
  • Context-based
  • Capable of integrating social, text, and sentiment data

In this presentation we develop:

  • A stochastic dynamic programming model for time series forecasting
  • The model relies on an "external forecast" of future values
  • The external forecast allows us to incorporate alternative data sources

SLIDE 8

Traditional Approach

Time Series Fitting Process


SLIDE 9–14

Traditional Approach

Sequential fitting, one observation at a time (successive animation builds). At each step t the observed state, model selection, and resulting one-step forecast are

    s_t = x_t,    a_t = φ*_{t+1},    X_{t+1} = φ*_{t+1} x_t + ε_{t+1},

with action space A = {φ ∈ ℝ}. The builds add the states s_0, s_1, s_2, s_3, s_4 to the fitted trajectory in turn.
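A minimal sketch of this fitting loop in Python, assuming φ* is re-estimated by ordinary least squares at every step (the slides do not prescribe an estimator, and ε_t is taken to be Gaussian noise for the simulation):

    import numpy as np

    def sequential_ar1_fit(x):
        """At each step t, refit phi* on the history x_0, ..., x_t by least
        squares and emit the one-step forecast X_{t+1} = phi* x_t."""
        phis, forecasts = [], []
        for t in range(1, len(x)):
            prev, curr = x[:t], x[1:t + 1]     # all observed pairs (x_k, x_{k+1})
            phi = prev @ curr / (prev @ prev)  # OLS estimate of phi on the history
            phis.append(phi)
            forecasts.append(phi * x[t])       # point forecast of X_{t+1}
        return np.array(phis), np.array(forecasts)

    # Simulate an AR(1) path with phi = 0.7 and refit sequentially
    rng = np.random.default_rng(0)
    x = np.empty(200)
    x[0] = rng.normal()
    for t in range(1, 200):
        x[t] = 0.7 * x[t - 1] + rng.normal(scale=0.1)
    phis, _ = sequential_ar1_fit(x)
    print(phis[-1])   # the running estimate settles near 0.7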

SLIDE 15

Traditional Time Series Fitting

Main Problem:

    min_{π∈Π} E[ Σ_{t=1}^{T} c_t(s_t, a_t) ],

where a_t = π_t(x_1, . . . , x_t) is an admissible fitting policy.

  • The time series model is parametrized by Θ ⊆ ℝ^d
  • A(s) = Θ for all states s
  • c_t(s, a) is the result of a goodness-of-fit test for the observations s = (x_0, . . . , x_t) and model selection a = θ
  • Solution via Bellman's optimality equations
SLIDE 16

Traditional Time Series Fitting

Conditions to guarantee optimality:

  • The set of actions A(s) is compact
  • The cost functions c_t(s, ·) are lower semicontinuous
  • For every measurable selection a_t(·) ∈ A_t(·), the functions s ↦ c_t(s, a_t(s)) and c_T(·) are elements of L¹(S, B_S, P₀)
  • The DP stochastic kernel Q_t(s, ·) is continuous
SLIDE 17

Traditional Time Series Fitting

Value Functions: The value functions v_t : S → ℝ, t = 0, . . . , T, are given recursively by

    v_T(s) = c_T(s),
    v_t(s) = min_{a∈A_t(s)} { c_t(s, a) + E[v_{t+1} | s, a] },

for all s ∈ S and t = T − 1, . . . , 0.

Bellman's Optimality Equations: An optimal Markov policy π* = {π*_0, . . . , π*_{T−1}} exists and satisfies

    π*_t(s) ∈ argmin_{a∈A_t(s)} { c_t(s, a) + E[v_{t+1} | s, a] },    s ∈ S,  t = T − 1, . . . , 0.

Conversely, any measurable solution of these equations is an optimal Markov policy π*.
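For intuition, a finite-grid sketch of this backward recursion (the discretization into arrays, and all names, are our assumptions; the slides work over general measurable spaces):

    import numpy as np

    def backward_induction(cost, terminal_cost, transition):
        """Solve v_t(s) = min_a { c_t(s, a) + E[v_{t+1} | s, a] } on a grid.

        cost[t]        : array (n_states, n_actions) of c_t(s_i, a_j)
        terminal_cost  : array (n_states,) of c_T(s_i)
        transition[t]  : array (n_states, n_actions, n_states), P(s' | s_i, a_j)
        """
        T = len(cost)
        v = terminal_cost.copy()              # v_T
        policy = []
        for t in reversed(range(T)):
            # Q-values: immediate cost plus expected downstream value
            q = cost[t] + transition[t] @ v
            policy.append(q.argmin(axis=1))   # pi*_t(s) in the argmin
            v = q.min(axis=1)                 # v_t
        policy.reverse()
        return v, policy                      # v_0 and an optimal Markov policy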

SLIDE 18

Traditional Time Series Fitting

Notice that:

  • Our choice of model does not affect future observations and cost.
  • So E[v | s, a] = E[v | s, a′] for any (s, a), (s, a′) ∈ graph(A).
  • Therefore we can rewrite the optimal policy as

        π*_t(s) ∈ argmin_{a∈A_t(s)} { c_t(s, a) },    s ∈ S,  t = T − 1, . . . , 0.

  • The optimal policy π* is purely myopic.

SLIDE 20

Traditional Time Series Fitting

Q: How do we break out of the myopic policy?

SLIDE 21

Traditional Time Series Fitting

A: Introduce a new Markov model

SLIDE 22

New Markov Model

New Markov Model:

  • Given a stochastic process {X_t | t = 0, . . . , T} such that X_0 = {φ_0}
  • Time series model parameterized by Θ ⊆ ℝ^d
  • State space:

        S_t = { (x_t, h_{t−1}, θ_{t−1}) :  x_t an observation from X_t,
                h_{t−1} = (x_0, . . . , x_{t−1}) the sample sequence,
                θ_{t−1} = (φ_1, . . . , φ_p) ∈ Θ }

  • Action space: A(s) = Θ for all states s ∈ S_t
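A hypothetical container for this state in code (the field names are ours, not the slides'):

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class State:
        """State s_t = (x_t, h_{t-1}, theta_{t-1}) of the new Markov model."""
        x_t: float                     # current observation from X_t
        history: Tuple[float, ...]     # h_{t-1} = (x_0, ..., x_{t-1})
        theta_prev: Tuple[float, ...]  # previous model selection in Theta ⊆ R^d

    def step(state: State, x_next: float, theta: Tuple[float, ...]) -> State:
        """Advance the state after observing x_{t+1} and choosing action theta."""
        return State(x_next, state.history + (state.x_t,), theta)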
SLIDE 23

New Markov Model

Cost function: c_t(s, θ_t) = γ(s_t, θ_t) + r · δ(s_t, θ_{t−1}, θ_t)

  • γ: Goodness-of-fit test
  • δ: Penalty on changes from the previous model selection
  • r ≥ 0: Scaling factor used to balance fit and penalty
SLIDE 24

New Markov Model

Example:

    c_t(s, θ_t) = χ²(θ_t | h_{t−1}, x_t) + r · ( 1 − exp( −λ · | E[P_{θ_t} | h_{t−1}, x_t] − E[P_{θ_{t−1}} | h_{t−1}, x_t] | ) ),

where r, λ ≥ 0. That is,

  • γ(s_t, θ_t) := χ²(θ_t | h_{t−1}, x_t)
  • δ(s_t, θ_{t−1}, θ_t) := 1 − exp( −λ · | E[P_{θ_t} | h_{t−1}, x_t] − E[P_{θ_{t−1}} | h_{t−1}, x_t] | )
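A sketch of this cost for an AR(1) parametrization θ = (φ,). The χ² term here is one possible stand-in (standardized residuals binned through a normal CDF); the expectations become the models' implied one-step means; r, λ, and the binning are illustrative choices, not the authors':

    import numpy as np
    from scipy.stats import chisquare, norm

    def delta_penalty(mean_new, mean_old, lam=1.0):
        """delta = 1 - exp(-lambda * |E[P_theta_t | h, x] - E[P_theta_{t-1} | h, x]|)."""
        return 1.0 - np.exp(-lam * abs(mean_new - mean_old))

    def chi2_fit(residuals, n_bins=8):
        """Chi-square goodness-of-fit statistic of standardized residuals,
        binned through the normal CDF against a uniform expectation."""
        u = norm.cdf((residuals - residuals.mean()) / (residuals.std() + 1e-12))
        observed, _ = np.histogram(u, bins=n_bins, range=(0.0, 1.0))
        return chisquare(observed).statistic

    def cost(theta_new, theta_old, history, x_t, r=0.5, lam=1.0):
        """c_t(s, theta_t) = gamma(s_t, theta_t) + r * delta(s_t, theta_{t-1}, theta_t)."""
        xs = np.append(history, x_t)
        resid = xs[1:] - theta_new[0] * xs[:-1]  # AR(1) residuals under theta_t
        mean_new = theta_new[0] * x_t            # implied E[X_{t+1}] under theta_t
        mean_old = theta_old[0] * x_t            # ... and under theta_{t-1}
        return chi2_fit(resid) + r * delta_penalty(mean_new, mean_old, lam)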

SLIDE 25–30

New Markov Model

Dynamic Optimization Model (successive animation builds): the same sequential process as before,

    s_t = x_t,    a_t = φ*_{t+1},    X_{t+1} = φ*_{t+1} x_t + ε_{t+1},    A = {φ ∈ ℝ},

now annotated so that γ enforces a good fit at each step while δ adds a downstream cost for changing the model. The builds add the states s_0, s_1, s_2, s_3 in turn.

SLIDE 31

Lookahead Methods

Introducing Exogenous Information Via Lookahead Methods

SLIDE 32

Lookahead Methods

Lookahead Methods:

  • At time t we have access to forecast random variables X̂^t_{t+1}, . . . , X̂^t_{t+h}
  • We assume these are discrete approximations to X_{t+1}, . . . , X_{t+h}
  • Each forecast induces a discrete probability measure on S

We expect the forecasts to be finite random variables with few atoms. This simplifies calculations, at the expense of rougher approximations.
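A few-atom forecast is just a list of values with probabilities; a hypothetical container (the names are ours):

    import numpy as np

    class DiscreteForecast:
        """Finite approximation of a forecast random variable: atoms + probabilities."""
        def __init__(self, atoms, probs):
            self.atoms = np.asarray(atoms, dtype=float)
            self.probs = np.asarray(probs, dtype=float)
            assert np.isclose(self.probs.sum(), 1.0), "probabilities must sum to 1"

        def expect(self, f):
            """E[f(X)] under the discrete approximation."""
            return self.probs @ np.array([f(a) for a in self.atoms])

    # A three-atom approximation of tomorrow's value
    fc = DiscreteForecast(atoms=[3.40, 3.55, 3.70], probs=[0.25, 0.50, 0.25])
    print(fc.expect(lambda x: x))   # mean of the approximation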

SLIDE 33

Lookahead Methods

Basic lookahead method:

Step 0  Initialization:
    Step 0a  Initialize v̄_t(s) for all time periods t and all s ∈ S.
    Step 0b  Choose an initial state s_0.
Step 1  For t = 0, . . . , T do:
    Step 1a  Update the state variable: observe a value x_t of the stochastic process, let s_t be obtained from x_t and the model selection at t − 1, and get the forecasts X̂_{t+1}, . . . , X̂_{t+h}.
    Step 1b  Solve
                 v̂_t = min_{a_t∈A_t(s_t)} { c_t(s_t, a_t) + E[v̄_{t+1} | s_t, a_t] }
             by solving a stochastic optimization problem. Let a_t be the optimal solution of the minimization problem.
    Step 1c  Update the value function approximation v̄_t:
                 v̄_t(s) = v̂_t if s = s_t, and v̄_t(s) otherwise.
Step 2  Return the value functions {v̄_t | t = 0, 1, . . . , T}.
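A sketch of these steps, reusing the DiscreteForecast container above and assuming hashable states and a finite action grid (the function arguments are ours; the inner stochastic program is collapsed to a one-step expectation over the forecast atoms):

    import numpy as np

    def basic_lookahead(T, observe, forecast, make_state, actions, cost):
        """One pass of the basic lookahead method (Steps 0-2).

        observe(t)               -> realized value x_t of the stochastic process
        forecast(t)              -> DiscreteForecast for X_{t+1} (few atoms)
        make_state(t, x, a_prev) -> state s_t built from x_t and the t-1 model
        cost(t, s, a)            -> c_t(s, a)
        """
        v_bar = {}                            # Step 0a: v̄ initialized lazily to 0
        a_prev = None                         # Step 0b: no model chosen yet
        for t in range(T):                    # Step 1
            x_t = observe(t)                  # Step 1a: observe the process,
            s_t = make_state(t, x_t, a_prev)  #   build the state, and
            fc = forecast(t)                  #   get the forecasts
            def q(a):                         # Step 1b: one-step lookahead value
                future = fc.expect(
                    lambda x1: v_bar.get(make_state(t + 1, x1, a), 0.0))
                return cost(t, s_t, a) + future
            a_t = min(actions, key=q)         #   minimize over the action grid
            v_bar[s_t] = q(a_t)               # Step 1c: update v̄ at s_t only
            a_prev = a_t
        return v_bar                          # Step 2: the value approximations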

SLIDE 34

Lookahead Methods

Basic Monte Carlo Algorithm:

Step 0  Initialization:
    Step 0a  Initialize v̄^0_t(s) for all time periods t and all s ∈ S.
    Step 0b  Choose an initial state s^1_0.
    Step 0c  Solve the problem with the previous method and denote the resulting value function approximations v̄^0_t, t = 0, . . . , T.
    Step 0d  Set n = 1.
Step 1  For t = 0, . . . , T do:
    Step 1a  Update the state variable: observe a value x^n_t of the stochastic process, let s^n_t be obtained from x^n_t and the model selection at t − 1, and get the forecasts X̂^n_{t+1}, . . . , X̂^n_{t+h}.
    Step 1b  Solve
                 v̂^n_t = min_{a_t∈A_t(s^n_t)} { c_t(s^n_t, a_t) + E[v̄_{t+1} | s^n_t, a_t] },
             as before. Let a^n_t be the optimal solution of the minimization problem.
    Step 1c  Update the value function approximation v̄^{n−1}_t:
                 v̄^n_t(s) = (1 − α_{n−1}) · v̄^{n−1}_t(s) + α_{n−1} · v̂^n_t  if s = s^n_t,  and  v̄^{n−1}_t(s) otherwise.
Step 2  Let n = n + 1. If n < N, go to Step 1.
Step 3  Return the value functions {v̄^N_t | t = 0, 1, . . . , T}.
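The only change from the basic method is Step 1c's smoothed update; a minimal sketch (the harmonic stepsize is an assumed choice, not specified by the slides):

    def smoothed_update(v_bar, s, v_hat, n):
        """Step 1c: v̄^n(s) = (1 - α_{n-1}) · v̄^{n-1}(s) + α_{n-1} · v̂^n at the
        visited state s = s^n_t; all other states keep their previous value."""
        alpha = 1.0 / n   # an assumed stepsize rule (harmonic); others work too
        v_bar[s] = (1.0 - alpha) * v_bar.get(s, 0.0) + alpha * v_hat
        return v_bar

Running the basic lookahead pass N times over fresh sample paths, with this update in place of the plain overwrite, gives the Monte Carlo variant.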
SLIDE 35

Numerical Results I

SLIDE 36

Numerical Results I

Estimation Techniques I:

  • Initial test: Natural Gas Futures Contract 4 time series (RNGC4), 1999–2013.
  • Test the approximate dynamic programming method (MSE) over 1- to 10-day forecast windows, comparing the calibration of machine learning algorithms used for regression and classification:
      – Support vector machine (SVM)
      – Random forests (RF)
      – ARIMA
      – Logistic regression (LR) – benchmark
  • 2nd test: Crude Oil Spot (WTI), Aug. 2015 – Jan. 2016.
  • We ran two main tests using approximations to the "future" WTI spot on different time windows (a sketch of the construction follows below):
      – Actual values + random white noise with increasing variance
      – Actual values + white noise + increasing bias
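A sketch of how such perturbed external forecasts can be generated (the variance grid matches the results table on a later slide; the construction itself is our reading of the test description):

    import numpy as np

    def noisy_external_forecasts(actual, variances=(0.001, 0.01, 0.05, 0.1, 0.5, 1.0),
                                 bias=0.0, seed=0):
        """'Future' data for the tests: actual values + white noise with the
        given variance, optionally shifted by a constant bias."""
        rng = np.random.default_rng(seed)
        return {var: actual + bias + rng.normal(0.0, np.sqrt(var), size=len(actual))
                for var in variances}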
SLIDE 37

Numerical Results I

Energy Time Series

SLIDE 40

Numerical Results I

Simulation Test (figure): WTI Spot against the Benchmark, Dynamic ARIMA, Dynamic SVM, Dynamic RF, and Dynamic CART forecasts.

SLIDE 41

Numerical Results I

Unbiased Random Noise External Forecast Test

Model MSE:

    Model       MSE
    Benchmark   0.001116097
    ARIMA       0.000320973
    SVM         0.000776309
    RF          0.000197873
    CART        0.000476187

MSE by noise variance:

    Model      0.001        0.01      0.05      0.1       0.5       1
    ARIMA      0.0003227    0.000389  0.001943  0.004214  0.008405  0.007514
    SVM        0.000776849  0.000818  0.002475  0.008127  0.099774  0.16513
    RF         0.000175392  0.00023   0.001843  0.003496  0.007223  0.008425
    CART       0.000483638  0.000504  0.002376  0.006527  0.023588  0.024053
    Benchmark  0.001116097 (constant across noise levels)

SLIDE 42

Numerical Results I

Biased Random Noise External Simulation: 10 and 50 Days Windows

10-day window (baselines: ARIMA 490.6124, RF 502.5576, CART 967.2576):

    Bias    0.001     0.01      0.03      0.05      0.1       0.5       1         2
    ARIMA   —         384.0882  380.7184  379.9081  373.224   380.9279  386.4873  473.5217
    RF      28.16006  30.8183   36.77366  45.00305  56.97973  195.9457  282.2603  389.0374
    CART    6.680402  29.53345  46.15347  64.68325  147.4693  497.2315  725.8215  863.787

50-day window (baselines: ARIMA 627.327, RF 804.8075, CART 1588.515):

    Bias    0.001     0.01      0.03      0.05      0.1       0.5       1         2
    ARIMA   —         586.0408  586.1985  586.4885  587.7807  590.063   624.0278  705.4027
    RF      71.76113  87.90271  117.1728  146.3925  199.0221  424.4266  528.3886  618.359
    CART    12.98331  65.70607  135.9031  240.7952  369.7038  913.2118  1188.266  1422.569

SLIDE 43

Numerical Results I

Numerical Results 2

SLIDE 44

Numerical Results 2

Estimation Techniques II:

  • 3rd test: Crude Oil Spot (WTI). Test the approximate dynamic programming method over 1-, 5-, and 10-day forecast windows, using different dynamic ML techniques with actual + white noise as "future" data:
      – Support vector machine (SVM)
      – Classification and regression trees (CART)
      – ARIMA
      – Logistic regression (LR) – benchmark
  • 4th test: SVM on WTI futures and SVM on sparse "future" data on the WTI spot.
SLIDE 45

Numerical Results 2

WTI Spot

SLIDE 46

Numerical Results 2

1-Day Forecast

SLIDE 47

Numerical Results 2

1-Day Forecast Error

SLIDE 48

Numerical Results 2

5-Day Forecast

SLIDE 49

Numerical Results 2

10-Day Forecast

SLIDE 50

Numerical Results 2

Numerical Results 2.2

SVM – CART Analysis

SLIDE 51

Numerical Results 2

SVM 15 Days Window

SLIDE 52

Numerical Results 2

CART 15 Days Window

SLIDE 53

Numerical Results 2

SVM 20 Days Window

SLIDE 54

Numerical Results 2

CART 20 Days Window

SLIDE 55

Numerical Results 2

SVM 25 Days Window

SLIDE 56

Numerical Results 2

CART 25 Days Window

SLIDE 57

Numerical Results 2

SVM 30 Days Window

SLIDE 58

Numerical Results 2

CART 30 Days Window

SLIDE 59

Numerical Results 2

Numerical Results 2.3

WTI Futures – Sparse SVM Analysis

SLIDE 60

Numerical Results 2

WTI Futures

SLIDE 61

Numerical Results 2

WTI: |Futures − Spot|

SLIDE 62

Numerical Results 2

SVM Futures

SLIDE 63

Numerical Results 2

SVM Futures Error Comparison

SLIDE 64

Numerical Results 2

SVM Sparse Forecast

SLIDE 65

Numerical Results 2

Numerical Results 3

SLIDE 66

Numerical Results III

Estimation Techniques III:

  • Test our approximate dynamic programming method comparing the calibration of two machine learning algorithms used for regression and classification: support vector machine (SVM) and random forests (RF).
  • For regression analysis:
      – Support vector regression (SVR): minimizes a loss function using only the most relevant values.
      – Random forests for regression (RFR): explores a large search space based on random selection of its features and samples.
  • External forecast function for daily data of WTI log returns: 1-month WTI futures price at the end of every month.
  • Training & test dataset: 15 lags of the dependent variable plus technical indicators (computed as in the sketch below):
      – Price indicators: simple moving averages with 10 and 20 days.
      – Momentum indicators: relative strength index with 10 and 20 days, and the moving average convergence divergence (MACD).
  • The test sample includes the forecast function.
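The feature set can be assembled with pandas along these lines; the window lengths come from the slide, while the RSI and MACD formulas are the standard ones and are assumed rather than specified by the authors:

    import numpy as np
    import pandas as pd

    def features(price: pd.Series, n_lags: int = 15) -> pd.DataFrame:
        """15 lags of the WTI log return plus the slide's technical indicators."""
        log_ret = np.log(price).diff()
        f = {f"lag_{k}": log_ret.shift(k) for k in range(1, n_lags + 1)}
        f["sma_10"] = price.rolling(10).mean()           # simple moving averages
        f["sma_20"] = price.rolling(20).mean()
        delta = price.diff()
        for n in (10, 20):                               # relative strength index
            gain = delta.clip(lower=0).rolling(n).mean()
            loss = (-delta.clip(upper=0)).rolling(n).mean()
            f[f"rsi_{n}"] = 100 - 100 / (1 + gain / loss)
        ema12 = price.ewm(span=12, adjust=False).mean()  # MACD: 12d EMA - 26d EMA
        ema26 = price.ewm(span=26, adjust=False).mean()
        f["macd"] = ema12 - ema26
        return pd.DataFrame(f).dropna()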
SLIDE 67

Numerical Results III

WTI log return and forecasts by ARMA(1,1), support vector regression (SVR), and the approximate dynamic programming SVR (SVR-ADP) for June and July 2014.

SLIDE 68

Numerical Results III

WTI log return and forecasts by ARMA(1,1), random forest for regression (RFR), and the approximate dynamic programming RFR (RFR-ADP) for June and July 2014.

SLIDE 69

Numerical Results III

Root mean squared error (RMSE) of ADP and static (benchmark) methods. SVR, RFR, and x10 stand for support vector regression, random forest for regression, and 10 times 40-fold cross-validation, respectively. The error bars represent standard error. RMSE mean differences of SVR-ADP and RFR-ADP relative to their static versions, ARMA(1,1), and CART are statistically significant at the 99% confidence level.

SLIDE 70

Numerical Results III

Matthews correlation coefficient (MCC) of ADP and static (benchmark) methods. SVM and RF stand for support vector machine and random forest, respectively. The error bars represent standard error. MCC mean differences of SVM-ADP and RF-ADP relative to their static versions and CART are statistically significant at the 95% confidence level.

SLIDE 71

Numerical Results III

Test error of ADP and static (benchmark) methods. SVM and RF stand for support vector machine and random forest, respectively. The error bars represent standard error. Test error mean differences of SVM-ADP and RF-ADP relative to their static versions and CART are statistically significant at the 95% confidence level.
