  1. RECURRENCE QUANTIFICATION ANALYSIS: auto-RQA of categorical & continuous time series (f.hasselman@pwo.ru.nl)

  2. Recurrence Quantification Analysis
STORY 1: "Jort en An vragen aan Jan of ze met de wandelwagen mogen rijden en het mag van Jan en ze gaan er in en ze rijden heel snel. Ze zien een boom en de wandelwagen gaat kapot. Ze komen weer bij. Jan maakt de wandelwagen weer" (MLU: 3.70, # words: 47)
STORY 2: "Papa zit in de bank en papa werkt in de tuin die maakt een kar de kinderen. Papa maakt een kar van de kinderen en de kinderen en de kinderen tegen de boom en de kar is kapot en de kinderen huilen en de kinderen zijn blij" (MLU: 3.68, # words: 47)
Inter-rater reliability of "quality" is OK, but "why"?
Data from: Huijgevoort, M. A. E. V. (2008). Improving beginning literacy skills by means of an interactive computer environment (Doctoral dissertation). http://repository.ubn.ru.nl/bitstream/handle/2066/45170/45170_imprbelis.pdf

  3. Recurrence Quantification Analysis: Nominal Time Series
STORY 1: "1 2 3 4 5 6 7 8 9 10 11 12 13 2 14 15 16 6 2 8 17 18 19 2 8 13 20 21 8 22 23 24 2 10 11 25 26 2 27 28 29 6 6 30 10 11 30"
STORY 2: "1 2 3 4 5 6 1 7 3 4 8 9 10 11 12 4 13 1 10 11 12 14 4 13 6 4 13 6 4 13 15 4 16 6 4 12 17 18 6 4 13 19 6 4 13 20 21"
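The coding above can be reproduced by mapping every word to the index of its first occurrence; a minimal sketch (the helper name `encode` is mine, not from the slides):

```python
def encode(tokens):
    """Map each word to the index of its first occurrence (1-based),
    so a repeated code marks a repeated word."""
    seen = {}
    return [seen.setdefault(w, len(seen) + 1) for w in tokens]

# opening of Story 1: the first 13 words are all new, then 'en' and 'jan' repeat
words = "jort en an vragen aan jan of ze met de wandelwagen mogen rijden en het mag van jan".split()
print(encode(words))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 2, 14, 15, 16, 6]
```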

  4. Repetition = Recurrence = Relation over time (Story 1)

  5. Repetition = Recurrence = Relation over time (Story 2)

  6. Recurrence Plot: place a dot where a word recurs. 'jort' (0 times)

  7. Recurrence Plot: place a dot where a word recurs. 'jort' (0 times), 'en' (4 times)

  8. Recurrence Plot: place a dot where a word recurs. 'jort', 'an', 'vragen', 'aan', 'of' (0 times); 'en' (4 times); 'jan' (3 times); 'ze' (4 times)

  9. Recurrence Matrix / Recurrence Plot
auto-recurrence: the recurrence plot is symmetric around the LOS (Line of Synchronisation)
Categorical (nominal): 1 point = repetition of a category
Quantify patterns of recurrences:
Recurrence Rate (RR): the proportion of actual recurrent points out of the maximum possible number of recurrent points (minus the main diagonal):
full plot: 70 / (47² - 47) = 0.032 (3.2%)
upper triangle only: 35 / ((47² - 47) / 2) = 0.032 (3.2%)
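The categorical recurrence matrix and RR can be computed directly from a coded series; a plain-Python sketch under the slide's definitions (function names are mine):

```python
def recurrence_matrix(codes):
    """R[i][j] = 1 when the categories at positions i and j are equal."""
    n = len(codes)
    return [[1 if codes[i] == codes[j] else 0 for j in range(n)] for i in range(n)]

def recurrence_rate(R):
    """Recurrent points divided by possible points, main diagonal excluded."""
    n = len(R)
    total = sum(sum(row) for row in R)
    return (total - n) / (n * n - n)

R = recurrence_matrix([1, 2, 3, 2, 1])
print(recurrence_rate(R))  # 4 recurrent points / 20 possible = 0.2
```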

  10. Recurrence Matrix / Recurrence Plot
Diagonal lines ➡ repetition of a pattern: "de wandelwagen" recurs 2 times
Determinism (DET): the proportion of recurrent points that lie on a diagonal line:
full plot: 8 / 70 = 0.114 (11.4%); upper triangle only: 4 / 35 = 0.114 (11.4%)
Vertical lines ➡ recurrence of exactly the same value: "jan jan"
Laminarity (LAM): the proportion of recurrent points that lie on a vertical line:
full plot: 4 / 70 = 0.057 (5.7%); upper triangle only: 2 / 35 = 0.057 (5.7%)
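DET and LAM can be computed by scanning the off-LOS diagonals and the columns for runs of recurrent points; a simplified sketch (minimum line length 2, self-matches on the LOS ignored; all names are mine):

```python
def _run_points(bits, min_len=2):
    """Recurrent points that fall in runs of 1s of length >= min_len."""
    total = run = 0
    for b in list(bits) + [0]:
        if b:
            run += 1
        else:
            if run >= min_len:
                total += run
            run = 0
    return total

def det(R, min_len=2):
    """Proportion of recurrent points on diagonal lines (full plot, LOS excluded)."""
    n = len(R)
    rec = sum(sum(row) for row in R) - n
    diag_pts = sum(
        _run_points([R[i][i + k] for i in range(n - k)], min_len)
        + _run_points([R[i + k][i] for i in range(n - k)], min_len)
        for k in range(1, n))
    return diag_pts / rec if rec else 0.0

def lam(R, min_len=2):
    """Proportion of recurrent points on vertical lines (LOS point zeroed out)."""
    n = len(R)
    rec = sum(sum(row) for row in R) - n
    vert_pts = sum(
        _run_points([R[i][j] if i != j else 0 for i in range(n)], min_len)
        for j in range(n))
    return vert_pts / rec if rec else 0.0

codes = [1, 2, 1, 2, 3]
R = [[1 if a == b else 0 for b in codes] for a in codes]
print(det(R), lam(R))  # all 4 recurrences line up diagonally: 1.0 0.0
```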

  11. Recurrence Quantification Analysis
        STORY 1   STORY 2
RR:     3.2%      7.8%
DET:    11.4%     54.3%
LAM:    5.7%      0.0%

  12. Recurrence Quantification Analysis: shuffled surrogates
STORY 1  original: RR 3.2%, DET 11.4%, LAM 5.7%  |  shuffled: RR 3.2%, DET 0%, LAM 8.6%
STORY 2  original: RR 7.8%, DET 54.3%, LAM 0.0%  |  shuffled: RR 7.8%, DET 2.9%, LAM 22.9%
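Why shuffling leaves RR untouched while DET collapses: RR depends only on how often each category occurs, never on the order. A quick sketch of that argument in code (names are mine):

```python
from collections import Counter
import random

def recurrence_rate(codes):
    """RR from category counts alone: a category occurring k times
    contributes k*(k-1) ordered recurrent pairs (LOS excluded)."""
    n = len(codes)
    return sum(k * (k - 1) for k in Counter(codes).values()) / (n * n - n)

codes = [1, 2, 3, 2, 1, 2, 4, 1]
shuffled = codes[:]
random.Random(42).shuffle(shuffled)
print(recurrence_rate(codes) == recurrence_rate(shuffled))  # True: order never enters
```

DET and LAM, by contrast, count line structures, which only exist when recurrences are ordered in time; shuffling destroys them.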

  13. Advanced Data Analysis: Cross-Recurrence Quantification Analysis
Executive functions? RQA of the RNG (random number generation) task
Executive control: "be as random as you can"
Oomens, W., Maes, J. H., Hasselman, F., & Egger, J. I. (2015). A time series approach to random number generation: Using recurrence quantification analysis to capture executive behavior. Frontiers in Human Neuroscience, 9.
Vignette: Behavioural Science Institute R manual, or: https://fredhasselman.github.io/casnet/index.html

  14. Advanced Data Analysis: Cross-Recurrence Quantification Analysis. Many Applications of RQA

  15. Advanced Data Analysis: Cross-Recurrence Quantification Analysis. Many Applications of RQA

  16. N=181 N=242

  17. Phase Space Reconstruction: continuous time series (f.hasselman@pwo.ru.nl)

  18. Quantifying Complex Dynamics
scale-free / fractal; highly correlated / interdependent; nonlinear / maybe chaotic; the result of multiplicative interactions
Takens' (1981) Embedding Theorem tells us that a (strange) attractor can be recovered ("reconstructed") from observations of a single component process of a complex interaction-dominant system.
Takens, F. (1981). Detecting strange attractors in turbulence. In D. A. Rand & L.-S. Young (Eds.), Dynamical Systems and Turbulence. Lecture Notes in Mathematics vol. 898, 366-381. Springer-Verlag.

  19. How to study interaction-dominant systems
As you know, in a coupled system the time evolution of one variable depends on the other variables of the system. This implies that one variable contains information about the other variables (depending, of course, on the strength of the coupling and perhaps the type of interaction).
So, given the Lorenz system:
dX/dt = δ · (Y - X)
dY/dt = r · X - Y - X · Z
dZ/dt = X · Y - b · Z
Takens' theorem suggests that we should be able to reconstruct the highly chaotic "butterfly" attractor using just X(t) [or Y(t) or Z(t)].
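The slide gives the equations but no parameter values; a sketch that integrates them with a 4th-order Runge-Kutta step, assuming the conventional parameters σ = 10, r = 28, b = 8/3 and the start state (1, 1, 1):

```python
def lorenz_deriv(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """dX/dt, dY/dt, dZ/dt of the Lorenz system shown on the slide."""
    x, y, z = s
    return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

def rk4_step(s, dt):
    """One 4th-order Runge-Kutta step."""
    k1 = lorenz_deriv(s)
    k2 = lorenz_deriv(tuple(v + 0.5 * dt * k for v, k in zip(s, k1)))
    k3 = lorenz_deriv(tuple(v + 0.5 * dt * k for v, k in zip(s, k2)))
    k4 = lorenz_deriv(tuple(v + dt * k for v, k in zip(s, k3)))
    return tuple(v + dt / 6.0 * (a + 2 * p + 2 * q + d)
                 for v, a, p, q, d in zip(s, k1, k2, k3, k4))

def lorenz_x(n=5000, dt=0.01, s=(1.0, 1.0, 1.0)):
    """Observe only the X component, as Takens' theorem allows."""
    xs = []
    for _ in range(n):
        s = rk4_step(s, dt)
        xs.append(s[0])
    return xs

x = lorenz_x()  # the single observed component used on the following slides
```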

  20. Lorenz system: time series of X, Y and Z

  21. Lorenz system: X, Y, Z state space; the strange attractor

  22. Creating surrogate dimensions using the method of delays [figure: the Lorenz X(t) time series, full record and a shorter segment]

  23. Creating surrogate dimensions using the method of delays. Let's take our embedding delay or lag to be: τ = 1000. [figure: X(t) with the lag τ marked]

  24. Creating surrogate dimensions using the method of delays. With τ = 1000, data point 1 + τ [X(t) = 1001] becomes data point 1 for the surrogate dimension X(t + τ). [figure: X(t) and X(t + tau)]

  25. Creating surrogate dimensions using the method of delays. τ = 1000; data point 1 + τ [X(t) = 1001] becomes data point 1 for the dimension X(t + τ), and the next copy is shifted by 2·τ. [figure: X(t) and X(t + tau), with 2·τ marked]

  26. Creating surrogate dimensions using the method of delays. Data point 1 + 2·τ [X(t) = 2001] becomes data point 1 for the surrogate dimension X(t + 2·τ). [figure: X(t), X(t + tau) and X(t + 2*tau)]

  27. Creating surrogate dimensions using the method of delays
The embedding lag reflects the point in the time series at which we are getting new information about the system. In theory any lag can be used, since everything is interacting... We are looking for the lag that is optimal, i.e. gives us maximal new information about the temporal structure in the data. Intuitively: where the autocorrelation is zero. We are creating a return plot to examine the system's state space! [figure: X(t), X(t + tau) and X(t + 2*tau)]
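The delayed copies built on the preceding slides amount to the standard delay-embedding construction; a sketch (the helper name `delay_embed` is mine):

```python
def delay_embed(x, m, tau):
    """Turn a scalar series into m-dimensional state vectors
    (x[t], x[t + tau], ..., x[t + (m-1)*tau]); each delayed copy
    of the series is one surrogate dimension."""
    n = len(x) - (m - 1) * tau
    return [tuple(x[t + j * tau] for j in range(m)) for t in range(n)]

# with tau = 2: dimension 2 starts at data point 1 + tau, dimension 3 at 1 + 2*tau
states = delay_embed(list(range(10)), m=3, tau=2)
print(states[0])  # (0, 2, 4)
```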

  28. How to determine the embedding lag? We saw that the autocorrelation function is not very helpful when you are dealing with long-range correlations in the data.

  29. Lorenz system: determining the embedding lag. Average mutual information: use the first local minimum.
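A histogram-based sketch of the average-mutual-information criterion (the bin count and all helper names are my assumptions, not from the slides):

```python
import math
from collections import Counter

def ami(x, lag, bins=16):
    """Average mutual information (in nats) between x(t) and x(t + lag),
    estimated from a simple equal-width histogram."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / bins or 1.0
    idx = [min(int((v - lo) / width), bins - 1) for v in x]
    a, b = idx[:-lag], idx[lag:]
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * math.log((c / n) / ((pa[i] / n) * (pb[j] / n)))
               for (i, j), c in pab.items())

def first_local_minimum(vals):
    """0-based index k of the first dip: vals[k-1] > vals[k] <= vals[k+1]."""
    for k in range(1, len(vals) - 1):
        if vals[k - 1] > vals[k] <= vals[k + 1]:
            return k
    return None

x = [math.sin(0.1 * t) for t in range(2000)]
curve = [ami(x, lag) for lag in range(1, 40)]
# the embedding lag is 1 + the index of the first local minimum of the AMI curve
```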

  30. How many dimensions? Determine the embedding dimension (m).
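The slide does not say how m is determined; one common criterion is the false-nearest-neighbours method of Kennel and colleagues, sketched here in brute force (the function name and the tolerance r_tol = 10 are assumptions):

```python
import math

def fnn_fraction(x, m, tau, r_tol=10.0):
    """Fraction of false nearest neighbours when going from m to m + 1
    dimensions: a neighbour is 'false' if the extra delay coordinate
    pushes it away by more than r_tol times the m-dimensional distance.
    Brute-force O(n^2) sketch, fine for short series."""
    n = len(x) - m * tau          # leave room for the (m+1)-th coordinate
    pts = [[x[t + j * tau] for j in range(m)] for t in range(n)]
    false = 0
    for i in range(n):
        # nearest neighbour of point i in the m-dimensional embedding
        j = min((k for k in range(n) if k != i),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(pts[i], pts[k])))
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(pts[i], pts[j]))) or 1e-12
        if abs(x[i + m * tau] - x[j + m * tau]) / d > r_tol:
            false += 1
    return false / n

x = [math.sin(0.3 * t) for t in range(200)]
# a clean sine needs only 2 dimensions, so the FNN fraction should drop at m = 2
```

One would increase m until the fraction drops to (near) zero.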
