  1. Judgment in forecasting Nigel Harvey University College London ISF Thessaloniki 2019

  2. Applied psychology in forecasting research: Topics I'll cover
  • Applied psychology focusses on task performance: a) the characteristics of performance, b) factors that affect performance, and c) methods for improving performance.
  • Forecasting has been subjected to this approach (mainly by management scientists) because every stage of the forecasting process involves some judgment. The main tasks are a) judgmental forecasting and b) judgmental adjustment of statistical forecasts.
  • As in most applied science, experimental methods are important. Experiments are models of the real system. We find things out about the model, see if they are true of the real system, and, if not, include more features of the system in the model until they are true.

  3. Applied psychology: A little institutional history
  • In the UK, research into applied psychology was funded in the 1940s by the government to meet the needs of the war effort. Research, led by Kenneth Craik at the Cambridge Applied Psychology Unit, focussed on tracking tasks (target acquisition, piloting) and vigilance tasks (radar).
  • Despite the applied orientation, theoretical advances were made.
  • They found that human identification and anticipation of signals in continuous tracking tasks improves with practice: people start by correcting position errors and then successively learn to eliminate errors in velocity, acceleration, and jerk.

  4. Applied psychology: A little personal history
  • I took over George Drew's motor skills teaching at UCL (1976).
  • I studied step tracking: people saw an array of windows, moved a cursor to where they judged the next signal would be, corrected if necessary, and repeated. An AR(2) algorithm produced the signal; its parameters were set to give a nondeterministic seasonal pattern.
  • To meet the needs of 1980s industry, UK funding moved from motor to cognitive skills. To obtain funding, I adapted my step-tracking program to examine judgmental forecasting. Participants produced 100 forecasts from a rolling window of the signal, then generated 100 instances of the signal themselves, and repeated. They acquired AR(1) and then AR(2) structure. But performance in the forecasting and generating tasks was uncorrelated.
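A signal of the kind described above can be sketched with a second-order autoregressive recursion: complex conjugate roots give a pseudo-periodic, "seasonal-looking" but nondeterministic pattern. This is a minimal sketch; the root radius, period, and noise level are illustrative assumptions, as the slide does not give the original parameters.

```python
import math
import random

def ar2_series(n, r=0.95, period=12, noise_sd=1.0, seed=42):
    """Generate an AR(2) series whose complex conjugate roots (radius r,
    angle 2*pi/period) produce damped oscillations of roughly `period`
    steps, driven by Gaussian noise. Parameters are illustrative, not
    those of the original experiment."""
    rng = random.Random(seed)
    a1 = 2 * r * math.cos(2 * math.pi / period)  # sum of the roots
    a2 = -r * r                                  # minus their product
    x = [0.0, 0.0]
    for _ in range(n):
        x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0, noise_sd))
    return x[2:]

series = ar2_series(100)
```

With r close to 1 the process stays stationary but shows a persistent quasi-seasonal cycle; lowering r makes the oscillation die out faster between noise shocks.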

  5. Business forecasting: Changing forecasting practice

                                                    Fildes & Goodwin (2007)   Fildes & Petropoulos (2014)
  Judgment alone                                    25%                       15.6%
  Statistics alone                                  25%                       28.7%
  Average of judgmental and statistical forecasts   17%                       18.5%
  Judgmental adjustment of statistical forecasts    34%                       37.1%

  • "Statistics" may be basic use of Excel.
  • There have been many other surveys, reviewed by De Baets (2019).
  • Pure judgmental forecasting had become considerably less common and each of the other three approaches slightly more common. Here I consider a) judgmental forecasting, b) judgmental adjustment, and c) judgmental selection of forecasting models.

  6. Judgmental forecasting

  7. Characteristics of performance in judgmental forecasting tasks
  • Trend damping (e.g., Bolger & Harvey, 1993; Eggleton, 1982; Keren, 1983; Lawrence & Makridakis, 1989).
  • Misperception of sequential dependence (e.g., Bolger & Harvey, 1993; Eggleton, 1982; Reimers & Harvey, 2011).
  • Framing effects: over-forecasting of desirable variables and under-forecasting of undesirable ones (e.g., Eggleton, 1982; Harvey & Reimers, 2013; Lawrence & Makridakis, 1989).
  • Addition of noise to forecasts (e.g., Harvey, 1995; Harvey, Ewart & West, 1997).

  8. Trend damping
  • Forecasts for upward trends are too low; those for downward ones are too high. (This is not a context effect: it occurs with a single trial.)
  • Effects are greater with positively accelerated functions but they are clearly present with linear ones.
  • For very shallow trends, the opposite effect (anti-damping) can occur.
  • Two broad accounts: (1) under-adjustment from the anchor provided by the last data point (which lies on the trend line on average); (2) people expect, from their real-life experience with the ecology, that accelerating trends will become sigmoid or will be part of long-term cycles.
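The first account above (anchoring with under-adjustment) can be sketched as a simple weighted blend of the last observation and the full trend extrapolation. The weight is a hypothetical illustration, not an estimate from the literature.

```python
def anchored_forecast(last, trend_point, w=0.7):
    """Anchor-and-adjustment sketch of trend damping: the forecast
    starts at the last data point (the anchor) and moves only a
    fraction w of the way toward the full trend extrapolation.
    Any w < 1 yields damped forecasts for trended series.
    The value 0.7 is an illustrative assumption."""
    return last + w * (trend_point - last)

# Upward linear trend with slope +10 per period: last point 200,
# so the correct one-step extrapolation is 210.
print(anchored_forecast(200, 210))  # 207.0 -- below the trend line
print(anchored_forecast(200, 190))  # 193.0 -- above a downward trend line
```

The same mechanism produces both signatures on the slide: under-forecasting of upward trends and over-forecasting of downward ones.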

  9. Trend damping: typical data series and judgmental forecasts

  10. Typical damping with accelerating upward trend
  [Figure: two panels plotting Prediction against Time.]

  11. Anti-damping found with shallow decelerating trend
  [Figure: two panels plotting Prediction against Time.]

  12. Misperception of sequential dependence
  • People forecast from un-trended independent series as if they see some sequential dependence in them. Forecasts should lie on the mean, but they are much too close to the last data point.
  • With high degrees of sequential dependence (e.g., an AR(1) coefficient of 0.8), the opposite effect occurs: forecasts are too far from the last data point.
  • Under-adjustment from the anchor provided by the last data point can explain the error when sequential dependence is absent or low, but not when it is high.
  • Real-life data series tend to show modest autocorrelation. People use this ecological knowledge as an a priori hypothesis and adjust from it on the basis of limited and noisy data series.
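The benchmark against which these biases are measured is the optimal one-step forecast for a stationary AR(1) series, which regresses from the last observation toward the series mean by the autocorrelation coefficient. A minimal sketch (the series mean and last value below are made-up numbers):

```python
def ar1_forecast(last, mean, phi):
    """Optimal one-step forecast for a stationary AR(1) series:
    move from the mean toward the last observation by factor phi."""
    return mean + phi * (last - mean)

# Independent series (phi = 0): the optimal forecast sits on the mean,
# yet judgmental forecasts land much closer to the last data point.
print(ar1_forecast(120, 100, 0.0))  # 100.0

# Strongly dependent series (phi = 0.8): the optimal forecast stays
# near the last point, yet judgmental forecasts fall short of it.
print(ar1_forecast(120, 100, 0.8))  # 116.0
```

Behaving as if phi took some intermediate value regardless of the true coefficient reproduces both errors: too close to the last point when phi is low, too far from it when phi is high.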

  13. Misperception of sequential dependence: typical data series and judgmental forecasts

  14. Measuring the perceived autocorrelation implied by participants’ forecasts
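One way to obtain an implied autocorrelation of the kind this slide describes is to invert the AR(1) forecasting rule, forecast = mean + phi × (last − mean), and solve for phi. This is a sketch of that inversion; the example numbers are hypothetical, and the slide does not specify that exactly this estimator was used.

```python
def implied_autocorrelation(forecast, last, mean):
    """Back out the AR(1) coefficient a forecast implies by inverting
    forecast = mean + phi * (last - mean).
    Undefined when the last observation equals the mean."""
    return (forecast - mean) / (last - mean)

# Series mean 100, last point 120, participant forecasts 108:
# the forecast behaves as if the autocorrelation were 0.4.
print(implied_autocorrelation(108, 120, 100))  # 0.4
```

Computing this quantity per forecast and plotting its cumulative distribution for series with different true coefficients yields a picture like the one on the next slide.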

  15. Misperception of sequential dependence: cumulative distribution
  [Figure: cumulative frequency plotted against the implied autocorrelation of participants' forecasts, for series with true autocorrelation 0.0, 0.4, and 0.8.]

  16. Framing effects
  • People tend to over-forecast from a series when it is labelled as representing something desirable (e.g., profits) but to under-forecast from exactly the same series when it is labelled as representing something undesirable (e.g., losses).
  • The most likely explanation is that people are unknowingly affected by optimism (Weinstein, 1980).
  • However, it can also be argued that people expect action to be taken to reverse upward trends of undesirable quantities and downward trends of desirable ones (O’Connor, Remus & Griggs, 1997).

  17. Framing effects: ‘profit’ versus ‘loss’ labels

  18. Addition of noise to forecasts
  • When people make forecasts from noisy series, their forecasts are scattered around the line representing optimal forecasts.
  • This occurs even though forecasters know they should forecast the most likely value of the true outcome rather than reproduce the sort of noisy series that will appear once the outcomes are known. The effect has been found with forecasters who are familiar with regression.
  • It is possible that people see illusory patterns in the noise and attempt to take those apparent patterns into account when making their forecasts.

  19. Appropriate forecasts from noisy series

  20. More noise is added when series are noisier

  21. Expectations for real series
  • The findings above demonstrate the effects with artificial time series generated via an algorithm. Would they appear for real series? Lawrence & O’Connor (1995) found that, on average, forecasts were not too close to the last data point with real series.
  • This is exactly what would be expected from the ecological account of the ‘biases’ that have been found. Had Lawrence and O’Connor (1995) looked only at real series that showed independence, they would have found forecasts too close to the last data point. Had they looked only at real series with strong sequential dependence, they would have found forecasts too far from the last data point. But averaged across all the real series (representative of the ecology), there was no overall bias.

  22. Some factors affecting judgmental forecasting performance
  • Prior forecasting context (Harvey & Reimers, 2013; Reimers & Harvey, 2011).
  • Graphical or tabular format used to present data series (Harvey & Bolger, 1996; Lawrence, 1983; Lawrence, Edmundson & O’Connor, 1985; Theocharis, Smith & Harvey, 2019).
  • Length of data series (Andersson et al., 2012; Lawrence & O’Connor, 1992; Theocharis & Harvey, 2019; Wagenaar & Timmers, 1978).
  • Order in which a sequence of forecasts is made (Theocharis & Harvey, 2016).
