Part 2: Generations of Projections – IPCC Model Projections over Time


  1. Case Study 4 Morning Session Part 2

  2. Generations of Projections

  3. IPCC Model Projections over time… (all models, all experiments) [Chart: Sudbury Annual Temperature Change for the 2050s (from the 1981-2010 baseline); labeled values of +1.9°C and +2.6°C on a 0-3°C axis; generations: SAR 1995 (11 models), TAR 2001 (14), AR4 2007 (171), AR5 2013 (208)]

  4. IPCC Model Projections over time… (all models, all experiments) [Chart: Sudbury Annual Precipitation Change (%) for the 2050s (from the 1981-2010 baseline); labeled values of +4.5% and +6.3% on a 0-7% axis; generations: SAR 1995 (11 models), TAR 2001 (14), AR4 2007 (171), AR5 2013 (208)]

  5. Model CO2 sensitivity – the link between GHGs and change. It's been relatively steady – when we double CO2, models over time generate a similar temperature change. Have we been wrong for 100 years? Source: Maslin and Austin, 2012

  6. Why are there even differences between models? • Spatial resolution – as we have seen • Different sensitivities to GHG forcing (not all are as ‘reactive’ to a given RCP) • Assumptions about land surface type • Assumptions about elevation • Boundary conditions and initial conditions • Varying degrees of complexity in oceanic and atmospheric physics and their coupling (always a trade-off between complexity and computational requirements) • The BIGGIE – how each model deals with sub-grid (within a single grid cell) processes like SNOW, ICE, SOIL LAYERS (number, character), and CLOUDS

  7. Model Ensembles And Uncertainty

  8. Ensemble considerations for practitioners • AVAILABILITY of the data / processing demands! • The use of a limited number of models or scenarios provides no information about the uncertainty involved in climate modelling – ensembles can help • Although each GCM represents the ‘best effort’ of its modelling centre, there are biases • The use of an ensemble (mean/median) of models tends to converge on a ‘best estimate’ by reducing the strong biases of single models • There are alternatives to ensembles as well, which will be demonstrated – it depends on the stakeholder demands • The IPCC is very clear that the use of a limited number of models is not recommended for decision-making (Guidance Document)

  9. Research studies support ensembles. Most of the current published literature considers the multi-model approach: • IPCC-TGICA, 2007: General Guidelines on the Use of Scenario Data for Climate Impact and Adaptation Assessment. Version 2. Prepared by T.R. Carter on behalf of the Intergovernmental Panel on Climate Change, Task Group on Data and Scenario Support for Impact and Climate Assessment, 66 pp. • Gleckler, P. J., K. E. Taylor, and C. Doutriaux (2008): Performance metrics for climate models. Journal of Geophysical Research, Vol. 113, D06104. • IPCC Expert Meeting on Assessing and Combining Multi Model Climate Projections, Boulder, Colorado, USA, 25-27 January 2010. http://www.ipcc.ch/pdf/supporting-material/IPCC_EM_MME_GoodPracticeGuidancePaper.pdf Discussion: Should we ‘weight’ the models, or treat them all equally? What are the pros and cons? (See the sketch below.)
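To make the weighting discussion concrete, here is a minimal sketch contrasting an equal-weight ensemble mean with a skill-weighted one. The per-model deltas, the RMSE scores, and the inverse-RMSE weighting scheme are illustrative assumptions only, not values or a method from this presentation.

```python
# Minimal sketch: equal-weight vs skill-weighted ensemble means.
# All numbers below are invented for illustration.
import numpy as np

deltas = np.array([2.1, 2.6, 3.0, 2.4, 3.4])  # hypothetical per-model 2050s T deltas (degC)
rmse = np.array([0.8, 0.5, 1.2, 0.6, 1.5])    # hypothetical historical RMSE per model

# Treat-all-equally option: simple average.
equal_mean = deltas.mean()

# One commonly debated option: weight each model by inverse RMSE, so models
# that better reproduce the historical climate count for more.
weights = (1.0 / rmse) / (1.0 / rmse).sum()
weighted_mean = np.sum(weights * deltas)

print(f"Equal weights: {equal_mean:+.2f} degC")
print(f"Skill weights: {weighted_mean:+.2f} degC")
```

Either way, the choice of weighting scheme is itself a source of uncertainty, which is part of the "good and bad" raised in the discussion question.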

  10. Why the Emphasis on Ensembles? • Researchers have shown that this approach (averaging many estimates) best fits the historical climate – better than the use of any single model • The use of a single model ‘puts all our eggs in one basket’ – we are assuming one model is correct • The use of multiple models allows us to obtain some indication (but a PROXY only) of model uncertainty – whether the model estimates are all ‘close’ (greater confidence) or ‘spread out’ (less confidence) • The approach has a long, proven track record in weather forecasting (forecasters look at all models to inform their final decision) • Although each modelling centre makes a best effort – there are biases • So there is an assumption of ‘convergence’ on a best estimate using ensembles, by removing strong single-model biases (cancellation)

  11. The Ensemble Estimate – the value of the ensemble. https://www.youtube.com/watch?v=j__w-7s8GPo Source: Nat’l Geo – Brain Games

  12. Why is the Ensemble not a direct measure of uncertainty? • The ensemble method provides a direct measure of MODEL agreement, not of climate projection uncertainty itself, although we would suspect the two are related • The standard deviation of model results over each gridcell can give us an INDICATION or CHARACTERIZATION of model certainty/uncertainty: areas of low SD = areas of higher model agreement; areas of high SD = areas of lower model agreement (see the sketch below) • But what if everyone is wrong? • A question of PRECISION (models agree) vs ACCURACY (models correctly define the climate) • We know models are not ideal – they must PARAMETERIZE real life, and they simplify some processes
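As a concrete illustration of the SD-as-agreement idea, here is a minimal sketch assuming the per-model projections have already been regridded to a common grid; the model count, grid size, and delta values are synthetic placeholders, not real GCM output.

```python
import numpy as np

# Hypothetical: 30 models, each providing a 2050s temperature-delta grid.
n_models, ny, nx = 30, 90, 180
rng = np.random.default_rng(0)
projections = 2.5 + rng.normal(0.0, 0.7, size=(n_models, ny, nx))

ens_mean = projections.mean(axis=0)  # per-gridcell 'best estimate'
ens_sd = projections.std(axis=0)     # per-gridcell model-agreement proxy

# Low SD = higher model agreement; high SD = lower model agreement.
# Note this measures PRECISION among models, not ACCURACY against reality.
print(f"gridcell (0, 0): mean {ens_mean[0, 0]:+.2f} degC, SD {ens_sd[0, 0]:.2f} degC")
```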

  13. Poll Two…

  14. Precision versus Accuracy. POLL QUESTION 2: Where do YOU think we are with climate models? [Figure: four target diagrams, A, B, C, and D, illustrating the combinations of precision and accuracy. Source: NOAA]

  15. How Good is the Ensemble Anyway? • We can test this using historical gridded observed datasets at a similar resolution (like NCEP) • Acknowledge that the models are developed/calibrated against these datasets, so the test is not completely independent! [Maps: MEAN ANNUAL TEMPERATURE (1981-2010), NCEP (National Centers for Environmental Prediction) vs the U.S. AR5 ensemble (all models/all model runs) – good agreement]

  16. How Good is the Ensemble Anyway? Precipitation (shown as average mm/day) is NOT as successful as temperature – but this is commonly found. [Maps: MEAN ANNUAL PRECIPITATION (1981-2010), NCEP (National Centers for Environmental Prediction) vs the U.S. AR5 ensemble (all models/all model runs)] A simple scoring sketch follows.
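A minimal sketch of this kind of test, assuming the ensemble climatology and the observed (NCEP-style) climatology are already on the same grid; both fields here are synthetic stand-ins, and the scores shown (mean bias, RMSE) are generic choices rather than the presentation's exact metrics.

```python
import numpy as np

ny, nx = 90, 180
rng = np.random.default_rng(1)
obs_clim = rng.normal(8.0, 10.0, size=(ny, nx))            # fake gridded observed 1981-2010 mean T
ens_clim = obs_clim + rng.normal(0.3, 1.0, size=(ny, nx))  # fake AR5 ensemble-mean climatology

# Simple agreement scores between the ensemble and the observations.
bias = (ens_clim - obs_clim).mean()
rmse = np.sqrt(((ens_clim - obs_clim) ** 2).mean())
print(f"mean bias: {bias:+.2f} degC, RMSE: {rmse:.2f} degC")
# Caveat from the slide: models are developed/calibrated against such
# datasets, so this is not a fully independent test.
```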

  17. Models vs Observations – box plots from RSI Analytics. [Box plots: 1981-2010 mean temperature, observed and the AR5 ensemble; bioclimate location: Geraldton, ON. Good fit.]

  18. Models vs Observations – box plots from RSI Analytics. [Box plots: 1981-2010 precipitation, observed and the AR5 ensemble; bioclimate location: Geraldton, ON. Not as good a fit in some months.]

  19. What about GCM vs RCM projections (Dynamical downscaling) vs Statistical downscaling? • Statistical downscaling can be useful when we have good site-specific observational data to relate to model projections • It requires customized input datasets, and is generally available for only a small subset of models • It includes the ‘weather typing’ technique – a relationship between local weather events (e.g., an ice storm) and the large-scale circulation (if this large-scale situation is found, it is ASSOCIATED with these types of events; a minimal sketch follows) • Both general techniques (Dynamical and Statistical) can be complementary, but the IPCC relies on models (Dynamical)
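A minimal sketch of the weather-typing idea, using entirely synthetic data: days are assigned to a handful of large-scale circulation "types", then the frequency of a local event (here a hypothetical ice-storm flag) is tabulated per type. The random "centroids" stand in for what a real study would derive with a clustering method applied to actual reanalysis fields.

```python
import numpy as np

rng = np.random.default_rng(2)
n_days = 3650
slp_patterns = rng.normal(size=(n_days, 50))  # fake daily sea-level-pressure fields (flattened)
ice_storm = rng.random(n_days) < 0.02         # fake local event record

# Crude 'typing': assign each day to the nearest of 4 random centroids
# (a real study would cluster actual large-scale fields, e.g. with k-means).
centroids = rng.normal(size=(4, 50))
types = np.argmin(((slp_patterns[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)

# The association step: how often does the event occur under each type?
for t in range(4):
    freq = ice_storm[types == t].mean()
    print(f"type {t}: ice-storm frequency {freq:.3f}")
```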

  20. Global versus Regional Models in Ontario

  21. Comparison: GCM vs RCM T Deltas for Ontario • Look at 3 projections (using RCP8.5, 2050s difference from the 1981-2010 annual temperature): [Maps: CanRCM4 high-res output (25 km) contoured, CanRCM4 high-res output (50 km) contoured, and CanESM2 GCM output (~200 km) contoured; labeled deltas of +5.6/+3.5, +5.4/+3.6, and +5.4/+3.5 respectively]

  22. A Test – What if we apply the GCM delta to a hi-res baseline? If we remove the CanRCM4 bias, we get a close match. [Maps: CanRCM4 mean T for the 2050s with bias removed vs CanGRD with the GCM delta T for the 2050s]

  23. A Test – What if we apply the GCM delta to a hi-res baseline? • The take-away? • This method is an efficient way to incorporate both BIAS correction (we use a REAL observed baseline) AND the many GCM projections available • The previous example uses just the Canadian GCM – but any of the AR5 models could supply the ‘delta’ • The projection was different because the CanRCM4 model is too warm even historically – so we get a ‘too warm’ 2050s temperature • This could very likely be improved IF we apply the GCM ensemble instead of a single model (this is the typical RSI methodology; a minimal sketch follows)
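A minimal sketch of the delta approach described above, assuming a hi-res observed baseline grid and a coarse GCM change field; the arrays, the +3.5°C delta, and the nearest-neighbour regridding are placeholders (a real workflow would use proper bilinear or conservative regridding and, ideally, the ensemble-mean delta).

```python
import numpy as np

rng = np.random.default_rng(3)
# Fake hi-res observed baseline (CanGRD-style gridded 1981-2010 mean T, degC).
obs_baseline = rng.normal(2.0, 5.0, size=(200, 200))
# Fake coarse GCM change signal for the 2050s (+3.5 degC everywhere).
gcm_delta_coarse = np.full((4, 4), 3.5)

# Crude nearest-neighbour 'regrid' of the coarse delta onto the hi-res grid.
delta_hires = np.kron(gcm_delta_coarse, np.ones((50, 50)))

# Because the baseline is observed, any constant model bias cancels out:
# only the modelled CHANGE (the delta) is used.
future_2050s = obs_baseline + delta_hires
print(future_2050s.shape)
```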

  24. Using the Data - Best Options

  25. Using the Data – Best Options • Look primarily at the RCP4.5 (low) and RCP8.5 (business-as-usual) emission projections • If you can use only one, the most likely choice would be RCP8.5 • Apply all the models you can! • Look at the model CHANGES from the baseline (deltas), since any individual model is likely biased • If you do consider a single model, can it at least be characterized within the entire model collection? • It is best NOT to combine emission scenarios – consider them as separate possibilities (high and low) • Monthly output is generally available – daily output often is not, and requires large storage/computation • If daily data are required (for a hydrological model?), it may be a good approximation to apply monthly deltas to observed daily data to generate proxy future data (sketched below)
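A minimal sketch of the daily-proxy trick in the last bullet: each observed day is shifted by its month's ensemble delta to approximate a future daily record. The daily series and the monthly deltas are invented for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical observed daily temperatures for one year.
dates = pd.date_range("2000-01-01", "2000-12-31", freq="D")
obs_daily = pd.Series(np.random.default_rng(4).normal(5.0, 10.0, len(dates)), index=dates)

# Hypothetical monthly ensemble deltas for the 2050s (degC), Jan..Dec.
monthly_delta = {1: 4.1, 2: 3.8, 3: 3.2, 4: 2.6, 5: 2.3, 6: 2.2,
                 7: 2.4, 8: 2.5, 9: 2.7, 10: 2.9, 11: 3.4, 12: 3.9}

# Shift each observed day by its month's delta to build a proxy future series.
deltas = obs_daily.index.month.map(monthly_delta).to_numpy()
future_daily = obs_daily + deltas
print(future_daily.head())
```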

  26. Characterizing Uncertainty

  27. Using the Data – Characterizing Uncertainty • Obtain the data • Regrid the models to a common resolution • Use all model runs for each RCP (the RCP4.5 ensemble and the RCP8.5 ensemble):

                       RCP4.5 Ensemble   RCP8.5 Ensemble
     N                              95                84
     Mean Delta                   +2.5              +3.7
     Std. Dev.                    0.68              0.80
     5th Percentile                1.5               2.4
     95th Percentile               3.8               5.1
     Max Value                     4.4               5.4
     Min Value                     1.1               2.3
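A minimal sketch of how the table's statistics would be computed from the per-run deltas; the synthetic "deltas" vector stands in for the 95 RCP4.5 run deltas.

```python
import numpy as np

# Synthetic stand-in for the 95 RCP4.5 model-run deltas (degC).
deltas = np.random.default_rng(5).normal(2.5, 0.68, size=95)

summary = {
    "N": deltas.size,
    "Mean Delta": deltas.mean(),
    "Std. Dev.": deltas.std(ddof=1),
    "5th Pctile": np.percentile(deltas, 5),
    "95th Pctile": np.percentile(deltas, 95),
    "Max Value": deltas.max(),
    "Min Value": deltas.min(),
}
for name, value in summary.items():
    print(f"{name:>12}: {value}" if name == "N" else f"{name:>12}: {value:+.2f}")
```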

  28. Using the Data – Characterizing Uncertainty • So we must add the observed StD to the ensemble model projection StD • If we are using JUST ONE MODEL (no ensemble), then the model ideally includes the observed StD • Under a Normal distribution: +/- 1 StD = 68% probability, +/- 2 StD = 95% probability [Chart: Toronto Regional Average mean annual temperature (°C), RCP4.5 and RCP8.5 projections, comparing the observed StD range with the observed StD PLUS the ensemble StD range]
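A minimal sketch of one way to combine the two spreads: if the observed interannual variability and the ensemble projection spread are treated as independent, their variances add. That independence assumption, and all the numbers below, are illustrative; the slide itself only says the two StDs must be considered together.

```python
import numpy as np

obs_sd = 0.9        # hypothetical observed interannual StD of mean annual T (degC)
ensemble_sd = 0.8   # hypothetical ensemble StD of the 2050s delta (degC)

# Assuming independence, variances add:
combined_sd = np.sqrt(obs_sd**2 + ensemble_sd**2)

# Under a Normal distribution: +/- 1 StD ~ 68%, +/- 2 StD ~ 95%.
mean_future = 12.0 + 3.7  # hypothetical baseline mean plus ensemble mean delta
low, high = mean_future - combined_sd, mean_future + combined_sd
print(f"~68% range: {low:.1f} to {high:.1f} degC")
```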

  29. Using the Data – Characterizing Uncertainty • Not unexpectedly, the ensemble StD increases going forward in time – just like weather forecasts [Charts: temperature change (°C) distributions for the 2020s, 2050s, and 2080s projections – ALL RCPs]
