Ensemble forecast system design for high-impact weather prediction – PowerPoint PPT Presentation

SLIDE 1

Ensemble forecast system design for high-impact weather prediction applications

Glen Romine, NCAR MMM/IMAGe

Acknowledgements: NCAR ensemble team: Craig Schwartz, Ryan Sobash, and Kate Fossell

ICAS 2017: 9/12/2017

Support: NCAR’s MMML and CISL; NCAR Short-Term Explicit Prediction program; NOAA: NA15OAR4590191, NA17OAR4590114
Collaborators/contributors: Ryan Torn, Morris Weisman, Dave Ahijevych, Davide Del Vento

SLIDE 2

Focus is on deep moist convection hazards

e.g., the system design in this talk is geared toward next-day prediction of severe weather hazards (tornadoes, large hail, damaging local winds, and flash flooding), though it can have utility for other weather hazards

SLIDE 3

CAM – convection-allowing model

  • Model forecast with horizontal spacing between adjacent grid boxes of 3-4 km or less
  • Capable of ‘resolving’ individual thunderstorms
  • a.k.a. convection-permitting

Why we might want a CAM forecast

  • Convective organization informs on primary threats
  • A more realistic propagation of weather systems (non-hydrostatic regime)
  • Seeking better guidance on high-impact weather hazards
  • Want an ensemble for a probability estimate of threats

Definition + motivation

SLIDE 4

Ingredients-based forecasting of severe storms from mesoscale model output (e.g., Stensrud et al. 1997): co-location of shear, instability, and triggered precipitation in a mesoscale model
Benefit – low computational cost
CON – coarse representation of convective events

Mesoscale model output for convective forecasting

16 May 2013 – 9 h RAP forecast

SLIDE 5

Simulated reflectivity – instantaneous precipitation rate, similar to observed weather radar product

Benefit – explicit representation of convection, easy interpretation of forecast mode (e.g., cells, lines, intensity)

CON – more resources, only one representation of event

CAM guidance for convective forecasting

16 May 2013 – 12 h WRF forecast from GFS analysis

“A shuffling zombie…”

– Tom Hamill, regarding guidance from deterministic forecasts

SLIDE 6

Benefit – information on the likelihood of convective mode and uncertainty in timing, location, and intensity
CON – more computing, difficult to look at every solution

CAM ensemble guidance for convective forecasting

16 May 2013 – 12 h WRF forecast from ensemble EnKF analysis

SLIDE 7

2013/05/16 00 UTC radar reflectivity (dBZ) over Texas and Oklahoma

CAM ensemble probabilistic guidance

Actual weather radar observations (merged)

SLIDE 8


Observed reflectivity > 45 dBZ only (black fill)
Ensemble member 9-hr forecast simulated reflectivity > 45 dBZ only (1st member, lavender)

CAM ensemble probabilistic guidance

SLIDE 9

Observed reflectivity > 45 dBZ only (black fill)
Ensemble member 9-hr forecast simulated reflectivity > 45 dBZ only (1st member, lavender)


CAM ensemble probabilistic guidance

SLIDE 10

Observed reflectivity > 45 dBZ only (black fill)
Ensemble member 9-hr forecast simulated reflectivity > 45 dBZ only (1st member, lavender; 2nd member, cyan)


CAM ensemble probabilistic guidance

SLIDE 11

Observed reflectivity > 45 dBZ only (black fill)
10 ensemble members' 9-hr forecast simulated reflectivity > 45 dBZ only (color fills; legend: members 1-10 and OBS)


CAM ensemble probabilistic guidance

SLIDE 12

9-hr forecast 30-member ensemble probability of simulated reflectivity > 45 dBZ


CAM ensemble probabilistic guidance

Caveat: We cannot verify probabilities (events are binary), only the statistical reliability, so we need to consider a large number of events
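To make the probability product concrete, here is a minimal sketch of how a gridpoint exceedance probability like the one above can be computed from an ensemble of simulated reflectivity fields; the array layout, function name, and synthetic data are assumptions for illustration, not the NCAR ensemble's actual post-processing code.

```python
import numpy as np

def ensemble_exceedance_probability(refl_members, threshold=45.0):
    """Fraction of ensemble members exceeding `threshold` at each grid point.

    refl_members : (n_members, ny, nx) simulated reflectivity (dBZ), one 2-D
                   field per member (assumed layout).
    Returns an (ny, nx) array of raw gridpoint probabilities in [0, 1].
    """
    exceed = refl_members > threshold   # boolean event field per member
    return exceed.mean(axis=0)          # member fraction = ensemble probability

# Demo with synthetic data standing in for a 30-member CAM ensemble
rng = np.random.default_rng(0)
fake_refl = rng.uniform(0.0, 60.0, size=(30, 325, 415))
prob = ensemble_exceedance_probability(fake_refl, threshold=45.0)
print(prob.shape, float(prob.max()))
```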

SLIDE 13

Examples:
Reflectivity – familiar from radar depictions of severe weather
Accumulated precipitation – direct analog to observed event (e.g., flash flooding)
Example storm surrogates – derived information from convective objects in model simulations:
  • updraft speed
  • maximum hail size estimates
  • lightning flash rate
  • updraft helicity – indicates rotating convective storms
  • maximum surface wind speeds
  • low-level vorticity

CAM products for prediction of severe convection
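Among the surrogates listed above, updraft helicity is simply the vertical integral of updraft speed times vertical vorticity over a layer (commonly 2-5 km AGL). A minimal sketch follows; the array names, grid layout, and layer bounds are assumptions, not the NCAR ensemble's actual diagnostic code.

```python
import numpy as np

def updraft_helicity(w, zeta, z_agl, z_bot=2000.0, z_top=5000.0):
    """2-5 km updraft helicity: vertical integral of w * vertical vorticity.

    w, zeta : (nz, ny, nx) vertical velocity (m/s) and vertical vorticity (1/s)
              on the same model levels (assumed layout).
    z_agl   : (nz,) heights of those levels above ground (m).
    Returns an (ny, nx) field in m2/s2.
    """
    in_layer = (z_agl >= z_bot) & (z_agl <= z_top)
    integrand = np.where(w > 0.0, w, 0.0) * zeta   # count updrafts only
    # trapezoidal integration over the selected layer
    return np.trapz(integrand[in_layer], x=z_agl[in_layer], axis=0)
```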

SLIDE 14

Future: Object-based diagnostics, verification

UH object tracking algorithm

  • 1. Identify UH areas > 25 m2/s2 using watershed algorithm to produce objects
  • 2. Connect objects in time using overlap criteria (connect objects with max object overlap)

20 June 2015 Member 9 12Z - 12Z UH (shaded) with object tracks overlaid

Provided by R. Sobash
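A minimal sketch of the two-step tracking above: step 1 is approximated here with thresholded connected-component labeling standing in for the watershed segmentation, and step 2 matches each object to the previous-time object it overlaps most. The 25 m2/s2 threshold comes from the slide; the function names, grid shape, and synthetic demo fields are assumptions.

```python
import numpy as np
from scipy import ndimage

def identify_objects(uh, threshold=25.0):
    """Label contiguous regions where UH exceeds `threshold` (m2/s2).

    A simple stand-in for the watershed step; returns (label array, n_objects).
    """
    return ndimage.label(uh > threshold)

def match_by_overlap(labels_prev, labels_curr):
    """Connect objects in time: pair each current object with the previous
    object it shares the largest overlap with (step 2 above)."""
    matches = {}
    for obj in range(1, int(labels_curr.max()) + 1):
        prev_at_obj = labels_prev[labels_curr == obj]
        prev_at_obj = prev_at_obj[prev_at_obj > 0]   # overlapping prior objects
        if prev_at_obj.size:
            matches[obj] = int(np.bincount(prev_at_obj).argmax())
    return matches  # {current label: best-matching previous label}

# Demo on two synthetic hourly UH fields (the second is the first shifted east)
uh_t0 = np.zeros((325, 415))
uh_t0[100:120, 50:80] = 40.0     # storm object A
uh_t0[200:215, 300:330] = 35.0   # storm object B
uh_t1 = np.roll(uh_t0, 10, axis=1)
lab0, _ = identify_objects(uh_t0)
lab1, _ = identify_objects(uh_t1)
print(match_by_overlap(lab0, lab1))
```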

SLIDE 15

Figure panels: 0-1 km AGL storm-relative helicity (m2/s2); surface-based LCL (m); surface-based CIN (J/kg); number of storms; 1-km AGL vertical vorticity; Significant Tornado Parameter. Period: 30 April 2015 – 30 July 2015

Future: Object-based diagnostics, verification

Provided by R. Sobash

Low-level rotation potential surrogate for tornado prediction

Model environments of rotating storms (fortunately) look a lot like observation-based climatology

SLIDE 16

Skill threshold of surrogates varies with time….

Probabilities with a fixed threshold of updraft helicity are a useful predictor of reports of severe weather during springtime

SLIDE 17

Skill threshold of surrogates varies with time….

A lower threshold would be more skillful during the fall and winter months

SLIDE 18

… varies in time, as well as in space

Gray – lower threshold needed
Red – higher threshold needed

Bias from the climatologically skillful updraft helicity threshold (Sobash and Kain 2017)
Caveat – includes regional reporting bias in storm reports

SLIDE 19

NCAR’s real-time ensemble forecast system

NCAR ENSEMBLE, running since April 2015 – http://ensemble.ucar.edu

PRODUCT EXAMPLES

Ensemble mean: average forecast state from all ensemble members

  • smooth, ‘best forecast’

Probability matched mean: remapping of ensemble mean

  • improved magnitudes over ensemble mean, may be unrepresentative

Ensemble spread: variability metric among the member forecasts

  • representativeness of the ensemble mean

Ensemble max/min: shows the extreme values at a given location

  • quick look for high impact events, little information on likelihood

Paintball (spaghetti) plot: Gives location and structure information

  • overlap indicates qualitative agreement, single threshold shown

Postage stamp: small plots with full contour range for each forecast member

  • insight on member scenarios

Probability threshold: raw likelihood from ensemble of event occurrence

  • summary of ensemble information at a given point, limited skill on grid scale

Neighborhood probability threshold: relaxes event occurrence to local area

  • better representation for extreme events
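A rough sketch of the neighborhood probability idea just described: for each member, an event anywhere within a local neighborhood of a point counts as an event at that point, and the member fraction is then the neighborhood probability. The square footprint, field names, and threshold are simplifying assumptions rather than the exact product code behind ensemble.ucar.edu.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def neighborhood_probability(field_members, threshold, radius_pts):
    """Neighborhood event probability on the model grid.

    field_members : (n_members, ny, nx) forecast fields, e.g. simulated
                    reflectivity in dBZ (assumed layout).
    threshold     : event threshold (e.g. 45.0 dBZ).
    radius_pts    : neighborhood half-width in grid points (a square footprint
                    for simplicity; circular footprints are also common).
    """
    size = 2 * radius_pts + 1
    hits = np.stack([
        maximum_filter(member > threshold, size=size)  # event anywhere nearby?
        for member in field_members
    ])
    return hits.mean(axis=0)  # fraction of members with an event in the neighborhood
```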

Forecasts are initialized from our own home-grown ensemble analyses

SLIDE 20

NCAR’s Data Assimilation Research Testbed (DART)

DART is a software environment for exploring ensemble data assimilation (DA) methods across a wide range of models – here we use it with NCAR’s Advanced Research WRF
The DART system provides a complete solution for generating an ensemble analysis (initial conditions) consistent with the forecast model
Confront the (imperfect) model with (imperfect) observations: DART provides rich diagnostics
The ensemble analysis provides a set of equally likely initial conditions

DART team: J. Anderson, N. Collins, T. Hoar, J. Hendricks, and G. Romine

A community facility for ensemble data assimilation

SLIDE 21

Continuous cycling is ‘best practice’
First guess (B) for analysis is a short forecast from the prior analysis
No ‘spinup’ needed – on the model attractor

For regional models, nearly all centers use ‘partial’ cycling, periodically replacing the background with one from another (often global) analysis

DA primer: continuously cycled analysis

Schematic: a short forecast from the prior analysis provides the background B; observations are assimilated to produce the new analysis, and the cycle repeats
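A bare-bones sketch of the cycling pattern above, with placeholder callables standing in for the short WRF forecast and the DART analysis step; both function names are hypothetical.

```python
def cycled_analysis(initial_ensemble, observation_batches, advance, assimilate):
    """Continuously cycled ensemble analysis.

    advance(ensemble, hours)   -> ensemble : short model forecast (background B)
    assimilate(ensemble, obs)  -> ensemble : ensemble analysis update
    Both callables are placeholders for the WRF and DART filter steps.
    """
    ensemble = initial_ensemble
    analyses = []
    for obs in observation_batches:              # e.g. one batch every 6 hours
        background = advance(ensemble, hours=6)  # first guess from prior analysis
        ensemble = assimilate(background, obs)   # confront model with observations
        analyses.append(ensemble)
    return analyses
```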

SLIDE 22

Wiring diagram of ensemble cycled analysis (DART): Xa = Xf + K[y0 – HXf]

Diagram: WRF members 1-3 provide the ensemble background (Xf); the DART filter combines the observations (y0) with the model estimate of the observations (HXf) through the Kalman gain (K) to produce the ensemble analysis (Xa), which each WRF member then integrates forward to the next cycle

A better WRF forecast means less adjustment needed by the analysis system

Analysis = background + analysis increment (Kalman gain x innovation)
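A toy numerical sketch of the update written above, forming the Kalman gain from ensemble-estimated covariances for a single scalar observation. This uses a simple perturbed-observation ensemble update for illustration, rather than the deterministic ensemble adjustment filter typically used in DART, and all variable names are assumptions.

```python
import numpy as np

def enkf_update_single_obs(X_f, H, y_obs, obs_err_var, rng):
    """Ensemble update X_a = X_f + K (y - H X_f) for one scalar observation.

    X_f : (n_state, n_members) ensemble background (assumed layout)
    H   : (n_state,) linear observation operator, so H @ X_f gives HXf
    """
    n_state, n_mem = X_f.shape
    Hx = H @ X_f                                   # model estimate of the ob, per member
    Xp = X_f - X_f.mean(axis=1, keepdims=True)     # state perturbations
    Hxp = Hx - Hx.mean()                           # obs-space perturbations
    PfHT = Xp @ Hxp / (n_mem - 1)                  # cov(state, HXf), shape (n_state,)
    HPfHT = Hxp @ Hxp / (n_mem - 1)                # var(HXf), scalar
    K = PfHT / (HPfHT + obs_err_var)               # Kalman gain
    y_perturbed = y_obs + rng.normal(0.0, np.sqrt(obs_err_var), n_mem)
    innovations = y_perturbed - Hx                 # per-member innovation
    return X_f + np.outer(K, innovations)          # ensemble analysis X_a

# Tiny demo: 5-element state, 20 members, observe the 2nd state element
rng = np.random.default_rng(2)
X_f = rng.normal(0.0, 1.0, size=(5, 20))
H = np.zeros(5)
H[1] = 1.0
X_a = enkf_update_single_obs(X_f, H, y_obs=1.5, obs_err_var=0.5, rng=rng)
print(X_a.mean(axis=1))   # analysis mean pulled toward the observation
```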

SLIDE 23

Ensemble analysis state update from an observation

Figure panels: observation and error pdf; ensemble mean vertical velocity; ensemble mean Vr
The prior forecast gives a first guess of observation values and the relation to the model states
Adapted from Snyder and Zhang (2003)

SLIDE 24

Ensemble analysis state update from an observation

Figure panels: updated ensemble mean w; updated ensemble mean Vr
Covariance is used to update the analysis from the newest set of observations

New estimate has smaller variance

SLIDE 25

Assimilation of conventional observations, forecast domain

Observations come from a variety of sources, not uniformly distributed in type or time
Each observation platform can have unique bias characteristics

SLIDE 26

NCAR Ensemble: Domain area

Analysis domain (80 members): 415 x 325 x 40 grid points on the 15-km grid
Forecast domains (10 members): 1581 x 986 grid points on the 3-km grid
GFS + perturbations provide the lateral boundary conditions

Analysis state size: 14 3D variables; 14 x 80 x 415 x 325 x 40 = ~6 B state elements on the 15-km domain

Update every 6 hrs: ~75k obs, 7-minute wall clock on 512 processors. Fast!
90% of the computing is in the analysis state forecasts
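As a quick check of the state-size arithmetic quoted above, a few lines of Python reproduce the count; the variable names are just for illustration.

```python
# Back-of-envelope check of the 15-km analysis state size quoted above
n_vars, n_members = 14, 80
nx, ny, nz = 415, 325, 40   # 15-km analysis grid dimensions (see above)
state_elements = n_vars * n_members * nx * ny * nz
print(f"{state_elements:,} elements (~{state_elements / 1e9:.1f} billion)")
# -> 6,042,400,000 elements (~6.0 billion)
```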

SLIDE 27

NCAR ensemble next-day hazard forecast skill

Good skill and reliability for next-day severe weather prediction (12-36 h)

SLIDE 28

Investigating model error with DA

Clear evidence of systematic model bias, though it also has spatial, diurnal, and seasonal dependence – how can we attribute the source?

SLIDE 29

Schematic (adapted from Cavallo et al. 2016): analysis cycles i and model advances t for a biased model; background states θ(i, t) drift away from the observed state θo(t) during each model advance (physics tendencies TEND1 … TENDm) and are pulled back toward it by the analysis increments INC1 … INCm

Physics tendency tracking for model improvement

A potential means to identify sources of systematic model bias using data assimilation
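A minimal sketch of the bookkeeping this implies: accumulate the physics tendency over each model advance and the analysis increment at each cycle, then compare their time means; where the mean increment systematically opposes the mean tendency of one scheme, that scheme is a candidate source of bias. Array names, shapes, and the correlation score are assumptions for illustration.

```python
import numpy as np

def bias_attribution(increments, tendencies):
    """Compare time-mean analysis increments with time-mean physics tendencies.

    increments : (n_cycles, ny, nx) analysis increments (INC) of a variable, e.g. theta
    tendencies : dict of scheme name -> (n_cycles, ny, nx) tendency (TEND) accumulated
                 over each model advance, same variable and units as the increments
    Returns the mean increment and, per scheme, the spatial correlation between the
    mean increment and minus the mean tendency (high values flag candidate sources).
    """
    mean_inc = increments.mean(axis=0)
    scores = {}
    for name, tend in tendencies.items():
        mean_tend = tend.mean(axis=0)
        # systematic compensation: increment ~ minus the tendency bias
        scores[name] = np.corrcoef(mean_inc.ravel(), -mean_tend.ravel())[0, 1]
    return mean_inc, scores
```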

SLIDE 30

Model physics spinup – large scale forcing dominates

From Kain et al. (2010) – precipitation areas need to be dynamically consistent with the ICs

SLIDE 31

Development plans in Ensemble DA

  • Leverage both DART (NCAR ensemble DA) and GSI (U.S. operational DA)
  • GSI for forward observation operators
  • Will monitor physics tendencies to reduce systematic bias
  • Full conterminous U.S. 3-km analysis
  • Analysis on convection-allowing grid (a.k.a. multi-scale initial conditions)
  • About 26X more computation needed

Reduction in spin-up errors

  • Assimilation of radar observations
  • More frequent cycling (hourly or more frequent updates)
  • Looking at GOES-16 – much larger data set!

Next-generation ensemble analysis and forecast system

Analysis state size (14 3D variables): 14 x 80 x 415 x 325 x 40 + 16 x 80 x 1581 x 986 x 40 = ~86 B state points for both 15- and 3-km domains; > 14x increase in size!

SLIDE 32

The End!

Thank you for your attention

SLIDE 33

Courtesy S. Ha

Regional model lateral boundary errors – via MPAS

SLIDE 34

MCS prediction – followed by examples of good and bad cases

CAM forecasts are sometimes very useful…