Gaussian Process Regression with Noisy Inputs
Dan Cervone
Harvard Statistics Department
March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Gaussian Process Regression with Noisy Inputs Dan Cervone Harvard - - PowerPoint PPT Presentation
Gaussian Process Regression with Noisy Inputs Dan Cervone Harvard Statistics Department March 3, 2015 Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015 Gaussian process regression Introduction A smooth
Harvard Statistics Department
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
k = (x(s∗ 1 ) . . . x(s∗ k ))′.
k|xn ∼ N
k, sn)C(sn, sn)−1xn,
k, s∗ k) − C(s∗ k, sn)C(sn, sn)−1C(sn, s∗ k)
GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
k, un).
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
u).
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
u).
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
u).
2)
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
KILE(s∗) be the KILE estimator for x(s∗) given xn. Then for any
KILE (s∗))2] ≥ E[(x(s∗) − ˆ
KILE(s∗))2]. Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
KILE(s∗) be the KILE estimator for x(s∗) given xn. Then for any
KILE (s∗))2] ≥ E[(x(s∗) − ˆ
KILE(s∗))2].
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
uIp),
u)p/2 exp
u
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
uIp),
u)p/2 exp
u
M
iid
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
nCθ(sn + un, sn + un)−1yn
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
nCθ(sn + un, sn + un)−1yn
nKθ(sn, sn)−1yn
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
nCθ(un)−1yn + const.
n − In
n − In
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x1[s1 = s2].
u).
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x1[s1 = s2].
u).
−4 −2 2 4 −4 −2 2 4 u1 u2
u = 2, σ2 x = 0.0001 −4 −2 2 4 −4 −2 2 4 u1 u2
u = 2, σ2 x = 1 −0.5 0.0 0.5 −0.5 0.0 0.5 u1 u2
u = 0.1, σ2 x = 0.0001
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x1[s1 = s2].
iid
uI2).
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x1[s1 = s2].
iid
uI2).
x
u
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x1[s1 = s2].
iid
uI2).
x
u
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x1[s1 = s2].
iid
uI2).
x
u
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x1[s1 = s2].
iid
uI2).
x
u
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 0.0001. Relative MSPE for KALE/KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.73 0.77 0.95 1.00 1.00 0.60 0.58 0.78 0.93 1.00 0.22 0.24 0.32 0.49 1.00 0.28 0.31 0.54 0.95 1.00 0.75 0.83 0.98 1.00 1.00 1.02 1.03 1.01 1.00 1.00 Relative MSPE for HMC/KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.87 0.83 0.90 1.00 1.00 0.78 0.69 0.81 0.91 1.00 0.71 0.78 0.72 0.82 0.99 0.93 0.89 0.94 0.98 1.00 0.95 0.95 0.97 1.00 1.00 0.96 0.96 0.99 1.00 1.00
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 0.01. Relative MSPE for KALE/KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 1.00 1.00 1.00 1.00 1.00 0.93 0.94 0.99 1.00 1.00 0.64 0.68 0.87 1.00 1.00 0.41 0.48 0.68 0.98 1.00 0.79 0.79 0.97 1.00 1.00 1.02 1.02 1.01 1.00 1.00 Relative MSPE for HMC/KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.96 0.99 1.00 1.00 1.00 0.92 0.93 0.99 1.00 1.00 0.77 0.71 0.87 0.99 1.00 0.91 0.94 0.93 0.99 1.00 0.95 0.95 0.98 1.00 1.00 0.96 0.96 0.99 1.00 1.00
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 0.1. Relative MSPE for KALE/KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 1.00 1.00 1.00 1.00 1.00 0.96 0.99 1.00 1.00 1.00 0.86 0.94 0.99 1.00 1.00 0.67 0.77 0.92 1.00 1.00 0.84 0.89 0.99 1.00 1.00 1.03 1.01 1.00 1.00 1.00 Relative MSPE for HMC/KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 1.00 1.00 1.00 1.00 1.00 0.95 0.98 1.00 1.00 1.00 0.81 0.89 0.98 1.00 1.00 0.94 0.92 0.97 1.00 1.00 0.95 0.96 0.98 1.00 1.00 0.96 0.97 0.99 1.00 1.00
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 1. Relative MSPE for KALE/KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00 0.92 0.96 1.00 1.00 1.00 0.97 0.98 1.00 1.00 1.00 1.01 1.01 1.00 1.00 1.00 Relative MSPE for HMC/KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 0.98 1.00 1.00 1.00 0.98 0.99 0.99 1.00 1.00 0.98 0.99 1.00 1.00 1.00 0.98 0.98 1.00 1.00 1.00
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
95% interval coverage for KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.33 0.44 0.66 0.91 0.95 0.11 0.15 0.31 0.71 0.95 0.04 0.07 0.12 0.32 0.92 0.41 0.45 0.66 0.90 0.95 0.84 0.88 0.93 0.95 0.95 0.95 0.95 0.95 0.95 0.95
x = 0.0001 95% interval coverage for KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.91 0.93 0.95 0.95 0.95 0.66 0.77 0.91 0.94 0.95 0.31 0.42 0.70 0.91 0.95 0.48 0.55 0.74 0.92 0.95 0.85 0.88 0.93 0.95 0.95 0.95 0.95 0.95 0.95 0.95
x = 0.01
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
95% interval coverage for KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.95 0.95 0.95 0.95 0.95 0.89 0.92 0.95 0.95 0.95 0.66 0.79 0.91 0.95 0.95 0.70 0.77 0.88 0.94 0.95 0.88 0.90 0.94 0.95 0.95 0.95 0.95 0.95 0.95 0.95
x = 0.1 95% interval coverage for KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.95 0.95 0.95 0.95 0.95 0.94 0.95 0.95 0.95 0.95 0.89 0.91 0.94 0.95 0.95 0.89 0.92 0.94 0.95 0.95 0.93 0.94 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95
x = 1
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x are unknown:
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x are unknown:
u)p/2 exp
u
u not identifiable.
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 0.0001. Relative MSPE for KALE/KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.98 0.97 1.00 1.00 1.00 1.02 0.97 1.00 1.00 1.00 0.88 0.95 0.97 1.00 1.00 1.00 0.97 0.98 0.99 1.00 0.98 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 Relative MSPE for HMC/KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.92 0.97 0.96 0.99 0.99 0.77 0.74 0.83 0.92 1.00 0.82 0.80 0.77 0.85 0.98 1.00 0.94 1.00 1.09 1.14 0.98 0.96 0.98 1.02 1.03 0.98 0.98 0.97 0.98 0.97
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 0.01. Relative MSPE for KALE/KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 1.03 1.03 1.00 1.00 1.00 1.01 0.97 1.00 1.00 1.00 0.90 0.98 0.99 1.00 1.00 0.99 0.97 0.98 0.99 1.00 0.99 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00 Relative MSPE for HMC/KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.98 1.06 1.05 1.05 1.05 0.77 0.92 0.95 0.99 0.99 0.79 0.75 0.88 0.98 0.99 0.93 0.98 0.99 1.03 1.06 0.98 0.96 0.99 1.01 1.05 0.99 0.97 0.98 0.97 0.97
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 0.1. Relative MSPE for KALE/KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 1.05 1.02 1.01 1.00 1.00 1.02 0.86 1.01 1.00 1.00 0.94 0.98 1.01 1.00 1.00 0.97 0.99 0.98 1.00 1.00 0.99 1.00 0.99 1.00 1.00 0.99 1.00 0.99 1.00 1.00 Relative MSPE for HMC/KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 1.31 1.14 1.16 1.18 1.15 0.85 0.97 0.98 0.99 0.99 0.82 0.87 0.97 0.99 0.98 0.99 0.97 0.98 0.98 0.98 0.97 0.95 0.97 1.00 1.00 0.98 0.95 0.97 0.98 0.96
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 1. Relative MSPE for KALE/KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 1.03 0.90 0.92 1.00 1.00 1.08 1.05 1.00 1.00 1.00 1.01 1.02 1.01 1.00 1.00 0.99 1.00 1.01 1.00 1.00 0.99 1.00 1.00 1.00 1.00 0.99 0.99 0.99 1.00 1.00 Relative MSPE for HMC/KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 1.58 1.35 1.35 1.21 1.56 1.17 1.10 1.28 1.11 1.13 0.91 0.95 1.05 1.06 1.01 0.96 0.94 0.96 0.96 0.98 0.95 0.94 0.94 0.94 0.95 0.94 0.95 0.96 0.95 0.96
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 0.0001.
95% interval coverage for KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.92 0.94 0.93 0.94 0.94 0.87 0.89 0.91 0.95 0.94 0.75 0.82 0.88 0.93 0.92 0.57 0.68 0.87 0.91 0.93 0.54 0.63 0.82 0.87 0.89 0.46 0.57 0.68 0.68 0.58 95% interval coverage for KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.97 0.99 0.96 0.95 0.94 0.95 0.95 0.96 0.98 0.94 0.93 0.94 0.95 0.97 0.94 0.75 0.82 0.93 0.94 0.93 0.62 0.71 0.87 0.88 0.89 0.46 0.57 0.69 0.68 0.58 95% interval coverage for HMC σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.98 0.98 0.96 0.96 0.96 0.94 0.96 0.96 0.97 0.96 0.94 0.94 0.95 0.97 0.96 0.90 0.92 0.94 0.95 0.95 0.88 0.89 0.91 0.90 0.92 0.87 0.87 0.87 0.86 0.86 Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 0.01.
95% interval coverage for KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.92 0.93 0.92 0.93 0.93 0.88 0.91 0.92 0.93 0.93 0.76 0.85 0.91 0.92 0.93 0.55 0.71 0.86 0.89 0.90 0.56 0.59 0.80 0.85 0.89 0.48 0.57 0.67 0.70 0.73 95% interval coverage for KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.94 0.94 0.93 0.93 0.93 0.94 0.95 0.94 0.94 0.93 0.92 0.94 0.96 0.94 0.93 0.70 0.84 0.92 0.92 0.90 0.58 0.64 0.85 0.86 0.89 0.43 0.57 0.68 0.70 0.73 95% interval coverage for HMC σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.97 0.97 0.97 0.97 0.97 0.96 0.96 0.96 0.96 0.96 0.94 0.95 0.96 0.95 0.95 0.91 0.93 0.93 0.94 0.94 0.88 0.89 0.91 0.91 0.90 0.87 0.87 0.87 0.87 0.86 Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 0.1.
95% interval coverage for KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.92 0.91 0.92 0.91 0.93 0.87 0.90 0.92 0.92 0.91 0.74 0.85 0.92 0.92 0.93 0.61 0.71 0.88 0.89 0.92 0.53 0.66 0.79 0.83 0.87 0.45 0.55 0.66 0.64 0.62 95% interval coverage for KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.93 0.92 0.92 0.91 0.93 0.91 0.92 0.92 0.92 0.91 0.88 0.93 0.94 0.93 0.93 0.71 0.82 0.93 0.90 0.92 0.53 0.71 0.82 0.84 0.87 0.45 0.56 0.66 0.63 0.62 95% interval coverage for HMC σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.98 0.98 0.98 0.97 0.97 0.98 0.97 0.97 0.96 0.96 0.95 0.96 0.96 0.96 0.96 0.92 0.93 0.95 0.94 0.95 0.90 0.90 0.92 0.91 0.92 0.88 0.88 0.88 0.87 0.87 Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x = 1.
95% interval coverage for KILE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.88 0.89 0.89 0.86 0.86 0.88 0.85 0.91 0.87 0.88 0.77 0.83 0.88 0.87 0.90 0.62 0.70 0.85 0.88 0.88 0.59 0.62 0.72 0.79 0.80 0.54 0.60 0.57 0.57 0.56 95% interval coverage for KALE σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 0.87 0.87 0.89 0.86 0.86 0.89 0.86 0.89 0.87 0.88 0.82 0.86 0.89 0.87 0.90 0.65 0.70 0.86 0.88 0.88 0.53 0.58 0.72 0.79 0.80 0.43 0.52 0.56 0.57 0.56 95% interval coverage for HMC σu
2
1e−04 0.01 0.1 0.5 1 0.001 0.01 0.1 0.5 1 2 β 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.98 0.98 0.98 0.98 0.98 0.98 0.98 0.97 0.96 0.97 0.97 0.96 0.96 0.96 0.94 0.95 0.95 Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
50 60 70 80 −100 100
longitude latitude −2 −1 1 temp
1Data available: http://www.cru.uea.ac.uk/cru/data/temperature/
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
50 60 70 80 −100 100
longitude latitude −2 −1 1 temp
1Data available: http://www.cru.uea.ac.uk/cru/data/temperature/
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
50 60 70 80 −100 100
longitude latitude −2 −1 1 temp
1Data available: http://www.cru.uea.ac.uk/cru/data/temperature/
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x1[s1 = s2]
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x1[s1 = s2]
u
1 cos2(φi)
u = 500 yields a 28km expected distance between s + u and s.
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
50 60 70 80 −100 100
longitude latitude −1 1 temp
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x
50 60 70 80 −100 100
longitude latitude −0.001 0.000 0.001 0.002 temp
50 60 70 80 −100 100
longitude latitude −0.002 −0.001 0.000 0.001 0.002 temp
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
50 60 70 80 −100 100
longitude latitude −1 1 temp
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
u = 0 model using HMC.
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
u = 0 model using HMC.
u = 500} − {σ2 u = 0} point predictions:
50 60 70 80 −100 100
longitude latitude −0.04 0.00 0.04 temp
u = 500} − {σ2 u = 0} interval lengths:
50 60 70 80 −100 100
longitude latitude −0.15 −0.12 −0.09 temp
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
u = 500} and {σ2 u = 0} models:
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
u = 500} and {σ2 u = 0} models: 10000 20000 30000 40000 50000 0e+00 1e−04 2e−04 3e−04
sig2u 500
u = 0} agrees with parameter inference from KALE/KILE.
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
u = 500} and {σ2 u = 0} models: 25 50 75 100 0.00 0.05 0.10 0.15
sig2u 500
u = 0} agrees with parameter inference from KALE/KILE.
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x is large.
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x is large.
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x is large.
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
x is large.
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
temperature site ice core tree ring varve Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
temperature site ice core tree ring varve
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
[1]
Modelling map positional error to infer true feature location. Canadian Journal of Statistics, 34(4):659–676, 2006. [2]
Methodologic issues and approaches to spatial epidemiology. Environ Health Perspect, 116(8):1105–1110, 2008. [3]
Positional accuracy of geocoded addresses in epidemiologic research. Epidemiology, 14(4):408–412, 2003. [4]
Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850. Journal of Geophysical Research: Atmospheres (1984–2012), 111(D12), 2006. [5]
Measurement error in nonlinear models: a modern perspective. CRC press, 2006. [6]
Spatial statistics in the presence of location error with an application to remote sensing of the environment. Statistical science, 18(4):436–456, 2003. [7]
Hemispheric and large-scale land-surface air temperature variations: An extensive revision and an update to 2010. Journal of Geophysical Research: Atmospheres (1984–2012), 117(D5), 2012. [8]
Nonstationary covariance models for global data. The Annals of Applied Statistics, pages 1271–1289, 2008. [9]
Model-based object tracking in traffic scenes. In Computer VisionECCV’92, pages 437–452. Springer, 1992. Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
[10]
Geocoding and monitoring of us socioeconomic inequalities in mortality and cancer incidence: does the choice of area-based measure and geographic level matter? the public health disparities geocoding project. American journal of epidemiology, 156(5):471–482, 2002. [11]
Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia. Proceedings of the National Academy of Sciences, 105(36):13252–13257, 2008. [12] X.-L. Meng. Multiple-imputation inferences with uncongenial sources of input. Statistical Science, pages 538–558, 1994. [13]
Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The hadcrut4 data set. Journal of Geophysical Research: Atmospheres (1984–2012), 117(D8), 2012. [14]
Natural exponential families with quadratic variance functions: statistical theory. The Annals of Statistics, pages 515–529, 1983. [15]
Capturing the uncertainty of moving-object representations. In Advances in Spatial Databases, pages 111–131. Springer, 1999. [16]
Accounting for uncertainty when mapping species distributions: the need for maps of ignorance. Progress in Physical Geography, 35(2):211–226, 2011. [17]
Space–time covariance functions. Journal of the American Statistical Association, 100(469):310–321, 2005. [18]
A bayesian anova scheme for calculating climate anomalies, with applications to the instrumental temperature record. Journal of Climate, 25(2):777–791, 2012. Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015
[19]
A bayesian algorithm for reconstructing climate anomalies in space and time. part i: Development and applications to paleoclimate reconstruction problems. Journal of Climate, 23(10):2759–2781, 2010. [20]
Problems with likelihood estimation of covariance functions of spatial gaussian processes. Biometrika, 74(3):640–642, 1987. Dan Cervone (Harvard Statistics Department) GP Regression with Noisy Inputs March 3, 2015