Gravitational Wave Data Analysis: II. Model Selection and Parameter Estimation



SLIDE 1

Gravitational Wave Data Analysis: II. Model Selection and Parameter Estimation

Chris Van Den Broeck

Kavli RISE Summer School on Gravitational Waves, Cambridge, UK, 23-27 September 2019

SLIDE 2

Bayesian inference

• Aim: use the available data to
  - evaluate which of several hypotheses is the most likely: model selection;
  - construct the probability density distribution for the parameters associated with a hypothesis: parameter estimation.
• Do this while making explicit all assumptions made.

SLIDE 3

Probabilities of propositions

• Propositions (or statements) are denoted by uppercase letters: A, B, C, …, X
• Boolean algebra:
  - Conjunction A ∧ B: A and B are both true
  - Disjunction A ∨ B: at least one of A or B is true
  - Negation ¬A: A is false
  - Implication A ⇒ B: from A follows B

SLIDE 4

Probabilities of propositions

• It is useful to view propositions as sets, all subsets of a "Universe":
  - Conjunction A ∧ B: intersection of sets
  - Disjunction A ∨ B: union of sets
  - Negation ¬A: complement within the Universe
• Each of these sets has a probability associated with it:
  - If A ⊂ B then p(A) ≤ p(B)
  - If A and B are disjoint then p(A ∨ B) = p(A) + p(B)
  - The Universe has probability 1, so that e.g. p(A) + p(¬A) = 1

SLIDE 5

Bayes' theorem

• Conditional probability:
  p(A|B) ≡ p(A ∧ B) / p(B)
  … from which follows the product rule:
  p(A ∧ B) = p(A|B) p(B)
  … and from the product rule follows Bayes' theorem:
  p(A|B) = p(B|A) p(A) / p(B)

SLIDE 6

Marginalization

• Note that for any A and B, A ∧ B and A ∧ (¬B) are disjoint sets whose union is A, so that
  p(A) = p(A ∧ B) + p(A ∧ (¬B))
• Consider sets {B_k} such that
  - they are disjoint: B_k ∧ B_l = ∅ for k ≠ l;
  - they are exhaustive: ∨_k B_k is the Universe, so that Σ_k p(B_k) = 1.
  Then one has the marginalization rule:
  p(A) = Σ_k p(A ∧ B_k)
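The product rule, marginalization, and Bayes' theorem can be checked together on a tiny numeric example. All probabilities below are made-up toy values (A = "signal present", B = "detector triggered"); this is only an illustration of the rules, not a real detection statistic.

```python
# Toy check of marginalization and Bayes' theorem with invented numbers.
p_A = 0.01               # prior p(A): "signal present"
p_B_given_A = 0.90       # p(B|A)
p_B_given_notA = 0.05    # p(B|¬A)

# Marginalization: p(B) = p(B ∧ A) + p(B ∧ ¬A), with the product rule
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' theorem: p(A|B) = p(B|A) p(A) / p(B)
p_A_given_B = p_B_given_A * p_A / p_B

assert abs(p_B - 0.0585) < 1e-12
assert round(p_A_given_B, 3) == 0.154
```

Note how a rare hypothesis (prior 1%) remains fairly unlikely (about 15%) even after supportive data, because the prior enters multiplicatively.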

SLIDE 7

Marginalization over a continuous variable

• Consider the proposition "the continuous variable x has the value α". The probability p(x = α) might be zero.
• Instead assign probabilities to finite intervals:
  p(x1 ≤ x ≤ x2) = ∫_{x1}^{x2} pdf(x) dx
  where pdf(x) is called the probability density function.
  - Exhaustiveness is given by ∫_{xmin}^{xmax} pdf(x) dx = 1
• Marginalization for continuous variables:
  p(A) = ∫_{xmin}^{xmax} pdf(A, x) dx

SLIDE 8

Application to gravitational wave data analysis

• The template banks we use to search for signals from coalescing binaries are coarse at high masses.
• Information about angles and distance enters through the waveform amplitude; hence matched filtering with a normalized template only involves intrinsic parameters (masses, spins):
  (S/N)_max = max_i (h(θ̄_i)|s) / √((h(θ̄_i)|h(θ̄_i)))
  - Fast sky position estimates instead come from the different arrival times and phases at the different detectors in a network.
• After a detection has taken place, we will want information about all parameters:
  - Binary black holes: 15 parameters, {m1, m2, S⃗1, S⃗2, α, δ, ι, ψ, dL, tc, φc}
  - Binary neutron stars: 17 parameters, {m1, m2, S⃗1, S⃗2, α, δ, ι, ψ, dL, tc, φc, Λ1, Λ2}

SLIDE 9

Application to gravitational wave data analysis

• Parameter estimation: find the posterior probability density p(θ̄|d, H), where
  - θ̄ = (θ1, θ2, …, θN) are the parameters;
  - H is the hypothesis that e.g. the signal was from the inspiral of two neutron stars, which comes with a family of waveforms h(θ̄; t);
  - d are the detector data: d(t) = n(t) + h(θ̄; t)
• Model selection: compare different hypotheses through an odds ratio
  O^{H1}_{H2} = p(H1|d) / p(H2|d)
  where
  - the hypotheses H1, H2 correspond to different waveform models:
    · binary neutron star versus binary black hole;
    · waveform predicted by general relativity versus an alternative theory of gravity;
    · …
  - the probabilities (not probability densities) p(H1|d), p(H2|d) do not involve any statement about parameters.

SLIDE 10

1. Parameter estimation

• Using Bayes' theorem:
  p(θ̄|H, d) = p(d|H, θ̄) p(θ̄|H) / p(d|H)
  where
  - p(d|H, θ̄) is called the likelihood;
  - p(θ̄|H) is the prior probability density;
  - p(d|H) is the evidence for the hypothesis.
• The prior probability density is a function we choose ourselves, based on what we know about the parameters prior to the measurement:
  - If the hypothesis is binary neutron star inspiral, then we can take the prior on the component masses to be uniform in the interval [1, 3] M_⊙.
  - For sources that are roughly uniformly distributed over spatial volume, we take the distance prior p(r) dr ∝ r² dr.
  - The prior for all parameters together is usually taken to be the product of the priors for the individual parameters.
• The evidence is not important here; it is set by the requirement that the posterior probability density be normalized.
• The likelihood is something we can calculate!

SLIDE 11

1. Parameter estimation

• How to calculate the likelihood p(d|H, θ̄)?
• One has d(t) = n(t) + h(θ̄; t)
  - In the conditional probability density above, the hypothesis and parameter values are assumed known, hence h(θ̄; t) is assumed known.
  - We have a probability distribution for noise realizations!
• Assuming stationary, Gaussian noise,
  p[n] = N exp( −2 ∫_0^∞ |ñ(f)|² / Sn(f) df ) = N exp( −(1/2)(n|n) )
  or, in terms of the noise-weighted inner product,
  (A|B) = 4 Re ∫_0^∞ Ã*(f) B̃(f) / Sn(f) df
• But in our case we can write ñ(f) = d̃(f) − h̃(θ̄; f), which gives us
  p(d|H, θ̄) = N exp( −(1/2)(d − h|d − h) )
• We now have all we need to calculate the posterior probability density of the parameters:
  p(θ̄|H, d) = p(d|H, θ̄) p(θ̄|H) / p(d|H)
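The inner product and Gaussian log-likelihood above can be sketched numerically on a discretized one-sided frequency grid. The function names, flat toy PSD, and monochromatic "template" below are illustrative assumptions for the sketch, not LIGO analysis code.

```python
import numpy as np

def inner_product(a_f, b_f, Sn, df):
    """(a|b) = 4 Re ∫ a*(f) b(f) / Sn(f) df, discretized as a sum."""
    return 4.0 * np.real(np.sum(np.conj(a_f) * b_f / Sn)) * df

def log_likelihood(d_f, h_f, Sn, df):
    """log p(d|H,θ) = -(1/2)(d-h|d-h), up to the normalization constant log N."""
    r = d_f - h_f
    return -0.5 * inner_product(r, r, Sn, df)

# Toy setup: flat PSD, monochromatic template, noise-free "data".
freqs = np.linspace(20.0, 512.0, 1000)
df = freqs[1] - freqs[0]
Sn = np.full_like(freqs, 1.0)                  # flat toy noise PSD
h = np.exp(2j * np.pi * freqs * 0.1)           # toy frequency-domain template
assert log_likelihood(h, h, Sn, df) == 0.0     # perfect match: zero residual
assert log_likelihood(h, 0 * h, Sn, df) < 0.0  # mismatch lowers the likelihood
```

The key design point is that the same inner product serves both detection (matched-filter SNR on slide 8) and parameter estimation (the Gaussian likelihood here).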

SLIDE 12

1. Parameter estimation

• The posterior is the likelihood weighted by the prior:
  p(θ̄|d, H) ∝ p(d|θ̄, H) p(θ̄|H)
  Conclusions drawn are based on:
  - the experimental data obtained (likelihood);
  - the information available before the experiment (prior).
• If we want the posterior distribution for just one variable θ1, then we marginalize over all the others:
  p(θ1|d, H) = ∫_{θ2_min}^{θ2_max} … ∫_{θN_min}^{θN_max} p(θ1, θ2, …, θN) dθ2 … dθN
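On a grid, this marginalization is just a sum over the unwanted parameter. As a sketch, take a made-up two-parameter posterior that factorizes into Gaussians (so the θ1 marginal is known exactly); the grid bounds and widths are toy choices.

```python
import math

n, lo, hi = 401, -4.0, 4.0
step = (hi - lo) / (n - 1)
grid = [lo + i * step for i in range(n)]

def gauss(x, sigma):
    return math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Toy joint posterior p(θ1, θ2|d, H) = N(0, 1) × N(0, 0.5) on the grid
joint = [[gauss(t1, 1.0) * gauss(t2, 0.5) for t2 in grid] for t1 in grid]

# Marginalize over θ2: p(θ1|d, H) ≈ Σ_j p(θ1, θ2_j) Δθ2
marginal = [sum(row) * step for row in joint]

# The marginal recovers the exact N(0, 1) density for θ1
assert all(abs(m - gauss(t1, 1.0)) < 1e-6 for m, t1 in zip(marginal, grid))
```

For the 15-dimensional spaces mentioned later, such grids are hopeless, which is exactly why sampling methods like nested sampling are needed.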

SLIDE 13

2. Model selection

• Suppose we want to compare two hypotheses H1, H2:
  - binary neutron star versus binary black hole;
  - waveform predicted by general relativity versus an alternative theory of gravity;
  - …
• We want to compare the probabilities p(H1|d) and p(H2|d).
• Bayes' theorem for e.g. H1:
  p(H1|d) = p(d|H1) p(H1) / p(d)
• Define the odds ratio
  O^{H1}_{H2} = p(H1|d) / p(H2|d) = [p(d|H1) / p(d|H2)] × [p(H1) / p(H2)]
  where the factors of p(d) have canceled out:
  - p(H1)/p(H2): ratio of prior odds;
  - p(d|H1)/p(d|H2): ratio of evidences.

SLIDE 14

2. Model selection

• Recall from parameter estimation:
  p(θ̄|H, d) = p(d|H, θ̄) p(θ̄|H) / p(d|H), i.e. p(θ̄|d, H) p(d|H) = p(d|H, θ̄) p(θ̄|H)
• Integrate both sides over all parameters:
  ∫ p(θ̄|d, H) p(d|H) dᴺθ = ∫ p(d|H, θ̄) p(θ̄|H) dᴺθ
  Note that p(d|H) is independent of the parameters, and p(θ̄|d, H) is normalized, hence the left-hand side becomes:
  ∫ p(θ̄|d, H) p(d|H) dᴺθ = p(d|H) ∫ p(θ̄|d, H) dᴺθ = p(d|H)
  Therefore the evidence is given by
  p(d|H) = ∫ p(d|H, θ̄) p(θ̄|H) dᴺθ

SLIDE 15

2. Model selection

• Odds ratio:
  O^{H1}_{H2} = p(H1|d) / p(H2|d) = [p(d|H1) / p(d|H2)] × [p(H1) / p(H2)]
• Define the Bayes factor:
  B^{H1}_{H2} = p(d|H1) / p(d|H2)
• Evidences:
  p(d|H) = ∫ p(d|H, θ̄) p(θ̄|H) dᴺθ
• Hypotheses can have an arbitrary number of free parameters.
  - Does the model that fits the data best tend to give the highest evidence?
  - If so, a model with more parameters could give the highest evidence even if incorrect!

SLIDE 16

Occam's razor

• For simplicity, compare two hypotheses of the following form:
  - X has no free parameters;
  - Y has one free parameter, λ.
  Will Y automatically be favored over X?
• Odds ratio:
  O^X_Y = [p(d|X) / p(d|Y)] × [p(X) / p(Y)]
• Evidence for Y:
  p(d|Y) = ∫ p(d|λ, Y) p(λ|Y) dλ
• For simplicity assume a flat prior for λ ∈ [λmin, λmax]:
  p(λ|Y) = 1 / (λmax − λmin)

SLIDE 17

Occam's razor

• Evidence for Y:
  p(d|Y) = ∫ p(d|λ, Y) p(λ|Y) dλ
• Flat prior: p(λ|Y) = 1/(λmax − λmin) for λmin ≤ λ ≤ λmax
• For definiteness, assume a likelihood of the form
  p(d|λ, Y) = p(d|λ0, Y) exp[ −(λ − λ0)² / (2σλ²) ]
• Then the evidence for Y is:
  p(d|Y) = ∫_{λmin}^{λmax} p(d|λ, Y) p(λ|Y) dλ
         = [p(d|λ0, Y) / (λmax − λmin)] ∫_{λmin}^{λmax} exp[ −(λ − λ0)² / (2σλ²) ] dλ
         ≃ p(d|λ0, Y) σλ √(2π) / (λmax − λmin)

SLIDE 18

Occam's razor

• Evidence for Y:
  p(d|Y) = p(d|λ0, Y) σλ √(2π) / (λmax − λmin)
• Hence the odds ratio becomes:
  O^X_Y = [p(X) / p(Y)] × [p(d|X) / p(d|λ0, Y)] × [(λmax − λmin) / (σλ √(2π))]
  where
  - p(X)/p(Y): ratio of prior odds;
  - p(d|X)/p(d|λ0, Y): compares best fits to the data;
  - (λmax − λmin)/(σλ √(2π)): penalizes Y if the experimental uncertainty σλ on λ is much smaller than the prior range.
• Occam's razor: "Plurality is not to be posited without necessity."

SLIDE 19

Computing posterior densities and evidences in practice

• Parameter estimation requires computing the posterior density distribution from the likelihood and prior using Bayes' theorem:
  p(θ̄|d, H) = p(d|θ̄, H) p(θ̄|H) / p(d|H)
• Often the parameter space has high dimensionality (e.g. 15 dimensions for quasi-circular inspiral of binary black holes), making it computationally challenging to map out the likelihood function.
• The same holds for the calculation of the evidence, an integral over a high-dimensional space:
  p(d|H) = ∫ dᴺθ p(d|θ̄, H) p(θ̄|H) = ∫ dᴺθ L(θ̄) π(θ̄)
• An efficient way of obtaining both: nested sampling.

SLIDE 20

Nested sampling: basic idea

• Nested sampling computes the evidence
  p(d|H) = ∫ dᴺθ p(d|θ̄, H) p(θ̄|H) = ∫ dᴺθ L(θ̄) π(θ̄)
  by rewriting the integral in terms of a single scalar X, called the prior mass.
• X(λ) is the "fraction of prior volume with likelihood greater than λ". Mathematically:
  X(λ) ≡ ∫_{L(θ̄)>λ} π(θ̄) dᴺθ
  Element of prior mass: dX = π(θ̄) dᴺθ
• Since the prior is normalized, X ∈ [0, 1]:
  - lower bound X = 0: surface within which there is no higher likelihood; λ = Lmax
  - upper bound X = 1: surface within which all points have higher likelihood; λ = Lmin

SLIDE 21

Nested sampling: basic idea

• Rewrite
  Z = ∫ L(θ̄) π(θ̄) dᴺθ
  as
  Z = ∫₀¹ L̃(X) dX
• Idea behind nested sampling:
  - Construct the function L̃(X) by finding locations in parameter space with progressively higher likelihood and hence progressively smaller prior mass.
  - Approximate the evidence by Z ≃ Σ_k L_k ΔX_k
  - Approximate the posterior density at the sampled points θ̄_k by p(θ̄_k|d, H) ≃ L_k ΔX_k / Z

SLIDE 22

Nested sampling: schematically

SLIDE 23

Nested sampling: the algorithm

• Drop M samples across parameter space, drawn from the prior. These are called "live points".
  - Each has a likelihood value associated with it.
  - Each also has a volume associated with it, such that the likelihood is lowest at its boundary.
  - The live points are uniformly sampled in prior mass between 0 and 1.
• Discard the live point with the lowest likelihood L0, i.e. the highest prior mass X0.
  - Replace it with a new live point, sampled from the prior, which has higher likelihood than the lowest remaining one.
  - Some different point within the new set of live points now has the lowest likelihood L1, with L1 > L0, and the highest prior mass X1, with X1 < X0.
• Repeat the step above.

SLIDE 24

Nested sampling: the algorithm

• Having discarded the lowest-likelihood point with prior mass X0, how do we assign a prior mass X1 to the new lowest-likelihood live point?
  - In practice this can only be done statistically.
  - Make an educated guess = draw from a distribution!
• Probability that the surface with the highest prior mass is at X = χ:
  P(X_i < χ, i = 1, …, M) = ∏_{i=1}^{M} ∫₀^χ dX_i = χ^M
• Probability density that the highest of M samples has prior mass χ:
  p(χ; M) = M χ^(M−1)
• Define the shrinkage ratio between new and old highest prior mass: t = X1/X0.
  This has the same probability density:
  p(t; M) = M t^(M−1)
• Hence we assign X1 by drawing a shrinkage ratio from the above distribution.

SLIDE 25

Nested sampling: the algorithm

• At the first step: set X = 1.
• At the kth iteration, the live point with the largest prior mass has
  X_k = ∏_{j=1}^{k} t_j
• Recall the distribution of shrinkage ratios: p(t; M) = M t^(M−1)
  Mean and standard deviation of log(t): log(t) ∼ (−1 ± 1)/M
• Mean and standard deviation of log(X_k): log(X_k) ∼ (−k ± √k)/M
  Hence the mean values of X_k go like ⟨X_k⟩ ≈ exp(−k/M):
  - the prior mass where the likelihood is largest is reached exponentially quickly;
  - errors decrease exponentially quickly.
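These shrinkage statistics are easy to verify by Monte Carlo. For p(t; M) = M t^(M−1), inverse-CDF sampling gives t = u^(1/M) with u uniform on (0, 1); the number of live points and sample count below are arbitrary toy choices.

```python
import math, random

random.seed(0)
M = 100                      # number of live points (toy value)
draws = 200_000
# Draw shrinkage ratios via the inverse CDF of p(t; M) = M t^(M-1)
log_t = [math.log(random.random() ** (1.0 / M)) for _ in range(draws)]

mean = sum(log_t) / draws
std = math.sqrt(sum((x - mean) ** 2 for x in log_t) / draws)

assert abs(mean + 1.0 / M) < 1e-3    # ⟨log t⟩ ≈ -1/M
assert abs(std - 1.0 / M) < 1e-3     # std(log t) ≈ 1/M
```

Since log X_k is a sum of k independent log t draws, its mean −k/M and spread √k/M follow immediately, matching the slide.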

SLIDE 26

Nested sampling: termination condition

• Evidence: Z ≃ Σ_k L_k ΔX_k
• Termination condition of the algorithm? There is no natural choice, but in practice:
  - Estimate the amount of evidence still to be accumulated as the current largest likelihood among the live points, Lmax,cur, times the current largest prior mass, Xcur.
  - Compare with the currently accumulated evidence, Zcur.
  - Terminate when Lmax,cur Xcur < α Zcur, where α is a user-specified constant.
• The posterior probability density for the parameters is obtained through
  p(θ̄_k|d, H) ≃ L_k ΔX_k / Z
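The whole algorithm fits in a short toy script in one dimension, where the exact evidence is known from slide 17's formula. This is an illustrative sketch under several simplifying assumptions: a flat prior on a made-up interval, a Gaussian likelihood, the mean shrinkage e^(−1/M) used deterministically instead of drawing t, and direct sampling of the constrained prior (possible here because in 1-D the region L(x) > Lw is just an interval; production samplers such as dynesty or LALInference must work much harder at that step).

```python
import math, random

random.seed(1)
M, alpha = 400, 1e-4                 # live points, termination constant (toy)
SIGMA, LO, HI = 0.3, -5.0, 5.0       # likelihood width, flat-prior range (toy)

def L(x):
    return math.exp(-x * x / (2 * SIGMA ** 2))

xs = [random.uniform(LO, HI) for _ in range(M)]   # live points from the prior
Ls = [L(x) for x in xs]
Z, X = 0.0, 1.0
while max(Ls) * X >= alpha * Z:      # terminate when Lmax,cur Xcur < α Zcur
    worst = min(range(M), key=Ls.__getitem__)
    Lw = Ls[worst]
    X_new = X * math.exp(-1.0 / M)   # deterministic mean shrinkage per step
    Z += Lw * (X - X_new)            # Z ≈ Σ_k L_k ΔX_k
    X = X_new
    # Replace the worst point with a prior draw satisfying L(x) > Lw:
    half = SIGMA * math.sqrt(-2.0 * math.log(Lw)) if Lw > 0 else HI
    xs[worst] = random.uniform(max(LO, -half), min(HI, half))
    Ls[worst] = L(xs[worst])
Z += X * sum(Ls) / M                 # contribution of the remaining live points

true_Z = SIGMA * math.sqrt(2 * math.pi) / (HI - LO)   # exact toy evidence
assert abs(Z - true_Z) / true_Z < 0.25
```

The recovered evidence agrees with the exact value to within the expected statistical scatter of order √(H/M) in log Z, where H is the information gained.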

SLIDE 27

Example: parameter estimation on GW150914

LIGO + Virgo, PRL 116, 241102 (2016)

SLIDE 28

All binary black hole events so far

LIGO + Virgo, PRX 9, 031040 (2019)

SLIDE 29

SLIDE 30

First-ever tests of the strong-field dynamics of GR

• Recall the frequency-domain phenomenological waveforms h̃(f).
• These are characterized by the parameters
  {p_i} = {φ0, φ1, …, φ7, β2, β3, α2, α3, α4}
• Allow for possible deviations in these parameters, one by one:
  p_i → (1 + δp̂_i) p_i
  - The waveform now has an additional free parameter:
    {m1, m2, S⃗1, S⃗2, α, δ, ι, ψ, dL, tc, φc, δp̂_i}

LIGO + Virgo, PRL 116, 221101 (2016)

SLIDE 31

First-ever tests of the strong-field dynamics of GR

• Allow for possible deviations in these parameters, one by one:
  p_i → (1 + δp̂_i) p_i
  - The waveform now has an additional free parameter:
    {m1, m2, S⃗1, S⃗2, α, δ, ι, ψ, dL, tc, φc, δp̂_i}
• Results from GW150914 (inspiral, intermediate, and merger/ringdown regimes).

LIGO + Virgo, PRL 116, 221101 (2016)

SLIDE 32

First-ever tests of the strong-field dynamics of GR

• Results from GW150914 (inspiral, intermediate, and merger/ringdown regimes).

LIGO + Virgo, PRL 116, 221101 (2016)

SLIDE 33

First-ever tests of the strong-field dynamics of GR

• From the posterior density functions we obtain 90% upper bounds on |δp̂_i|.
• For the post-Newtonian coefficients in particular: (figure)

LIGO + Virgo, PRL 116, 221101 (2016)

SLIDE 34

Combining information from multiple detections

Can we combine information from multiple detections so as to arrive at an increasingly better result?
• Consider N detections d1, d2, …, dN.
• For a given detection with data d_n we have a posterior density function p(δp̂_i|d_n) for each of the "testing parameters" δp̂_i.
• Posterior density function for δp̂_i using all detections (assuming independent data):
  p(δp̂_i|d1, d2, …, dN) = p(d1, d2, …, dN|δp̂_i) p(δp̂_i) / p(d1, d2, …, dN)
                        = p(δp̂_i) ∏_{n=1}^{N} p(d_n|δp̂_i) / p(d_n)
                        = p(δp̂_i)^(1−N) ∏_{n=1}^{N} p(δp̂_i|d_n)

SLIDE 35

Combining information from multiple detections

• Posterior density function for δp̂_i using all detections:
  p(δp̂_i|d1, d2, …, dN) = p(δp̂_i)^(1−N) ∏_{n=1}^{N} p(δp̂_i|d_n)
• "Bayesian updating": the posterior after the nth measurement becomes the prior for the (n+1)th measurement!
• Combined bounds on deviations in the PN parameters from all BBH detections: (figure)

LIGO + Virgo, arXiv:1903.04467
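The combination formula p(x|d1..dN) ∝ p(x)^(1−N) · Π_n p(x|d_n) can be sketched on a one-dimensional parameter grid. The prior, the grid, and the widths of the two single-event "posteriors" below are made-up toy choices.

```python
import math

def combine(posteriors, prior, dx):
    """Combine per-event posteriors: p(x)^(1-N) * prod_n p(x|d_n), normalized."""
    N = len(posteriors)
    unnorm = [prior[j] ** (1 - N) * math.prod(p[j] for p in posteriors)
              for j in range(len(prior))]
    norm = sum(unnorm) * dx
    return [u / norm for u in unnorm]

def gauss(x, sigma):
    return math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

n = 2001
dx = 2.0 / (n - 1)
xs = [-1.0 + j * dx for j in range(n)]
prior = [0.5] * n                       # normalized flat prior on [-1, 1]
post1 = [gauss(x, 0.2) for x in xs]     # toy posterior from "event 1"
post2 = [gauss(x, 0.3) for x in xs]     # toy posterior from "event 2"
comb = combine([post1, post2], prior, dx)

assert abs(sum(comb) * dx - 1.0) < 1e-9                   # properly normalized
assert max(comb) > max(post1) and max(comb) > max(post2)  # combined is narrower
```

For Gaussians under a flat prior this reduces to the familiar inverse-variance combination, so the combined posterior is tighter than either input, as the slide's "increasingly better result" suggests.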

SLIDE 36

The propagation of gravitational waves

• Dispersion of gravitational waves? E.g. as a result of a non-zero graviton mass m_g:
  - Dispersion relation: E² = p²c² + m_g²c⁴
  - Group velocity: v_g/c = 1 − m_g²c⁴/(2E²)
  - Modification to the gravitational wave phase: δΨ = −πDc/[λ_g²(1 + z)f], with λ_g = h/(m_g c)
• Bound on the graviton mass: m_g ≤ 5.0 × 10⁻²³ eV/c²

LIGO + Virgo, arXiv:1903.04467
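For a sense of scale, the mass bound converts to a Compton wavelength λ_g = h/(m_g c) = hc/(m_g c²); the constants below are CODATA values and the bound is the one quoted on the slide.

```python
# Convert the graviton-mass bound into a Compton wavelength, in metres.
h_eV_s = 4.135667696e-15     # Planck constant [eV s]
c = 2.99792458e8             # speed of light [m/s]
m_g_c2 = 5.0e-23             # graviton mass bound, m_g c^2 [eV]

lambda_g = h_eV_s * c / m_g_c2   # λ_g = h c / (m_g c^2)

assert 2.4e16 < lambda_g < 2.6e16   # ≈ 2.5e16 m, roughly 0.8 parsec
```

A Compton wavelength of nearly a parsec is why the phase modification, which scales as 1/λ_g², is only detectable over cosmological propagation distances D.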

SLIDE 37

The propagation of gravitational waves

• More general forms of dispersion:
  E² = p²c² + A p^α c^α
  - α ≠ 0 corresponds to violation of local Lorentz invariance.

LIGO + Virgo, arXiv:1903.04467

SLIDE 38

Ringdown and the no-hair conjecture

• Assuming the vacuum Einstein equations: "Stationary black holes are completely characterized by mass and spin."
• Ringdown regime: Kerr metric + linear perturbations.
  - The ringdown signal is a superposition of "quasi-normal modes":
    h(t) = Σ_{nlm} A_{nlm} e^(−t/τ_{nlm}) cos(ω_{nlm}t + φ_{nlm})
  - The characteristic frequencies and damping times are completely determined by the final mass M_f and spin a_f:
    ω_{nlm} = ω_{nlm}(M_f, a_f),  τ_{nlm} = τ_{nlm}(M_f, a_f)
  - Empirically checking these dependences can be viewed as an indirect test of the no-hair conjecture.
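The mode superposition itself is simple to write down. The amplitudes, frequencies, and damping times below are made-up placeholders, not actual Kerr quasi-normal-mode values (those follow from M_f and a_f via black hole perturbation theory).

```python
import math

modes = [  # (A_nlm, tau_nlm [s], omega_nlm [rad/s], phi_nlm) — placeholders
    (1.0, 0.004, 2 * math.pi * 250.0, 0.0),  # dominant "220"-like mode
    (0.3, 0.003, 2 * math.pi * 260.0, 1.0),  # weaker secondary mode
]

def h(t):
    """h(t) = Σ_nlm A_nlm exp(-t/τ_nlm) cos(ω_nlm t + φ_nlm)."""
    return sum(A * math.exp(-t / tau) * math.cos(w * t + phi)
               for A, tau, w, phi in modes)

assert abs(h(0.0) - (1.0 + 0.3 * math.cos(1.0))) < 1e-12  # value at t = 0
assert abs(h(0.05)) < 1e-4   # envelope has decayed by many e-folds at 50 ms
```

Because each mode is a damped sinusoid with frequency and decay time tied to (M_f, a_f), measuring any two modes over-determines the Kerr parameters, which is what makes the consistency test possible.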

SLIDE 39

Ringdown and the no-hair conjecture

• Given a waveform model, (indirectly) test the no-hair theorem by allowing for deviations:
  ω_{lmn}(M_f, a_f) → (1 + δω̂_{lmn}) ω_{lmn}(M_f, a_f)
  τ_{lmn}(M_f, a_f) → (1 + δτ̂_{lmn}) τ_{lmn}(M_f, a_f)
• Let the different δω̂_{lmn} and δτ̂_{lmn} vary in turn, and measure them together with all other parameters in the problem:
  - obtain probability density distributions;
  - combine information from multiple signals.
• Assuming Advanced LIGO/Virgo at design sensitivity, and 6 sources similar to GW150914:
  - δω̂_220 measurable to O(2%);
  - δτ̂_220 measurable to O(10%).
  (figure: probability density of δω̂_220 versus number of events, Nevents = 1–6)

Carullo et al., PRD 98, 104020 (2018)

SLIDE 40

The equation of state of neutron stars

• Structure of neutron stars?
  - Structure of the crust?
  - Proton superconductivity
  - Neutron superfluidity
  - "Pinning" of fluid vortices to the crust
  - Origin of magnetic fields?
  - More exotic objects?
• Widely differing theoretical predictions for the equation of state:
  - pressure as a function of density;
  - mass as a function of radius;
  - tidal deformability as a function of mass.

Demorest et al., Nature 467, 1081 (2010)

SLIDE 41

Tidal deformations in binary neutron stars

• Gravitational waves from inspiraling neutron stars:
  - When close, the stars induce tidal deformations in each other.
  - These affect the orbital motion, which modifies the gravitational wave signal.
• The tidal field of one star causes a quadrupole deformation in the other:
  Q_ij = −λ(EOS; m) E_ij
  where λ(EOS; m) depends on the internal structure (equation of state).
• This enters the inspiral phase at 5PN order, through λ(m)/m⁵ ∝ (R/m)⁵
  - λ(m)/m⁵ = O(10² – 10⁵) depending on mass and EOS.

SLIDE 42

Probing the structure of neutron stars

• Measurement of tidal deformations in GW170817:
  - Free parameters in the waveform: {m1, m2, S⃗1, S⃗2, α, δ, ι, ψ, dL, tc, φc, Λ1, Λ2}, where Λ_i ≡ λ_i/m_i⁵, i = 1, 2
  - First results: more compact neutron stars favored. [LIGO + Virgo, PRL 119, 161101 (2017)]
  - Since then, more detailed investigations. [LIGO + Virgo, PRL 121, 161101 (2018)]

SLIDE 43

Equations of state: model selection

• Given two hypotheses H_X, H_Y, we can calculate the odds ratio:
  O^{H_X}_{H_Y} = p(H_X|d) / p(H_Y|d) = [p(d|H_X) / p(d|H_Y)] × [p(H_X) / p(H_Y)]
  - p(H_X)/p(H_Y): ratio of prior odds;
  - p(d|H_X)/p(d|H_Y): ratio of evidences.
• Consider hypotheses H_A, A = 1, …, N, corresponding to different theoretical predictions for the equation of state.
  - For a given equation of state A, one has a waveform model in which the tidal deformations depend on the component masses in a specific way: Λ1 = Λ^(A)(m1), Λ2 = Λ^(A)(m2).
  - The free parameters in each model are {m1, m2, S⃗1, S⃗2, α, δ, ι, ψ, dL, tc, φc}.
• Define some reference model H_ref, e.g. one in which Λ^(ref)(m) ≡ 0, and compute O^{H_A}_{H_ref} for A = 1, …, N.

SLIDE 44

Equations of state: model selection

• Results from GW170817: (figure)

LIGO + Virgo, arXiv:1908.01012

SLIDE 45

A new cosmic distance ladder

• Mapping out the large-scale structure and evolution of spacetime by comparing:
  - distance;
  - redshift.
• Current measurements depend on the cosmic distance ladder:
  - The intrinsic brightness of e.g. supernovae is determined by comparison with different, closer-by objects.
  - There is the possibility of systematic errors at every "rung" of the ladder.
• Gravitational waves from binary mergers: the distance can be measured directly from the gravitational wave signal!

SLIDE 46

A new cosmic distance ladder

• Measurement of the local expansion of the Universe: the Hubble constant.
  - Distance from the GW signal.
  - Redshift from the EM counterpart (galaxy NGC 4993).
• One detection: limited accuracy.
• A few tens of detections: O(1%) accuracy.

LIGO + Virgo, Nature 551, 85 (2017); Del Pozzo, PRD 86, 043011 (2012); Chen et al., Nature 562, 545 (2018); Feeney et al., PRL 122, 061105 (2019)
SLIDE 47

Science with gravitational waves

• Once a detection has been made, we need to explore a high-dimensional parameter space in order to:
  - extract parameter values;
  - compute evidences for hypotheses.
• Nested sampling (and other techniques) can be used to explore the likelihood function.
• First scientific pay-offs:
  - population studies;
  - probing the strong-field dynamics of spacetime;
  - constraining the neutron star equation of state;
  - cosmology: an independent measurement of the Hubble constant.