SLIDE 1
Triple Variational Principles for Operator Functions
Matthias Langer, University of Strathclyde, Glasgow
SLIDE 4
Operator functions
Let T be an operator function in a Hilbert space H defined on Ω ⊂ C: T(λ) is a closed operator in H for every λ ∈ Ω. For example,
- T(λ) = λ²A + λB + C with bounded operators A, B, C,
- T(λ) = A − λI with a closed operator A.
Eigenvalues. λ0 ∈ Ω is called an eigenvalue of T
⇔ ∃ x0 ∈ dom(T(λ0)) \ {0} : T(λ0)x0 = 0
⇔ 0 is an eigenvalue of T(λ0).
Spectrum. λ0 ∈ σ(T) ⇔ 0 ∈ σ(T(λ0)); λ0 ∈ σess(T) ⇔ 0 ∈ σess(T(λ0)).
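In finite dimensions this equivalence can be checked directly for the quadratic example T(λ) = λ²A + λB + C. A minimal Python sketch, with small illustrative matrices of my own choosing (not from the talk), using the standard companion linearisation:

```python
import numpy as np

# Illustrative quadratic pencil T(lam) = lam^2 A + lam B + C
# (hypothetical matrices chosen for the example).
A = np.eye(2)
B = np.array([[3.0, 0.0], [0.0, 4.0]])
C = np.array([[2.0, 0.0], [0.0, 3.0]])

# Companion linearisation: lam is an eigenvalue of T iff it is an
# eigenvalue of the 2n x 2n block matrix [[0, I], [-A^{-1}C, -A^{-1}B]].
n = A.shape[0]
L = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(A, C), -np.linalg.solve(A, B)]])
eigs = np.linalg.eigvals(L)

# For each eigenvalue lam0 of T, the operator T(lam0) is singular,
# i.e. 0 is an eigenvalue of T(lam0).
lam0 = eigs[0]
T0 = lam0**2 * A + lam0 * B + C
print(min(abs(np.linalg.eigvals(T0))))  # numerically ~ 0
```

Here the four eigenvalues are −1, −1, −2, −3, and the same singularity check works for any λ0 in eigs.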
SLIDE 6 Assumptions I
- Ω ⊆ C domain; ∆ ⊆ Ω ∩ R interval
- T(λ) is m-sectorial for each λ ∈ Ω
- T(λ) is self-adjoint for λ ∈ ∆
- Let t(λ) be the closed quadratic form: t(λ)[x] = (T(λ)x, x);
assume that D := dom(t(λ)) is independent of λ
- for each x ∈ D: λ → t(λ)[x] is analytic on Ω
Eigenvalues and the function t(·)
Let λ0 be an eigenvalue of T with eigenvector x0. Then t(λ0)[x0] = (T(λ0)x0, x0) = 0, i.e. the function λ → t(λ)[x0] has a zero at λ0.
SLIDE 9
‘Hyperbolic’ or ‘overdamped’ case
Let ∆ = [α, β) and assume that for each x ∈ D \ {0}: t(α)[x] > 0 and t(·)[x] has exactly one zero in (α, β). Denote this unique zero by p(x). The mapping x → p(x) is called the generalised Rayleigh functional.
If λ0 is an eigenvalue of T with eigenvector x0, then p(x0) = λ0.
In the case when T(λ) = A − λI for a self-adjoint operator A with quadratic form a we have t(λ)[x] = a[x] − λ‖x‖² and hence p(x) = a[x]/‖x‖².
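For the quadratic pencil example T(λ) = λ²A + λB + C, the scalar function λ → t(λ)[x] is a quadratic polynomial, so the generalised Rayleigh functional can be written in closed form. A Python sketch with illustrative overdamped matrices (my own choice, not from the talk); taking the larger of the two real roots is the standard choice for the 'primary' eigenvalues of an overdamped pencil:

```python
import numpy as np

# Illustrative overdamped pencil T(lam) = lam^2 A + lam B + C.
A = np.eye(2)
B = np.array([[3.0, 0.0], [0.0, 4.0]])
C = np.array([[2.0, 0.0], [0.0, 3.0]])

def t(lam, x):
    """Scalar function lam -> t(lam)[x] = (T(lam)x, x)."""
    return lam**2 * (x @ A @ x) + lam * (x @ B @ x) + (x @ C @ x)

def p(x):
    """Generalised Rayleigh functional: the larger root of the
    scalar quadratic t(.)[x]; real because the pencil is overdamped."""
    a2, a1, a0 = x @ A @ x, x @ B @ x, x @ C @ x
    return (-a1 + np.sqrt(a1**2 - 4 * a2 * a0)) / (2 * a2)

# At an eigenvector x0 with eigenvalue lam0 we get p(x0) = lam0.
x0 = np.array([1.0, 0.0])   # eigenvector of this pencil, eigenvalue -1
print(p(x0), t(p(x0), x0))  # -1.0 0.0
```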
SLIDE 10
Variational principle for eigenvalues
In situations as above (or more general ones) variational principles were proved by Duffin 1955, Rogers 1964, Turner 1967, Hadeler 1968, Werner 1971, Barston 1974, Binding, Eschwé, H. Langer 2000, Eschwé, M.L. 2004, Voss 2015, Jacob, M.L., Trunk 2016, …
SLIDE 11
Theorem. Consider the situation as above and assume that σess(T) ∩ [α, β) = ∅ and α ∈ ρ(T). Then the spectrum of T in (α, β) consists of eigenvalues λ1 ≤ λ2 ≤ · · · and

λn = min_{L ⊂ D, dim L = n} max_{x ∈ L \ {0}} p(x) = max_{L ⊂ H, dim L = n−1} inf_{x ∈ D \ {0}, x ⊥ L} p(x).
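In the linear special case T(λ) = A − λI the theorem reduces to the classical Courant–Fischer min-max principle, and the outer minimum can be probed numerically: every n-dimensional subspace L yields an upper bound max_{x ∈ L \ {0}} p(x) ≥ λn. A Python sketch with a randomly generated matrix (my own illustration, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random self-adjoint (symmetric) matrix standing in for A.
H = rng.standard_normal((6, 6))
A = (H + H.T) / 2
evals = np.sort(np.linalg.eigvalsh(A))
n = 3

def max_rayleigh(L):
    """max of p(x) = (Ax, x)/(x, x) over x in span(L) \\ {0}:
    the largest eigenvalue of A compressed to the subspace."""
    Q, _ = np.linalg.qr(L)          # orthonormal basis of the subspace
    return np.linalg.eigvalsh(Q.T @ A @ Q).max()

# Every n-dimensional subspace gives an upper bound for lambda_n;
# the minimum is attained at the span of the first n eigenvectors.
bounds = [max_rayleigh(rng.standard_normal((6, n))) for _ in range(200)]
print(min(bounds), ">=", evals[n - 1])
```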
SLIDE 13
Comparison of two operator functions
Let T1 and T2 be two operator functions as above and assume that D1 ⊇ D2 and t1(λ)[x] ≤ t2(λ)[x] for x ∈ D2, λ ∈ ∆. Then p1(x) ≤ p2(x) for x ∈ D2, and hence

λn^(1) = min_{L ⊂ D1, dim L = n} max_{x ∈ L \ {0}} p1(x)
       ≤ min_{L ⊂ D2, dim L = n} max_{x ∈ L \ {0}} p1(x)
       ≤ min_{L ⊂ D2, dim L = n} max_{x ∈ L \ {0}} p2(x) = λn^(2).
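The comparison principle can be illustrated in the linear special case Ti(λ) = Ai − λI: the assumption t1(λ)[x] ≤ t2(λ)[x] becomes (A1x, x) ≤ (A2x, x), and the conclusion is the classical eigenvalue monotonicity under form ordering. A Python sketch with randomly generated matrices (my own illustration, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

# A1 symmetric; A2 = A1 + (positive semi-definite), so that
# (A1 x, x) <= (A2 x, x) for every x.
H = rng.standard_normal((5, 5))
A1 = (H + H.T) / 2
P = rng.standard_normal((5, 5))
A2 = A1 + P @ P.T

# The min-max characterisation forces lam_n^(1) <= lam_n^(2) for every n.
e1 = np.sort(np.linalg.eigvalsh(A1))
e2 = np.sort(np.linalg.eigvalsh(A2))
print(np.all(e1 <= e2))
```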
SLIDE 15
Dropping the assumption that T is hyperbolic
We relax the conditions that t(α)[x] > 0 and that t(·)[x] has exactly one zero. Assume that for each x ∈ D \ {0} and λ ∈ ∆:
t(λ)[x] = 0 ⇒ t′(λ)[x] < 0.
A mapping p : D \ {0} → R ∪ {±∞} is called a generalised Rayleigh functional for T if for all x ∈ D \ {0}:
p(x) = λ0 if t(λ0)[x] = 0,
p(x) < α if t(λ)[x] < 0 on ∆,
p(x) ≥ β if t(λ)[x] > 0 on ∆.
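Under the sign condition t(λ)[x] = 0 ⇒ t′(λ)[x] < 0, the scalar function t(·)[x] can cross zero at most once on ∆, and only downward, so the three cases in the definition can be resolved by bisection. A minimal Python sketch (it assumes t(·)[x] extends continuously to the endpoints of ∆ and uses ∓∞ as stand-ins for the 'p(x) < α' and 'p(x) ≥ β' cases):

```python
def rayleigh_functional(t, alpha, beta, tol=1e-12):
    """Generalised Rayleigh functional for one fixed x: t is the scalar
    function lam -> t(lam)[x], which under the sign condition crosses
    zero at most once on (alpha, beta), and only downward."""
    if t(alpha) < 0:          # then t(.)[x] < 0 on all of Delta
        return float('-inf')  # stand-in for 'p(x) < alpha'
    if t(beta) > 0:           # then t(.)[x] > 0 on all of Delta
        return float('inf')   # stand-in for 'p(x) >= beta'
    a, b = alpha, beta        # now t(a) >= 0 >= t(b): bisect
    while b - a > tol:
        m = (a + b) / 2
        if t(m) > 0:
            a = m
        else:
            b = m
    return (a + b) / 2

# Example: t(lam) = 2 - lam^2 on Delta = (0, 2) crosses zero downward
# at sqrt(2) = 1.414..., while the all-negative / all-positive cases
# fall through to the infinities.
print(rayleigh_functional(lambda lam: 2 - lam**2, 0.0, 2.0))
print(rayleigh_functional(lambda lam: -1 - lam, 0.0, 2.0))
print(rayleigh_functional(lambda lam: 1 + lam, 0.0, 2.0))
```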
SLIDE 18
Triple variational principle
A subspace M ⊂ D is called t(λ)-non-negative if t(λ)[x] ≥ 0 for all x ∈ M; it is called maximal t(λ)-non-negative if it is maximal with this property. Denote by M⁺_α the set of all maximal t(α)-non-negative subspaces of D.
Theorem. [M.L., Strauss 2016] Assume that [α, β) ∩ σess(T) = ∅ and α ∈ ρ(T). Then the spectrum of T in (α, β) consists of eigenvalues λ1 ≤ λ2 ≤ · · · and

λn = sup_{M ∈ M⁺_α} sup_{L ⊂ M, dim L = n−1} inf_{x ∈ M \ {0}, x ⊥ L} p(x).

(The inequality ‘≥’ was proved in [Eschwé, H. Langer 2002] when T(λ) are bounded.)
SLIDE 19
Virozub–Matsaev condition
We say that T satisfies the condition (VM) if for every compact I ⊆ ∆ there exist ε, δ > 0 such that
|t(λ)[x]| ≤ ε ⇒ t′(λ)[x] ≤ −δ
for every λ ∈ I and every x ∈ D with ‖x‖ = 1.
SLIDE 22
A perturbation of a linear function
Let A be a self-adjoint operator that is bounded below, with quadratic form a and dom(a) = D. Let T satisfy Assumptions I and (VM) and assume that
t(λ)[x] = a[x] − λ‖x‖² + t1(λ)[x], x ∈ D, λ ∈ Ω,
where t1(λ) is a quadratic form that satisfies
0 ≤ t1(λ)[x] ≤ a(λ)‖x‖² + b(λ)a[x], x ∈ D, λ ∈ ∆ = (α, β),
with a(λ) ∈ R, b(λ) ≥ 0.
Assume that (α, β) ∩ σess(A) = ∅ and that there exists α̂ ∈ (α, β) such that
∀ ε > 0 ∃ γ ∈ (α̂, α̂ + ε): a(γ) + α(1 + b(γ)) < γ.
Then (α̂, β) ∩ σess(T) = ∅. If µ1 ≤ µ2 ≤ … are the eigenvalues of A in (α, β) and λ1 ≤ λ2 ≤ … the eigenvalues of T in (α̂, β), then µn ≤ λn.
SLIDE 24
An operator matrix
Let
  𝒜 = [ A   B ]
      [ B*  D ]
where A and D are self-adjoint, A is bounded below, D is bounded, and
‖B*x‖² ≤ a0‖x‖² + b0 a[x], x ∈ dom(a),
with a0 ∈ R, b0 ≥ 0.
Corollary. Let max σ(D) = d+ < α < β such that (α, β) ∩ σess(A) = ∅. Set
α̂ = (α + d+)/2 + √( ((α − d+)/2)² + a0 + b0α ).
Then (α̂, β) ∩ σess(𝒜) = ∅ provided that α̂ < β. If µ1 ≤ µ2 ≤ … are the eigenvalues of A in (α, β) and λ1 ≤ λ2 ≤ … the eigenvalues of 𝒜 in (α̂, β), then µn ≤ λn.
Proof uses the Schur complement T(λ) = A − λ − B(D − λ)⁻¹B*.
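The Schur-complement reduction behind the proof can be checked on a finite-dimensional model: for λ above max σ(D), λ is an eigenvalue of the block matrix exactly when T(λ) = A − λ − B(D − λ)⁻¹B* is singular. A numpy sketch with illustrative matrices (my own choice, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(2)

# Finite-dimensional stand-in for the block operator matrix
# [[A, B], [B*, D]] with A, D self-adjoint and max sigma(D) = d+ = 1.
A = np.diag([5.0, 7.0, 9.0])
D = np.diag([0.0, 1.0])
B = rng.standard_normal((3, 2))
M = np.block([[A, B], [B.T, D]])
eigsM = np.sort(np.linalg.eigvalsh(M))

def T(lam):
    """Schur complement T(lam) = A - lam - B (D - lam)^(-1) B*."""
    return A - lam * np.eye(3) - B @ np.linalg.inv(D - lam * np.eye(2)) @ B.T

# Every eigenvalue of the block matrix lying above d+ makes the
# Schur complement singular (0 is an eigenvalue of T(lam)).
for lam in eigsM[eigsM > 2.0]:   # safely above d+ = 1
    print(lam, min(abs(np.linalg.eigvalsh(T(lam)))))
```

By eigenvalue interlacing with the diagonal block A, at least three eigenvalues of the 5×5 matrix lie above d+, so the loop is non-empty.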