Advanced Statistical Physics
Leticia F. Cugliandolo, Sorbonne Université


SLIDE 1

Advanced Statistical Physics

Leticia F. Cugliandolo
Sorbonne Université
Institut Universitaire de France
leticia@lpthe.jussieu.fr
www.lpthe.jussieu.fr/˜leticia

Disorder

SLIDE 2

Plan

  • 1. Principles and Formalism

— Recap on classical mechanics
— (In)Equivalence of ensembles for (long) short-range interactions
— Generalised Gibbs Ensembles (for integrable systems)
— Systems’ reduction (role of environments)

  • 2. Phase transitions

— Important concepts (phase diagrams, order parameters, spontaneous symmetry breaking, etc.)
— Uncommon mechanisms (e.g. topological phases, condensation)

  • 3. Disordered systems

— Concepts (competition & frustration, self-averageness, etc.)
— Random matrix theory
— Methods (scaling arguments, mean-field theory, replica trick)

SLIDE 3

Randomness

Impurities

No material is perfect and totally free of impurities (vacancies, substitutions, amorphous structures, etc.)

First distinction
— Weak randomness : phase diagram respected, criticality may change
— Strong randomness : phases modified

Second distinction
— Annealed : fluctuating (easier)
— Quenched : frozen, static (harder)


SLIDE 4

Quenched disorder

Variables frozen on time-scales over which other variables fluctuate.

Time scales

τ0 ≪ tobs ≪ τ_eq^qd

τ_eq^qd could be
  • the diffusion time-scale for magnetic impurities, the magnetic moments of which will be the variables of a magnetic system;
  • the flipping time of impurities that create random fields acting on other magnetic variables.

Weak disorder (modifies the critical properties but not the phases) vs. strong disorder (that modifies both). e.g. random ferromagnets vs. spin-glasses.

SLIDE 5

Geometrical problems

Random graphs & Percolation

SLIDE 6

Neural Networks

Real neural network: neurons connected by synapses on a random graph

Figures from AI, Deep Learning, and Neural Networks explained, A. Castrounis

SLIDE 7

Neural Networks

Sketch & artificial network

The connections in wT may have a random component. The state of the neuron, up (firing) or down (quiescent), is the result of the calculation. In the artificial network one chooses the geometry (number of nodes in the internal layers, number of hidden layers, connections between layers).

Figures from AI, Deep Learning, and Neural Networks explained, A. Castrounis

SLIDE 8

Spin-glasses

Magnetic impurities (spins) randomly placed in an inert host

  • ri are random and time-independent since the impurities do not move during experimental time-scales ⇒ quenched randomness

Magnetic impurities in a metal host: spins can flip but not move.

RKKY potential

V(rij) ∝ [cos(2 kF rij)/rij³] si sj

— very rapid oscillations about 0, positive & negative
— slow power-law decay
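Both features can be illustrated numerically; a minimal sketch (the unit prefactor, the value of kF, and the sampled distance range are arbitrary choices for illustration, not from the slides):

```python
import math
import random

def rkky(r, k_F=1.0):
    # RKKY-like coupling V(r) proportional to cos(2 k_F r) / r^3, prefactor set to 1
    return math.cos(2.0 * k_F * r) / r**3

random.seed(0)
# Impurities at random distances -> quenched couplings of either sign
couplings = [rkky(random.uniform(5.0, 50.0)) for _ in range(1000)]
n_pos = sum(1 for J in couplings if J > 0)
n_neg = sum(1 for J in couplings if J < 0)
print(n_pos > 0 and n_neg > 0)           # True: both signs occur (source of frustration)
print(abs(rkky(40.0)) < abs(rkky(5.0)))  # True: power-law decay of the envelope
```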

SLIDE 9

Spin-glasses

Models on a lattice with random couplings

Ising (or Heisenberg) spins si = ±1 sitting on a lattice.

Jij are random and time-independent since the impurities do not move during experimental time-scales ⇒ quenched randomness.

Magnetic impurities in a metal host: spins can flip but not move.

Edwards-Anderson model

HJ[{si}] = − Σ_⟨ij⟩ Jij si sj

Jij drawn from a pdf with zero mean & finite variance.
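A minimal sketch of the Edwards-Anderson energy on a small 2d square lattice (the toy size L = 4, the Gaussian pdf, and periodic boundaries are illustrative assumptions):

```python
import random

random.seed(1)
L = 4  # linear size of a toy 2d square lattice
# Quenched couplings: drawn once, then frozen; zero mean, unit variance
Jh = [[random.gauss(0.0, 1.0) for _ in range(L)] for _ in range(L)]  # horizontal bonds
Jv = [[random.gauss(0.0, 1.0) for _ in range(L)] for _ in range(L)]  # vertical bonds

def energy(s):
    # Edwards-Anderson energy H_J = - sum_<ij> J_ij s_i s_j, periodic boundaries
    E = 0.0
    for x in range(L):
        for y in range(L):
            E -= Jh[x][y] * s[x][y] * s[(x + 1) % L][y]
            E -= Jv[x][y] * s[x][y] * s[x][(y + 1) % L]
    return E

s = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
flipped = [[-x for x in row] for row in s]
print(abs(energy(s) - energy(flipped)) < 1e-12)  # True: global spin flip is a symmetry
```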

SLIDE 10

Neural networks

Models on graphs with random couplings

The neurons are Ising spins si = ±1 on a graph.

Jij are random and time-independent since the synapses do not change during experimental time-scales ⇒ quenched randomness.

The neural net: spins can flip but not move.

Hopfield model

HJ[{si}] = − Σ_ij Jij si sj

with the memory stored in the synapses,

Jij = (1/N) Σ_{µ=1}^{Np} ξi^µ ξj^µ

and the patterns ξi^µ drawn from a pdf with zero mean & finite variance.
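A minimal sketch of the Hebb couplings and of pattern retrieval (the sizes N = 200 and 3 patterns, the ±1 pattern pdf, and the zero-temperature update rule are illustrative assumptions):

```python
import random

random.seed(2)
N, p = 200, 3  # N neurons, p stored patterns, p << N
xi = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(p)]

# Hebb couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, no self-coupling
J = [[sum(xi[mu][i] * xi[mu][j] for mu in range(p)) / N if i != j else 0.0
      for j in range(N)] for i in range(N)]

def update(s):
    # One synchronous zero-temperature step: s_i -> sign(sum_j J_ij s_j)
    return [1 if sum(J[i][j] * s[j] for j in range(N)) > 0 else -1 for i in range(N)]

# Overlap of the first stored pattern with its own update: close to 1,
# i.e. the pattern is (with overwhelming probability) a fixed point
m = sum(a * b for a, b in zip(update(xi[0]), xi[0])) / N
print(m > 0.9)
```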

SLIDE 11

Optimization problems

K-Satisfiability

The problem is to determine whether the variables of a given Boolean formula F can be assigned in such a way to make the formula evaluate to TRUE (satisfied)

  • Example. We use x for the requirement x = TRUE and x̄ for the requirement x = FALSE.

Take the formula F = C1 : x1 OR x̄2, made of a single clause C1. It is satisfiable because one can find the values x1 = TRUE (and x2 free) or x2 = FALSE (and x1 free), which make C1 : x1 OR x̄2 TRUE.

This formula is so simple that 3 out of 4 possible configurations of the two variables solve it. This example belongs to the k = 2 class of satisfiability problems since the clause is made by two literals (involving different variables) only. It has M = 1 clauses and N = 2 variables.

SLIDE 12

Optimization problems

K-Satisfiability

Harder-to-decide formulæ are made of M clauses, each involving k literals required to take the true value (x) or the false value (x̄), taken from a pool of N variables. An example in k = 3-SAT is

F :  C1 : x1 OR x2 OR x3
     C2 : x5 OR x7 OR x9
     C3 : x1 OR x4 OR x7
     C4 : x2 OR x5 OR x8

All clauses have to be satisfied simultaneously, so the formula has to be read F : C1 AND C2 AND C3 AND C4. When α ≡ M/N ≫ 1 the problems typically become unsolvable, while many solutions exist for α ≪ 1. There is a sharp threshold at αc for N → ∞.

SLIDE 13

Optimization problems

Random K-Satisfiability

An instance of the problem, i.e. a formula F, is chosen at random with the following procedure:

First, one takes k variables out of the N available ones.
Second, one decides to require xi or x̄i for each of them with probability 1/2.
Third, one creates a clause taking the OR of these k literals.
Fourth, one returns the variables to the pool and the outlined three steps are repeated M times. The M resulting clauses form the final formula.
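The four steps above can be sketched directly (the `(i, sign)` literal encoding and the parameter values are illustrative choices, not from the slides):

```python
import random

def random_ksat(N, M, k, rng):
    # One clause per iteration: pick k distinct variables (step 1), negate each
    # with probability 1/2 (step 2), OR the k literals (step 3); repeat M times
    # with the variables returned to the pool (step 4).
    formula = []
    for _ in range(M):
        variables = rng.sample(range(N), k)
        clause = [(i, rng.random() < 0.5) for i in variables]  # sign True = plain x_i
        formula.append(clause)
    return formula

def satisfies(formula, assignment):
    # F is TRUE iff every clause contains at least one satisfied literal
    return all(any(assignment[i] == sign for i, sign in clause) for clause in formula)

rng = random.Random(0)
F = random_ksat(N=20, M=10, k=3, rng=rng)  # alpha = M/N = 0.5: typically satisfiable
print(len(F) == 10 and all(len(set(i for i, _ in c)) == 3 for c in F))  # True
```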

SLIDE 14

Optimization problems

Random K-Satisfiability

Boolean variables ⇒ Ising spins

xi evaluated to TRUE (FALSE) corresponds to si = 1 (−1)

The requirement that a formula be evaluated TRUE by an assignment of variables (i.e. a configuration of spins) ⇒ ground state of an adequately chosen energy function = cost function. In the simplest setting, each clause contributes zero (when satisfied) or one (when unsatisfied) to this cost function. There are several equivalent ways to reach this goal. The fact that the variables are linked together through the clauses suggests defining k-uplet interactions between them.

SLIDE 15

Optimization problems

Random K-Satisfiability

A way to represent a clause in an energy function is as an interaction between spins. For instance, the clause

C1 : x1 OR x̄2 OR x3

contributes the term

(1 − s1)(1 + s2)(1 − s3)/8

This term vanishes if s1 = 1 or s2 = −1 or s3 = 1, and then does not contribute to the total energy, which is written as a sum of terms of this kind. It is then simple to see that the total energy can be rewritten in a way that strongly resembles physical spin models,

HJ[{si}] = M/2^K + Σ_{R=1}^{K} (−1)^R Σ_{i1<···<iR} J_{i1...iR} s_{i1} · · · s_{iR}

with

J_{i1...iR} = (1/2^K) Σ_{a=1}^{M} J_{a,i1} · · · J_{a,iR}
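The clause-to-cost mapping above can be checked numerically; a minimal sketch (the `(i, sign)` literal encoding is an illustrative assumption):

```python
def clause_cost(clause, s):
    # Cost of one clause in spin language: product over literals of (1 - J s_i)/2,
    # with J = +1 for a plain literal (requires s_i = +1) and J = -1 for a negated
    # one (requires s_i = -1).  The product is 0 if any literal is satisfied, 1 if none.
    cost = 1.0
    for i, sign in clause:
        J = 1 if sign else -1
        cost *= (1 - J * s[i]) / 2
    return cost

# C1 : x1 OR (NOT x2) OR x3  ->  (1 - s1)(1 + s2)(1 - s3)/8
C1 = [(1, True), (2, False), (3, True)]
print(clause_cost(C1, {1: -1, 2: +1, 3: -1}))  # 1.0 : clause violated
print(clause_cost(C1, {1: -1, 2: -1, 3: -1}))  # 0.0 : s2 = -1 satisfies it
```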

SLIDE 16

Pinning by impurities

Competition between elasticity and quenched randomness

d-dimensional elastic manifold in a transverse N-dimensional quenched random potential.

Examples: interface between two phases (oil/water); vortex line in a type-II superconductor; stretched polymer. Figure: distorted Abrikosov lattice, Goa et al. 01.

SLIDE 17

Randomness

Properties
— Spatial inhomogeneity
— Frustration (spectrum pushed up, degeneracy of the ground state)
— Probability distribution of couplings, fields, etc.
— Lack of self-averageness

SLIDE 18

Frustration

Properties

HJ[{s}] = − Σ_⟨ij⟩ Jij si sj    Ising model

[Figure: ferromagnetic plaquette vs. plaquettes with disordered and geometric frustration]

E_gs^frust > E_gs^FM   and   S_gs^frust > S_gs^FM

Frustration enhances the ground-state energy and entropy. One can expect to have metastable states too. One cannot satisfy all couplings simultaneously if

Π_{⟨ij⟩ ∈ loop} Jij < 0
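Both statements can be verified by enumeration on a single square plaquette; a minimal sketch (unit couplings chosen for illustration):

```python
from itertools import product

def ground_states(J):
    # Exhaustive search over the 2^4 spin configurations of a square plaquette
    # with bonds J = [J01, J12, J23, J30]; returns (ground energy, degeneracy).
    counts = {}
    for s in product([-1, 1], repeat=4):
        E = -(J[0]*s[0]*s[1] + J[1]*s[1]*s[2] + J[2]*s[2]*s[3] + J[3]*s[3]*s[0])
        counts[E] = counts.get(E, 0) + 1
    E0 = min(counts)
    return E0, counts[E0]

print(ground_states([1, 1, 1, 1]))   # (-4, 2): loop product > 0, all bonds satisfied
print(ground_states([1, 1, 1, -1]))  # (-2, 8): loop product < 0, one bond must be
                                     # violated: higher energy AND larger degeneracy
```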
SLIDE 19

Heterogeneity

Each variable, spin or other, feels a different local field, hi = Σ_{j=1}^{z} Jij sj, contrary to what happens in a ferromagnetic sample, for instance.

Homogeneous: hi = 4J ∀ i.    Heterogeneous: hj = −2J, hk = 0, hl = 2J.

Each sample is a priori different but, do they all have a different thermodynamic and dynamic behavior?

SLIDE 20

Self-averageness

The disorder-induced free-energy density distribution approaches a Gaussian with vanishing dispersion in the thermodynamic limit:

lim_{N→∞} fN(β, J) = f∞(β)   independently of disorder

— Experiments : all typical samples behave in the same way.
— Theory : one can perform a (hard) average over disorder, [. . . ],

−β f∞(β) = lim_{N→∞} (1/N) [ln ZN(β, J)]

Exercise : prove it for the 1d Ising chain; give an argument for finite-d systems. Intensive quantities are also self-averaging.

Replica theory

−β f∞(β) = lim_{N→∞} lim_{n→0} ([ZN^n(β, J)] − 1)/(N n)
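The replica formula follows from the small-$n$ expansion of $Z^n$; spelling out the step:

```latex
\[
Z^n = e^{n\ln Z} = 1 + n\ln Z + O(n^2)
\qquad\Longrightarrow\qquad
\ln Z = \lim_{n\to 0}\frac{Z^n - 1}{n}\,,
\]
so that, averaging over the disorder and dividing by $N$,
\[
-\beta f_\infty(\beta)
= \lim_{N\to\infty}\frac{1}{N}\,[\ln Z_N(\beta,J)]
= \lim_{N\to\infty}\lim_{n\to 0}\frac{[Z_N^n(\beta,J)] - 1}{N n}\,.
\]
```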

SLIDE 21

Self-averageness

The question
Given two samples with different quenched randomness (e.g. different interaction strengths Jij or random fields hi) but drawn from the same (kind of) distribution, is their behaviour going to be totally different? Which quantities are expected to be the same and which not?

SLIDE 22

Self-averageness

Observables & distributions

Given a quantity AJ, which depends on the quenched randomness J, it is distributed according to

P(A) = ∫ dJ p(J) δ(A − AJ)

This pdf is expected to become narrower and narrower (more peaked) as N → ∞. Therefore, one will observe Atyp = argmax_A P(A). However, it is difficult to calculate Atyp; what about calculating

[A] = ∫ dA P(A) A ?
SLIDE 23

Self-averageness

Warm-up exercise

A function f is convex iff ∀ x1, x2 and t ∈ [0, 1]:

f(t x1 + (1 − t) x2) ≤ t f(x1) + (1 − t) f(x2) .

SLIDE 24

Self-averageness

Warm-up exercise

SLIDE 25

Self-averageness

Warm-up exercise

SLIDE 26

Self-averageness

Example : the disordered Ising chain

HJ[{si}] = − Σ_i Ji si si+1    Ji distributed with any pdf p(Ji)

Compute the partition function ZJ by introducing σi = si si+1 :

ZJ = Σ_{si=±1} e^{β Σ_i Ji si si+1} = Σ_{σi=±1} e^{β Σ_i Ji σi} = Π_{i=1}^{N} 2 cosh(βJi)

(boundary-condition effects negligible for N → ∞). It is a product of N random numbers. The free-energy is

−βFJ = Σ_{i=1}^{N} ln cosh(βJi) + N ln 2

It is a sum of N random numbers.
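The product formula can be checked against a brute-force sum over configurations for a short open chain; a minimal sketch (for open boundaries the exact result is ZJ = 2 Π_{i=1}^{N−1} 2 cosh(βJi), which agrees with the slide up to the negligible boundary factor; the couplings and β below are arbitrary):

```python
import math
from itertools import product

def brute_force_Z(J, beta):
    # Z_J = sum over all spin configurations of exp(beta * sum_i J_i s_i s_{i+1})
    N = len(J) + 1  # open chain: N spins, N-1 bonds
    Z = 0.0
    for s in product([-1, 1], repeat=N):
        E = sum(J[i] * s[i] * s[i + 1] for i in range(N - 1))
        Z += math.exp(beta * E)
    return Z

def product_Z(J, beta):
    # Exact open-chain result: Z_J = 2 * prod_i 2 cosh(beta J_i)
    Z = 2.0
    for Ji in J:
        Z *= 2.0 * math.cosh(beta * Ji)
    return Z

J = [0.7, -1.2, 0.3, 2.1]  # one disorder realization; any pdf will do
beta = 0.9
print(abs(brute_force_Z(J, beta) - product_Z(J, beta)) < 1e-9)  # True
```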

SLIDE 27

Self-averageness

Example : the disordered Ising chain

HJ[{si}] = − Σ_i Ji si si+1    Ji distributed with any pdf p(Ji)

The partition function & the free-energy density are different objects:

ZJ = Π_{i=1}^{N} 2 cosh(βJi)    −βfJ = (1/N) Σ_{i=1}^{N} [ln cosh(βJi) + ln 2]

Take the Ji to be i.i.d. with zero mean, [Ji] = 0, and finite variance, [Ji²] = σ², and use the Central Limit Theorem:

X = (1/N) Σ_i xi is Gaussian distributed with average [X] = [xi] and variance [(X − [X])²] = σ²/N

Therefore fJ is Gaussian distributed and its variance vanishes for N → ∞. Moreover, fJ^typ = [fJ].
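A quick numerical check that the sample-to-sample fluctuations of fJ shrink with N (the Gaussian coupling pdf, the sizes, and the number of disorder samples are choices made for illustration):

```python
import math
import random

random.seed(3)
beta, samples = 1.0, 200

def minus_beta_f(J):
    # -beta f_J = (1/N) sum_i [ln cosh(beta J_i) + ln 2] for the disordered chain
    return sum(math.log(math.cosh(beta * Ji)) + math.log(2.0) for Ji in J) / len(J)

def variance_over_disorder(N):
    # Variance of the free-energy density over independent disorder realizations
    vals = [minus_beta_f([random.gauss(0.0, 1.0) for _ in range(N)])
            for _ in range(samples)]
    mean = sum(vals) / samples
    return sum((v - mean) ** 2 for v in vals) / samples

v_small, v_large = variance_over_disorder(100), variance_over_disorder(5000)
print(v_small > v_large)  # True: the dispersion decreases (roughly as 1/N) with size
```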

SLIDE 28

Self-averageness

Systems with short-range interactions

Divide a, say, cubic system of volume V = L^d into n sub-cubes of volume v = ℓ^d, with V = n v.

SLIDE 29

Self-averageness

Systems with short-range interactions

For short-range interactions the total free-energy is the sum of two terms, a contribution from the bulk of the subsystems and a contribution from the interfaces between the subsystems:

−βFJ = ln ZJ = ln Σ_conf e^{−βHJ(conf)} ≈ ln Σ_conf e^{−βHJ(bulk) − βHJ(surf)}
     = ln Σ_bulk e^{−βHJ(bulk)} + ln Σ_surf e^{−βHJ(surf)} = −βFJ^bulk − βFJ^surf

where the ≈ indicates that we dropped the contributions of interactions between the bulk and the interfaces (surf).

SLIDE 30

Self-averageness

Systems with short-range interactions

If the interactions extend over a short distance l and the linear size of the boxes is ℓ ≫ l, we also assume that the surface energy is negligible with respect to the bulk one (same for possible entropic contributions) and

−βFJ ≈ −βFJ^bulk = ln Σ_bulk e^{−βHJ(bulk)}

The disorder-dependent free-energy is a sum of n = (L/ℓ)^d independent random numbers, each one being the disorder-dependent free-energy of the bulk of each subsystem:

−βFJ ≈ Σ_{k=1}^{n} ln Σ_{bulk_k} e^{−βHJ(bulk_k)}

In the limit of a very large number of subsystems (L ≫ ℓ or n ≫ 1) the CLT implies that the free-energy density is Gaussian distributed with fJ^typ = [fJ].

SLIDE 31

Self-averageness

Systems with short-range interactions

The dispersion about the typical value of the total free-energy vanishes in the large-n limit, σ_{FJ}/[FJ] ∝ √n/n = n^{−1/2} → 0. That of the free-energy density, or intensive free-energy, fJ = FJ/N, does as well: σ_{fJ}/[fJ] = O(n^{−1/2}). In a sufficiently large system the typical free-energy density fJ^typ is then very close to the average [fJ], and one can compute the latter to understand the static properties of typical samples. Much easier to do analytically. More later.

SLIDE 32

Self-averageness

Failure and quenched vs. annealed

Go back to the one-dimensional disordered Ising chain and show that the partition function and the spatial correlations are not self-averaging.

The annealed free-energy is defined as −βF^annealed = ln [ZJ]
The quenched free-energy is defined as −βF^quenched = [ln ZJ]

Jensen’s inequality applied to the convex function − ln y implies

− ln [ZJ] ≤ −[ln ZJ]

and for the free-energies one deduces

F^annealed = −β^{−1} ln [ZJ] ≤ −β^{−1} [ln ZJ] = F^quenched
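The inequality can be illustrated numerically on the disordered chain using the exact product formula for ZJ (a sketch; the chain length, coupling pdf, and sampling are illustrative choices):

```python
import math
import random

random.seed(4)
beta, N, samples = 1.0, 8, 2000

def log_Z(J):
    # Exact open-chain result: ln Z_J = ln 2 + sum_i ln(2 cosh(beta J_i))
    return math.log(2.0) + sum(math.log(2.0 * math.cosh(beta * Ji)) for Ji in J)

log_Zs = [log_Z([random.gauss(0.0, 1.0) for _ in range(N - 1)]) for _ in range(samples)]

annealed = math.log(sum(math.exp(lz) for lz in log_Zs) / samples)  # ln [Z_J]
quenched = sum(log_Zs) / samples                                   # [ln Z_J]
print(annealed >= quenched)  # True (Jensen), i.e. F_annealed <= F_quenched
```

The comparison holds for any sample by the concavity of the logarithm; the gap between the two averages is what the annealed approximation misses.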

SLIDE 33

Methods for disordered systems

Statics
— Fully-connected (complete graph) : TAP (Thouless-Anderson-Palmer), Replica theory
— Dilute (random graph) : Gaussian approximation to field-theories, Cavity or Peierls approximation
— Finite dimensions : Bubbles & droplet arguments, functional RG

Dynamics
Generating functional for classical field theories (MSRJD). Schwinger-Keldysh closed-time path-integral for quantum dissipative models (the previous is recovered in the ℏ → 0 limit). Perturbation theory, renormalization group techniques, self-consistent approximations.