Casimir effect and 3d QED from machine learning
Harold Erbin
Università di Torino & INFN (Italy)
In collaboration with: M. Chernodub (Tours), V. Goy, I. Grishmanovky,
A. Molochkov (Vladivostok) [arXiv:1911.07571 + to appear]
[Figure: image-to-image translation examples. Horse ↔ zebra, summer ↔ winter, photograph ↔ painting styles (Van Gogh, Cézanne, Monet, Ukiyo-e), Monet ↔ photo.]
15 / 49
16 / 49
17 / 49
17 / 49
18 / 49
18 / 49
19 / 49
19 / 49
20 / 49
20 / 49
21 / 49
21 / 49
21 / 49
22 / 49
23 / 49
24 / 49
25 / 49
26 / 49
27 / 49
28 / 49
[Figure: sample lattice configurations (256×256) with true vs predicted Casimir energies:
id 136: error = 0.000596, true = −13.5286, pred = −13.5205
id 722: error = 0.000225, true = −36.4675, pred = −36.4593
id 471: error = 1.190834, true = −1.54119, pred = −3.37649
id 98: error = 0.042850, true = −37.6339, pred = −36.0213]
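The per-sample errors quoted above are consistent with a relative error |pred − true| / |true|, which reproduces the listed figures to within rounding of the displayed digits. A minimal sketch (the helper name is mine, not from the slides):

```python
# Relative error between true and predicted Casimir energies,
# using the sample values quoted on the slide.
samples = {
    136: (-13.5286, -13.5205),  # (true, pred)
    722: (-36.4675, -36.4593),
    471: (-1.54119, -3.37649),  # outlier: relative error > 1
    98: (-37.6339, -36.0213),
}

def rel_error(true, pred):
    """|pred - true| / |true|, the error measure the slide appears to use."""
    return abs(pred - true) / abs(true)

for sample_id, (true, pred) in samples.items():
    print(f"id {sample_id}: error = {rel_error(true, pred):.6f}")
```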
[Histograms: true vs predicted Casimir energy E_c (two binnings), and distribution of relative errors on E_c (full range and zoomed in).]
[Training curves: loss vs epochs, and loss vs percentage of training data (n = 2000), train and validation.]
36 / 49
37 / 49
38 / 49
39 / 49
[Plot: monopole density (MonDens) and modified Polyakov loop (PL_mod) vs β, machine-learning predictions vs Monte Carlo.]
[Plots, four panels: modified Polyakov loop PL_mod vs β, with confined and deconfined regions marked.]
[Histograms: true vs predicted modified Polyakov loop (PL_mod) and monopole density (MonDens).]
[Training curves: loss vs epochs, and loss vs percentage of training data (n = 64200), train and validation.]
[Plots: mean and variance of the predicted phase probability vs β, for lattice sizes (4, 16, 16), (4, 32, 32), (6, 16, 16), (6, 32, 32), (8, 16, 16), (8, 32, 32).]
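The mean and variance curves above can be assembled from per-configuration classifier outputs. A sketch of the averaging pattern, with stand-in random data (the ensemble here is hypothetical; only the per-β statistics mirror the slide):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: for each coupling beta, an array of classifier
# outputs P(deconfined) over Monte Carlo configurations (stand-in data).
betas = np.linspace(1.6, 2.4, 9)
phase_prob = {b: rng.uniform(0.0, 1.0, size=200) for b in betas}

# Per-beta mean and variance, as plotted on the slide.
mean = np.array([phase_prob[b].mean() for b in betas])
var = np.array([phase_prob[b].var() for b in betas])
```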
[Plot: relative error on c vs L_t, for L_s = 16 and L_s = 32.]