Large Pages May Be Harmful on NUMA Systems (PowerPoint PPT Presentation)

SLIDE 1

Large Pages May Be Harmful on NUMA Systems

Fabien Gaud, Simon Fraser University · Baptiste Lepers, CNRS · Jeremie Decouchant, Grenoble University · Justin Funston, Simon Fraser University · Alexandra Fedorova, Simon Fraser University · Vivien Quéma, Grenoble INP

SLIDE 2

Virtual-to-physical translation is done by the TLB and page table

[Diagram: virtual address → TLB; on a TLB hit, the physical address is returned; on a TLB miss, the page table is walked.]

Typical TLB size: 1024 entries (AMD Bulldozer), 512 entries (Intel i7).

SLIDE 3

Virtual-to-physical translation is done by the TLB and page table

[Diagram: virtual address → TLB; on a TLB hit, the physical address is returned; on a TLB miss, the page table is walked.]

Typical TLB size: 1024 entries (AMD Bulldozer), 512 entries (Intel i7).

The page-table walk on a TLB miss costs about 43 cycles.

SLIDE 4

To reduce the number of TLB misses, developers can use "large pages"

Page size       512-entry coverage   1024-entry coverage
4KB (default)   2MB                  4MB
2MB             1GB                  2GB
1GB             512GB                1024GB

In Linux:

  • Manually: mmap(…, flags | MAP_HUGETLB)
  • Automatically: using Transparent Huge Pages (THP). THP uses 2MB pages for anonymous memory and periodically clusters groups of 4K pages.

SLIDE 5

Large pages: known advantages & downsides

Known advantages:

  • Fewer TLB misses
  • Fewer page allocations (reduces contention in the kernel memory manager)

Known downsides:

  • Increased memory footprint
  • Memory fragmentation
SLIDE 6

New observation: large pages may hurt performance on NUMA machines

[Bar charts: perf. improvement with THP relative to default Linux (%), y-axis from -30 to 30, for BT.B, CG.D, DC.A, EP.C, FT.C, IS.D, LU.B, MG.D, SP.B, UA.B, UA.C, WC, WR, Kmeans, Matrix Multiply, pca, wrmem, SSCA.2, and SPECjbb. Left: Machine A, 24 cores, with an off-scale bar at -43. Right: Machine B, 64 cores, with off-scale bars at 109, 70, and 51.]

SLIDE 7

Machines are NUMA

[Diagram: four NUMA nodes (CPU0–CPU3), each with local memory. Interconnect links: 8GB/s at 160 cycles, 3GB/s at 300 cycles.]

Remote memory accesses hurt performance

SLIDE 8

Machines are NUMA

[Diagram: the same four-node machine under contention; a memory access now costs 1200 cycles.]

Contention hurts performance even more.

SLIDE 9

Large pages on NUMA machines (1/2)

[Diagram: nodes 0–3.]

void *a = malloc(2MB);

With 4K pages, load is balanced.

SLIDE 10

Large pages on NUMA machines (1/2)

[Diagram: nodes 0–3.]

void *a = malloc(2MB);

With 2M pages, the data is allocated on one node => contention.

SLIDE 11

Large pages on NUMA machines (1/2)

[Diagram: nodes 0–3.]

void *a = malloc(2MB);

With 2M pages, the data is allocated on one node => contention. The heavily accessed 2M page becomes a HOT PAGE.

SLIDE 12

Performance example (1/2)

App       Perf. increase   Time in TLB      Time in TLB      Imbalance   Imbalance
          THP/4K (%)       misses, 4K (%)   misses, 2M (%)   4K (%)      2M (%)
CG.D      -43              –                –                1           59
SSCA.2    17               15               2                8           52
SpecJBB   -6               7                –                16          39

Using large pages, one node is overloaded in CG, SSCA and SpecJBB. Only SSCA benefits from the reduction of TLB misses.

SLIDE 13

Large pages on NUMA machines (2/2)

[Diagram: nodes 0–3.]

void *a = malloc(1.5MB); // node 0
void *b = malloc(1.5MB); // node 1

PAGE-LEVEL FALSE SHARING: both allocations fall into the same 2M page, so they cannot each be placed on their preferred node. Page-level false sharing reduces the maximum achievable locality.

SLIDE 14

Performance example (2/2)

App    Perf. increase   Local access     Local access
       THP/4K (%)       ratio, 4K (%)    ratio, 2M (%)
UA.C   -15              88               66

Locality decreases when using large pages.

SLIDE 15

Can existing memory management algorithms solve the problem?

SLIDE 16

Existing memory management algorithms do not solve the problem

We ran the applications with Carrefour [1], the state-of-the-art memory management algorithm. Carrefour monitors memory accesses and places pages to minimize imbalance and maximize locality.

[1] Dashti, M., Fedorova, A., Funston, J., Gaud, F., Lachaize, R., Lepers, B., Quéma, V., and Roth, M. Traffic Management: A Holistic Approach to Memory Placement on NUMA Systems. ASPLOS 2013.

[Bar chart: perf. improvement relative to default Linux (%), y-axis from -30 to 30, for THP and Carrefour-2M, on CG.D, LU.B, UA.B, UA.C, Matrix Multiply, wrmem, SSCA.2, and SPECjbb.]

Carrefour solves the imbalance/locality issues of some applications, but does not improve performance on others (hot pages or page-level false sharing).

SLIDE 17

We need a new memory management algorithm

SLIDE 18

Our solution – Carrefour-LP

  • Built on top of Carrefour.
  • By default, 2M pages are activated.
  • Two components that run every second:

Reactive component:
  • Splits 2M pages: detects and removes "hot pages" and page-level "false sharing".
  • Deactivates 2M page allocation.

Conservative component:
  • Promotes 4K pages when the time spent handling TLB misses is high.
  • Forces 2M page allocation in case of contention in the page fault handler.

  • We show in the paper that the two components are required.

SLIDE 19

Implementation

Reactive component (splits 2M pages):

  1. Sample memory accesses using IBS.
  2. If a page represents more than 5% of all accesses and is accessed from multiple nodes: split and interleave the hot page.

SLIDE 20

Implementation

Reactive component (splits 2M pages):

  1. Sample memory accesses using IBS.
  2. Compute the observed local access ratio (LAR1).
  3. Compute the LAR that would have been obtained if each page were placed on the node that accessed it the most. If LAR1 can be significantly improved: run Carrefour.
  4. Otherwise, compute the LAR that would have been obtained if each page were split and then placed on the node that accessed it the most. If LAR1 can be significantly improved: split all 2M pages and run Carrefour.


SLIDE 21

Implementation challenges

Reactive component (splits 2M pages):

  1. Sample memory accesses using IBS.
  2. Compute the observed local access ratio (LAR1).
  3. Compute the LAR that would have been obtained if each page were placed on the node that accessed it the most (without splitting). [COSTLY] If LAR1 can be significantly improved: run Carrefour.
  4. Otherwise, compute the LAR that would have been obtained if each page were split and then placed on the node that accessed it the most. [COSTLY, IMPRECISE] If LAR1 can be significantly improved: split all 2M pages and run Carrefour.

SLIDE 22

Implementation challenges

Reactive component (splits 2M pages):

  • We only have a few IBS samples.
  • The LAR with "2M pages split into 4K pages" can be wrong.
  • We try to be conservative by running Carrefour first and only splitting pages when necessary (splitting pages is expensive).
  • Predicting that splitting a 2M page will increase the TLB miss rate is hard. This is why the conservative component is required.

SLIDE 23

Implementation

Conservative component:

  1. Monitor the time spent in TLB misses (hardware counters). If > 5%: cluster 4K pages and force 2M page allocation.
  2. Monitor the time spent in the page fault handler (kernel statistics). If > 5%: force 2M page allocation.


SLIDE 24

Evaluation

[Bar charts: perf. improvement relative to default Linux (%), y-axis from -30 to 30, for Carrefour-2M, Conservative, Reactive, and Carrefour-LP, on CG.D, LU.B, UA.B, UA.C, Matrix Multiply, wrmem, SSCA.2, and SPECjbb. Left: Machine A, 24 cores, with an off-scale bar at -40. Right: Machine B, 64 cores, with off-scale bars at 46, 32, 45, and 46.]

The reactive and conservative components work together.

SLIDE 25

Evaluation

  • On the selected set of applications, our solution performs up to:
    • 46% better than Linux
    • 50% better than THP
  (The full set of applications is available in the paper.)
  • Overhead: less than 3% CPU overhead.

SLIDE 26

Conclusion

  • Large pages can hurt performance on NUMA systems.
  • We identified two new issues when using large pages on NUMA systems: "hot pages" and "page-level false sharing".
  • We designed a new algorithm, Carrefour-LP, that:
    • Splits large pages when they hurt performance.
    • Promotes 4K pages and uses 2M page allocation when beneficial.
  • Carrefour-LP restores the performance lost due to large pages and makes their benefits accessible to applications.

SLIDE 27

Questions?

SLIDE 28

SLIDE 29

Performance example

App       Perf. increase   Time in page        Time in page        Local access    Local access    Imbalance   Imbalance
          THP/4K (%)       fault handler, 4K   fault handler, 2M   ratio, 4K (%)   ratio, 2M (%)   4K (%)      2M (%)
CG.D      -43              2200ms (0.1%)       450ms (0.1%)        40              36              1           59
UA.C      -15              100ms (0.2%)        50ms (0.1%)         88              66              14          12
WR        109              8700ms (38%)        3700ms (32%)        50              55              147         136
SSCA.2    17               90ms (0%)           150ms (0%)          25              26              8           52
SpecJBB   -6               8400ms (2%)         5900ms (1.5%)       12              15              16          39