

SLIDE 1

Operating Systems:

Memory

Thursday, 14 February 19

IN2140: Introduction to Operating Systems and Data Communication

SLIDE 2

IN2140, Pål Halvorsen

University of Oslo

Challenge – managing memory

How to…

  • see which seats are available
  • allocate a seat
  • find one particular person
SLIDE 3

Overview

§ Hierarchies
§ Multiprogramming and memory management
§ Addressing
§ A process’ memory
§ Partitioning
§ Paging and Segmentation
§ Virtual memory
§ Page replacement algorithms
§ Paging example in IA32
§ Data paths

SLIDE 4

Memory Management

§ Memory management is concerned with managing the system’s memory resources
− allocate space to processes
− protect the memory regions
− provide a virtual view of memory, giving the impression of having more than the available number of bytes
− control different levels of memory in a hierarchy

SLIDE 5

Memory Hierarchies

§ A process typically needs a lot of memory, and we can’t access the disk every time we need data

§ Typical computer systems therefore have several different components where data may be stored
− different capacities
− different speeds
− less capacity gives faster access and higher cost per byte

§ Lower levels have a copy of data in higher levels

§ A typical memory hierarchy: cache(s), main memory, secondary storage (disks), tertiary storage (tapes) — between levels, speed, capacity, and price per byte change by factors on the order of 2x, 100x, and 10^7x

SLIDE 6

[figure: the hierarchy levels cache(s), main memory, secondary storage (disks), and tertiary storage (tapes) annotated with access times — on-die memory ~0.3–1 ns, main memory ~50 ns, disks ~5 ms, tapes < 1 s and up (2 s, 1.5 minutes, 3.5 months)]

SLIDE 7

Memory Hierarchies

[figure: the same hierarchy and access times as on the previous slide]

Tapes? Well, they …

  • store a LOOOT of data (IBM TS3500 Tape Library: 2.25 exabytes)
  • are SLOOOOOOOW… (seconds → minutes → hours) → like going to other far-away galaxies… → “5 sec ≈ ~290 years”

SLIDE 8

Storage Costs: Access Time vs Capacity

[figure, from Gray & Reuter: typical capacity (bytes) plotted against access time (sec) for cache, main memory, magnetic disks, online tape, and offline tape]

SLIDE 9

Storage Costs: Access Time vs Price

[figure, from Gray & Reuter: price (dollars/Mbyte) plotted against access time (sec) for cache, main memory, magnetic disks, online tape, and offline tape]

SLIDE 10

Absolute and Relative Addressing

§ Hardware often uses absolute addressing
− reserved memory regions
− reading data by referencing the byte numbers in memory
− read absolute byte 0x000000ff
− fast!!!

§ What about software?
− read absolute byte 0x000fffff (process A) ⇒ the result depends on the process’ physical location
− absolute addressing is not convenient
− but addressing within a process is determined during programming!!??

➥ Relative addressing
− independent of the process’ position in memory
− the address is expressed relative to some base location
− dynamic address translation – find the absolute address during run-time by adding the relative and base addresses

[figure: process A shown at two different locations between 0x000… and 0xfff… — an address can be an offset from the process’ start address or an offset from memory’s start address]
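Dynamic address translation can be sketched in a few lines; the base, offset, and limit values here are made-up examples, not from the slides.

```python
def to_absolute(base: int, offset: int, limit: int) -> int:
    """Translate a process-relative address to an absolute one,
    with a limit check for memory protection."""
    if not 0 <= offset < limit:
        raise MemoryError("offset outside the process' region")
    return base + offset        # absolute = base + relative

# Process A loaded at base 0x4000 with a 16 KB (0x4000-byte) region:
addr = to_absolute(base=0x4000, offset=0x0FF, limit=0x4000)   # 0x40FF
```

If the OS later moves the process, only the base value changes; all relative addresses inside the program stay valid.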
SLIDE 11

Processes’ Memory Layout

process A (low address → high address): code segment, system data segment (PCB), data segment (initialized variables, uninitialized variables, heap), stack, possible thread stacks, arguments

The code segment is illustrated by this disassembly:

8048314 <add>:
 8048314: push %ebp
 8048315: mov %esp,%ebp
 8048317: mov 0xc(%ebp),%eax
 804831a: add 0x8(%ebp),%eax
 804831d: pop %ebp
 804831e: ret
804831f <main>:
 804831f: push %ebp
 8048320: mov %esp,%ebp
 8048322: sub $0x18,%esp
 8048325: and $0xfffffff0,%esp
 8048328: mov $0x0,%eax
 804832d: sub %eax,%esp
 804832f: movl $0x0,0xfffffffc(%ebp)
 8048336: movl $0x2,0x4(%esp,1)
 804833e: movl $0x4,(%esp,1)
 8048345: call 8048314 <add>
 804834a: mov %eax,0xfffffffc(%ebp)
 804834d: leave
 804834e: ret
 804834f: nop

§ On most architectures, a process partitions its available memory (address space), but for what?

− a text (code) segment
  • read from the program file, for example by exec
  • usually read-only
  • can be shared
− a data segment
  • initialized global/static variables (data)
  • uninitialized global/static variables (BSS)
  • heap – dynamic memory, e.g., allocated using malloc, growing toward higher addresses
− a stack segment
  • stores parameters/variables in a function
  • stores register states (e.g., the calling function’s EIP)
  • grows toward lower addresses
− a system data segment (PCB)
  • segment pointers
  • pid
  • program and stack pointers
− possibly more stacks for threads
− command line arguments and environment variables at the highest addresses


SLIDE 13

Global Memory Layout

§ Memory is usually divided into regions

− operating system occupies low memory

  • system control
  • resident routines

− the remaining area is used for transient operating system routines and application programs

§ How to assign memory to concurrent processes?

[figure: memory from 0x000… to 0xfff… — system control information and the resident operating system (kernel) occupy low memory; the transient area (application programs and transient operating system routines) lies above]

SLIDE 14

The Challenge of Multiprogramming

§ Many “tasks” require memory

− several processes concurrently loaded into memory
− memory is needed for different tasks within a process
− process memory demand may change over time
➥ the OS must arrange (dynamic) memory sharing

SLIDE 15

Memory Management for Multiprogramming

§ Use of secondary storage
− keeping all programs and their data in memory may be impossible
− move (parts of) a process out of memory

§ Swapping: remove a whole process from memory
− with all of its state and data
− store it on a secondary medium (disk, flash RAM, …, historically also tape)

§ Overlays: manually replace parts of code/data
− the programmer’s rather than the OS’s work
− only used in very old and memory-scarce systems

§ Segmentation/paging: remove parts of a process from memory
− store them on a secondary medium
− the sizes of such parts are usually fixed

SLIDE 16

Fixed Partitioning

§ Divide memory into static partitions at system initialization time (boot or earlier)

§ Advantages
− easy to implement
− can support swapping of processes

§ Equal-size partitions

− large programs cannot be executed

(unless parts of a program are loaded from disk)

− small programs don't use the entire partition

(problem called “internal fragmentation”)

§ Unequal-size partitions
− large programs can be loaded at once
− less internal fragmentation
− require assignment of jobs to partitions
− one queue, or one queue per partition
− …but what if there are only small, or only large, processes?

[figure — equal-sized: the operating system’s 8 MB followed by eight 8 MB partitions; unequal-sized: the operating system’s 8 MB followed by partitions from 2 MB up to 16 MB]
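With equal-sized partitions, the wasted space per process (internal fragmentation) is easy to quantify. A small sketch — the 8 MB partition size comes from the figure, the 5 MB process is a made-up example:

```python
PARTITION = 8 * 1024 * 1024          # bytes, as in the equal-sized example

def internal_fragmentation(process_size: int) -> int:
    """Bytes wasted when a process occupies one fixed partition."""
    if process_size > PARTITION:
        raise ValueError("program too large for a single partition")
    return PARTITION - process_size

waste = internal_fragmentation(5 * 1024 * 1024)   # 3 MB wasted
```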

SLIDE 17

Dynamic Partitioning

§ Divide memory at run-time
− partitions are created dynamically
− removed after jobs have finished

§ External fragmentation increases with system running time

[figure: snapshots as processes 1–5 (8–20 MB each) are loaded and removed — scattered free fragments remain: external fragmentation]

SLIDE 18

Dynamic Partitioning

§ Compaction removes fragments by moving data in memory
− takes time
− consumes processing resources

[figure: the free 4 MB and two 6 MB fragments around processes 3, 4, and 5 are collected into one 16 MB free block]

SLIDE 19

Dynamic Partitioning

§ Compaction removes fragments by moving data in memory
− takes time
− consumes processing resources

§ A proper placement algorithm might reduce the need for compaction
− first fit – simplest, fastest, typically the best
− next fit – problems with large segments
− best fit – slowest, lots of small fragments, therefore often the worst

[figure: free holes of 4, 10, 8, 6, and 14 MB — where should a new 3 MB request go under first fit, next fit (starting after the last allocation), and best fit?]
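Two of the placement strategies can be compared on a list of hole sizes. The holes mirror the figure (4, 10, 8, 6, 14 MB), while the 5 MB request is my own example, chosen so that first fit and best fit disagree:

```python
def first_fit(holes, size):
    """Index of the first hole that is large enough, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that is large enough, or None."""
    fitting = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return min(fitting)[1] if fitting else None

holes = [4, 10, 8, 6, 14]     # free hole sizes in MB, in memory order
first_fit(holes, 5)           # index 1: the 10 MB hole (fast, stops early)
best_fit(holes, 5)            # index 3: the 6 MB hole (leaves a 1 MB fragment)
```

First fit stops at the first match, which is why it is the fastest; best fit must scan all holes and tends to produce many tiny leftover fragments.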

SLIDE 20

The Buddy System

§ Mix of fixed and dynamic partitioning
− partitions have sizes 2^k, L ≤ k ≤ U

§ Maintain a list of holes of each size
§ Assigning memory to a process:
− find the smallest k so that the process fits into 2^k bytes
− find a hole of size 2^k
− if none is available, split the smallest hole larger than 2^k recursively into halves

§ Merge partitions if possible when released
§ … but what if I now get a 513 kB process?
… do we really need the process in contiguous memory?

[figure: a 1 MB block split into 512, 256, 128, 64, and 32 kB buddies as processes of 128, 256, and 32 kB are allocated]
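The 513 kB question can be answered by computing the buddy block size for a request; a minimal sketch, with sizes in kB:

```python
def block_size(request_kb: int) -> int:
    """Smallest power-of-two block (in kB) that fits the request."""
    size = 1
    while size < request_kb:
        size <<= 1
    return size

block_size(513)    # needs a full 1024 kB block: almost half is wasted
block_size(128)    # fits exactly in a 128 kB block
```

A 513 kB process thus occupies a 1 MB block, wasting nearly half of it to internal fragmentation — which is exactly why the slide asks whether the process really needs contiguous memory.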

SLIDE 21

Segmentation

§ Requiring that a process is placed in contiguous memory gives much fragmentation (and memory compaction is expensive)

§ Segmentation
− parts of different lengths
− determined by the programmer

§ The programmer (or compiler tool-chain) organizes the program in parts
− more control
− needs awareness of possible segment size limits

§ Pros and cons
+ principle as in dynamic partitioning – segments can have different sizes
+ no internal fragmentation
+ less external fragmentation, because segments are on average smaller
− adds a step to the address translation

SLIDE 22

Segmentation: address lookup

[figure: memory holds the operating system, other regions/programs, and process A’s segments 0–2; a segment table holds the segment start addresses 0x…a…, 0x…b…, 0x…c…]

address = segment number | offset

1. find the segment table address in a register
2. extract the segment number from the address
3. find the segment start address, using the segment number as index into the segment table
4. find the absolute address within the segment by adding the relative address (offset) to the segment start address
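The four lookup steps can be sketched as code. The 12-bit offset split and the segment start addresses are assumptions of mine, echoing the 0x…a…/0x…b…/0x…c… table on the slide:

```python
OFFSET_BITS = 12                                   # assumed address split

# Step 1: the segment table (its address would normally sit in a register)
segment_table = {0: 0xA000, 1: 0xB000, 2: 0xC000}  # segment start addresses

def translate(virtual: int) -> int:
    seg = virtual >> OFFSET_BITS                   # step 2: extract segment number
    base = segment_table[seg]                      # step 3: index the segment table
    offset = virtual & ((1 << OFFSET_BITS) - 1)
    return base + offset                           # step 4: base + relative address

translate(0x1004)    # segment 1, offset 4 -> 0xB004
```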

SLIDE 23

Paging

§ Paging
− equal lengths, determined by the processor
− one page is moved into one page (memory) frame

§ A process is loaded into several frames (not necessarily consecutive ones)

§ Fragmentation
− no external fragmentation
− little internal fragmentation (depends on frame size)

§ Addresses are dynamically translated during run-time (similar to segmentation)

§ Can combine segmentation and paging

[figure: pages of processes 1–7 placed into memory frames, not necessarily consecutively]

SLIDE 24

Virtual Memory

§ The described partitioning schemes may be used in applications, but a modern OS also uses virtual memory:
− early attempt to give a programmer more memory than physically available
  • older computers had relatively little main memory
  • but still today, not all instructions have to be in memory before execution starts
    § break the program into smaller independent parts
    § load the currently active parts
    § when a program is finished with one part, a new one can be loaded
− memory is divided into equal-sized frames, often called pages
− some pages reside in physical memory, others are stored on disk and retrieved if needed
− virtual addresses are translated to physical addresses (in the MMU) using a page table
− both Linux and Windows implement a flat linear 32-bit (4 GB) memory model on IA-32
  • Windows: 2 GB (high addresses) kernel, 2 GB (low addresses) user mode threads
  • Linux: 1 GB (high addresses) kernel, 3 GB (low addresses) user mode threads
SLIDE 25

Virtual Memory

[figure: a virtual address space of pages 1–18 — only pages 1, 2, 3, 4, 5, 7, 13, and 18 are currently mapped into physical memory frames]

SLIDE 26

Memory Lookup

Example:
  • 4 KB pages (12-bit offsets within a page)
  • 16-bit virtual address space → 16 pages (4-bit index into the page table)
  • 8 physical pages (3-bit frame index)

[figure: incoming virtual address 0x2004 (8196) — virtual page 0010 = 2, plus a 12-bit offset; the page table maps page 2 to frame 110 with the present bit set, so the outgoing physical address is 0x6004 (24580)]

The memory lookup can be for almost anything, for example the instruction fetches and operand accesses of the add/main disassembly shown earlier.
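The translation above can be reproduced in a few lines; the page-table entries are (frame, present-bit) pairs copied from the slide, listing only the entries used here:

```python
page_table = {
    0: (0b010, 1),
    1: (0b001, 1),
    2: (0b110, 1),   # virtual page 2 -> physical frame 6
    6: (0b000, 0),   # not present
}

def lookup(vaddr: int) -> int:
    page = vaddr >> 12               # 4-bit virtual page number
    offset = vaddr & 0xFFF           # 12-bit offset within the page
    frame, present = page_table[page]
    if not present:
        raise RuntimeError("page fault")   # trap to the OS
    return (frame << 12) | offset

lookup(0x2004)    # -> 0x6004 (24580), as on the slide
```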

SLIDE 27

Memory Lookup

[figure: the same lookup for virtual address 0x2004 (8196), but the page-table entry for virtual page 2 now has its present bit cleared — the reference causes a page fault]

SLIDE 28

Page Fault Handling

1. Hardware traps to the kernel, saving the program counter and process state information
2. Save general registers and other volatile information
3. The OS discovers the page fault and determines which virtual page is requested
4. The OS checks if the virtual page is valid and if protection is consistent with the access
5. Select a page to be replaced
6. Check if the selected page frame is ”dirty”, i.e., updated. If so, write it back to disk; otherwise, just overwrite it
7. When the selected page frame is ready, the OS finds the disk address where the needed data is located and schedules a disk operation to bring it into memory
8. A disk interrupt is executed indicating that the disk I/O operation is finished; the page tables are updated, and the page frame is marked ”normal state”
9. The faulting instruction is backed up and the program counter is reset
10. The faulting process is scheduled, and the OS returns to the routine that trapped to the kernel
11. The registers and other volatile information are restored, and control is returned to user space to continue execution as if no page fault had occurred

SLIDE 29

Page Replacement Algorithms

§ Page fault → OS has to select a page for replacement § How do we decide which page to replace?

→ determined by the page replacement algorithm → several algorithms exist:

  • Random
  • Other algorithms take into account usage, age, etc.

(e.g., FIFO, not recently used, least recently used, second chance, clock, …)

  • which is best???
SLIDE 30

First In First Out (FIFO)

§ All pages in memory are maintained in a list sorted by age
§ FIFO replaces the oldest page, i.e., the first one in the list

  • Low overhead
  • Non-optimal replacement (and disk accesses are EXPENSIVE)

➥ FIFO is rarely used in its pure form

[figure: reference string A B C D A E F G H I A J with 8 page frames — the chain runs from the page most recently loaded to the page first loaded (FIRST REPLACED); the second reference to A causes no change in the FIFO chain; after H the buffer is full, and every following page fault results in a replacement]
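The chain above can be simulated directly; a small sketch using the slide’s reference string and 8 page frames:

```python
from collections import deque

def fifo(references, frames):
    """Count page faults under FIFO replacement."""
    memory = deque()             # front = first loaded, rear = most recently loaded
    faults = 0
    for page in references:
        if page in memory:
            continue             # a renewed reference does not change the FIFO chain
        faults += 1
        if len(memory) == frames:
            memory.popleft()     # evict the page loaded first
        memory.append(page)
    return faults

fifo("ABCDAEFGHIAJ", 8)
```

Only the second reference to A is a hit, so the string causes 11 page faults — including one for A itself after FIFO has thrown the still-needed A out.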

SLIDE 31

Second Chance

§ Modification of FIFO
§ R-bit: when a page is referenced again, the R-bit is set, and the page will be treated as a newly loaded page

[figure: reference string A B C D A E F G H I with 8 page frames — the second reference to A sets A’s R-bit; after H the buffer is full, and the next page fault results in a replacement]

Page I is to be inserted; find a page to page out by looking at the first page loaded:
  • if its R-bit = 0 → replace it
  • if its R-bit = 1 → clear the R-bit, move the page last, and look at the new first page

Page A’s R-bit = 1 → move A last in the chain, clear its R-bit, and look at the new first page (B)
Page B’s R-bit = 0 → page B out, shift the chain left, and insert I last in the chain

  • Second chance is a reasonable algorithm, but inefficient because it keeps moving pages around the list
SLIDE 32

Clock

§ A more efficient implementation of Second Chance
§ Circular list in the form of a clock
§ The pointer (hand) points to the oldest page:
− R-bit = 0 → replace and advance the pointer
− R-bit = 1 → set the R-bit to 0, advance the pointer until R-bit = 0, then replace and advance

[figure: reference string A B C D A E F G H I — the resident pages sit around a clock face, with A’s R-bit set]
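A minimal sketch of the clock algorithm, assuming (as on these slides) that the R-bit starts at 0 and is set only when a page is referenced again:

```python
def clock(references, frames):
    """Clock page replacement; returns the number of page faults."""
    memory = []                  # circular list of [page, r_bit]
    hand = 0                     # points to the oldest page
    faults = 0
    for page in references:
        for entry in memory:
            if entry[0] == page:
                entry[1] = 1     # referenced again: set the R-bit
                break
        else:
            faults += 1
            if len(memory) < frames:
                memory.append([page, 0])
            else:
                while memory[hand][1] == 1:      # R = 1: clear it, advance
                    memory[hand][1] = 0
                    hand = (hand + 1) % frames
                memory[hand] = [page, 0]         # R = 0: replace
                hand = (hand + 1) % frames
    return faults

clock("ABCDAEFGHI", 8)
```

With the slide’s string, A gets its second chance (its R-bit is cleared, the hand advances) and B is replaced by I, as in the figure — without physically moving any pages in a list.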

SLIDE 33

Least Recently Used (LRU)

§ Replace the page that has the longest time since its last reference
§ Based on the observation that pages that were heavily used in the last few instructions will probably be used again in the next few instructions

§ Several ways to implement this algorithm

SLIDE 34

Least Recently Used (LRU)

§ LRU as a linked list:

[figure: reference string A B C D A E F G H A C I with 8 page frames — the chain runs from most recently used to least recently used; each hit (on A or C) moves that page to the most-recently-used end of the chain; after H the buffer is full, and the next page fault replaces the least recently used page, B, with I]

  • Saves (usually) a lot of disk accesses
  • Expensive - maintaining an ordered list of all pages in memory:
  • most recently used at front, least at rear
  • update this list every memory reference !!
  • Many other approaches: using aging and counters (e.g., Working Set and WSClock)
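As a sketch (not from the slides), LRU as a linked list maps naturally onto Python’s OrderedDict; the reference string and frame count follow the slide’s example:

```python
from collections import OrderedDict

def lru(references, frames):
    """Count page faults under LRU replacement."""
    memory = OrderedDict()               # front = least, rear = most recently used
    faults = 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)     # hit: becomes the most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)   # evict the least recently used
            memory[page] = None
    return faults

lru("ABCDAEFGHACI", 8)
```

With the slide’s string there are 9 faults; when I arrives, B is the least recently used page and is the one replaced. The expensive part the bullet points warn about is exactly the move-to-end bookkeeping, which real hardware would have to do on every memory reference.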
SLIDE 35

Need For Application-Specific Algorithms?

§ Most existing systems use an LRU-variant

− keep a sorted list (most recently used at the head) − replace last element in list − insert new data elements at the head − if a data element is re-accessed, move back to the end of the list

§ Extreme example – video frame playout, with an LRU buffer ordered from longest to shortest time since access:

    play video (7 frames):           1 2 3 4 5 6 7
    rewind and restart playout at 1: 7 6 5 4 3 2 1
    playout 2:                       1 7 6 5 4 3 2
    playout 3:                       2 1 7 6 5 4 3
    playout 4:                       3 2 1 7 6 5 4

In this case, LRU replaces the next needed frame. So the answer is in many cases YES…

SLIDE 36

“Classification” of Mechanisms

§ Block-level caching considers a (possibly unrelated) set of blocks
− each data element is viewed as an independent item
− usually used in “traditional” systems
− e.g., FIFO, LRU, LFU, CLOCK, …
− multimedia (video) approaches:
  • Least/Most Relevant for Presentation (L/MRP)

§ Stream-dependent caching considers (parts of) a stream object as a whole
− related data elements are treated in the same way
− research prototypes in multimedia systems
− e.g.,

  • BASIC
  • DISTANCE
  • Interval Caching (IC)
  • Generalized Interval Caching (GIC)
  • Split and Merge (SAM)
SLIDE 37

Least/Most Relevant for Presentation (L/MRP)

§ L/MRP is a buffer management mechanism for a single interactive, continuous data stream
− adaptable to individual multimedia applications
− preloads units most relevant for presentation from disk
− replaces units least relevant for presentation
− client-pull-based architecture

[figure: client-pull architecture — the client’s buffer requests Continuous Object Presentation Units (COPUs), e.g., MJPEG video frames, from a server delivering a homogeneous stream, e.g., MJPEG video]

SLIDE 38

Least/Most Relevant for Presentation (L/MRP)

§ Relevance values are calculated with respect to the current playout of the multimedia stream
  • presentation point (current position in the file)
  • mode / speed (forward, backward, FF, FB, jump)
  • relevance functions are configurable

[figure: COPUs (continuous object presentation units) 10–26 around the current presentation point, with relevance values from 1.0 down to 0.2 — a referenced set ahead in the playback direction, a history set behind, and skipped COPUs (16, 18, 20, 22, 24, 26)]

SLIDE 39

Least/Most Relevant for Presentation (L/MRP)

§ Global relevance value
− each COPU can have more than one relevance value
  • bookmark sets (known interaction points)
  • several viewers (clients) of the same stream
− the global relevance value = the maximum relevance for each COPU

[figure: two streams S1 and S2 with current presentation points around COPUs 89–106 — bookmark, referenced, and history sets each give a relevance curve, and the global relevance value is their maximum]

SLIDE 40

Least/Most Relevant for Presentation (L/MRP)

§ L/MRP …
+ … gives “few” disk accesses (compared to other schemes)
+ … supports interactivity
+ … supports prefetching
− … is targeted at single streams (users)
− … is expensive (!) to execute (calculates relevance values for all COPUs each round)

SLIDE 41

Interval Caching (IC)

§ Interval caching (IC) is a caching strategy for streaming servers
− caches data between requests for the same video stream, based on the playout intervals between requests
− following requests are thus served from the cache filled by the preceding stream
− intervals are sorted on length; the buffer requirement of an interval is its data size
− to maximize the cache hit ratio (minimize disk accesses), the shortest intervals are cached first

[figure: streams S11–S13, S21–S22, and S31–S34 on video clips 1–3 define the intervals I11, I12, I21, I31, I32, I33; sorted by length: I32, I33, I21, I11, I31, I12]

SLIDE 42

Generalized Interval Caching (GIC)

§ Interval caching (IC) does not work for short clips

− a frequently accessed short clip will not be cached

§ GIC generalizes the IC strategy
− manages intervals for long video objects as IC does
− short intervals extend the interval definition:
  • keep track of a finished stream for a while after its termination
  • define the interval for a short stream as the length between the new stream and the position the old stream would have had if it had been a longer video object
  • the cache requirement is, however, only the real requirement
− cache the shortest intervals, as in IC

[figure: S11 has finished playing video clip 1, but its virtual position is still tracked; the new stream S12 defines interval I11 with cache requirement C11, while S21/S22 on video clip 2 define interval I21]

I11 < I21 ⇒ GIC caches I11 before I21

SLIDE 43

Generalized Interval Caching (GIC)

§ Open function:

    form, if possible, a new interval with the previous stream;
    if (NO) { exit }                       /* don’t cache */
    compute interval size and cache requirement;
    reorder interval list;                 /* smallest first */
    if (not already in a cached interval) {
        if (space available) { cache interval }
        else if (larger cached intervals exist and
                 sufficient memory can be released) {
            release memory from larger intervals;
            cache new interval;
        }
    }

§ Close function:

    if (not following another stream) { exit }   /* not served from cache */
    delete interval with preceding stream;
    free memory;
    if (next interval can be cached in released memory) {
        cache next interval
    }

SLIDE 44

LRU vs. L/MRP vs. IC Caching

§ What kind of caching strategy is best (video streaming)?
− caching effect

[figure: five streams S1–S5 on movie X — LRU keeps the most recently used page frames (4 streams from disk, 1 from cache, with wasted buffering); L/MRP keeps the frames with the highest global relevance values (4 streams from disk, 1 from cache); IC caches the intervals I1–I4 between streams (2 streams from disk, 3 from cache)]

SLIDE 45

LRU vs. L/MRP vs. IC Caching

§ What kind of caching strategy is best (video streaming)?
− caching effect (IC best)
− CPU requirement:

LRU:
    for each I/O request
        reorder LRU chain

L/MRP:
    for each I/O request
        for each COPU
            RV = 0
            for each stream
                tmp = rel ( COPU, p, mode )
                RV = max ( RV, tmp )

IC:
    for each block consumed
        if last part of interval
            release memory element

SLIDE 46

Speeding up paging…

§ Every memory reference needs a virtual-to-physical mapping
§ Each process has its own virtual address space (its own page table)
§ Large tables:
  • 32-bit addresses, 4 KB pages → 1,048,576 entries
  • 64-bit addresses, 4 KB pages → 4,503,599,627,370,496 entries

➥ Translation lookaside buffers (aka associative memory)
− a hardware “cache” for the page table
− a fixed number of slots containing the last page table entries

➥ Page size: larger page sizes reduce the number of pages

➥ Multi-level page tables
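The table sizes above follow directly from the address split; a quick check, with the numbers from the slide:

```python
PAGE_OFFSET_BITS = 12                      # 4 KB pages

# One page-table entry per virtual page:
entries_32 = 2 ** (32 - PAGE_OFFSET_BITS)  # 1,048,576 entries per process
entries_64 = 2 ** (64 - PAGE_OFFSET_BITS)  # 4,503,599,627,370,496 entries
```

The 64-bit number is why flat page tables are hopeless there, and why multi-level (and inverted) page tables exist.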


SLIDE 48

Many Design Issues

§ Replacement policies:
− reference locality in time and space
− local vs. global
− algorithm complexity

§ Free block management: bitmaps vs. linked lists
§ Demand paging vs. pre-paging
§ Allocation policies: equal share vs. proportional share
§ …

SLIDE 49

Allocation Policies

§ Page fault frequency (PFF): usually, more page frames → fewer page faults

[figure: PFF (page faults/sec) as a falling curve against the number of page frames assigned]

− if the PFF is unacceptably high → the process needs more memory
− if the PFF is very low → the process may have too much memory!!!

§ If the system experiences too many page faults, what should we do? Reduce the number of processes competing for memory:
  • reassign a page frame
  • swap one or more processes to disk, and divide up the pages they held
  • reconsider the degree of multiprogramming
SLIDE 50

Multi-level paging example: 32-bit (Pentium)

SLIDE 51

Paging on Pentium

§ The executing process has a 4 GB address space (2^32), viewed as 1 M (2^20) pages of 4 KB (2^12)
− the 4 GB address space is divided into 1 K page groups (pointed to by the 1st-level table – the page directory)
− each page group has 1 K 4 KB pages (pointed to by the 2nd-level tables – the page tables)

§ Mass storage space is also divided into 4 KB blocks of information

§ Control registers: used to change/control general behavior (e.g., interrupt control, switching the addressing mode, paging control, etc.)

SLIDE 52

Control Registers used for Paging on Pentium

§ Control register 0 (CR0) – bits PG (31), CD (30), NW (29), WP (16), PE (0):
− Paging Enable: the OS enables paging by setting CR0[PG] = 1
− Cache Disable and Not-Write-Through: used to control the internal cache
− Write-Protect: if CR0[WP] = 1, only the OS may write to read-only pages
− Protected Mode Enable: if CR0[PE] = 1, the processor is in protected mode

§ Control register 1 (CR1) – does not exist, returns only zero

§ Control register 2 (CR2) – holds the page fault linear address (bits 31–0)
− only used if CR0[PG] = 1 & CR0[PE] = 1

SLIDE 53

Control Registers used for Paging on Pentium

§ Control register 3 (CR3) – page directory base address:
− only used if CR0[PG] = 1 & CR0[PE] = 1
− bits 31–12: a 4 KB-aligned physical base address of the page directory
− Page Cache Disable (bit 4): if CR3[PCD] = 1, caching is turned off
− Page Write-Through (bit 3): if CR3[PWT] = 1, use write-through updates

§ Control register 4 (CR4):
− Page Size Extension (bit 4): if CR4[PSE] = 1, the OS designer may designate some pages as 4 MB

SLIDE 54

Pentium Memory Lookup

[figure: the incoming virtual address in CR2 (0x1802038) is split at bits 31–22 (page directory index), 21–12 (page table index), and 11–0 (offset)]

A page directory entry holds (bits 31–12) the physical base address of the page table, plus flag bits:
  • PS (bit 7) – page size
  • A (bit 5) – accessed
  • U (bit 2) – user access allowed
  • W (bit 1) – allowed to write
  • P (bit 0) – present

SLIDE 55

Pentium Memory Lookup

[figure: CR3’s page directory base address plus the top 10 bits of virtual address 0x1802038 (index 0x6) select a page directory entry whose present bit is 0]

Page table page fault:
1. Save a pointer to the instruction
2. Move the linear address to CR2
3. Generate a PF exception – jump to the handler
4. The handler (OS code) reads the CR2 address
5. The upper 10 CR2 bits identify the needed page table
6. The page directory entry is really a mass storage address
7. Allocate a new page – write back if dirty
8. Read the page from the storage device
9. Insert the new page table base address into the page directory entry
10. Return and restore the faulting instruction
11. Resume operation, reading the same page directory entry again – now P = 1

SLIDE 56

Pentium Memory Lookup

Incoming virtual address (CR2): 0x1802038 (25174072); page directory index 0x6 now yields an entry with P = 1, pointing to the page table. The middle 10 bits give page table index 0x2, but that entry has P = 0: the requested page frame is not in memory, and a page fault occurs.

Page frame page fault:

1. Save a pointer to the faulting instruction
2. Move the linear address to CR2
3. Generate a page-fault exception – jump to the handler
4. The page-fault handler reads the faulting address from CR2
5. The upper 10 CR2 bits identify the needed page table
6. The middle 10 CR2 bits determine the entry in the page table – it holds a mass-storage address
7. Allocate a new page – write back the evicted page if dirty
8. Read the page from the storage device
9. Insert the new page frame base address into the page table entry
10. Return and restore the faulting instruction
11. Resume operation, reading the same page directory entry and page table entry again – both now have P = 1
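The two present-bit checks in this lookup can be modelled in software. A simplified sketch, assuming the directory and table are passed in directly (a real MMU would follow the physical base address stored in the directory entry) and only the P-bit and base-address fields are interpreted:

```c
#include <stdint.h>

#define P_BIT 0x1u   /* present bit (bit 0) of a directory/table entry */

typedef enum { WALK_OK, FAULT_PAGE_TABLE, FAULT_PAGE_FRAME } walk_result;

/* Translate a 32-bit virtual address via a 1024-entry page directory
 * and page table; a cleared P-bit at either level raises a fault. */
static walk_result walk(const uint32_t pd[1024], const uint32_t pt[1024],
                        uint32_t va, uint32_t *pa)
{
    uint32_t pde = pd[va >> 22];             /* bits 31-22: directory index */
    if (!(pde & P_BIT))
        return FAULT_PAGE_TABLE;             /* page table not in memory */

    uint32_t pte = pt[(va >> 12) & 0x3FFu];  /* bits 21-12: table index */
    if (!(pte & P_BIT))
        return FAULT_PAGE_FRAME;             /* page itself not in memory */

    /* frame base address (bits 31-12 of the entry) plus the page offset */
    *pa = (pte & 0xFFFFF000u) | (va & 0xFFFu);
    return WALK_OK;
}
```

Walking 0x1802038 first faults at the directory level, then at the table level, and only succeeds once both entries have been filled in with P = 1 – mirroring the two fault sequences on these slides.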

SLIDE 57

Pentium Memory Lookup

Incoming virtual address (CR2): 0x1802038 (25174072)

− page directory index 0x6: the entry (P = 1) gives the physical base address of the page table
− page table index 0x2: the entry (P = 1) gives the physical base address of the page frame
− page offset 0x38 (56): added to the frame base address to locate the requested data

SLIDE 58

Pentium Page Fault Causes

§ Page directory entry's P-bit = 0:

− the page group's directory (page table) is not in memory

§ Page table entry's P-bit = 0:

− the requested page is not in memory

§ An attempt to write to a read-only page
§ Insufficient page-level privilege to access the page table or frame
§ One of the reserved bits is set in the page directory or page table entry

SLIDE 59

32-bit (protected/compatibility mode) vs. 64-bit (long mode)

§ The virtual address space could in theory be 64-bit (16 EB) (vs. 4 GB for 32-bit)
§ Processors allow addressing of only a portion of that; most common processor implementations allow 48 bits (256 TB)
§ An address is now 8 bytes (64-bit) (vs. 4 bytes for 32-bit)
§ Each 4-KB page can therefore hold 2^9 = 512 entries (vs. 2^10 = 1024 for 32-bit), so each level needs a 9-bit index (vs. 10 bits for 32-bit)

§ Virtual address:

− bits 63–48: unused
− bits 47–39: page map level 4 index
− bits 38–30: page directory pointer index
− bits 29–21: page directory index
− bits 20–12: page table index
− bits 11–0: page offset
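The four 9-bit indices and the 12-bit offset of a 48-bit x86-64 virtual address can be extracted with plain shifts and masks:

```c
#include <stdint.h>

/* x86-64 4-level lookup: each table index is 9 bits, the offset 12 bits. */
static unsigned pml4_index(uint64_t va)  { return (va >> 39) & 0x1FFu; } /* bits 47-39 */
static unsigned pdpt_index(uint64_t va)  { return (va >> 30) & 0x1FFu; } /* bits 38-30 */
static unsigned pd_index(uint64_t va)    { return (va >> 21) & 0x1FFu; } /* bits 29-21 */
static unsigned pt_index(uint64_t va)    { return (va >> 12) & 0x1FFu; } /* bits 20-12 */
static unsigned page_offset(uint64_t va) { return va & 0xFFFu; }         /* bits 11-0 */
```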

SLIDE 60

In-Memory Copy Operations

SLIDE 61

Delivery Systems

Network bus(es)

SLIDE 62

[Figure: data path between the file system, the application and the communication system, crossing between user space and kernel space over the bus(es)]

⇒ several expensive in-memory data movements and context switches

SLIDE 63

Existing Linux Data Paths

A lot of research has been performed in this area! But what is the status of commodity operating systems?

SLIDE 64

[Figure: zero-copy data path – only a data_pointer is passed; the data itself stays in kernel space between the file system and the communication system]

Zero-Copy Data Paths

SLIDE 65

Content Download

[Figure: content download scenario – data moves from the file system to the communication system, across user space, kernel space and the bus(es)]

SLIDE 66

Content Download: read / send

[Figure: read() copies data from the kernel page cache (filled by DMA transfer from disk) into the application buffer; send() copies it into the socket buffer, which is DMA-transferred to the network card]

⇒ 2n copy operations
⇒ 2n system calls
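A sketch of the read/send loop; write() stands in for send() on a socket so the example stays self-contained. The point is the per-chunk cost: one copy into the application buffer, one copy back out, two system calls.

```c
#include <unistd.h>

/* Classic copy path: each iteration copies a chunk from the kernel page
 * cache into the user-space buffer (read) and back into a kernel buffer
 * (write/send) -- two copies and two system calls per chunk. */
static int content_download_read_send(int in_fd, int out_fd)
{
    char buf[4096];                     /* application buffer */
    ssize_t n;
    while ((n = read(in_fd, buf, sizeof buf)) > 0)
        if (write(out_fd, buf, (size_t)n) != n)   /* send() on a socket */
            return -1;
    return n < 0 ? -1 : 0;
}
```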

SLIDE 67

Content Download: mmap / send

[Figure: mmap() maps the kernel page cache pages (filled by DMA transfer from disk) into the application's address space; send() copies the data into the socket buffer, which is DMA-transferred to the network card]

⇒ n copy operations
⇒ 1 + n system calls
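A sketch of the mmap/send variant; again write() stands in for send(). Mapping the file exposes the page-cache pages directly to the application, so the read()-side copy disappears and only the socket-buffer copy remains.

```c
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Map the whole file and hand the mapped region straight to write()/send();
 * there is no separate application buffer to copy into. */
static int content_download_mmap_send(int in_fd, int out_fd)
{
    struct stat st;
    if (fstat(in_fd, &st) < 0)
        return -1;
    if (st.st_size == 0)
        return 0;                       /* nothing to transfer */

    void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, in_fd, 0);
    if (p == MAP_FAILED)
        return -1;

    ssize_t n = write(out_fd, p, (size_t)st.st_size);  /* send() on a socket */
    munmap(p, (size_t)st.st_size);
    return n == st.st_size ? 0 : -1;
}
```

Note that mmap() requires a file descriptor that supports mapping (a regular file), which is exactly the content-download case.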

SLIDE 68

Content Download: sendfile

[Figure: sendfile() appends a descriptor to the socket buffer; the network card gathers the data directly from the page cache via DMA (gather DMA transfer)]

⇒ 0 copy operations
⇒ 1 system call
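A sketch using Linux's sendfile(): the kernel moves the data itself, so no user-space buffer is involved at all. (Since Linux 2.6.33 the destination descriptor may be a regular file; before that it had to be a socket.)

```c
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* One system call per transfer: the kernel appends a descriptor to the
 * destination buffer and gathers the data from the page cache by DMA. */
static int content_download_sendfile(int in_fd, int out_fd)
{
    struct stat st;
    if (fstat(in_fd, &st) < 0)
        return -1;

    off_t off = 0;
    while (off < st.st_size) {
        ssize_t n = sendfile(out_fd, in_fd, &off,
                             (size_t)(st.st_size - off));
        if (n <= 0)
            return -1;                 /* error or unexpected EOF */
    }
    return 0;
}
```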

SLIDE 69

Content Download: Results

§ Tested transfer of a 1 GB file over TCP on Linux 2.6

SLIDE 70

Summary

§ Memory management is concerned with managing the system's memory resources

− allocating space to processes
− protecting the memory regions
− in the real world:

  • programs are loaded dynamically
  • physical addresses are not known to the program – dynamic address translation
  • program size at run-time is not known to the kernel

§ Each process usually has text, data and stack segments
§ Systems like Windows and Unix use virtual memory with paging
§ There are many issues to consider when designing a memory component