Memory Management
Overview
- Basic memory “management”
- Address Spaces
- Virtual memory
- Page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation
Ideally, memory would be:
– large
– fast
– non-volatile
In practice we get a memory hierarchy:
– a small amount of fast, expensive memory – cache
– some medium-speed, medium-price main memory
– gigabytes of slow, cheap disk storage
– processes come into memory
– processes leave memory
Two-level page tables: a top-level page table whose entries point to second-level page tables.
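With a two-level scheme, a virtual address is split into a top-level index, a second-level index, and a page offset. A minimal sketch of the field extraction; the 10/10/12 split is an assumption (a common choice: 1024-entry tables and 4KB pages), since the slide does not fix the field widths:

```c
#include <stdint.h>

#define SECOND_BITS 10   /* assumed: 1024 entries per second-level table */
#define OFFSET_BITS 12   /* assumed: 4KB pages */

/* Index into the top-level page table. */
uint32_t top_index(uint32_t vaddr) {
    return vaddr >> (SECOND_BITS + OFFSET_BITS);
}

/* Index into the selected second-level page table. */
uint32_t second_index(uint32_t vaddr) {
    return (vaddr >> OFFSET_BITS) & ((1u << SECOND_BITS) - 1);
}

/* Byte offset within the page. */
uint32_t page_offset(uint32_t vaddr) {
    return vaddr & ((1u << OFFSET_BITS) - 1);
}
```

Only the top-level table must always be present; second-level tables can be materialized on demand, which is the point of the extra level.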
– Temporal locality: if the process accessed a particular address, it is likely to do so again soon.
– Spatial locality: if the process accessed a particular address, it is likely to access nearby addresses soon.
const int ROWS = 10000;
const int COLS = 1024;
int arr[ROWS][COLS];

int main() {
  // Row-major traversal: consecutive iterations touch adjacent
  // addresses, so spatial locality is good.
  for (int row = 0; row < ROWS; ++row)
    for (int col = 0; col < COLS; ++col)
      arr[row][col] = row * col;

  // Column-major traversal: consecutive iterations are one full row
  // (1024 ints = 4KB, a whole page) apart, touching a different page
  // each time -- poor spatial locality.
  for (int col = 0; col < COLS; ++col)
    for (int row = 0; row < ROWS; ++row)
      arr[row][col] = row * col;
}
– s = average process size in bytes
– p = page size in bytes
– e = size of a page table entry in bytes
Page table space: s·e/p (the process needs s/p pages, each costing one entry of e bytes). Internal fragmentation: p/2 on average (the last page is half empty). Total overhead = s·e/p + p/2, which is minimized at p = √(2·s·e).
1. Process creation
2. Context switch
3. Memory reference
4. Page replacement
   – Record page accesses / modifications
   – Determine the page to replace.
5. Process termination
Page daemons (Linux: kswapd, NT: Working Set Manager)
– NT brings in “clusters”, typically 8 pages for code, 4 for data.
– Unix’s swapper used to bring in all referenced pages when swapping a process back in; it now uses demand paging.
– Reduces changes needed in TLB and cache.
– No changes needed for system calls.
something that one hopes to avoid, but the feature is still present.
paging file. Unix: Core Map. NT: Page Frame Database.
– BSD: lotsfree = ¼ of memory.
– SysV: has some min/max; if free memory falls below min, free pages until max. The goal is to reduce the frequency of the page daemon, thereby reducing thrashing.
– The front hand clears the referenced bit.
– The back hand picks the victim.
– Result: a page the back hand finds referenced really was referenced recently.
– Only boots a page out if its unreferenced count is greater than some value.
– Seems they had the opposite problem from the BSD folk.
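The two-handed clock above can be sketched as follows; the frame count and the gap between the hands are illustrative assumptions, and a real implementation would also check the dirty bit before evicting:

```c
#include <stdbool.h>
#include <stddef.h>

#define NFRAMES 8   /* assumed tiny frame set for illustration */
#define GAP     3   /* assumed distance between the two hands */

static bool referenced[NFRAMES];  /* hardware-set reference bits */

/* One step of the two-handed clock: the front hand clears the
 * reference bit; the back hand, GAP frames behind, evicts any frame
 * whose bit is still clear -- i.e. not re-referenced since the front
 * hand passed. Returns the victim frame, or -1 if none this step. */
int clock_step(size_t *front) {
    size_t back = (*front + NFRAMES - GAP) % NFRAMES;
    referenced[*front] = false;                       /* front hand clears */
    int victim = referenced[back] ? -1 : (int)back;   /* back hand checks */
    *front = (*front + 1) % NFRAMES;
    return victim;
}
```

A smaller gap between the hands makes replacement more aggressive: a page has less time to prove it is still in use.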
sleeping…
page tables. (I’m surprised page tables are removed. Must record swapped out page for process somewhere…)
– Clusters (code: 8 pages, data: 4). XP does some prepaging after an application’s first run.
thread)
– WS is based on size, not a time window.
– Keeps a min/max per process.
– If a process has > max pages and takes a page fault, steal one of that process’s pages; otherwise add the page.
Set Manager to free pages.
– Sort processes by “desirability”: large idle processes first, foreground processes last.
– If a process is < min and has had “enough page faults” since the last check, leave it alone.
– Determine the number of pages to remove from the process, depending on need and the size of its WS relative to min/max.
– Examine all pages in a process, using an unreferenced count, like AT&T Unix.
– Continue examining additional processes until sufficient pages are free, making more aggressive passes as needed.
– The swapper runs every 4 seconds.
– If a thread has been asleep for 7 seconds, mark the thread’s kernel stack “in transition”.
– When all threads in a process are marked, swap the process out.
kernel services, then configure for 2GB
– Wired
– Active
– Inactive: dirty (min: 0%, max: 4.5%)
– Attempts to balance I/O load by not starting too many concurrent writes.
– Runs as a kernel process: this gives it access to kernel data structures, etc., but it can be scheduled to run as a process so it can sleep.
– Cache: clean (min: 3%, max: 6%)
– Free (min: 0.7%, max: 3%)
– The cache holds pages. A 1MB cache with 4KB pages has 256 cache pages. Each physical page on a 1MB boundary maps to the same cache page, so each cache page represents as many physical pages as there are MBs of physical memory.
– When allocating frames, color coding attempts to avoid such potential conflicts by making contiguous virtual pages get frames that map to contiguous cache pages.
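The “color” of a frame is simply which cache page its addresses map to; a sketch, assuming the 1MB direct-mapped cache and 4KB pages from the example above:

```c
#define PAGE_SIZE  4096u
#define CACHE_SIZE (1u << 20)                 /* 1 MB direct-mapped cache */
#define NCOLORS    (CACHE_SIZE / PAGE_SIZE)   /* 256 cache pages ("colors") */

/* A physical frame's color. Frames whose addresses are CACHE_SIZE
 * apart share a color and therefore conflict in the cache; the
 * allocator tries to give consecutive virtual pages consecutive
 * colors so they do not evict one another. */
unsigned frame_color(unsigned long phys_addr) {
    return (unsigned)((phys_addr / PAGE_SIZE) % NCOLORS);
}
```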
– Back in BSD 4.3, a count of resident pages was used. DIoF says it might return.
– “Least actively used” algorithm.
– A page has a count of three when first brought in.
– The count is incremented each time the reference bit is found set, to a max of 64.
– The count is decremented each time the reference bit is found not set.
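One scan step of this activity counter can be sketched as below; clamping the decrement at zero is an assumption (the slide does not say what happens there, though a zero count would presumably make the page a replacement candidate):

```c
#include <stdbool.h>

#define INITIAL_COUNT 3   /* count given to a newly faulted-in page */
#define MAX_COUNT     64  /* ceiling on the activity count */

/* Update a page's activity count for one pass of the scanner:
 * +1 if the reference bit was found set (up to MAX_COUNT),
 * -1 if it was found clear (assumed floor of 0). */
int update_activity(int count, bool referenced) {
    if (referenced) {
        if (count < MAX_COUNT) count++;
    } else if (count > 0) {
        count--;
    }
    return count;
}
```

Compared with a single reference bit, the counter gives each page a short history, so one stray access does not instantly make a page look hot.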
Conversion of a 2-part MULTICS address into a main memory address
72
73
74
75
76
77