Part II Processes and Threads

Threads Basics

Fall 2015

You think you know when you learn, are more sure when you can write, even more when you can teach, but certain when you can program. Alan J. Perlis


What Is a Thread?

  • A thread, also known as a lightweight process (LWP), is a basic unit of CPU execution, and is created by a process.
  • A thread has a thread ID, a program counter, a register set, and a stack. Thus, it is similar to a process.
  • However, a thread shares with other threads in the same process its code section, data section, and other OS resources (e.g., files and signals).
  • A process, or heavyweight process, has a single thread of control.
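The pieces listed above (thread ID, registers, stack, shared code and data) map directly onto the POSIX Pthreads API. Below is a minimal sketch, assuming a POSIX system compiled with -pthread; the worker function and the work structure are illustrative, not part of the slides:

```c
#include <pthread.h>

/* Each thread gets its own stack and registers, but shares the      */
/* process's globals and heap; this illustrative worker doubles an   */
/* integer passed through a shared argument structure.               */
struct work { int in, out; };

static void *worker(void *arg) {
    struct work *w = arg;      /* shared data: visible to both threads */
    w->out = w->in * 2;
    return NULL;
}

int run_one_thread(int x) {
    struct work w = { x, 0 };
    pthread_t tid;                           /* the thread ID          */
    pthread_create(&tid, NULL, worker, &w);  /* new unit of execution  */
    pthread_join(tid, NULL);                 /* wait for it to finish  */
    return w.out;
}
```

Note that `w` lives on the creating thread's stack, yet the worker can still read and write it, because both threads share the same address space.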


Single-Threaded and Multithreaded Processes

[Figure: a single-threaded process has a process control block, a user address space, and one user stack/system stack pair; a multithreaded process has one thread control block and one user stack/system stack pair per thread.]


Benefits of Using Threads

  • Responsiveness: Other parts (i.e., threads) of a program may still be running even if one part (e.g., a thread) is blocked.
  • Resource Sharing: Threads of a process, by default, share many system resources (e.g., files and memory).
  • Economy: Creating and terminating processes, allocating memory and resources, and context switching processes are very time consuming. Threads are much cheaper on all counts.
  • Utilization of Multiprocessor Architecture: Multiple CPUs may run multiple threads of the same process. No program change is necessary.


User and Kernel Threads: 1/3

  • User Threads: User threads are supported at the user level. The kernel is not aware of user threads. A library provides all support for thread creation, termination, joining, and scheduling. Since there is no kernel intervention, user threads are usually more efficient. Unfortunately, since the kernel only recognizes the containing process (of the threads), if one thread is blocked, all threads of the same process are also blocked, because the containing process is blocked.


User and Kernel Threads: 2/3

  • Kernel Threads: Kernel threads are supported by the kernel. The kernel does thread creation, termination, joining, and scheduling in kernel space. Kernel threads are usually slower than user threads due to system overhead. However, blocking one thread will not cause other threads of the same process to block: the kernel simply runs other kernel threads.
  • In a multiprocessor environment, the kernel may run threads on different processors.


User and Kernel Threads: 3/3


Multithreading Models

  • Different systems support threads in different ways. Here are three commonly seen thread models:
  • Many-to-One Model: One kernel thread (or process) has multiple user threads. Thus, this is a user thread model.
  • One-to-One Model: One user thread maps to one kernel thread (e.g., old Unix/Linux and Windows systems).
  • Many-to-Many Model: Multiple user threads map to a number of kernel threads.


Many-to-One Model

Each process has multiple user threads that are associated with one kernel thread. If a process is blocked, all user threads of that process are blocked.


One-to-One Model: 1/2

An Extreme Case: Traditional Unix

Each process has only one user thread that is associated with exactly one kernel thread.


One-to-One Model: 2/2

Each process has multiple user threads, each of which is associated with one kernel thread. If a kernel thread is blocked, the associated user thread is blocked.


Many-to-Many Model

Each process has multiple threads that are associated with multiple kernel threads. If a kernel thread is blocked, all user threads associated with that kernel thread are blocked.


Multicore Programming: 1/6

  • With a single-core CPU, threads are scheduled by a scheduler and can only run one at a time.
  • With a multicore CPU, multiple threads may run at the same time, one on each core.
  • Therefore, system design becomes more complex than one may expect.
  • Five issues have to be addressed properly: dividing activities, balance, data splitting, data dependency, and testing and debugging.


Multicore Programming: 2/6

  • Dividing Activities: Since each thread can run on a core, one must study the problem in hand so that program activities can be divided and run concurrently.
  • Matrix multiplication is a good example.
  • Unfortunately, some problems are inherently sequential (e.g., DFS).

C_{i,j} = Σ_{k=1}^{n} A_{i,k} × B_{k,j}

We may create a thread for each C_{i,j}.
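The thread-per-C_{i,j} idea can be sketched with Pthreads as follows; the 2×2 size, array names, and multiply() wrapper are illustrative assumptions, not the slides' code:

```c
#include <pthread.h>

#define N 2   /* small illustrative size; real code would take n as input */

int A[N][N] = {{1, 2}, {3, 4}};
int B[N][N] = {{5, 6}, {7, 8}};
int C[N][N];

struct cell { int i, j; };

/* Each thread computes one C[i][j] independently: the activities   */
/* divide cleanly because no two threads write the same element.    */
static void *compute_cell(void *arg) {
    struct cell *c = arg;
    int sum = 0;
    for (int k = 0; k < N; k++)
        sum += A[c->i][k] * B[k][c->j];
    C[c->i][c->j] = sum;
    return NULL;
}

void multiply(void) {
    pthread_t tid[N][N];
    struct cell cells[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            cells[i][j] = (struct cell){ i, j };
            pthread_create(&tid[i][j], NULL, compute_cell, &cells[i][j]);
        }
    for (int i = 0; i < N; i++)          /* wait for all n*n threads */
        for (int j = 0; j < N; j++)
            pthread_join(tid[i][j], NULL);
}
```

On a multicore machine the n² threads can run on different cores; the threads never write the same element, so no synchronization is needed.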


Multicore Programming: 3/6

  • Balance: Make sure that each thread has an equal contribution, if possible, to the whole computation.
  • If an insignificant thread runs frequently, occupying a core, other more useful threads would have less chance to run.


Multicore Programming: 4/6

  • Data Splitting: Data may also be split into different sections so that each section can be processed separately.
  • Matrix multiplication is a good example.
  • Quicksort is another. After partitioning, the two sections can be sorted separately.
  • After partitioning a[L..U] into a[L..M-1] and a[M+1..U], we may create two threads, one for each section. Then, each thread sorts its own section. Threads are created in a binary tree.
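A minimal sketch of the two-thread step with Pthreads: one partition is performed, then each half is sorted by its own thread. The library qsort() stands in for the recursive sorting of each section (in a full version, each thread would partition and spawn again, giving the binary tree of threads); the function names are illustrative:

```c
#include <pthread.h>
#include <stdlib.h>

struct section { int *a; int lo, hi; };   /* sorts a[lo..hi], inclusive */

static int cmp(const void *x, const void *y) {
    return *(const int *)x - *(const int *)y;
}

/* Each thread sorts its own section; the sections do not overlap,  */
/* so no synchronization is needed between the two threads.         */
static void *sort_section(void *arg) {
    struct section *s = arg;
    qsort(s->a + s->lo, (size_t)(s->hi - s->lo + 1), sizeof(int), cmp);
    return NULL;
}

void threaded_quicksort(int *a, int n) {
    /* Partition a[0..n-1] around the last element (one step only). */
    int pivot = a[n - 1], m = 0;
    for (int i = 0; i < n - 1; i++)
        if (a[i] < pivot) { int t = a[i]; a[i] = a[m]; a[m] = t; m++; }
    int t = a[m]; a[m] = a[n - 1]; a[n - 1] = t;  /* pivot now at a[m] */

    struct section left = { a, 0, m - 1 }, right = { a, m + 1, n - 1 };
    pthread_t t1, t2;
    pthread_create(&t1, NULL, sort_section, &left);
    pthread_create(&t2, NULL, sort_section, &right);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
}
```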


Multicore Programming: 5/6

  • Data Dependency: Watch for data items that are used by different threads. For example, two threads may update a common variable at the same time.
  • Should this happen, unexpected results may occur. As a result, the execution of threads has to be synchronized so that only one thread can update a shared variable at any time.
  • This is a very difficult issue in threaded programming.
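Such synchronization can be sketched with a Pthreads mutex; the counter and function names are illustrative. Without the lock, the two threads could interleave their read-increment-write steps and lose updates:

```c
#include <pthread.h>

/* Shared counter and the mutex that serializes updates to it.      */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *add_many(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* only one updater at a time */
        counter++;                     /* read, add 1, write back    */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

long run_counters(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, add_many, NULL);
    pthread_create(&t2, NULL, add_many, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;   /* with the lock: always 200000 */
}
```

If the lock/unlock pair is removed, the result is usually less than 200000 and varies from run to run, which is exactly the dynamic, hard-to-reproduce behavior the next slide warns about.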


Multicore Programming: 6/6

  • Testing and Debugging: The behavior of a threaded program is dynamic. A bug that appears in this test run may not occur in the next. Some bugs may never surface throughout the life-span of a threaded program, or may appear at an unexpected time.
  • Some debugging issues (e.g., race conditions, i.e., updating a shared resource at the same time, and system deadlock) do not have efficient solutions.
  • Thus, testing and debugging is an art, and requires careful design and planning.


Thread Cancellation: 1/2

  • Thread cancellation means terminating a thread before its completion. The thread to be cancelled is the target thread.
  • There are two types:
  • Asynchronous Cancellation: the target thread terminates immediately.
  • Deferred Cancellation: the target thread can periodically check if it should terminate, allowing the target thread an opportunity to terminate itself in an orderly way. The point at which a thread can terminate itself is a cancellation point.


Thread Cancellation: 2/2

  • With asynchronous cancellation, if the target thread owns some system-wide resources, the system may not be able to reclaim these resources because other threads may be using them.
  • With deferred cancellation, the target thread determines the time to terminate itself. Reclaiming resources is not a problem.
  • Many systems use asynchronous cancellation for processes (e.g., the system call kill) and threads.
  • POSIX Threads (i.e., Pthreads) supports deferred cancellation.
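Deferred cancellation in Pthreads (the default cancel type) can be sketched as below: pthread_testcancel() is the cancellation point at which a pending cancel request is honored. The function names are illustrative:

```c
#include <pthread.h>

/* The target thread loops doing "work"; pthread_testcancel() is    */
/* its cancellation point, where it terminates in an orderly way.   */
static void *target(void *arg) {
    volatile long *progress = arg;
    for (;;) {
        (*progress)++;             /* stand-in for useful work        */
        pthread_testcancel();      /* deferred cancellation point     */
    }
    return NULL;                   /* never reached                   */
}

int cancel_demo(void) {
    volatile long progress = 0;
    pthread_t tid;
    void *status;
    pthread_create(&tid, NULL, target, (void *)&progress);
    pthread_cancel(tid);           /* request; honored at next point  */
    pthread_join(tid, &status);
    return status == PTHREAD_CANCELED;   /* 1 if cancelled cleanly    */
}
```

Between cancellation points the target thread could release locks and free resources, which is exactly why reclaiming resources is not a problem under deferred cancellation.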


Thread-Specific Data / Thread-Safe

  • Data that a thread needs for its own operation are thread-specific.
  • Poor support for thread-specific data could cause problems. For example, while threads have their own stacks, they share the heap.
  • What if two malloc()s are executed at the same time, requesting memory from the heap? Or two printf()s run simultaneously?
  • A library that can be used by multiple threads properly is a thread-safe one.
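Pthreads exposes thread-specific data through keys: every thread stores and reads its own value under the same key without interfering with other threads. A minimal sketch (the function names and the stored integers are illustrative):

```c
#include <pthread.h>
#include <stdlib.h>

static pthread_key_t key;
static pthread_once_t once = PTHREAD_ONCE_INIT;

/* The key is created once; free() is its per-thread destructor.    */
static void make_key(void) { pthread_key_create(&key, free); }

static void *remember_id(void *arg) {
    pthread_once(&once, make_key);
    int *mine = malloc(sizeof *mine);
    *mine = *(int *)arg;               /* this thread's private copy */
    pthread_setspecific(key, mine);
    /* Later, anywhere in this thread, the same key yields only     */
    /* this thread's value:                                         */
    return (void *)(long)*(int *)pthread_getspecific(key);
}

int run_two(void) {
    pthread_t t1, t2;
    int a = 10, b = 20;
    void *r1, *r2;
    pthread_create(&t1, NULL, remember_id, &a);
    pthread_create(&t2, NULL, remember_id, &b);
    pthread_join(t1, &r1);
    pthread_join(t2, &r2);
    return (int)(long)r1 + (int)(long)r2;  /* each saw its own value */
}
```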


Coroutines and Fibers: 1/3

  • A conventional call to a function always starts from the very beginning of that function.
  • A coroutine has multiple entry points and exits, so that the next “call” to a coroutine resumes its execution from the statement/instruction following the previous exit point.

[Figure: three coroutines A, B, and C, each with entry/exit points; the execution flow enters and exits them in the order A B C A C B A B C.]
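C has no built-in coroutines, but the resume-after-last-exit behavior can be sketched with a static state variable and a switch that jumps back into the loop body. This is purely an illustration (single-threaded use assumed, not reentrant), not the slides' code:

```c
/* A coroutine-style generator: `state` records the last exit       */
/* point, so each call resumes after the previous return instead    */
/* of starting from the top of the function.                        */
int next_value(void) {
    static int state = 0, i;
    switch (state) {
    case 0:                        /* first entry: start from the top */
        for (i = 1; i <= 3; i++) {
            state = 1;
            return i;              /* exit point                      */
    case 1:;                       /* next call resumes here          */
        }
        state = 0;                 /* sequence exhausted; reset       */
    }
    return 0;
}
```

Successive calls yield 1, 2, 3, then 0, exactly the multiple-entry, multiple-exit behavior described above.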


Coroutines and Fibers: 2/3

  • Do the enter and exit activities look like what a scheduler does?
  • Yes, an exit is a switching out, and an enter/re-enter is a switching in.
  • Hence, coroutines resemble scheduling activities.

[Figure: three threads being switched out and in by turns, mirroring the coroutine entry/exit pattern.]


Coroutines and Fibers: 3/3

  • A fiber is a lightweight thread, just like a thread is a lightweight process.
  • A fiber is created in a thread and shares resources with other fibers of that thread.
  • A fiber has a stack, a subset of registers, and data (or local storage) provided when it is created.
  • Fibers are scheduled with co-operative scheduling.
  • Co-operative scheduling means a fiber voluntarily and explicitly yields its execution to another fiber with a YIELD or similar function call.
  • Thus, fibers are simpler than threads, and resemble coroutines.


The End