Chapter 12 CPU Structure and Function (PowerPoint PPT Presentation)


  1. Chapter 12 CPU Structure and Function

  2. Contents • Processor organization • Register organization • Instruction cycle • Instruction pipelining • Pentium processor • PowerPC processor

  3. 12.1 Processor Organization • Requirements on CPU —Fetch instructions —Interpret instructions —Fetch data —Process data —Write data • CPU consists of —ALU —Control unit —Registers —Internal bus

  4. CPU With Systems Bus

  5. CPU Internal Structure

  6. 12.2 Register Organization • Design issues — Completely general-purpose registers vs specialized registers – Specialized registers for particular operands + Only BX, SI, and DI used for storing offset addresses in the 80x86 + Saves bits needed to represent them – Specialization limits the programmer’s flexibility — Number of registers – For CISC, between 8 and 32 regarded as optimum + Fewer registers result in more memory references + More registers do not noticeably reduce memory references – RISC machines use hundreds of registers — Register length – Address register must be long enough to hold the target address – Data register must be long enough to hold values of most data types + Some machines allow consecutive registers to hold double-length values

  7. User-Visible Registers • GPR • Data registers • Address registers —Segment pointers —Index registers —Stack pointer • Condition codes (flags) —Set according to the result of operations —Used for checking certain conditions —Can be read (implicitly) by programs – e.g. jump if zero —Cannot (usually) be set by programs

  8. Control & Status Registers • Program Counter — Updated after each instruction fetch — Updated when a branch instruction is taken • Instruction Register • Memory Address Register — Connected directly to address bus • Memory Buffer Register — Connected directly to data bus • Program Status Word — Sign, zero, carry, equal, overflow, interrupt enable/disable, supervisor mode • Others — Pointer to PCB (Process Control Block), interrupt vector register — Stack-related registers, page table pointer

  9. Supervisor Mode • Intel microprocessors have 4 modes (protection rings) —Ring zero – Kernel functions —Ring one – Operating system functions —Ring two – May be used for DBMS —Ring three – User programs

  10. Example Register Organizations • Motorola MC68000 (not including purely internal registers) —8 data registers – Used primarily for data manipulation + 8-, 16-, and 32-bit operations are possible – Also used as index registers —9 address registers – 32 bits wide – Include two stack pointers + One for user and one for system —PC and status register • There are no special-purpose registers in this CPU.

  11. Example Register Organizations • Intel 8086 (every register is special purpose) —4 16-bit data registers (can be used as general registers in some instructions) – AX, BX, CX, DX —4 pointer and index registers – SP, BP, SI, DI —4 segment registers – CS, DS, SS, ES —Instruction pointer and flags • Registers have general as well as special purposes. There is no universally accepted philosophy concerning the best way to organize CPU registers.

  12. Example Register Organizations

  13. 12.3 Instruction Cycle • Subcycles of instruction cycle —Fetch —Execute —Interrupt —Indirect (newly added) • Indirect cycle —Indirect addressing requires an additional memory access —Can be thought of as an additional instruction subcycle

  14. Instruction Cycle State Diagram

  15. Data Flow • Fetch cycle —PC contains address of next instruction —Address moved to MAR —Address placed on address bus —Control unit requests memory read —Result placed on data bus, copied to MBR, then to IR —Meanwhile PC incremented by 1
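The fetch-cycle register transfers above can be sketched in a few lines. This is a minimal illustrative model, not any real machine: the CPU is a dict of the register names from the slide (PC, MAR, MBR, IR), memory is word-addressed, and instructions are assumed to be one word long so the PC advances by 1.

```python
# Illustrative sketch of the fetch-cycle data flow; the dict-based CPU
# and one-word instructions are simplifying assumptions.

def fetch(cpu, memory):
    cpu["MAR"] = cpu["PC"]           # address of next instruction moved to MAR
    cpu["MBR"] = memory[cpu["MAR"]]  # memory read: result copied to MBR
    cpu["IR"] = cpu["MBR"]           # then from MBR to IR
    cpu["PC"] += 1                   # meanwhile PC incremented

cpu = {"PC": 100, "MAR": 0, "MBR": 0, "IR": 0}
memory = {100: "ADD R1, R2"}
fetch(cpu, memory)
print(cpu["IR"], cpu["PC"])          # ADD R1, R2 101
```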

  16. Data Flow, Fetch Cycle

  17. Data Flow • Indirect cycle —IR is examined —If indirect addressing, indirect cycle is performed – Rightmost N bits of MBR transferred to MAR – Control unit requests memory read – Result (address of operand) moved to MBR

  18. Data Flow, Indirect Cycle

  19. Data Flow • Execute cycle —May take many forms depending on instructions —May include – Memory read/write – Input/Output – Register transfers – ALU operations

  20. Data Flow • Interrupt cycle —Current PC saved to allow resumption after interrupt – Contents of PC copied to MBR – Special memory location (e.g. stack pointer) loaded to MAR – MBR written to memory —PC loaded with address of interrupt handling routine —Next instruction (first of interrupt handler) can be fetched
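The interrupt-cycle steps above can be sketched the same way. The downward-growing stack, the fixed vector location 0x08, and the concrete addresses are illustrative assumptions; the PC → MBR → memory sequence follows the slide.

```python
# Sketch of the interrupt cycle: save the PC on a stack in memory,
# then load the PC with the handler address from an interrupt vector.
# Vector location and stack convention are illustrative assumptions.

def interrupt_cycle(cpu, memory, vector=0x08):
    cpu["MBR"] = cpu["PC"]           # contents of PC copied to MBR
    cpu["SP"] -= 1                   # stack pointer selects the save location
    cpu["MAR"] = cpu["SP"]
    memory[cpu["MAR"]] = cpu["MBR"]  # MBR written to memory
    cpu["PC"] = memory[vector]       # PC loaded with handler address

cpu = {"PC": 300, "SP": 0x1000, "MAR": 0, "MBR": 0}
memory = {0x08: 500}                 # vector entry: handler starts at 500
interrupt_cycle(cpu, memory)
print(cpu["PC"], memory[0x0FFF])     # 500 300
```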

  21. Data Flow, Interrupt Cycle

  22. 12.4 Instruction Pipelining • Pipelining strategy —Similar to an assembly line in an automobile factory – Instruction has a number of stages – Stages can be executed simultaneously • Simple two-stage pipelining —Fetch and execute stages —If the two stages were of equal duration, instruction cycle time would be halved —But things are not that easy – Execution time is longer than fetch time + Fetch stage may have to wait – A conditional branch makes the next instruction unknown + Fetch stage must wait or guess the branch target
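A back-of-the-envelope check of the two-stage claim: with equal fetch and execute times the per-instruction time is roughly halved, but when execute is longer the fetch stage idles and the gain shrinks. The timing model below is a simplifying sketch (one completion per max stage time, no branches).

```python
# Two-stage pipeline timing sketch: first fetch, then one instruction
# completes per max(fetch, execute). Stalls from branches are ignored.

def two_stage_time(n, fetch, execute):
    return fetch + n * max(fetch, execute)

n = 1000
equal = two_stage_time(n, 1, 1) / (n * (1 + 1))    # pipelined / unpipelined, ~0.5
unequal = two_stage_time(n, 1, 2) / (n * (1 + 2))  # execute twice as long, ~0.67
print(round(equal, 2), round(unequal, 2))
```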

  23. Two Stage Instruction Pipeline

  24. Instruction Pipelining • More stages mean further speedup —Fetch Instruction (FI) —Decode Instruction (DI) —Calculate Operands (CO) —Fetch Operands (FO) —Execute Instruction (EI) —Write Operand (WO) • Characteristics (equal duration assumed) —Reduced execution time for 9 instructions from 54 to 14 time units —Some instructions may not go through all 6 stages – LOAD does not need the WO stage —Some stages may not be performed in parallel – FI, FO, and WO stages all involve a memory access

  25. Timing Diagram for Instruction Pipeline Operation

  26. Instruction Pipelining • Factors that limit performance enhancement —Stages may not be of equal duration —Conditional branch instructions – Invalidate several instruction fetches —Interrupts —Data dependency – CO stage may depend on the contents of a register that could be altered by a previous instruction still in the pipeline – System needs logic to resolve this conflict

  27. Effect of a Conditional Branch

  28. Figure 12.12 Six-Stage Instruction Pipeline (flowchart: FI → DI → CO → unconditional branch? → FO → EI → WO → branch or interrupt?; a taken branch or an interrupt updates the PC and empties the pipe before the next FI)

  29. Figure 12.13 An Alternative Pipeline Depiction: (a) no branches — instructions I1–I9 advance one stage (FI, DI, CO, FO, EI, WO) per time unit, finishing at time 14; (b) with conditional branch — I3 branches to I15, so I4–I7 are discarded and I15, I16 enter the pipe from time 8

  30. Pipeline Performance • Measures of performance —Cycle time is τ = max[τ_i] + d = τ_m + d, for 1 ≤ i ≤ k, where τ_m = maximum stage delay, k = number of stages in the instruction pipeline, d = time delay of a latch, needed to advance signals and data from one stage to the next —We can ignore d since τ_m >> d —Total time T_k to execute n instructions is T_k = [k + (n − 1)]τ —Thus the speedup factor is S_k = T_1/T_k = nkτ / [k + (n − 1)]τ = nk / [k + (n − 1)]

  31. Speedup Factors with Pipelining

  32. Speedup Factors with Pipelining

  33. Dealing with Branches • Approaches for dealing with branches —Multiple streams —Prefetch branch target —Loop buffer —Branch prediction —Delayed branch • Multiple streams —Have two pipelines – Prefetch each branch path into a separate pipeline – Use the appropriate pipeline once the branch resolves —Problems – There may be contention delays for access to registers and memory – A further branch entering either stream needs yet another stream

  34. Dealing with Branches • Prefetch branch target —Target of branch is prefetched in addition to the instruction following branch —Keep target until branch is executed —Used in IBM 360/91 • Loop buffer —Contains n most recently fetched instructions, in sequence —Whenever a branch is to be taken, buffer is checked —Well suited to dealing with loops – If loop buffer is large enough to contain all the instructions in a loop, we need to fetch them only once —Used in CDC and CRAY-1
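The loop-buffer idea can be sketched as a small, recency-ordered cache of fetched instructions that is consulted on a taken branch; a hit (typical for a backward branch closing a small loop) avoids the memory fetch. Buffer size, addresses, and the class interface are illustrative assumptions, not any particular machine's design.

```python
# Illustrative loop buffer: holds the n most recently fetched
# instructions; a taken branch whose target is in the buffer is
# served without a memory access.
from collections import OrderedDict

class LoopBuffer:
    def __init__(self, size=4):
        self.size = size
        self.buf = OrderedDict()           # address -> instruction word

    def record_fetch(self, addr, inst):
        self.buf[addr] = inst
        if len(self.buf) > self.size:
            self.buf.popitem(last=False)   # evict the oldest entry

    def lookup(self, target):
        """Returns the instruction on a hit, None on a miss."""
        return self.buf.get(target)

lb = LoopBuffer(size=4)
for a in (100, 101, 102, 103):             # fetch a short loop body
    lb.record_fetch(a, f"inst@{a}")
print(lb.lookup(101))                      # inst@101 (backward branch hits)
print(lb.lookup(50))                       # None (target outside the buffer)
```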

  35. Loop Buffer Diagram

  36. Dealing with Branches • Branch prediction —Predict never taken —Predict always taken —Predict by opcode —Taken/not taken switch —Branch history table • Predict never taken —Assume that the jump will not happen – Always fetch the next instruction —Used in the MC68020 & VAX 11/780 —The VAX will not prefetch the instruction after a branch if a page fault would result
