ECE 327: Digital Systems Engineering Lecture Slides, 2020t1 (Winter)



SLIDE 1

ECE 327: Digital Systems Engineering Lecture Slides

2020t1 (Winter) Mark Aagaard University of Waterloo Department of Electrical and Computer Engineering

SLIDE 2


Contents

1 Fundamentals of VHDL
  1.1 Introduction to VHDL
    1.1.1 Levels of Abstraction
    1.1.2 VHDL Origins and History
    1.1.3 Semantics
    1.1.4 Synthesis of a Simulation-Based Language
    1.1.5 Solution to Synthesis Sanity
    1.1.6 Standard Logic 1164
  1.2 Comparison of VHDL to Other Hardware Description Languages
  1.3 Overview of Syntax
    1.3.1 Syntactic Categories


    1.3.2 Library Units
    1.3.3 Entities and Architecture
    1.3.4 Concurrent Statements
    1.3.5 Component Declaration and Instantiations
    1.3.6 Processes
    1.3.7 Generate Statements
    1.3.8 Sequential Statements
    1.3.9 A Few More Miscellaneous VHDL Features
  1.4 Concurrent vs Sequential Statements
    1.4.1 Concurrent Assignment vs Process
    1.4.2 Conditional Assignment vs If Statements
    1.4.3 Selected Assignment vs Case Statement
    1.4.4 Coding Style
  1.5 Overview of Processes
    1.5.1 Combinational Process vs Clocked Process
    1.5.2 Latch Inference
  1.6 VHDL Execution: Delta-Cycle Simulation
    1.6.1 Simple Simulation
    1.6.2 Temporal Granularities of Simulation
    1.6.3 Zero-Delay Simulation
    1.6.4 Intuition Behind Delta-Cycle Simulation
      1.6.4.1 Introduction to Delta-Cycle Simulation
      1.6.4.2 Intuitive Rules for Delta-Cycle Simulation
      1.6.4.3 Example of Delta: Buffers
      1.6.4.4 Example of Delta and Proj: Buffers
      1.6.4.5 Example of Proj Asn: Flip-Flops
      1.6.4.6 Example of Delta and Proj: Comb Loop
    1.6.5 VHDL Delta-Cycle Simulation
      1.6.5.1 Informal Description of Algorithm
      1.6.5.2 Example: VHDL Sim for Buffers
      1.6.5.3 Definitions and Algorithm
      1.6.5.4 Example: Delta-Cycle for Flip-Flops
      1.6.5.5 Ex: VHDL Sim of Comb Loop
      1.6.5.6 Rules and Observations for Drawing Delta-Cycle Simulations
    1.6.6 External Inputs and Flip-Flops
  1.7 Register-Transfer-Level Simulation
    1.7.1 Overview
    1.7.2 Technique for Register-Transfer Level Simulation
    1.7.3 Examples of RTL Simulation


SLIDE 3


      1.7.3.1 RTL Simulation Example 1
  1.8 Simple RTL Simulation in Software
  1.9 Variables in VHDL
  1.10 Delta-Cycle Simulation with Delays
  1.11 VHDL and Hardware Building Blocks
    1.11.1 Basic Building Blocks
    1.11.2 Deprecated Building Blocks for RTL
    1.11.3 Hardware and Code for Flops
      1.11.3.1 Flops with Waits and Ifs
      1.11.3.2 Flops with Synchronous Reset
      1.11.3.3 Flop with Chip-Enable and Mux on Input
      1.11.3.4 Flops with Chip-Enable, Muxes, and Reset
    1.11.4 Example Coding Styles
  1.12 Synthesizable vs Non-Synthesizable Code
    1.12.1 Wait For
    1.12.2 Initial Values
    1.12.3 Assignments before Wait Statement
    1.12.4 "if rising edge" and "wait" in Same Process
    1.12.5 "if rising edge" with "else" Clause
    1.12.6 While Loop with Dynamic Condition and Combinational Body
  1.13 Guidelines for Desirable Hardware
    1.13.1 Latches
    1.13.2 Combinational Loops
    1.13.3 Multiple Drivers
    1.13.4 Asynchronous Reset
    1.13.5 Using a Data Signal as a Clock
    1.13.6 Using a Clock Signal as Data
  1.14 Bad VHDL Coding
    1.14.1 Tri-State Buffers and Signals
    1.14.2 Variables in Processes
    1.14.3 Bits and Booleans as Signals


2 Additional Features of VHDL
  2.1 Literals
    2.1.1 Numeric Literals
    2.1.2 Bit-String Literals
  2.2 Arrays and Vectors
    2.2.1 Declarations
    2.2.2 Indexing, Slicing, Concatenation, Aggregates
  2.3 Arithmetic
    2.3.1 Arithmetic Packages
    2.3.2 Arithmetic Types
    2.3.3 Overloading of Arithmetic
    2.3.4 Widths for Addition and Subtraction
    2.3.5 Overloading of Comparisons
    2.3.6 Widths for Comparisons
    2.3.7 Type Conversion
    2.3.8 Shift and Rotate Operations
    2.3.9 Arithmetic Optimizations
  2.4 Types
    2.4.1 Enumerated Types
    2.4.2 Defining New Array Types

3 Overview of FPGAs
  3.1 Generic FPGA Hardware
    3.1.1 Generic FPGA Cell
    3.1.2 Lookup Table
    3.1.3 Interconnect for Generic FPGA
    3.1.4 Blocks of Cells for Generic FPGA
    3.1.5 Special Circuitry in FPGAs
  3.2 Area Estimation for FPGAs
    3.2.1 Area for Circuit with one Target
    3.2.2 Algorithm to Allocate Gates to Cells
    3.2.3 Area for Arithmetic Circuits


SLIDE 4


4 State Machines
  4.1 Notations
  4.2 Finite State Machines in VHDL
    4.2.1 HDL Coding Styles for State Machines
    4.2.2 State Encodings
    4.2.3 Traditional State-Machine Notation
    4.2.4 Our State-Machine Notation
    4.2.5 Bounce Example
    4.2.6 Registered Assignments
    4.2.7 More Notation
      4.2.7.1 Extension: Transient States
      4.2.7.2 Assignments within States
      4.2.7.3 Conditional Expressions
      4.2.7.4 Default Values
    4.2.8 Semantic and Syntax Rules
    4.2.9 Reset
  4.3 LeBlanc FSM Design Example
    4.3.1 State Machine and VHDL
    4.3.2 State Encodings
  4.4 Parcels
    4.4.1 Bubbles and Throughput
    4.4.2 Parcel Schedule
    4.4.3 Valid Bits
  4.5 LeBlanc with Bubbles
  4.6 Pseudocode
  4.7 Interparcel Variables and Loops
    4.7.1 Introduction to Looping LeBlanc
    4.7.2 Pseudo-Code
    4.7.3 State Machine
    4.7.4 VHDL Code for Loop and Bubbles
  4.8 Memory Arrays and RTL Design
    4.8.1 Memory Operations
    4.8.2 Memory Arrays in VHDL
    4.8.3 Using Memory
      4.8.3.1 Writing from Multiple Vars
      4.8.3.2 Reading from Memory to Multiple Variables
      4.8.3.3 Example: Maximum Value Seen so Far
    4.8.4 Build Larger Memory from Slices
    4.8.5 Memory Arrays in High-Level Models


5 Dataflow Diagrams
  5.1 Dataflow Diagrams
    5.1.1 Dataflow Diagrams Overview
    5.1.2 Dataflow Diagram Execution
    5.1.3 Dataflow Diagrams, Hardware, and Behaviour
    5.1.4 Performance Estimation
    5.1.5 Area Estimation
    5.1.6 Design Analysis
  5.2 Design Example: Hnatyshyn DFD
    5.2.1 Requirements
    5.2.2 Data-Dependency Graph
    5.2.3 Initial Dataflow Diagram
    5.2.4 Area Optimization
    5.2.5 Assign Names to Registered Signals
    5.2.6 Allocation
    5.2.7 State Machine
    5.2.8 VHDL Implementation
  5.3 Design Example: Hnatyshyn with Bubbles
    5.3.1 Adding Support for Bubbles
    5.3.2 Control Table with Valid Bits
    5.3.3 VHDL
  5.4 Inter-Parcel Variables: Hnatyshyn with Internal State
    5.4.1 Requirements and Goals
    5.4.2 Dataflow Diagrams and Waveforms
    5.4.3 Control Tables
    5.4.4 VHDL Implementation
    5.4.5 Summary of Bubbles and Inter-Parcel Variables
  5.5 Design Example: Vanier
    5.5.1 Requirements
    5.5.2 Algorithm
    5.5.3 Initial Dataflow Diagram
    5.5.4 Reschedule to Meet Requirements
    5.5.5 Optimization: Reduce Inputs
    5.5.6 Allocation
    5.5.7 Explicit State Machine
    5.5.8 VHDL #1: Explicit
    5.5.9 VHDL #2
    5.5.10 Notes and Observations
  5.6 Memory Operations in Dataflow Diagrams
  5.7 Data Dependencies
  5.8 Example of DFD and Memory


SLIDE 5


6 Optimizations
  6.1 Pipelining
    6.1.1 Introduction to Pipelining
    6.1.2 Partially Pipelined
    6.1.3 Terminology
    6.1.4 Overlapping Pipeline Stages
  6.2 Staggering
  6.3 Retiming
  6.4 General Optimizations
    6.4.1 Strength Reduction
      6.4.1.1 Arithmetic Strength Reduction
      6.4.1.2 Boolean Strength Reduction
    6.4.2 Replication and Sharing
      6.4.2.1 Mux-Pushing
      6.4.2.2 Common Subexpression Elimination
      6.4.2.3 Computation Replication
    6.4.3 Arithmetic
  6.5 Customized State Encodings

7 Performance Analysis
  7.1 Introduction
  7.2 Defining Performance
  7.3 Benchmarks
  7.4 Comparing Performance
    7.4.1 General Equations
    7.4.2 Example: Performance of Printers
  7.5 Clock Speed, CPI, Program Length, and Performance
    7.5.1 Mathematics
    7.5.2 Example: CISC vs RISC and CPI
    7.5.3 Effect of Instruction Set on Performance
  7.6 Effect of Time to Market on Relative Performance


8 Timing Analysis
  8.1 Delays and Definitions
    8.1.1 Background Definitions
    8.1.2 Clock-Related Timing Definitions
      8.1.2.1 Clock Latency
      8.1.2.2 Clock Skew
      8.1.2.3 Clock Jitter
    8.1.3 Storage-Related Timing Definitions
      8.1.3.1 Flops and Latches
      8.1.3.2 Timing Parameters
      8.1.3.3 Timing Parameters for a Flop
    8.1.4 Propagation Delays
    8.1.5 Timing Constraints
  8.2 Timing Analysis of Simple Latches
    8.2.1 Review: Active-High Latch Behaviour
    8.2.2 Structure and Behaviour of Multiplexer Latch
    8.2.3 Strategy for Timing Analysis of Storage Devices
    8.2.4 Clock-to-Q Time of a Latch
    8.2.5 From Load Mode to Store Mode
    8.2.6 Setup Time Analysis
    8.2.7 Hold Time of a Multiplexer Latch
    8.2.8 Example of a Bad Latch
    8.2.9 Summary
  8.3 Advanced Timing Analysis of Storage Elements
  8.4 Critical Path
    8.4.1 Introduction to Critical and False Paths
      8.4.1.1 Example of Critical Path in Full Adder
      8.4.1.2 Longest Path and Critical Path
      8.4.1.3 Criteria for Critical Path Algorithms
    8.4.2 Longest Path
      8.4.2.1 Algorithm to Find Longest Path
      8.4.2.2 Longest Path Example
    8.4.3 Monotone Speedup
  8.5 False Paths
  8.6 Analog Timing Model
    8.6.1 Defining Delay
    8.6.2 Modeling Circuits for Timing
      8.6.2.1 Example: Two Buffers with Complex Wiring
      8.6.2.2 Example: Two Buffers with Simple Wiring
    8.6.3 Calculate Delay


SLIDE 6


    8.6.4 Ex: Two Bufs with Both Caps
  8.7 Elmore Delay Model
    8.7.1 Elmore Delay as an Approximation
    8.7.2 A More Complicated Example
  8.8 Practical Usage of Timing Analysis

9 Power
  9.1 Overview
    9.1.1 Importance of Power and Energy
    9.1.2 Power vs. Energy
    9.1.3 Batteries, Power and Energy
      9.1.3.1 Do Batteries Store Energy or Power?
      9.1.3.2 Battery Life and Efficiency
      9.1.3.3 Battery Life and Power
  9.2 Power Equations
    9.2.1 Switching Power
    9.2.2 Short-Circuited Power
    9.2.3 Leakage Power
    9.2.4 Glossary
    9.2.5 Note on Power Equations
  9.3 Overview of Power Reduction Techniques
  9.4 Voltage, Power, and Delay
  9.5 Data Encoding for Power Reduction
    9.5.1 How Data Encoding Can Reduce Power
    9.5.2 Example Problem: Sixteen Pulser
      9.5.2.1 Problem Statement
      9.5.2.2 Additional Information
      9.5.2.3 Answer
  9.6 Clock Gating
    9.6.1 Introduction to Clock Gating
    9.6.2 Implementing Clock Gating
    9.6.3 Design Process
    9.6.4 Effectiveness of Clock Gating
    9.6.5 Example: Reduced Activity Factor with Clock Gating
    9.6.6 Calculating PctBusy
      9.6.6.1 Valid Bits and Busy
      9.6.6.2 Calculating LenBusy
      9.6.6.3 From LenBusy to PctBusy
    9.6.7 Example: Pipelined Circuit with Clock-Gating
    9.6.8 Clock Gating in ASICs


    9.6.9 Alternatives to Clock Gating
      9.6.9.1 Use Chip Enables
      9.6.9.2 Operand Gating

10 Review
  10.1 Overview of the Term
  10.2 VHDL
    10.2.1 VHDL Topics
    10.2.2 VHDL Example Problems
  10.3 RTL Design Techniques
    10.3.1 Design Topics
    10.3.2 Design Example Problems
  10.4 Performance Analysis and Optimization
    10.4.1 Performance Topics
    10.4.2 Performance Example Problems
  10.5 Timing Analysis
    10.5.1 Timing Topics
    10.5.2 Timing Example Problems
  10.6 Power
    10.6.1 Power Topics
    10.6.2 Power Example Problems
  10.7 Formulas to be Given on Final Exam

Chapter 1 Fundamentals of VHDL


SLIDE 7


1.1 Introduction to VHDL

1.1.1 Levels of Abstraction

Transistor: Signal values and time are continuous (analog). Each transistor is modeled by a resistor-capacitor network.
Switch: Time is continuous, but voltage may be either continuous or discrete. Linear equations are used.
Gate: Transistors are grouped together into gates. Voltages are discrete values such as 0 and 1.
Register transfer level: Hardware is modeled as assignments to registers and combinational signals. The basic unit of time is one clock cycle. Languages: VHDL, SystemC, or SystemVerilog.
Transaction level: A transaction is an operation such as transferring data across a bus. Building blocks are processors, controllers, etc.
Electronic-system level: Looks at an entire electronic system, with both hardware and software.

1.1.2 VHDL Origins and History

VHDL = VHSIC Hardware Description Language
VHSIC = Very High Speed Integrated Circuit

"The VHSIC Hardware Description Language (VHDL) is a formal notation intended for use in all phases of the creation of electronic systems. Because it is both machine readable and human readable, it supports the development, verification, synthesis and testing of hardware designs, the communication of hardware design data, and the maintenance, modification, and procurement of hardware."
    Language Reference Manual (IEEE Design Automation Standards Committee, 1993a)

VHDL is a lot more than synthesis of digital hardware


1.1.3 Semantics

The original goal of VHDL was to simulate circuits. The semantics of the language define circuit behaviour.

[Figure: simulation view of "c <= a AND b;": an AND gate with inputs a and b and output c]

But now, VHDL is used in both simulation and synthesis. Synthesis is concerned with the structure of the circuit: it converts one type of description (behavioural) into another, lower-level description (usually a netlist).

[Figure: synthesis view of "c <= a AND b;": the code is converted into a gate-level netlist with inputs a and b and output c]

Synthesis vs Simulation

For synthesis, we want the code we write to define the structure of the hardware that is generated. The VHDL semantics define the behaviour of the hardware that is generated, not the structure of the hardware.

[Figure: several circuits with different structure but the same behaviour; synthesis of "c <= a AND b;" may produce any of them, and simulation treats them all identically]


SLIDE 8


1.1.4 Synthesis of a Simulation-Based Language

This section reserved for your reading pleasure

1.1.5 Solution to Synthesis Sanity

  • Pick a high-quality synthesis tool and study its documentation thoroughly
  • Learn the idioms of the tool
  • Different VHDL code with same behaviour can result in very different circuits
  • Be careful if you have to port VHDL code from one tool to another
  • KISS: Keep It Simple Stupid

– VHDL examples will illustrate reliable coding techniques for the synthesis tools from Synopsys, Mentor Graphics, Altera, Xilinx, and most other companies as well.
– Follow the coding guidelines and examples from lecture.
– As you write VHDL, think about the hardware you expect to get.
Note: If you can't predict the hardware, then the hardware probably won't be very good (small, fast, correct, etc.).


1.1.6 Standard Logic 1164

std_logic_1164: IEEE standard for signal values in VHDL.

'U'  uninitialized
'X'  strong unknown
'0'  strong 0
'1'  strong 1
'Z'  high impedance
'W'  weak unknown
'L'  weak 0
'H'  weak 1
'-'  don't care

The most common values are: 'U', 'X', '0', '1'. If you see 'X' in a simulation, it usually means that there is a mistake in your code.
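As a minimal sketch of how a value beyond '0' and '1' appears in code (the entity name and ports here are invented for illustration), a driver can release a shared wire by emitting 'Z':

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity tri_demo is
  port ( en, d : in  std_logic;
         z     : out std_logic );
end entity;

architecture main of tri_demo is
begin
  -- drive the data value when enabled, otherwise high impedance
  z <= d when en = '1' else 'Z';
end architecture;
```

In simulation, a std_logic signal that is never assigned stays at 'U'.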

1.2 Comparison of VHDL to Other Hardware Description Languages

This section reserved for your reading pleasure

1.3 Overview of Syntax 1.3.1 Syntactic Categories

This section reserved for your reading pleasure

1.3.2 Library Units

This section reserved for your reading pleasure


SLIDE 9


1.3.3 Entities and Architecture

Each hardware module is described with an Entity/Architecture pair


Entity and Architecture

Entity

library ieee;
use ieee.std_logic_1164.all;

entity and_or is
  port ( a, b, c : in  std_logic;
         z       : out std_logic );
end entity;

Example of an entity


Architecture

architecture main of and_or is
  signal x : std_logic;
begin
  x <= a and b;
  z <= x or (a and c);
end architecture;

Example of an architecture

1.3.4 Concurrent Statements

  • An architecture contains concurrent statements
  • Concurrent statements execute in parallel
    – Concurrent statements make VHDL fundamentally different from most software languages.
    – Hardware (gates) naturally executes in parallel; VHDL mimics the behaviour of real hardware.
    – At each infinitesimally small moment of time, in parallel, every gate:
        1. samples its inputs
        2. computes the value of its output
        3. drives the output


SLIDE 10


Concurrent Statements

architecture main1 of simple is
begin
  x1 <= a AND b;
  x2 <= NOT x1;
  z  <= NOT x2;
end architecture;

architecture main2 of simple is
begin
  z  <= NOT x2;
  x2 <= NOT x1;
  x1 <= a AND b;
end architecture;

[Figure: the circuit with inputs a, b; internal signals x1, x2; and output z]

The order of concurrent statements doesn’t matter

Types of Concurrent Statements

conditional assignment: similar to a conventional if-then-else
    c <= a+b when sel='1' else
         a+c when sel='0' else
         "0000";

selected assignment: similar to a conventional case/switch
    with color select
      d <= "00" when red,
           "01" when ...;

component instantiation: use a hardware module/component
    add1 : adder port map( a => f, b => g, s => h, co => i );

for-generate: create multiple pieces of hardware
    bgen : for i in 1 to 7 generate
      b(i) <= a(7-i);
    end generate;

if-generate: conditionally create some hardware
    kgen : if optgoal /= fast generate
      result <= ((a and b) or (d and not e)) or g;
    end generate;
    fastgen : if optgoal = fast generate
      result <= '1';
    end generate;

process: description of complex behaviour (section 1.3.6)


1.3.5 Component Declaration and Instantiations

This section reserved for your reading pleasure

1.3.6 Processes

  • Processes are used to describe complex and potentially unsynthesizable behaviour
  • A process is a concurrent statement (section 1.3.4).
  • The body of a process contains sequential statements (section 1.3.8)
  • Processes are the most complex and difficult to understand part of VHDL (sections 1.5 and 1.6)

Example Process with Sensitivity List

process (a, b, c)
begin
  y <= a and b;
  if (a = '1') then
    z1 <= b and c;
    z2 <= not c;
  else
    z1 <= b or c;
    z2 <= c;
  end if;
end process;

1.3.6 Processes 37

slide-11
SLIDE 11

38 CHAPTER 1. FUNDAMENTALS OF VHDL

Example Process with Wait Statements

process
begin
  wait until rising_edge(clk);
  if (a = '1') then
    z <= '1';
    y <= '0';
  else
    y <= a or b;
  end if;
end process;

Sensitivity Lists and Wait Statements

  • Processes must have either a sensitivity list or at least one wait statement on each execution path through the process.
  • Processes cannot have both a sensitivity list and a wait statement.

39 CHAPTER 1. FUNDAMENTALS OF VHDL 1.3.6 Processes 40

Sensitivity List

The sensitivity list contains the signals that are read in the process. A process is executed when a signal in its sensitivity list changes value. An important coding guideline, to ensure consistent synthesis and simulation results, is to include in the sensitivity list all signals that are read in the process. There is one exception to this rule: for a process that implements a flip-flop with an if rising_edge statement, it is acceptable to include only the clock signal in the sensitivity list; other signals may be included, but are not needed.

1.3.7 Generate Statements

  • Two categories of generate statements:

    – if-generate: conditionally generate some hardware
    – for-generate: generate multiple copies of some hardware

  • Generate statements are executed during elaboration (at compile time)
  • The conditions and loop ranges must be static:

    – Must be able to be evaluated at elaboration
    – Must not depend upon the value of any signal

  • A generate statement must be preceded by a label
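The rules above can be seen in a small sketch. The generic `WIDTH`, the constant `DEBUG`, and the signal names are assumptions introduced here for illustration; both the loop range and the condition are static, as required:

```vhdl
-- for-generate: one concurrent assignment per bit (bit reversal)
-- WIDTH is an assumed generic; it is static at elaboration
bgen : for i in 0 to WIDTH-1 generate
  b(i) <= a(WIDTH-1-i);
end generate;

-- if-generate: hardware exists only when DEBUG is true
-- DEBUG is an assumed boolean constant/generic, never a signal
dbg : if DEBUG generate
  err <= a(0) xor b(WIDTH-1);
end generate;
```

If either the range or the condition depended on a signal, elaboration could not decide how much hardware to build, and the code would be rejected.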

1.3.7 Generate Statements 41

slide-12
SLIDE 12

42 CHAPTER 1. FUNDAMENTALS OF VHDL

1.3.8 Sequential Statements

Used inside processes, functions, and procedures. The most commonly used sequential statements:

    wait                 wait until . . . ;
    signal assignment    . . . <= . . . ;
    if-then-else         if . . . then . . . elsif . . . end if;
    case                 case . . . is when . . . | . . . => . . . ; when . . . => . . . ; end case;
    loop                 loop . . . end loop;
    while loop           while . . . loop . . . end loop;
    for loop             for . . . in . . . loop . . . end loop;
    next                 next . . . ;

1.3.9 A Few More Miscellaneous VHDL Features

This section reserved for your reading pleasure

1.4 Concurrent vs Sequential Statements

All concurrent assignments can be translated into sequential statements. But, not all sequential statements can be translated into concurrent statements.

43 CHAPTER 1. FUNDAMENTALS OF VHDL 1.4.1 Concurrent Assignment vs Process 44

1.4.1 Concurrent Assignment vs Process

The two code fragments below have identical behaviour:

architecture main of tiny is
begin
  b <= a;
end main;

architecture main of tiny is
begin
  process (a) begin
    b <= a;
  end process;
end main;

1.4.2 Conditional Assignment vs If Statements

The two code fragments below have identical behaviour:

Concurrent statement:
    t <= <val1> when <cond> else <val2>;

Sequential statements:
    if <cond> then
      t <= <val1>;
    else
      t <= <val2>;
    end if;

1.4.2 Conditional Assignment vs If Statements 45

slide-13
SLIDE 13

46 CHAPTER 1. FUNDAMENTALS OF VHDL

1.4.3 Selected Assignment vs Case Statement

The two code fragments below have identical behaviour:

Concurrent statement:
    with <expr> select
      t <= <val1> when <choices1>,
           <val2> when <choices2>,
           <val3> when <choices3>;

Sequential statements:
    case <expr> is
      when <choices1> => t <= <val1>;
      when <choices2> => t <= <val2>;
      when <choices3> => t <= <val3>;
    end case;

1.4.4 Coding Style

Code that's easy to write with sequential statements, but difficult with concurrent:

    case <expr> is
      when <choice1> =>
        if <cond> then
          t <= <expr1>;
        else
          t <= <expr2>;
        end if;
      when <choice2> =>
        . . .
    end case;

47 CHAPTER 1. FUNDAMENTALS OF VHDL 1.5. OVERVIEW OF PROCESSES 48

1.5 Overview of Processes

Processes are the most difficult VHDL construct to understand. This section gives an overview of processes; section 1.6 gives the details of the semantics of processes.

  • Within a process, statements are executed almost sequentially
  • Among processes, execution is done in parallel
  • Remember: a process is a concurrent statement!

Process Semantics

  • VHDL mimics hardware
  • Hardware (gates) execute in parallel
  • Processes execute in parallel with each other
  • All possible orders of executing processes must produce the same simulation results (waveforms)

  • If a signal is not assigned a value, then it holds its previous value

All orders of executing concurrent statements must produce the same waveforms

1.5. OVERVIEW OF PROCESSES 49

slide-14
SLIDE 14

50 CHAPTER 1. FUNDAMENTALS OF VHDL

Process Semantics

architecture ...
  procA: process
    stmtA1;
    stmtA2;
    stmtA3;
  end process;

  procB: process
    stmtB1;
    stmtB2;
  end process;

[Diagram: three execution sequences of statements A1, A2, A3, B1, B2: single threaded with procA before procB; single threaded with procB before procA; multithreaded with procA and procB in parallel]

Process Semantics

All execution orders must have same behaviour

51 CHAPTER 1. FUNDAMENTALS OF VHDL 1.5.1 Combinational Process vs Clocked Process 52

1.5.1 Combinational Process vs Clocked Process

Each well-written synthesizable process is either combinational or clocked.

Combinational process:

  • Executing the process takes part of one clock cycle
  • Target signals are outputs of combinational circuitry
  • A combinational process must have a sensitivity list
  • A combinational process must not have any wait statements
  • A combinational process must not have any rising_edge or falling_edge conditions

  • The hardware for a combinational process is just combinational circuitry

Clocked process:

  • Executing the process takes one (or more) clock cycles
  • Target signals are outputs of flops
  • Process contains one or more wait or if rising_edge statements
  • Hardware contains combinational circuitry and flip-flops

Note: Clocked processes are sometimes called “sequential processes”, but this can be easily confused with “sequential statements”, so in ECE-327 we’ll refer to synthesizable processes as either “combinational” or “clocked”.

1.5.1 Combinational Process vs Clocked Process 53

slide-15
SLIDE 15

54 CHAPTER 1. FUNDAMENTALS OF VHDL

Combinational or Clocked Process? (1)

process (a, b, c)
begin
  p1 <= a;
  if (b = c) then
    p2 <= b;
  else
    p2 <= a;
  end if;
end process;

Combinational or Clocked Process? (2)

process
begin
  wait until rising_edge(clk);
  b <= a;
end process;

55 CHAPTER 1. FUNDAMENTALS OF VHDL 1.5.1 Combinational Process vs Clocked Process 56

Combinational or Clocked Process? (3)

process (clk)
begin
  if rising_edge(clk) then
    b <= a;
  end if;
end process;

Combinational or Clocked Process? (4)

process (clk)
begin
  a <= clk;
end process;

1.5.1 Combinational Process vs Clocked Process 57

slide-16
SLIDE 16

58 CHAPTER 1. FUNDAMENTALS OF VHDL

Combinational or Clocked Process? (5)

process
begin
  wait until rising_edge(a);
  c <= b;
end process;

1.5.2 Latch Inference

The semantics of VHDL require that if a signal is assigned a value on some passes through a process and not on other passes, then on a pass through the process when the signal is not assigned a value, it must maintain its value from the previous pass.

process (a, b, c)
begin
  if (a = '1') then
    z1 <= b;
    z2 <= b;
  else
    z1 <= c;
  end if;
end process;

[Circuit schematic: inputs a, b, c; outputs z1, z2]

Example of latch inference

59 CHAPTER 1. FUNDAMENTALS OF VHDL 1.5.2 Latch Inference 60

Latch Inference

When a signal’s value must be stored, VHDL infers a latch or a flip-flop in the hardware to store the value. If you want a latch or a flip-flop for the signal, then latch inference is good. If you want combinational circuitry, then latch inference is bad.
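When a latch is not wanted, the usual fix is to assign the signal on every path through the process. One common idiom is a default assignment at the top; the sketch below reworks the earlier z1/z2 example, and the choice of c as the default value for z2 is an assumption made here, not part of the notes:

```vhdl
-- Latch-free version: z2 gets a value on every pass
process (a, b, c)
begin
  z2 <= c;            -- default assignment (assumed value); overridden below when a = '1'
  if (a = '1') then
    z1 <= b;
    z2 <= b;          -- last assignment in a pass wins, so z2 = b here
  else
    z1 <= c;
  end if;
end process;
```

This relies on the rule that when a process assigns the same signal several times in one pass, only the last assignment takes effect.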

Loop, Latch, Flop

[Three circuits, each with data input b, control a, and output z: a combinational loop; a latch (enable EN); a flip-flop (D Q).]

Question: Write VHDL code for each of the above circuits
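One plausible set of answers is sketched below. The exact gate-level circuits are in the figure, not reproduced here, so the assumption throughout is that a acts as the control (enable/clock) and b as the data input:

```vhdl
-- Combinational loop (bad): z appears on both sides of its own assignment
z <= b when a = '1' else z;

-- Latch: combinational process with no else clause, so z holds its value
process (a, b)
begin
  if a = '1' then
    z <= b;
  end if;
end process;

-- Flip-flop: z changes only on the rising edge of a
process (a)
begin
  if rising_edge(a) then
    z <= b;
  end if;
end process;
```

Note how small the textual difference is between the latch and the flip-flop: level-sensitive `if a = '1'` versus edge-sensitive `if rising_edge(a)`.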

1.5.2 Latch Inference 61

slide-17
SLIDE 17

62 CHAPTER 1. FUNDAMENTALS OF VHDL

Review: Introduction to VHDL

  • 1. The goal of ece327 is to help you think ________.

  • 2. Hardware runs ________. Software runs ________.

  • 3. In VHDL, the interface of a circuit is called a(n) ________. In VHDL, the body of a circuit is called a(n) ________. The body of a circuit contains ________ statements, which execute ________. A process contains ________ statements, which execute ________.

  • 4. To simulate hardware: At each ________, every gate in the circuit:
       1. ________  2. ________  3. ________

63 CHAPTER 1. FUNDAMENTALS OF VHDL

slide-18
SLIDE 18

1.6 VHDL Execution: Delta-Cycle Simulation 1.6.1 Simple Simulation

Hardware runs in parallel: At each infinitesimally small moment of time, in parallel, each gate:

  • 1. samples its inputs
  • 2. computes the value of its output
  • 3. drives the output

[Circuit and waveform: signals a, b, c, d, e with events at 0ns, 10ns, 12ns, 15ns]

1.6. VHDL EXECUTION: DELTA-CYCLE SIMULATION 64

slide-19
SLIDE 19

65 CHAPTER 1. FUNDAMENTALS OF VHDL

1.6.2 Temporal Granularities of Simulation

register-transfer-level

  • smallest unit of time is a clock cycle
  • combinational logic has zero delay
  • flip-flops have a delay of one clock cycle

timing simulation

  • smallest unit of time is a nano, pico, or femtosecond
  • combinational logic and wires have delay as computed by timing analysis tools
  • flip-flops have setup, hold, and clock-to-Q timing parameters

delta cycles

  • units of time are artifacts of VHDL semantics and simulation software
  • simulation cycles, delta cycles, and simulation steps are infinitesimally small amounts of time
  • VHDL semantics are defined in terms of these concepts

1.6.3 Zero-Delay Simulation

Register-transfer-level and delta-cycle simulation are both examples of zero-delay simulation. There are two fundamental rules for zero-delay simulation:

  • 1. Events appear to propagate through combinational circuitry instantaneously.
  • 2. All of the gates appear to operate in parallel.

66 CHAPTER 1. FUNDAMENTALS OF VHDL 1.6.4 Intuition Behind Delta-Cycle Simulation 67

1.6.4 Intuition Behind Delta-Cycle Simulation 1.6.4.1 Introduction to Delta-Cycle Simulation

  • To make it appear that events propagate instantaneously through combinational circuitry, VHDL introduces the delta cycle:

    – an infinitesimally small artificial unit of time
    – in each delta cycle, in parallel, every gate in the circuit:
      1. samples its input signals
      2. computes its result value
      3. drives the result value on its output signal

  • To make it appear that gates operate in parallel, VHDL introduces the projected assignment: the effect of simulating a gate remains invisible until the beginning of the next delta cycle.

1.6.4.2 Intuitive Rules for Delta-Cycle Simulation

  • 1. Simulate a gate if any of its inputs changed. If no input changed, then the current value of the output is correct and the output can stay at the same value.

  • 2. Each gate is simulated at most once per delta cycle.

  • 3. When a gate is executed, the projected (i.e., new) value of the output remains invisible until the beginning of the next delta cycle.

  • 4. Increment time when there is no need for another delta cycle, i.e., no gate had an input change value in the current delta cycle.

1.6.4 Intuition Behind Delta-Cycle Simulation 68

slide-20
SLIDE 20

69 CHAPTER 1. FUNDAMENTALS OF VHDL

1.6.4.3 Example of Delta: Buffers

This section reserved for your reading pleasure

1.6.4.4 Example of Delta and Proj: Buffers

[Waveforms comparing simple simulation and delta-cycle simulation with projected values for the two-buffer circuit a → b → c]

Abbreviated code:

    proc (a) b <= a; end;
    proc (b) c <= b; end;

Hardware: two buffers in series, a → b → c.

1.6.4.5 Example of Proj Asn: Flip-Flops

This section reserved for your reading pleasure

70 CHAPTER 1. FUNDAMENTALS OF VHDL

slide-21
SLIDE 21

1.6.4.6 Example of Delta and Proj: Comb Loop

Truly parallel simulation:

  • multiple gates/processes execute at the same time
  • therefore no need for projected assignment.

[Snapshots of the circuit (signals a, b, c, d) simulated from 0ns through 1ns and subsequent δ cycles]

1.6.4 Intuition Behind Delta-Cycle Simulation 71

slide-22
SLIDE 22

72 CHAPTER 1. FUNDAMENTALS OF VHDL

Correct Simulation

  • Processes execute one at a time.
  • Projected assignments become visible at the beginning of the next delta cycle.

[Six snapshots of the circuit (signals a, b, c, d) showing values propagating over successive delta cycles]

slide-23
SLIDE 23

Different Execution Orders

The order in which we execute the processes does not affect the behaviour. Execution order: b, c, d

[Snapshots: values propagating in order b, c, d]

Execution order: b, d, c

[Snapshots: values propagating in order b, d, c]

73 CHAPTER 1. FUNDAMENTALS OF VHDL

slide-24
SLIDE 24

1.6.4 Intuition Behind Delta-Cycle Simulation 74

Buggy Simulation

  • Processes execute one at a time.
  • Projected assignments become visible immediately.

Execution order: b, c, d

[Snapshots: buggy propagation for order b, c, d]

Execution order: b, d, c

[Snapshots: buggy propagation for order b, d, c]

slide-25
SLIDE 25

Analysis

Which values does c see?

         Correct   Buggy
    b    ________  ________
    d    ________  ________

1.6.4 Intuition Behind Delta-Cycle Simulation 75

slide-26
SLIDE 26

76 CHAPTER 1. FUNDAMENTALS OF VHDL

Re-do with Waveforms

Execution order: b, c, d

[Waveform: a, b, c, d from 0ns through 1ns and δ-cycles, annotated with S/C/D simulation steps]

Execution order: b, d, c

[Waveform: same format for order b, d, c]

slide-27
SLIDE 27

Buggy Simulation

[Waveforms: buggy simulation from 0ns through 1ns and δ-cycles, with S/C/D simulation-step annotations, for execution orders b, c, d and b, d, c]

77 CHAPTER 1. FUNDAMENTALS OF VHDL

slide-28
SLIDE 28

1.6.5 VHDL Delta-Cycle Simulation 78

1.6.5 VHDL Delta-Cycle Simulation

The algorithm presented here is a simplification of the actual algorithm in the VHDL Standard. This algorithm does not support:

  • delayed assignments; for example: a <= b after 2 ns;
  • resolution, which is where multiple processes write to the same signal (usually a mistake, but useful for tri-state busses)

1.6.5.1 Informal Description of Algorithm

  • Processes have three modes:

    Resumed:   The process has work to do and is waiting its turn to execute.
    Executing: The process is running.
    Suspended: The process is idle and has no work to do.

  • A simulation run is initialization followed by a sequence of simulation rounds
  • Initialization:

    – Each process starts off resumed.
    – Each signal starts off with its default value ('U' for std_logic).

  • In each simulation round:

    – Increment time.
    – Resume all processes that are waiting for the current time.
    – A simulation round is a sequence of simulation cycles.

  • In each simulation cycle:

    – Copy projected values of signals to current values.
    – Resume processes based on sensitivity lists and wait conditions.
    – Execute each resumed process.
    – If no projected assignment changed the value of a signal, then increment time and start the next simulation round.

1.6.5 VHDL Delta-Cycle Simulation 79 80 CHAPTER 1. FUNDAMENTALS OF VHDL

1.6.5.2 Example: VHDL Sim for Buffers

This section reserved for your reading pleasure

1.6.5.3 Definitions and Algorithm Notes on Simulation Algorithm

  • At a wait statement, the process will suspend even if the condition is true in the current simulation cycle. The process will resume the next time that a signal in the condition changes and the condition is true.

  • If we execute multiple assignments to the same signal in the same process in the same simulation cycle, only the last assignment actually takes effect; all but the last assignment are ignored.

  • In a simulation round, the first simulation cycle is not a delta cycle.
  • The mode of a process is determined implicitly by keeping track of the set of processes that are resumed (the resume set) and the process(es) that is(are) executing. All other processes are suspended.

81 CHAPTER 1. FUNDAMENTALS OF VHDL

slide-29
SLIDE 29

1.6.5 VHDL Delta-Cycle Simulation 82

VHDL Simulation Definitions

Definition simulation step: executing one sequential assignment or process mode change.
Definition simulation cycle: the operations that occur in one iteration of the simulation algorithm.
Definition delta cycle: a simulation cycle where time did not advance at the beginning of the cycle.
Definition simulation round: a sequence of simulation cycles that all have the same simulation time.

More Formal Description of Algorithm

This section reserved for your reading pleasure

1.6.5.4 Example: Delta-Cycle for Flip-Flops

This section reserved for your reading pleasure

1.6.5 VHDL Delta-Cycle Simulation 83 84 CHAPTER 1. FUNDAMENTALS OF VHDL

1.6.5.5 Ex: VHDL Sim of Comb Loop

[Circuit schematic: signals a, b, c, d]

proc_a : process
begin
  a <= '0';
  wait for 1 ns;
  a <= '1';
  wait;
end process;

proc_b : process (a)
begin
  b <= not( a );
end process;

proc_c : process (a, b, d)
begin
  c <= not( a ) or b or d;
end process;

proc_d : process (a, c)
begin
  d <= a and c;
end process;

slide-30
SLIDE 30

[Simulation chart: rows for Time, Sim rounds, Sim cycles, processes proc_a through proc_d, and signals a, b, c, d, starting at 0ns]

1.6.5 VHDL Delta-Cycle Simulation 85

slide-31
SLIDE 31

86 CHAPTER 1. FUNDAMENTALS OF VHDL

1.6.5.6 Rules and Observations for Drawing Delta-Cycle Simulations

The VHDL Language Reference Manual gives only a textual description of the VHDL semantics. The conventions for drawing the waveforms are just our own.

  • Each column is a simulation step.
  • In a simulation step, either exactly one process changes mode or exactly one signal changes value, except in the first two simulation steps of each simulation cycle, when multiple current values may be updated and multiple processes may resume.

  • If a projected assignment assigns the same value as the signal's current projected value, the projected assignment must still be shown, because this assignment will force another simulation cycle in the current simulation round.

  • If a signal's visible value is updated with the same value as it currently has, this assignment is not shown, because it will not trigger any sensitivity lists.

  • Assignments to signals may be denoted by either the number/letter of the new value or one of the edge symbols:

[Legend: edge symbols showing a transition from the old value to the new value, e.g. U → 1]

Some observations about delta-cycle simulation waveforms that can be helpful in checking that a simulation is correct:

  • In the first simulation step of the first simulation cycle of a simulation round (i.e., the first simulation step of a simulation round), at least one process will resume. This is in contrast to the first simulation step of all other simulation cycles, where current values of signals are updated with projected values.

  • At the end of a simulation cycle all processes are suspended.

  • In the last simulation cycle of a simulation round, either no signals change value, or any signal that changes value is not in the sensitivity list of any process.

87 CHAPTER 1. FUNDAMENTALS OF VHDL 1.6.6 External Inputs and Flip-Flops 88

1.6.6 External Inputs and Flip-Flops

Question: Do the signals b1 and b2 have the same behaviour from 10–20 ns?

architecture mathilde of sauve is
  signal clk, a1, a2, b1, b2 : std_logic;
begin
  process begin
    clk <= '0'; wait for 10 ns;
    clk <= '1'; wait for 10 ns;
  end process;

  process begin
    wait for 10 ns;
    a1 <= '1';
  end process;

  process begin
    wait until rising_edge(clk);
    a2 <= '1';
  end process;

  process begin
    wait until rising_edge( clk );
    b1 <= a1;
    b2 <= a2;
  end process;
end architecture;

Review: Delta-Cycle Simulation

A delta-cycle is a ________ at the beginning of which ________.

The two illusions of zero-delay simulation:
  1. ________ propagate ________
  2. ________ operate ________

VHDL achieves the illusions by:
  1. ________
  2. ________

1.6.6 External Inputs and Flip-Flops 89

slide-32
SLIDE 32

90 CHAPTER 1. FUNDAMENTALS OF VHDL

1.7 Register-Transfer-Level Simulation 1.7.1 Overview

  • Much simpler than delta cycle
  • Columns are real time: clock cycles, nanoseconds, etc.
  • Can simulate both synthesizable and unsynthesizable code
  • Cannot simulate combinational loops
  • Same values as delta-cycle at end of simulation round

process begin
  a <= '0';
  wait for 10 ns;
  a <= '1';
  ...
end process;

process begin
  b <= '0';
  wait for 10 ns;
  b <= a;
  ...
end process;

Question: In this code, what value should b have at 10 ns: does it read the new value of a or the old value?

91 CHAPTER 1. FUNDAMENTALS OF VHDL 1.7.2 Technique for Register-Transfer Level Simulation 92

1.7.2 Technique for Register-Transfer Level Simulation

  • 1. Pre-processing:

    (a) Separate processes into timed, clocked, and combinational.
    (b) Decompose each combinational process into separate processes with one target signal per process.
    (c) Sort combinational processes into topological order based on dependencies.

  • 2. For each moment of real time:

    (a) Run timed processes in any order, reading old values of signals.
    (b) Run clocked processes in any order, reading new values of timed signals and old values of registered signals.
    (c) Run combinational processes in topological order, reading new values of signals.

1.7.3 Examples of RTL Simulation 1.7.3.1 RTL Simulation Example 1

We revisit an earlier example from delta-cycle simulation, but change the code slightly and do register-transfer-level simulation.

proc1: process (a, b, c)
begin
  d <= NOT c;
  c <= a AND b;
end process;

proc2: process (b, d)
begin
  e <= b AND d;
end process;

proc3: process
begin
  a <= '1';
  b <= '0';
  wait for 3 ns;
  b <= '1';
  wait for 99 ns;
end process;

1.7.3 Examples of RTL Simulation 93

slide-33
SLIDE 33

94 CHAPTER 1. FUNDAMENTALS OF VHDL

Decompose and sort comb procs

Decomposed:

    proc1d: process (c)    begin d <= NOT c;   end process;
    proc1c: process (a, b) begin c <= a AND b; end process;
    proc2:  process (b, d) begin e <= b AND d; end process;

Sorted:

    proc1c: process (a, b) begin c <= a and b; end process;
    proc1d: process (c)    begin d <= not c;   end process;
    proc2:  process (b, d) begin e <= b and d; end process;

Waveforms

[Waveform: a, b, c, d, e, all starting at 'U', with events at 0ns, 1ns, 2ns, 3ns, and 102ns]

95 CHAPTER 1. FUNDAMENTALS OF VHDL 1.7.3 Examples of RTL Simulation 96

Combinational Loops

Why is RTL-simulation unable to support combinational loops?

[Circuit: an xor gate and an inverter forming a combinational loop through b and c]

process (a, c)
begin
  b <= a xor c;
end process;

process (b)
begin
  c <= not b;
end process;

Decomposing if-then-else Clauses

This example illustrates how to decompose a combinational process that contains assignments to multiple signals and if-then-else clauses.

Original:

    process (a, b, c)
    begin
      if a = '1' then
        y <= b;
        z <= c;
      else
        y <= not b;
        z <= not c;
      end if;
    end process;

Decomposed:
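The notes leave the decomposition as an exercise; the sketch below is one way to do it, following the rule of one target signal per process. The if-then-else structure is duplicated into each new process, and each sensitivity list shrinks to just the signals that process reads:

```vhdl
-- One target signal per process: y depends only on a and b
process (a, b)
begin
  if a = '1' then
    y <= b;
  else
    y <= not b;
  end if;
end process;

-- z depends only on a and c
process (a, c)
begin
  if a = '1' then
    z <= c;
  else
    z <= not c;
  end if;
end process;
```

After decomposition, the processes can be topologically sorted independently, which is what RTL simulation requires.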

1.7.3 Examples of RTL Simulation 97

slide-34
SLIDE 34

98 CHAPTER 1. FUNDAMENTALS OF VHDL

Review: RTL Simulation

  • 1. Algorithm for RTL simulation:

    Preprocessing:
      (a) Separate processes into two groups: ________ and ________
      (b) ________ the processes so that each process ________
      (c) Sort the ________ processes into ________ order

    Running: for each moment in time or clock cycle:
      (a) Run the ________ processes in ________ order. Processes read the ________ value of signals.
      (b) Run the ________ processes in ________ order. Processes read the ________ value of signals.

  • 2. What are the defining characteristics of zero-delay simulation?
      (a) ________ operate ________
      (b) ________ propagate ________

  • 3. Comparing delta-cycle and RTL simulation:

                     Illusion #1    Illusion #2
      Delta cycle    ________       ________
      RTL            ________       ________

99 CHAPTER 1. FUNDAMENTALS OF VHDL 1.8. SIMPLE RTL SIMULATION IN SOFTWARE 100

1.8 Simple RTL Simulation in Software

This is an advanced section. It is not covered in the course and will not be tested.

1.9 Variables in VHDL

This is an advanced section. It is not covered in the course and will not be tested.

1.10 Delta-Cycle Simulation with Delays

This is an advanced section. It is not covered in the course and will not be tested.

1.11 VHDL and Hardware Building Blocks 1.11.1 Basic Building Blocks

This section reserved for your reading pleasure

1.11.2 Deprecated Building Blocks for RTL

This section reserved for your reading pleasure

1.11. VHDL AND HARDWARE BUILDING BLOCKS 101

slide-35
SLIDE 35

102 CHAPTER 1. FUNDAMENTALS OF VHDL

1.11.3 Hardware and Code for Flops 1.11.3.1 Flops with Waits and Ifs

This section reserved for your reading pleasure

1.11.3.2 Flops with Synchronous Reset

process (clk)
begin
  if rising_edge(clk) then
    if (reset = '1') then
      q <= '0';
    else
      q <= d;
    end if;
  end if;
end process;

103 CHAPTER 1. FUNDAMENTALS OF VHDL 1.11.3 Hardware and Code for Flops 104

Flop with Synchronous Reset: Wait-Style

process
begin
  wait until rising_edge(clk);
  if reset = '1' then
    q <= '0';
  else
    q <= d;
  end if;
end process;

Variation on a Floppy Theme

Question: What is this?

process (clk, reset)
begin
  if reset = '1' then
    q <= '0';
  else
    if rising_edge(clk) then
      q <= d;
    end if;
  end if;
end process;

1.11.3 Hardware and Code for Flops 105

slide-36
SLIDE 36

106 CHAPTER 1. FUNDAMENTALS OF VHDL

Flop with Chip-Enable

process (clk)
begin
  if rising_edge(clk) then
    if ce = '1' then
      q <= d;
    end if;
  end if;
end process;

Wait-style flop with chip-enable included in course notes
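The course notes contain the official wait-style version; as a hedged sketch, the same chip-enable flop written with a wait statement would look like this:

```vhdl
-- Wait-style flop with chip-enable (sketch; see course notes for the official version)
process
begin
  wait until rising_edge(clk);
  if ce = '1' then
    q <= d;      -- q holds its old value when ce = '0'
  end if;
end process;
```

Because q is not assigned when ce = '0', the synthesizer uses the chip-enable pin of the flip-flop rather than inferring a latch; this is the clocked-process exception described earlier.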

Q: Flop with a Mux on the Input?

[Circuit: 2-to-1 mux (inputs d0, d1, select sel) feeding a D flip-flop clocked by clk, output q]

107 CHAPTER 1. FUNDAMENTALS OF VHDL 1.11.3 Hardware and Code for Flops 108

Q: Flops with a Mux on the Output?

[Circuits: D flip-flops clocked by clk, with a sel-controlled mux on the flip-flop outputs]

Behavioural Comparison

[Circuit 1: mux (d0, d1, sel) on the input of a flip-flop, output q. Circuit 2: flip-flops with a mux (sel) on their outputs, output q.]

Question: For the two circuits above, does q have the same behaviour in both circuits?

Mux on input:  [blank waveform: clk, sel, d0, d1, q]

Mux on output: [blank waveform: clk, sel, d0, d1, q]

slide-37
SLIDE 37

110 CHAPTER 1. FUNDAMENTALS OF VHDL

1.11.3.3 Flop with Chip-Enable and Mux on Input

Hint: Chip Enable

process (clk)
begin
  if rising_edge(clk) then
    if ce = '1' then
      q <= d;
    end if;
  end if;
end process;
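Combining the hint with the mux-on-input circuit gives the sketch below. The signal names d0, d1, and sel are assumed from the earlier mux figures; the structure simply nests the mux's if inside the chip-enable's if:

```vhdl
-- Flop with chip-enable and a 2-to-1 mux on the input (sketch)
process (clk)
begin
  if rising_edge(clk) then
    if ce = '1' then        -- chip-enable: q changes only when ce = '1'
      if sel = '1' then     -- input mux selects between d1 and d0
        q <= d1;
      else
        q <= d0;
      end if;
    end if;
  end if;
end process;
```

The unassigned path (ce = '0') maps onto the flip-flop's chip-enable pin, so no latch is inferred.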

1.11.3.4 Flops with Chip-Enable, Muxes, and Reset

This section reserved for your reading pleasure

1.11.4 Example Coding Styles

This section reserved for your reading pleasure

111 CHAPTER 1. FUNDAMENTALS OF VHDL 1.12. SYNTHESIZABLE VS NON-SYNTHESIZABLE CODE 112

1.12 Synthesizable vs Non-Synthesizable Code

For us to consider a VHDL program synthesizable, all of the conditions below must be satisfied:

  • the program must be theoretically implementable in hardware
  • the hardware that is produced must be consistent with the structure of the source code
  • the source code must be portable across a wide range of synthesis tools, in that the synthesis tools all produce correct hardware

Synthesis is done by matching VHDL code against templates or patterns. It's important to use idioms that your synthesis tools recognize. Think like hardware: when you write VHDL, you should know what hardware you expect to be produced by the synthesizer.

1.12.1 Wait For

Wait for a length of time (UNSYNTHESIZABLE):

    wait for 10 ns;

Reason: Delays through circuits are dependent upon both the circuit and its operating environment, particularly supply voltage and temperature. For example, imagine trying to build an AND gate that will have exactly a 2ns delay in all environments.
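Unsynthesizable does not mean useless: wait for is the standard way to generate stimulus in simulation-only code. A minimal testbench clock generator, mirroring the clock process used elsewhere in these notes:

```vhdl
-- Testbench-only code: 'wait for' is fine here because it is never synthesized
clk_gen : process
begin
  clk <= '0';
  wait for 10 ns;   -- low half of a 20 ns period
  clk <= '1';
  wait for 10 ns;   -- high half
end process;        -- process restarts, so the clock runs forever
```

The rule is about the design under test, not the surrounding simulation harness.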

1.12.2 Initial Values

Initial values on signals (UNSYNTHESIZABLE):

    signal bad_signal : std_logic := '0';

Reason: At powerup, the values on signals are random (except for some FPGAs).

1.12.1 Wait For 113

slide-38
SLIDE 38

114 CHAPTER 1. FUNDAMENTALS OF VHDL

1.12.3 Assignments before Wait Statement

If a synthesizable clocked process has a wait statement, then the process must begin with a wait statement.

Unsynthesizable:

    process
    begin
      c <= a;
      d <= b;
      wait until rising_edge(clk);
    end process;

Synthesizable:

    process
    begin
      wait until rising_edge(clk);
      c <= a;
      d <= b;
    end process;

Reason: Cannot synthesize reasonable hardware that has the correct behaviour. In simulation, any assignments before the first wait statement will be executed in the first delta-cycle. In the synthesized circuit, the signals will be outputs of flip-flops and will first be assigned values after the first rising edge.

1.12.4 “if rising edge” and “wait” in Same Process

An if rising_edge statement and a wait statement in the same process (UNSYNTHESIZABLE):

    process
    begin
      if rising_edge(clk) then
        q0 <= d0;
      end if;
      wait until rising_edge(clk);
      q0 <= d1;
    end process;

Reason: The idioms for synthesis tools generally expect just a single type of flop-generating statement in each process.

115 CHAPTER 1. FUNDAMENTALS OF VHDL 1.12.5 “if rising edge” with “else” Clause 116

1.12.5 “if rising edge” with “else” Clause

The if statement has a rising_edge condition and an else clause (UNSYNTHESIZABLE):

    process (clk)
    begin
      if rising_edge(clk) then
        q0 <= d0;
      else
        q0 <= d1;
      end if;
    end process;

Reason: The idioms for the synthesis tools expect a signal to be either registered or combinational, not both.

1.12.6 While Loop with Dynamic Condition and Combinational Body

A while loop where the condition is dynamic (depends upon a signal value) and the body is combinational is unsynthesizable. The loop below is unsynthesizable:

    process (a, b, c)
    begin
      while a = '1' loop
        z <= b and c;
      end loop;
    end process;

This loop is designed to be very small, but illustrates the problem. The loop itself is nonsensical.

1.12.6 While Loop with Dynamic Condition and Combinational Body 117

slide-39
SLIDE 39

118 CHAPTER 1. FUNDAMENTALS OF VHDL

For Loop with Combinational Body

A for-loop with a combinational body is synthesizable, because the loop condition can be evaluated statically (at compile/elaboration time). The loop below is synthesizable:

    process ( b, c )
    begin
      for i in 0 to 3 loop
        z(i) <= b(i) and c(i);
      end loop;
    end process;

An equivalent while loop would require variables, which are an advanced topic (section 1.9). While loops with dynamic conditions and clocked bodies are synthesizable, but are an example of an implicit state machine and are an advanced topic.

1.13 Guidelines for Desirable Hardware

Code that is synthesizable, but undesirable (i.e., bad coding practices):

  • latches
  • combinational loops
  • multiple drivers for a signal
  • asynchronous resets
  • using a data signal as a clock
  • using a clock signal as data

To prevent undesirable hardware, some synthesis tools will flag some of these problems as "unsynthesizable".

119 CHAPTER 1. FUNDAMENTALS OF VHDL 1.13. GUIDELINES FOR DESIRABLE HARDWARE 120

Know Your Hardware

The most important guideline is: know what you want the synthesis tool to build for you.

  • For every signal in your design, know whether it should be a flip-flop or combinational. Check the output of the synthesis tool to see if the flip-flops in your circuit match your expectations, and to check that you do not have any latches in your design.

  • If you cannot predict what hardware the synthesis tool will generate, then you probably will be unhappy with the result of synthesis.

1.13.1 Latches

Difference between a flip-flop and a latch:

    flip-flop   Edge sensitive: output only changes on the rising (or falling) edge of the clock.
    latch       Level sensitive: output changes whenever the clock is high (or low).

A common implementation of a flip-flop is a pair of latches (master/slave flop). Latches are sometimes called "transparent latches", because they are transparent (input directly connected to output) when the clock is high. The clock to a latch is sometimes called the "enable" line. There is more information in the course notes on timing analysis for storage devices (section 8.3).

1.13.1 Latches 121

slide-40
SLIDE 40

122 CHAPTER 1. FUNDAMENTALS OF VHDL

Latch: Combinational if-then without else

process (a, b)
begin
  if (a = '1') then
    c <= b;
  end if;
end process;

  • For a combinational process, every signal that is assigned to must be assigned to in every branch of if-then and case statements.

reason: If a signal is not assigned a value in a path through a combinational process, then that signal will be a latch.

note: For a clocked process, if a signal is not assigned a value in a clock cycle, then the flip-flop for that signal will have a chip-enable pin. Chip-enable pins are fine; they are available on flip-flops in essentially every cell library.
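Applying the guideline to the if-without-else process above means giving c a value on the a = '0' path as well. The value chosen for the else branch below is an assumption for illustration; any explicit assignment removes the latch:

```vhdl
-- Latch-free version: c is assigned in every branch
process (a, b)
begin
  if (a = '1') then
    c <= b;
  else
    c <= '0';   -- assumed default value; the point is that c gets *some* value
  end if;
end process;
```

With both branches covered, the synthesizer produces a plain mux instead of a latch.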

Signals Missing from Sensitivity List

process (a) begin
  c <= a and b;
end process;

  • For a combinational process, the sensitivity list should contain all of the signals that are read in the process.

reason Gives consistent results across different tools. Many synthesis tools will implicitly include all signals that a process reads in its sensitivity list. This differs from the VHDL Standard. A synthesis tool that adheres to the standard will either generate an error or will create hardware with latches or flops clocked by data signals if not all signals that are read from are included in the sensitivity list.

exception In a clocked process using an if rising_edge statement, it is acceptable to have only the clock in the sensitivity list.
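A corrected sketch of the process above simply adds b to the sensitivity list, so that simulation matches the synthesized hardware:

```vhdl
process (a, b) begin
  c <= a and b;
end process;
```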


1.13.2 Combinational Loops

A combinational loop is a cyclic path of dependencies through one or more combinational processes.

process (a, b, c) begin
  if a = '0' then
    d <= b;
  else
    d <= c;
  end if;
end process;

process (d, e) begin
  b <= d and e;
end process;


  • If you need a signal to be dependent on itself, you must include a register somewhere in the cyclic path.

  • Some FPGA synthesis tools consider a combinational loop to be unsynthesizable. We consider it to be synthesizable but bad hardware, because the hardware is obvious and is obviously bad.
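A sketch of breaking the loop with a register, assuming a clock signal clk is available:

```vhdl
-- The flop breaks the cyclic dependency: b now depends on the
-- value of d from the previous clock cycle.
process begin
  wait until rising_edge(clk);
  b <= d and e;
end process;
```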

1.13.3 Multiple Drivers

z <= a and b;
z <= c;


  • Each signal should be assigned to in only one process. This is often called the “single assignment rule”.

reason Multiple processes driving the same signal is the same as having multiple gates driving the same wire. This can cause contention, tri-state values, and other bad things.

SLIDE 41

Multiple Drivers Example

The example below shows how a “software style” structure that puts the reset code in one process will cause multiple drivers for the signals y and z.

process begin
  wait until rising_edge(clk);
  if reset = '1' then
    y <= '0';
    z <= '0';
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if reset = '0' then
    if a = '1' then
      z <= b and c;
    else
      z <= d;
    end if;
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if reset = '0' then
    if b = '1' then
      y <= c;
    end if;
  end if;
end process;
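A sketch of the fix for z: fold the reset branch into the same process as the normal behaviour, so that z has exactly one driver (y would be fixed the same way):

```vhdl
process begin
  wait until rising_edge(clk);
  if reset = '1' then
    z <= '0';
  elsif a = '1' then
    z <= b and c;
  else
    z <= d;
  end if;
end process;
```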

1.13.4 Asynchronous Reset

In an asynchronous reset, the test for reset occurs outside of the test for the clock edge.

process (reset, clk) begin
  if (reset = '1') then
    q <= '0';
  elsif rising_edge(clk) then
    q <= d;
  end if;
end process;

  • All reset signals should be synchronous.

reason If a reset occurs very close to a clock edge, some parts of the circuit might be reset in one clock cycle and some in the subsequent clock cycle. This can lead the circuit to be out of sync as it goes through the reset sequence, potentially causing erroneous internal state and output values.
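A synchronous-reset sketch of the flop above: the reset test moves inside the clock-edge test, so every flop resets in the same clock cycle:

```vhdl
process begin
  wait until rising_edge(clk);
  if reset = '1' then
    q <= '0';
  else
    q <= d;
  end if;
end process;
```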


1.13.5 Using a Data Signal as a Clock

process begin
  wait until rising_edge(clk);
  count <= count + 1;
end process;

process begin
  wait until rising_edge( count(5) );
  b <= a;
end process;


  • Data signals should be used only as data.

reason All data assignments should be synchronized to a clock. This ensures that the timing analysis tool can accurately determine the maximum clock speed. Using a data signal as a clock can lead to unpredictable delays between different assignments, which makes it infeasible to do an accurate timing analysis.
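One conventional repair, sketched under the assumption that sampling while count(5) is high is acceptable (a real design would typically detect the rising edge of count(5)): keep clk as the only clock and use count(5) as data, i.e. an enable:

```vhdl
process begin
  wait until rising_edge(clk);
  if count(5) = '1' then   -- count(5) used as an enable, not a clock
    b <= a;
  end if;
end process;
```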

1.13.6 Using a Clock Signal as Data

process begin
  wait until rising_edge(clk);
  count <= count + 1;
end process;

b <= a and clk;

  • Clock signals should be used only as clocks.

reason Clock signals have two defined values in a clock cycle and transition in the middle of the clock cycle. At the register-transfer level, each signal has exactly one value in a clock cycle and signals transition between values only at the boundary between clock cycles.

SLIDE 42

1.14 Bad VHDL Coding

This section lists some coding practices to avoid in VHDL unless you have a very good reason.

1.14.1 Tri-State Buffers and Signals ‘Z’ as a Signal Value

b <= a0 when sel = '0' else 'Z';
b <= a1 when sel = '1' else 'Z';

  • Use multiplexers, not tri-state buffers.

reason Multiplexers are more robust than tri-state buffers, because tri-state buffers rely on analog effects such as drive-strength and voltages that are between ’0’ and ’1’. Multiplexers require more area than tri-state buffers, but for the size of most busses, the advantage in a more robust design is worth the cost in extra area.
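A multiplexer version of the tri-state example above, as a sketch:

```vhdl
-- A 2:1 mux replaces the two tri-state drivers of b.
b <= a0 when sel = '0' else a1;
```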


Inout and Buffer Port Modes

entity bad is
  port (
    io_bad  : inout  std_logic;
    buf_bad : buffer std_logic
  );
end entity;

  • Use in or out; do not use inout or buffer.

reason inout and buffer signals are tri-state.

note If you have an output signal that you also want to read from, you might be tempted to declare the mode of the signal to be inout. A better solution is to create a new, internal, signal that you both read from and write to. Then, your output signal can just read from the internal signal.
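A sketch of the internal-signal idiom, assuming an entity with inputs a and b and outputs y and z (all names illustrative):

```vhdl
architecture main of example is
  signal z_int : std_logic;  -- internal copy of the output
begin
  z_int <= a and b;    -- drive the internal signal
  y     <= not z_int;  -- read it freely inside the architecture
  z     <= z_int;      -- the out port simply mirrors it
end architecture;
```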

1.14.2 Variables in Processes

process
  variable bad : std_logic;
begin
  wait until rising_edge(clk);
  bad := not a;
  d <= bad and b;
  e <= bad or c;
end process;

  • In a process, use signals; do not use variables.

reason The intention of the creators of VHDL was for signals to be wires and variables to be just for simulation. Some synthesis tools allow some uses of variables, but when using variables, it is easy to create a design that works in simulation but not in real hardware. (section 1.9)
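A signal-based sketch of the same computation: bad becomes a combinational signal driven by a concurrent assignment, which preserves the behaviour of the immediate variable update:

```vhdl
bad <= not a;  -- a wire, not a variable

process begin
  wait until rising_edge(clk);
  d <= bad and b;
  e <= bad or c;
end process;
```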

SLIDE 43

1.14.3 Bits and Booleans as Signals

signal bad1 : bit;
signal bad2 : boolean;

  • Use std_logic signals, do not use bit or Boolean signals.

reason std_logic is the most commonly used signal type across synthesis tools and simulation tools.

Review: Synthesizable, Good, and Bad VHDL

For each code fragment below, answer whether it is synthesizable. If the code is synthesizable, answer whether it follows good coding practices for synthesizable hardware.

1. process (clk) begin
     if rising_edge(clk) then
       q <= a;
     else
       q <= b;
     end if;
   end process;
   Synth? Yes / No    Good? Yes / No

2. process (clk) begin
     if rising_edge(clk) then
       q1 <= d1;
     end if;
     if rising_edge(clk) then
       q2 <= d2;
     end if;
   end process;
   Synth? Yes / No    Good? Yes / No


3. process (a, b) begin
     if a = '1' then
       q <= b;
     end if;
   end process;
   Synth? Yes / No    Good? Yes / No

4. process (a, b) begin
     if a = '1' then
       q <= b;
     else
       q <= not q;
     end if;
   end process;
   Synth? Yes / No    Good? Yes / No

Chapter 2 Additional Features of VHDL

SLIDE 44

2.1 Literals 2.1.1 Numeric Literals

Description   Type     Example 1     Example 2
Decimal       Integer  17            1023
Decimal       Real     17.0          1023.1
Hexadecimal   Integer  16#FF#        16#2F190#
Hexadecimal   Real     16#FF.F#      16#2F1.90#
Binary        Integer  2#1101#       2#011101#
Binary        Real     2#1101.111#   2#0111.01#
Exponent      Integer  17E+3         2#111#E3
Exponent      Real     17.1E+3       2#11.1#E3
Underscore    Integer  123_45_67     16#FF_3A#

2.1.2 Bit-String Literals

Binary       B"1101010"   B"1101_1010"
Octal        O"3470100"   O"45_23"
Hexadecimal  X"FF2300"    X"Ff3dbF_23"

Note: Array literals are called “aggregates” and are described in section 2.2.2.


2.2 Arrays and Vectors 2.2.1 Declarations

VHDL arrays have:

  • direction (to or downto)
  • upper bound
  • lower bound

signal a : std_logic_vector( 3 downto 0 );
signal b : std_logic_vector( 0 to 3 );
signal c : std_logic_vector( 1 to 4 );

Constant Arrays

To define a constant array, first declare an array type, then give the value as an aggregate:

type int4 is array ( 0 to 3 ) of integer;
constant a : int4 := ( 10, 17, -31, 23 );
constant b : int4 := ( 0 => 10, 1 => 17, 2 => -31, 3 => 23 );
constant c : int4 := ( 0 => 10, 1 => 17, others => 23 );

SLIDE 45

2.2.2 Indexing, Slicing, Concatenation, Aggregates Operations

Indexing an array to reference a single element:                      a(0)
A slice or “discrete subrange” of an array:                           a( 3 downto 2 )
Concatenating an element onto an array, or concatenating two arrays:  '1' & a     b & a
Array literals or “aggregates”:                                       ( '0', '0', '1' )    ( a(0), b(2), a(3) )
Aggregate with named indices:                                         ( 0=>'0', 2=>'X', 1=>'U' )
Aggregate with the “others” keyword:                                  ( 0=>'0', 3=>'1', others=>'X' )

Assignments

  1. The ranges on both sides of the assignment must be the same width.
  2. The direction (downto or to) of each slice must match the direction of the signal declaration.
  3. The direction of the target and expression may be different.


Assignments (cont’d)

Declarations:
  a , b  : std_logic_vector(15 downto 0);
  ax, bx : std_logic_vector(0 to 15);

Legal code:
  b (3 downto 0)  <= a(15 downto 12);
  bx(0 to 3)      <= a(15 downto 12);
  ( b(3) , b(4) ) <= a(13 downto 12);
  ( bx(4), b(4) ) <= a(13 downto 12);

Illegal code:
  bx(0 to 3) <= a(12 to 15);               -- slice directions must match the declaration; fails for a
  c (3 downto 0) <= (a & b)( 3 downto 0);  -- may not index an expression
  b(3) & b(2) <= a(12 to 13);              -- & may not be used on the left-hand side

2.3 Arithmetic

VHDL includes all of the common arithmetic operators and relations. Use the VHDL arithmetic operators and let the synthesis tool choose the best implementation for you.

2.3.1 Arithmetic Packages

To do arithmetic with signals, use the numeric_std package. numeric_std supersedes earlier arithmetic packages, such as std_logic_arith. Use only one arithmetic package; otherwise the different definitions will clash and you can get strange error messages. We will describe arithmetic with the numeric_std package.
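The usual context clause for numeric_std:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;  -- unsigned, signed, to_integer, resize, ...
```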

SLIDE 46

2.3.2 Arithmetic Types

Arithmetic may be done on three types of expressions:

integers  Numeric values, such as 17.
unsigned  Unsigned vectors, such as signals declared as unsigned( 7 downto 0 ).
signed    Signed vectors, such as signals declared as signed( 7 downto 0 ).

The types signed and unsigned are arrays of std_logic on which you can do signed or unsigned arithmetic, as well as all of the operations that are supported by std_logic_vector.

2.3.3 Overloading of Arithmetic

The arithmetic operators +, -, and * are overloaded on signed vectors, unsigned vectors, and integers.

Declarations:
  u1, u2, u3 : unsigned( 7 downto 0 );
  s1, s2, s3 : signed( 7 downto 0 );

Target    Src1/2    Src2/1    Example
unsigned  unsigned  unsigned  u3 <= u1 + u2;    OK
unsigned  unsigned  integer   u3 <= u1 + 17;    OK
signed    signed    signed    s3 <= s1 + s2;    OK
signed    signed    integer   s3 <= s1 + -17;   OK
—         unsigned  signed    u3 <= u1 + s2;    Fail


2.3.4 Widths for Addition and Subtraction

  • Sources may have different widths
  • The target must be the same width as the widest source

Declarations:
  w1, w2, w3 : unsigned(7 downto 0);  -- wide
  n1, n2, n3 : unsigned(3 downto 0);  -- narrow

Target   Src1/2   Src2/1   Example
wide     wide     wide     w3 <= w1 + w2;   OK
wide     wide     narrow   w3 <= w1 + n2;   OK
wide     wide     int      w3 <= w1 + 17;   OK
narrow   narrow   narrow   n3 <= n1 + n2;   OK
narrow   narrow   int      n3 <= n1 + 17;   OK
narrow   wide     —        n3 <= w1 + n2;   Fail

These failures are caught at elaboration, which happens after typechecking.

Widths for Multiplication

  • The sources may be different widths.
  • The width of the result must be the sum of the widths of the sources.

Declarations:
  v4a, v4b, v4c : unsigned(  3 downto 0 );
  v8            : unsigned(  7 downto 0 );
  v12           : unsigned( 11 downto 0 );

Target    Src1/2   Src2/1   Example
8-bits    4-bits   4-bits   v8  <= v4a * v4b;   OK
12-bits   4-bits   8-bits   v12 <= v4a * v8;    OK
4-bits    4-bits   4-bits   v4c <= v4a * v4b;   Fail

SLIDE 47

2.3.5 Overloading of Comparisons

  • Comparisons are overloaded on arrays and integers.
  • If both operands are arrays, both must be of the same type.

Declarations:
  u1, u2 : unsigned( 7 downto 0 );
  s1, s2 : signed( 7 downto 0 );

Src1/2    Src2/1    Example
unsigned  unsigned  u1 >= u2   OK
unsigned  integer   u1 >= 17   OK
signed    signed    s1 >= s2   OK
signed    integer   s1 >= 17   OK
unsigned  signed    u1 >= s1   Fail

2.3.6 Widths for Comparisons

  • Sources may have different widths

Declarations:
  w1, w2 : unsigned(7 downto 0);  -- wide
  n1, n2 : unsigned(3 downto 0);  -- narrow

Src1/2   Src2/1   Example
wide     —        w1 >= n1   OK
narrow   —        n1 >= w2   OK


2.3.7 Type Conversion

If you convert between two types of the same width, then no additional hardware will be generated.

Use: typecast a signal
  unsigned ( val : std_logic_vector ) return unsigned;
  signed   ( val : std_logic_vector ) return signed;

Use: assign an integer literal to a signal
  to_unsigned( val : integer; width : natural ) return unsigned;
  to_signed  ( val : integer; width : natural ) return signed;

Use: use a signal as an index into an array
  to_integer ( val : signed   ) return integer;
  to_integer ( val : unsigned ) return integer;

Examples of Conversions

Declarations:
  u1, u2, u3    : unsigned( 7 downto 0 );
  sn1, sn2, sn3 : signed( 7 downto 0 );
  sw1, sw2, sw3 : signed( 8 downto 0 );

Examples:
  u3  <= to_unsigned( 17, 8 );                     OK
  sn3 <= to_signed( 17, 8 );                       OK
  sw3 <= signed( "0" & u1 );                       OK
  sn3 <= signed( u1 );                             Bad
  sw3 <= signed( "0" & u1 ) - signed( "0" & u2 );  OK
  sw3 <= signed( "0" & (u1 + u2) );                OK
  sw3 <= signed( "0" & (u1 - u2) );                Bad

The Bad examples above will typecheck and elaborate without any errors, but they potentially will produce incorrect results.

SLIDE 48

Resizing and Sign Extension

The function resize resizes vectors, performing sign extension if necessary, based upon the type of the argument. It is overloaded for different types of arguments.

resize( v : std_logic_vector; width : natural ) return std_logic_vector;
resize( u : unsigned;         width : natural ) return unsigned;
resize( s : signed;           width : natural ) return signed;

Declarations:
  un1, un2 : unsigned( 4 downto 0 );
  uw1, uw2 : unsigned( 7 downto 0 );
  sn1, sn2 : signed( 4 downto 0 );
  sw1, sw2 : signed( 7 downto 0 );

Examples:
  uw1 <= resize( un1, 8 );   OK
  un1 <= resize( uw1, 4 );   OK
  sw1 <= resize( sn1, 8 );   OK
  sn1 <= resize( sw1, 4 );   OK
  sw1 <= resize( un1, 8 );   Fail
  uw1 <= resize( sn1, 8 );   Fail

Type Conversion and Array Indices

To use a signal as an index into an array, you must convert the signal into an integer using the function to_integer.

Declarations:
  signal u : unsigned( 3 downto 0 );
  signal v : std_logic_vector( 3 downto 0 );
  signal a : std_logic_vector( 15 downto 0 );

Examples:
  a( to_integer(u) )               OK
  a( to_integer( unsigned(v) ) )   OK
  v(u)                             Fail
  a( unsigned(v) )                 Fail


2.3.8 Shift and Rotate Operations

Shift and rotate operations are described with three-character acronyms: shift left/right arithmetic/logical (sll, srl, sla, sra) and rotate left/right (rol, ror). The shift right arithmetic (sra) operation preserves the sign of the operand by copying the most significant bit into lower bit positions. The shift left arithmetic (sla) operation does the analogous operation, except that the least significant bit is copied.

a sra 2   -- arithmetic shift of a by 2 bits

2.3.9 Arithmetic Optimizations

Multiply by a constant power of two    wired shift left
Multiply by a power of two             logical shift left
Divide by a constant power of two      wired shift right
Divide by a power of two               logical shift right

Question: How would you implement: z <= a * 3?
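One plausible answer, as a sketch: a * 3 = a*2 + a, that is, a wired shift plus a single adder. The widths below (8-bit a, 10-bit z) and the use of numeric_std are assumptions for illustration:

```vhdl
-- a : unsigned(7 downto 0);  z : unsigned(9 downto 0)
z <= resize(a & '0', 10) + resize(a, 10);  -- (a * 2) + a
```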

SLIDE 49

2.4 Types 2.4.1 Enumerated Types

VHDL supports enumerated types: type color is (red, green, blue);

2.4.2 Defining New Array Types

When defining a new array type, the range may be left unconstrained:

type color is (red, green, blue);
type color_vector is array ( natural range <> ) of color;

We may then use the unconstrained array type as the basis for defining a constrained array subtype:

subtype few_colors is color_vector( 0 to 3 );
subtype many_colors is color_vector( 0 to 1023 );

Note the use of subtype above. It is illegal to use type to define a constrained array in terms of an unconstrained array. We can use type to define a constrained array directly:

type few_colors is array ( 0 to 3 ) of color;


Chapter 3 Overview of FPGAs

3.1 Generic FPGA Hardware

  • This section: generic FPGA with 4 inputs per lookup table.
  • Many real FPGAs have more (e.g., 6) inputs per lookup table.
  • Principles described here are applicable in general, even as details differ.

SLIDE 50

3.1.1 Generic FPGA Cell

FPGA “Cell” = “Logic Element” (LE) in Altera = “Configurable Logic Block” (CLB) in Xilinx.
“LUT” = “lookup table” = PLA (programmable logic array).

Figure: generic FPGA cell — a configurable 4:1 lookup table (LUT) drives a flip-flop (with CE, S, R, D, Q pins) and a configurable multiplexer; signals include comb_data_in, ctrl_in, carry_in, carry_out, comb_data_out, flop_data_in, and flop_data_out.

Separate Comb and Flop

Figure: the same cell drawn with the combinational logic (comb) separate from the flop.


Connect Comb and Flop

Figure: the comb output connected to the flop’s data input.

Flopped and Unflopped Outputs

Figure: both the flopped output (flop_data_out) and the unflopped output (comb_data_out) leave the cell.

SLIDE 51

3.1.2 Lookup Table

A 4:1 lookup table is usually implemented as a memory array with 16 one-bit elements: the four inputs (d, c, b, a) form a 4-bit address, and the stored bit at each address is the output z.

Figure: two example LUT contents — one programmed as z = (a AND b) OR (b AND NOT c) OR (c AND NOT d), and one programmed as z = NOT a; each is a 16-row truth table from address "0000" to "1111".

3.1.3 Interconnect for Generic FPGA Local Connections

Note: In these pictures, the space between tightly grouped wires sometimes disappears, making a group of wires appear to be a single large wire.


Local Connections (Zoom Out)

General-Purpose Wires and Carry Chains

General-purpose interconnect      configurable, slow
Carry chains and cascade chains   connect vertically adjacent cells, fast

SLIDE 52

3.1.4 Blocks of Cells for Generic FPGA

Figure: a column of cells grouped into blocks; two rows of blocks; a path connecting cells in different rows.

Connecting Through Cells

Cells that are not used for computation can be used as “wires” to shorten the length of a path between cells.


3.1.5 Special Circuitry in FPGAs

Memory Since the mid 1990s, almost all FPGAs have had special circuits for RAM and ROM. These special circuits are possible because many FPGAs are fabricated on the same processes as SRAM chips. So, the FPGAs simply contain small chunks of SRAM.

Microprocessors In 2001, some high-end FPGAs had one or more hardwired microprocessors on the same chip as programmable hardware. In 2005, the Xilinx Virtex-II Pro had 4 PowerPCs and enough programmable hardware to implement the first-generation Intel Pentium microprocessor.

Arithmetic Circuitry In 2001, FPGAs began to have hardwired circuits for multipliers and adders. Using these resources can significantly improve both the area and performance of a design.

Input / Output Some FPGAs include special circuits to increase the bandwidth of communication with the outside world.

3.2 Area Estimation for FPGAs

This section describes three methods to estimate the number of FPGA cells required to implement a circuit:

section 3.2.1  A rough estimate based simply upon the number of flip-flops and primary inputs that are in the fanin of each flip-flop or output.
section 3.2.2  A more accurate, and more complex, technique that uses a greedy algorithm to allocate as many gates as possible into the lookup table of each FPGA cell.
section 3.2.3  A technique to estimate the area for arithmetic circuits with registers.

Each cell:

  • LUT for any combinational function with up to four inputs and one output
  • Carry-in and carry-out signals used only for arithmetic carries
  • Flip-flop can be driven by LUT or separate input

SLIDE 53

3.2.1 Area for Circuit with one Target

This section gives a technique to estimate the number of FPGA cells required for a purely combinational circuit with one output.

Question: What is the maximum number of inputs for a function that can be implemented with one LUT?

Question: Number of inputs for two LUTs? Question: Three LUTs? Question: Four LUTs?

Single Target vs Multiple Targets

For a single target signal, this technique gives a lower bound on the number of LUTs needed. For multiple target signals, this technique might be an overestimate, because a single LUT can be used in the logic for multiple targets.


4:1 Mux in Two FPGA Cells

A 4:1 mux has 6 inputs, so it should fit into two FPGA cells.

Figure: a 4:1 mux built from gates, with data inputs d0–d3, select inputs sel(0) and sel(1), and output z.

But, there is no partitioning of the gates into two groups such that each group has at most 4 inputs and 1 output.


But, with some clever tricks, a 4:1 mux can be implemented in two FPGA cells (figure: the gates are restructured with intermediate signals so that each group has at most four inputs and one output, producing z).

3.2.2 Algorithm to Allocate Gates to Cells

This section presents an algorithm to allocate gates to FPGA cells for circuits with:

  • multiple outputs
  • combinational gates
  • flip-flops

The algorithm mimics what a synthesis tool does in transforming a netlist of generic gates into an FPGA:

Technology map  Map groups of generic combinational gates into LUTs.
Placement       Assign each LUT and flip-flop to an FPGA cell.

In addition to the above, synthesis tools do the step of routing: connecting the signals between FPGA cells. Because we are working with general-purpose combinational gates, we cannot use the carry-in and carry-out signals with the LUTs.

SLIDE 54

Overview of Algorithm

For each flip-flop and output: traverse backward through the fanin, gathering as much combinational circuitry as possible into the FPGA cell. Stopping conditions:

  • a flip-flop
  • more than four inputs — however, if a cone of logic currently has more than four input signals, it is possible that further back in the fanin the circuit collapses back to four or fewer signals.

Number of FPGA Cells (1)

Question: Map the circuit below onto generic FPGA cells. Do not perform any algebraic optimizations. Use NC (no connect) for any unused pins on the cells.



Number of FPGA Cells (2)

Question: Map the circuit below onto generic FPGA cells.


Number of FPGA Cells (3)

In this question, the signal i becomes a new output. Question: Map the circuit below onto generic FPGA cells.


SLIDE 55

3.2.3 Area for Arithmetic Circuits

For arithmetic circuits, we take into account inputs, outputs, carry-in, and carry-out signals.

One lookup table can implement one 1-bit full adder (figure: a cell with inputs a, b, and carry-in ci, and outputs sum and carry-out co; the unused LUT inputs are NC).

n lookup tables can implement one n-bit adder (figure: four chained cells computing sum0–sum3 from a0–a3 and b0–b3, with the carry chain running from ci to co).

Two-Bit Adder

Question: How many lookup tables are needed for a two-bit adder?

Figure: a two-bit adder with inputs a0, b0, a1, b1, ci and outputs sum0, sum1, co.


Adder with a Multiplexer

Question: How many lookup tables for an adder with a 2:1 mux on one input?

Figure: an adder with a 2:1 mux selecting one of its inputs (signals sel, a, b, c, ci, co, sum), drawn twice.

Arithmetic VHDL Code

Question: How many cells are needed for each of the code fragments below? All signals are 8 bits.

z <= a + b;

z <= a + b + c;

process begin
  wait until rising_edge(clk);
  z <= a + b + c;
end process;

SLIDE 56

Arithmetic VHDL Code (Cont’d)

process begin
  wait until rising_edge(clk);
  a <= i_a; b <= i_b; c <= i_c;
  z <= a + b + c;
end process;

a <= i_a; b <= i_b; c <= i_c;
process begin
  wait until rising_edge(clk);
  z <= a + b + c;
end process;

m <= a when sel = '0' else b;
process begin
  wait until rising_edge(clk);
  z <= m + c;
end process;

Other Arithmetic Operations

Code Number of LUTs per bit z <= a + 1; z <= a = b; z <= a = 0;


Area Optimizations Example: Area Optimization

SLIDE 57

Chapter 4 State Machines


4.1 Notations

We will use a variety of notations to model our hardware:

Pseudocode         For algorithms. Used early in the design process for sequential behaviour and high-level optimizations.
Dataflow diagrams  Model the structure and behaviour of datapath-intensive circuits.
State machines     A variation on the conventional bubble-and-arrow style of state machines.
VHDL code          For the real implementation.

4.2 Finite State Machines in VHDL 4.2.1 HDL Coding Styles for State Machines

Explicit The VHDL code contains a state signal. At most one wait statement per process.

Explicit-Current The state signal represents the current state of the machine, and the signal is assigned its next value in a clocked process.

Explicit-Current+Next There is a signal for the current state and another signal for the next state. The next-state signal is assigned its value in a combinational process or concurrent statement and is dependent upon the current state and the inputs. The current-state signal is assigned its value in a clocked process and is just a flopped copy of the next-state signal. (The “three-process” style.)

Implicit There is no explicit state signal. At least one process has multiple wait statements. Each wait statement corresponds to a single state. (Advanced topic, not covered in this course.)

SLIDE 58

4.2.2 State Encodings

Explicit state machines require a state signal. Before we can define a state signal, we must define values for the names of the states. For example, we might define S0 to be "000" and S1 to be "001". The value for the name of a state is called the “encoding” of the state. In hardware, each value is a bit-vector. There are a variety of common encodings for states: binary, one-hot, Gray, and thermometer. We can either define the encoding ourselves, or let the synthesis tool choose the encoding for us. If we define the encoding, then the type for the states is std_logic_vector. To let the synthesis tool choose the encoding, we create an enumerated type for the states, where each state is an element of the type. The synthesis tool then chooses a specific binary value for each state. Usually, the synthesis tool has heuristics to choose either a binary or one-hot encoding. This section reserved for your reading pleasure
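As a sketch (state names illustrative), the two options described above look like:

```vhdl
-- Option 1: define the encoding ourselves.
subtype state_ty is std_logic_vector(1 downto 0);
constant S0 : state_ty := "00";
constant S1 : state_ty := "01";
constant S2 : state_ty := "10";

-- Option 2: let the synthesis tool choose, via an enumerated type.
-- type state_ty is (S0, S1, S2);

signal state : state_ty;
```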

4.2.3 Traditional State-Machine Notation

This section reserved for your reading pleasure


4.2.4 Our State-Machine Notation

A simple extension to Mealy machines allows both:

  • combinational assignments:  z = 0
  • registered assignments:     z' = 0

Combinational assignments (figure: a four-state machine s0–s3 annotated with z=1 and z=0 on its states and edges, driven by a and !a, with a trace of a, state, and z over cycles 1–6).

Registered assignments (figure: the same machine annotated with z'=1 and z'=0, with its trace over cycles 1–6).

4.2.5 Bounce Example

Combinational assignments (figure: a three-state bounce machine s0–s2 annotated with z=1 and z=0 on its states and the a and !a edges, with a trace of a, state, and z over cycles 1–4).

Registered assignments (figure: the same machine annotated with z'=1 and z'=0, with its trace over cycles 1–4).

Explicit-Current Coding Style

SLIDE 59

Combinational Assignments

s0 s1 s2 z=1 a z=0 !a z=0 z=1

process (clk) begin
  if rising_edge(clk) then
    case state is
      when S0 =>
        if a = '1' then
          state <= S1;
        else
          state <= S2;
        end if;
      when others =>
        state <= S0;
    end case;
  end if;
end process;

process (state, a) begin
  if (state = S0 and a = '1')
     or (state = S2)
  then
    z <= '1';
  else
    z <= '0';
  end if;
end process;

Registered Assignments

s0 s1 s2 z’=1 a z’=0 !a z’=0 z’=1

process (clk) begin
  if rising_edge(clk) then
    case state is
      when S0 =>
        if a = '1' then
          state <= S1;
        else
          state <= S2;
        end if;
      when others =>
        state <= S0;
    end case;
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if (state = S0 and a = '1')
     or (state = S2)
  then
    z <= '1';
  else
    z <= '0';
  end if;
end process;

Additional Coding Options

Combinational Assignments

s0 s1 s2 z=1 a z=0 !a z=0 z=1

process (clk) begin
  if rising_edge(clk) then
    case state is
      when S0 =>
        if a = '1' then
          state <= S1;
        else
          state <= S2;
        end if;
      when others =>
        state <= S0;
    end case;
  end if;
end process;

z <= '1' when (state = S0 and a = '1')
           or (state = S2)
     else '0';

Registered Assignments

s0 s1 s2 z’=1 a z’=0 !a z’=0 z’=1

process (clk) begin
  if rising_edge(clk) then
    case state is
      when S0 =>
        if a = '1' then
          z <= '1';
          state <= S1;
        else
          z <= '0';
          state <= S2;
        end if;
      when S1 =>
        z <= '0';
        state <= S0;
      when others =>
        z <= '1';
        state <= S0;
    end case;
  end if;
end process;


Explicit-Current+Next

Combinational Assignments

s0 s1 s2 z=1 a z=0 !a z=0 z=1

process (clk) begin
  if rising_edge(clk) then
    st <= next_st;
  end if;
end process;

next_st <= S1 when st = S0 and a = '1'
      else S2 when st = S0
      else S0;

z <= '1' when (st = S0 and a = '1')
           or (st = S2)
     else '0';

Registered Assignments

s0 s1 s2 z’=1 a z’=0 !a z’=0 z’=1

process (clk) begin
  if rising_edge(clk) then
    st <= next_st;
  end if;
end process;

next_st <= S1 when st = S0 and a = '1'
      else S2 when st = S0
      else S0;

process (clk) begin
  if rising_edge(clk) then
    if (st = S0 and a = '1')
       or (st = S2)
    then
      z <= '1';
    else
      z <= '0';
    end if;
  end if;
end process;

Implicit

Combinational Assignments

s0 s1 s2 z=1 a z=0 !a z=0 z=1

Note: Implicit state machines do not support combinational assignments, because an implicit state machine is a clocked process, and in a clocked process all assignments are registered.

Note: Implicit state machines are an advanced topic and are not covered in ECE-327.

Registered Assignments

s0 s1 s2 z’=1 a z’=0 !a z’=0 z’=1

process begin
  wait until rising_edge(clk);      -- S0
  if a = '1' then
    z <= '1';
    wait until rising_edge(clk);    -- S1
    z <= '0';
  else
    z <= '0';
    wait until rising_edge(clk);    -- S2
    z <= '1';
  end if;
end process;

SLIDE 60

4.2.6 Registered Assignments

Combinational assignments Appear to happen instantaneously.

Registered assignments There is a clock-cycle boundary between when the inputs are sampled and when the target signal is driven.

VHDL and FSMs use different techniques to achieve the same behaviour. We use a registered assignment based on the state to illustrate.

S0 S1 z’ = 1; z’ = 0;

process begin
  wait until re(clk);
  if state = S0 then z <= '1';
  else               z <= '0';
  end if;
end process;

FSM: the assignment is executed before the clock edge; driving the output is delayed until after the clock edge. VHDL: the assignment is executed after the clock edge; the old (visible) value of registered inputs from before the clock edge is sampled.

Registered Assignments in State Machines

S0 S1 z’ = 1; z’ = 0; state’ = S0; z’ = 1; z’ = 0; state’ = S1; state z S0 S1 S0 S1 S0 1 1 clk

10ns 30ns 50ns 70ns 90ns

state z S1 S0 1 clk state asn z asn 50ns


Registered Assignments in VHDL

p_z1 : process begin
  wait until re(clk);
  if state = S0 then z <= '1';
  else               z <= '0';
  end if;
end process;

p_z2 : process (clk) begin
  if re(clk) then
    if state = S0 then z <= '1';
    else               z <= '0';
    end if;
  end if;
end process;

state z S1 S0 1 clk proc_state p_z1, p_z2 50ns

+1δ +2δ

S1

Delta-cycle simulation

state z S0 S1 S0 S1 S0 1 1 clk

10ns 30ns 50ns 70ns 90ns

RTL simulation

4.2.7 More Notation 4.2.7.1 Extension: Transient States

a y = 1; z = 2; !a y = 1; z = 3; S0 S1 S2 a z = 2; !a z = 3; S0 S1 S2 y = 1;

With a transient state, write y = 1 just once.

SLIDE 61

Transient States with Registered Assignments

  • Syntactically, registered assignments may appear before combinational assignments.
  • Semantically, the effect of the registered assignments occurs after the combinational assignments.

a y’ = 1; z = 2; !a y’ = 1; z = 3; S0 S1 S2 a z = 2; !a z = 3; S0 S1 S2 y’ = 1; 1 1 a state y S0 1 1 S1 S0 S2 1 z 2 3

4.2.7.2 Assignments within States

Assignments may appear within states.

  • 1. If all outgoing edges have the same assignment, then the assignment may be moved into the state. The three state machines below all have the same behaviour.

s1 s2 s3 w = 0; x = 1; y = 2; z’= 3; x = 1; y = 5; z’= 3; w = 0; y = 2; y = 5; x = 1; z’= 3; s1 s2 s3 s1 x = 1; w = 0; y = 2; y = 5; z’ = 3; s2 s3


Assignments within States (Cont’d)

  • 2. If all incoming edges have the same registered assignment, then the assignment may be transformed into a combinational assignment and moved into the state. The three state machines below all have the same behaviour.

s1 s2 s3 w = 0; x = 1; y = 2; z’= 3; x = 1; y = 5; z’= 3; s1 s2 w = 0; y = 2; y = 5; x = 1; z’= 3; s3 s1 s2 s3 w = 0; y = 2; z = 3; y = 5; x = 1;

Assignments within States (Cont’d)

As another example to illustrate moving assignments between edges and states, the three machines below have the same behaviour:

s1 s2 x’ = 1; x’ = 1; s3 s4 s5 s1 s2 s3 x = 1; s4 s5 s1 s2 x = 1; x = 1; s3 s4 s5

SLIDE 62

4.2.7.3 Conditional Expressions

The FSMs below have the same behaviour:

S0 S1 z = b a z = c !a S0 S1 if a then z = b else z = c

The FSMs below have the same behaviour:

S0 S1 z = b a !a S0 S1 if a then z = b

4.2.7.4 Default Values Combinational

default: z=0 S0 S2 S1 z=1 a !a

With default values

S0 S2 S1 z=1 a !a

Equivalent FSM without default values


Default Values (Cont’d)

Intuition: a signal is defined to be its default value in any state-to-state transition where it is not explicitly assigned a value.

S0 S2 S1 z=1 a !a default: z=0

With default values

S0 S2 S1 z=1 a !a

Equivalent FSM without default values

More Examples

S0 S2 S1 z=1 a !a default: z=0 z=2

With default values

S0 S2 S1 z=1 a !a z=2

Equivalent FSM without default values

S0 S2 S1 z=1 a !a default: z=0 z=2

With default values

S0 S2 S1 z=1 a !a z=2

Equivalent FSM without default values

SLIDE 63

Default Values: Registers

The semantics define that if a registered variable is not assigned a value in a clock cycle, then it holds its previous value.

  Default expression    Behaviour when not assigned a value
  none                  z holds its previous value.
  z' = a                z is assigned a.
  z' = '-'              z is unconstrained.
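As a sketch of how a registered default interacts with later assignments in VHDL (the signal and state names here are illustrative, not from the slides):

```vhdl
-- Sketch with illustrative names: the default "z' = a" is coded as an
-- unconditional assignment at the top of the clocked process.  A later
-- assignment in the same process overrides it, because a signal takes on
-- the value of the last assignment executed in a clock cycle.
process (clk) begin
  if rising_edge(clk) then
    z <= a;          -- default value, used when no other assignment runs
    if state = S1 then
      z <= b;        -- overrides the default in state S1
    end if;
  end if;
end process;
```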

Default Value: Registered Assignment

default: z’ = 99 S0 S1 S2 z’=a z’=b

With default values

S0 S1 S2 z’=a z’=b

Equivalent FSM without default values

(waveform: a, b, state, and z over clock cycles 1 to 17)


Default Value: Unconstrained Register

default: z’ = ’−’ S0 S1 S2 z’=a z’=a

With default values

S0 S1 S2 z’=a z’=a

Optimized FSM

S0 S1 S2

Simplified FSM

Questions to Answer and Ponder

Question: Why do combinational variables not need a “don’t care” default statement? Question: Why do register variables need a “don’t care” default statement?

SLIDE 64

4.2.8 Semantic and Syntax Rules Inputs, Combinational, Registered

There are three categories of variables in FSMs. Each category has its own rules for how and when the variables are updated.

  Inputs         Values are updated every clock cycle.
  Combinational  If a variable is not assigned a value in a clock cycle, then its value is unconstrained.
  Registered     If a variable is not assigned a value in a clock cycle, then it holds its previous value.

If there is any ambiguity about whether a signal is an input, then it should be declared as an input.

Multiple Assignments to Same Signal

For a sequence of transitions within the same clock cycle, only the last assignment to each signal is visible.

S0 S1 y = 1; z’ = 3 y = 2; z’ = 5 1 state y S0 5 2 S1 z


Summary of Semantic Rules

  • 1. Signals take on the value of the last assignment that is executed in a clock cycle.
  • 2. Combinational assignments become visible immediately.
  • 3. Registered assignments become visible in the next clock cycle.
  • 4. If a combinational signal is not assigned to in a given clock cycle, then the value of that signal is unconstrained (in other words, arbitrary, non-deterministic, or don't-care).

Syntax Rules

Our state machines are designed to match closely with VHDL code and hardware. The state machine notation is equivalent to synthesizable hardware that satisfies our rules for good coding practices, with the addition that we also support non-determinism. Non-determinism is not synthesizable, but is often useful in specifications for state machines.

  • 1. For a given signal, it must be that either all assignments are combinational or all assignments are registered. It is illegal to have both combinational and registered assignments to the same signal. The reason is that this will lead to unsynthesizable code, because a signal cannot be both combinational and registered.
  • 2. Within a clock cycle, a combinational signal must not be written to after it has been read. Violating this rule will lead to combinational loops.

SLIDE 65

  • 3. Completeness of transitions: the conditions on the outgoing edges from a state must cover all possibilities. That is, from a given state, it must always be possible to make a transition. This includes a self-looping transition back to the state itself.

Additional guidelines:

  • 1. Within a clock cycle, a combinational signal should be assigned to before it is read. Violating this guideline will lead to non-deterministic behaviour, because the value of a combinational signal is unconstrained in a clock cycle until it has been written to.

Deterministic vs Non-Deterministic

  deterministic      Exactly one outgoing transition is enabled (condition is true).
  non-deterministic  Multiple outgoing transitions are enabled; the machine randomly chooses which transition to take.

  • Our state machines may be non-deterministic.
  • Non-determinism happens when multiple outgoing transitions are enabled at the same time.
  • Non-determinism is sometimes useful in specifications and high-level models.
  • Real hardware is deterministic (unless you are building a quantum computer).
  • For real hardware, your transitions must be mutually exclusive.


4.2.9 Reset

All circuits should have a reset signal that puts the circuit back into a good initial state. However, not all flip-flops within the circuit need to be reset. In a circuit that has a datapath and a state machine, the state machine will probably need to be reset, but the datapath may not need to be reset.

This section reserved for your reading pleasure

Reset with Explicit-Current

Without reset:

process (clk) begin
  if rising_edge(clk) then
    case state is
      when S0 =>
        if a = '1' then z <= '1'; state <= S1;
        else            z <= '0'; state <= S2;
        end if;
      when S1     => z <= '0'; state <= S0;
      when others => z <= '1'; state <= S0;
    end case;
  end if;
end process;

With reset:

process (clk) begin
  if rising_edge(clk) then
    if reset = '1' then
      state <= S0;
    else
      case state is
        when S0 =>
          if a = '1' then z <= '1'; state <= S1;
          else            z <= '0'; state <= S2;
          end if;
        when S1     => z <= '0'; state <= S0;
        when others => z <= '1'; state <= S0;
      end case;
    end if;
  end if;
end process;

SLIDE 66

Reset with Explicit-Current+Next

Without Reset

process (clk) begin
  if rising_edge(clk) then
    st <= next_st;
  end if;
end process;

next_st <= S1 when st = S0 and a = '1' else
           S2 when st = S0 else
           S0;

z <= '1' when (st = S0 and a = '1') or (st = S2)
     else '0';

With Reset

process (clk) begin
  if rising_edge(clk) then
    if reset = '1' then
      st <= S0;
    else
      st <= next_st;
    end if;
  end if;
end process;

next_st <= S1 when st = S0 and a = '1' else
           S2 when st = S0 else
           S0;

z <= '1' when (st = S0 and a = '1') or (st = S2)
     else '0';

Review: Introduction to State Machines

Do the state-machine fragments below have the same behaviour?

S1 a !a b = 1 b = 0 S0 c’ = b c’ = b S1 a !a b = 1 b = 0 S0 c = b


4.3 LeBlanc FSM Design Example 4.3.1 State Machine and VHDL

S0 S1 S2 a !a S3 z’ = b - c z’ = b + c

type state_ty is (S0, S1, S2, S3);
signal state : state_ty;

process begin
  wait until rising_edge(clk);
  if ____ z <= b - c;
  ____   z <= b + c;
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if reset = '1' then
    state <= S0;
  else
    case state is
      when S0 =>
        if a = '0' then st <= S1;
        else            st <= S2;
        end if;
      when ____ => st <= S3;
      when S3   => st <= S0;
    end case;
  end if;
end process;

SLIDE 67

Datapath + Control

next state dp ctrl state Ctrl Datapath ≥ ≥

  • Control circuitry
      – Compute next state (sequencing between states)
      – Drive control inputs to datapath
  • From datapath to control:
      – Usually 1-bit signals
      – Outputs of comparators
      – External inputs
      – etc.
  • From control to datapath:
      – Multiplexer select lines
      – Chip-enables for registers
      – Operations for multifunction datapath components

Hardware

S0 S1 S2 a !a S3 z’ = b - c z’ = b + c a b c next state dp ctrl state Ctrl

CE D Q

z reset


4.3.2 State Encodings

With 7 states:

  State  Binary  One-Hot
  0      000     1000000
  1      001     0100000
  2      010     0010000
  3      011     0001000
  4      100     0000100
  5      101     0000010
  6      110     0000001
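One way to commit to a specific encoding in VHDL is to replace the enumerated state type with a std_logic_vector and named constants; this is a hedged sketch (the course's preferred idiom for the blanks on the following slides may differ):

```vhdl
-- Sketch: a 3-state one-hot encoding using named constants instead of an
-- enumerated type, so the synthesizer cannot choose its own encoding.
subtype state_ty is std_logic_vector(2 downto 0);
constant S0 : state_ty := "001";
constant S1 : state_ty := "010";
constant S2 : state_ty := "100";
signal state : state_ty;
```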

Le Blanc in Binary

S0 S1 S2 a !a S3 z’ = b - c z’ = b + c default: z’ = ’-’

type state_ty is ____________________;
signal state : state_ty;
S0 : ____   S1 : ____   S2 : ____   S3 : ____

process begin
  wait until rising_edge(clk);
  if ____ z <= b - c;
  ____   z <= b + c;
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if reset = '1' then
    state <= ____
  else

SLIDE 68

LeBlanc in Optimized Binary

State encodings affect the amount of circuitry needed to:

  • test conditions that drive the control signals for the datapath.
  • choose the next state

Define a custom encoding to simplify the circuitry needed to recognize the condition that the system is in either S1 or S2.

S0 S1 S2 a !a S3 z’ = b - c z’ = b + c default: z’ = ’-’

signal state : ____;
S0 : ____   S1 : ____   S2 : ____   S3 : ____

process begin
  wait until rising_edge(clk);
  if ____ z <= b - c;
  ____   z <= b + c;
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if reset = '1' then
    state <= ____
  else


Optimized Binary Le Blanc in Hardware

a b c state z reset

One-Hot LeBlanc

S0 S1 S2 a !a S3 z’ = b - c z’ = b + c default: z’ = ’-’

signal state : ____;
S0 : ____   S1 : ____   S2 : ____   S3 : ____

process begin
  wait until rising_edge(clk);
  if ____ z <= b - c;
  ____   z <= b + c;
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if reset = '1' then
    state <= ____
  else

SLIDE 69

One-Hot Le Blanc in Hardware

S0 S1 S2 a !a S3 z’ = b - c z’ = b + c default: z’ = ’-’ a b c reset z

4.4 Parcels

  • "Parcel" = basic unit of data in a system
  • Examples:

      System          Parcel
      Microprocessor  Instruction
      Car factory     Car

  • A parcel flows through a system
  • A parcel may be composed of multiple components

      Parcel       Components
      Instruction  Opcode, operands, result
      Car          Doors, windows, engine, etc.


4.4.1 Bubbles and Throughput

  • Between each pair of parcels is a sequence of zero or more bubbles

α β γ bubbles bubbles parcel parcel parcel

Bubble : invalid or garbage data that must be ignored

  • Each system has a requirement for a minimum number of bubbles between parcels
  • Throughput: number of parcels per clock cycle

α β γ δ ε 2 bubbles 2 bubbles 2 bubbles 2 bubbles

throughput = 1 parcel / 3 clock cycles = 1/3 parcels per clock cycle

α β γ δ 2 bubbles 4 bubbles 3 bubbles 12 clock cycles

throughput = 3 parcels / 12 clock cycles = 1/4 parcels per clock cycle

Maximum and Actual Throughput

Maximum Throughput: the maximum rate of parcels per cycle (minimum number of bubbles) at which the system will work correctly. Usually: max throughput = 1/(minimum number of bubbles + 1).
Actual Throughput: the actual rate at which the environment sends parcels to the system. The actual throughput must be less-than-or-equal-to the maximum throughput. The actual number of bubbles must be greater-than-or-equal-to the minimum number of bubbles.

SLIDE 70

Max Tput: Pipelining and Superscalar

Question: Label each of the arrows and dots below with one of: Unpipelined, Pipelined, Fully-pipelined, or Superscalar

1 1/latency Maximum throughput

As an advanced topic, some systems with both combinational inputs and outputs use an area optimization that reduces the maximum throughput of an unpipelined system to be 1/(latency+1).

FSMs, Latency, and Tput

Question: What are the latency and maximum throughput of the FSM below?

S0 S1 S2 a !a S3 p’ = b - c p’ = b + c S3 z’ = p + c

Answer:

(waveform: b, a, p, c, z over clock cycles 1 to 9)

Latency Throughput


Actual Throughput: Constant and Variable

Two categories of actual throughput:

Constant Throughput: Always the same number of bubbles between parcels. Often the actual number of bubbles is the minimum number of bubbles. Choose actual throughput = maximum throughput.

α β γ δ ε 2 bubbles 2 bubbles 2 bubbles 2 bubbles

Variable Throughput The number of bubbles changes over time. Usually the number of bubbles is unpredictable. Actual number of bubbles must be at least as great as minimum required.

α β γ δ 2 bubbles 4 bubbles 3 bubbles

4.4.2 Parcel Schedule Actual Throughput and Parcel Schedule

To reduce confusion about the meaning of “throughput”, we will use:

  • "throughput" means "maximum possible throughput"
  • "parcel schedule" means "actual throughput"
  • "as soon as possible (ASAP) parcel schedule" means the actual throughput is constant and is the maximum possible
  • "unpredictable number of bubbles" means the actual throughput is variable

SLIDE 71

Parcel Schedule and FSM Patterns

"Trunk" derived from computation for one parcel.
Outer loop derived from parcel schedule (S0 loops on a bubble, advances on a parcel).

(diagrams: ASAP parcels; unpredictable number of bubbles)

4.4.3 Valid Bits

When the parcel schedule is unpredictable number of bubbles, we need a mechanism to distinguish between a parcel and a bubble. Most common solution: valid bit protocol.

α β γ δ i_data i_valid α β γ

  o_data
  o_valid
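A minimal sketch of the valid-bit idea (the names i_valid, o_valid, and v follow the slides; the width and depth are assumptions): the valid bit is registered alongside the parcel data, so each pipeline stage knows whether it holds a parcel or a bubble.

```vhdl
-- Sketch: a 3-deep valid-bit shift register with v : std_logic_vector(2 downto 0).
-- v(k) = '1' means stage k currently holds a parcel; reset marks every stage
-- as a bubble.
process (clk) begin
  if rising_edge(clk) then
    if reset = '1' then
      v <= (others => '0');
    else
      v <= v(1 downto 0) & i_valid;  -- shift the valid bit with the data
    end if;
  end if;
end process;
o_valid <= v(2);
```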


State Encodings and Parcel Schedule

ASAP Parcels One-hot, binary, or custom. Bubbles Valid bits

c = i_a + i_b

  o_z = p + q

Core

c = i_a + i_b

  o_z = p + q

ASAP parcels

c = i_a + i_b

  o_z = p + q

Unpredictable parcels

One-Hot State Encoding

Waveform for one-hot:

α i_data

  o_data

state(0) state(1) state(2)

Hardware implementation of one-hot:

reset

SLIDE 72

Valid-Bit State Encoding

Waveform for valid bits:

α i_data

  o_data

v(0) v(1) v(2) β i_valid v(3)

  o_valid

γ

Hardware implementation of valid bits:

i_valid

  o_valid

reset

4.5 LeBlanc with Bubbles

Le Blanc with a parcel schedule of unpredictable number of bubbles.

a !a z’ = b - c z’ = b + c


VHDL Code

process begin wait until rising_edge(clk); if reset = ’1’ then state <= else

a !a z’ = b - c z’ = b + c S1 S2 S3 S0 !i_v i_v

process begin wait until rising_edge(clk); if z <= b - c; z <= b + c; end if; end process;

4.6 Pseudocode

We use pseudocode to describe multi-step computation (e.g., algorithms).

  Declarations  We must declare "special" variables:
                  Inputs    Value might change in each clock cycle.
                  Interpcl  (section 4.7) Used to communicate between parcels.
                  Outputs
  If-then-else, while loop, for loop, repeat-until loop
  Assignments
  Expressions   Arithmetic, logical, arrays, etc.

Example:

  input: a, b;
  output: z;

  p = a + b;
  for i in 0 to 3 {
    p = p + b;
  }
  z = p;

SLIDE 73

Pseudocode Semantics

  • Identical to conventional software semantics
  • Executed sequentially: a target is updated when the assignment is executed.
  • All assignments are instantaneous: no reg vs comb.
  • Variables hold their value until assigned a new value.
  • No notion of time or clock cycles.

Core vs System

  • Pseudocode describes the core of a computation. It does not show the parcel schedule.
  • But, with a finite sequence of parcels, the pseudocode may show more than 1 parcel.
  • The FSM for the core does not show i_valid and o_valid.
  • The FSM for the system (including the parcel schedule) does show i_valid and o_valid if needed (i.e., if the parcel schedule is an unpredictable number of bubbles).

4.7 Interparcel Variables and Loops 4.7.1 Introduction to Looping Le Blanc

Two new concepts:

  • Inter-parcel variables
  • Outer loop around “core”

Inter-parcel variables are used to communicate data between parcels. Until now, all of our variables have been intra-parcel: used within a single parcel.

  All intra-parcel vars:              z = a + b + c
  Total is an inter-parcel variable:  Total = Total + a + b


4.7.2 Pseudo-Code

We add variable declarations to distinguish inputs, outputs, and interparcel variables. Below, "T" stands for "total".

Simple:

  inputs a, b, c;
  outputs z;

  if a then { z = b + c; }
  else      { z = b - c; }

Inter-parcel var T:

  inputs b, c;
  outputs z;
  interpcl T;

  if a then { T = T + b + c; }
  else      { T = T + b - c; }

Loop and inter-parcel var:

  inputs b, c;
  outputs z;
  interpcl T;

  T = 0;
  for i in 0 to 127 {
    if a then { T = T + b + c; }
    else      { T = T + b - c; }
  }
  z = T;

SLIDE 74

4.7.3 State Machine Design Patterns

ASAP Parcels Unpredictable number of bubbles

State Machine

ASAP Parcels

S1 S2 a !a S3 i’=i+1 S4 Total’ = Total + b + c Total’ = Total + b - c i < 128 i ≥ 128 Total’ = 0 i’ = 0 z = Total

Unpredictable number of bubbles

a !a i’=i+1 Total’ = Total + b + c Total’ = Total + b - c


4.7.4 VHDL Code for Loop and Bubbles

v(0) <= i_v;

process begin
  wait until re(clk);
  if reset = '1' then
    v(1 to 4) <= (others => '0');
  else
    ____
  end if;
end process;

process begin
  wait until re(clk);
  if reset = '1' then
    total <= (others => '0');
  elsif v(4) = '1' and i >= 128 then
    total <= (others => '0');
  elsif v(1) = '1' then
    total <= total + b - c;
  elsif v(2) = '1' then
    total <= total + b + c;
  end if;
end process;

process begin
  wait until re(clk);
  if reset = '1' then
    i <= (others => '0');
  elsif v(4) = '1' and i >= 128 then
    i <= (others => '0');
  elsif v(3) = '1' then
    i <= i + 1;
  end if;
end process;

o_valid <= v(4) when i >= 128 else '0';
z <= total;

SLIDE 75

4.8 Memory Arrays and RTL Design 4.8.1 Memory Operations Read of Memory

Hardware

WE A DI DO

a do M clk we

Behaviour

clk αa a M(αa) we do αd

FSM

Write to Memory

Hardware

WE A DI DO

a M clk di we do

Behaviour

clk αa a M(αa) αd we di do

FSM


Dual-Port Memory

Hardware

a0 M clk di0 we

WE A0 DI0 DO0 A1 DO1

a1 do1 do0

Behaviour

clk αa a0 M(αa) αd we di0 βa a1 do0 M(βa) βd do1

FSM

4.8.2 Memory Arrays in VHDL

entity mem is
  generic (
    data_width : natural := 8;
    addr_width : natural := 7
  );
  port (
    clk    : in  std_logic;
    wr_en  : in  std_logic;                                   -- write enable
    addr   : in  unsigned(addr_width - 1 downto 0);           -- address
    i_data : in  std_logic_vector(data_width - 1 downto 0);   -- input data
    o_data : out std_logic_vector(data_width - 1 downto 0)    -- output data
  );
end mem;

architecture main of mem is
  type mem_type is array (2**addr_width - 1 downto 0)
    of std_logic_vector(data_width - 1 downto 0);
  signal mem : mem_type;
begin
  process (clk) begin
    if rising_edge(clk) then
      if wr_en = '1' then
        mem(to_integer(addr)) <= i_data;
      end if;
      o_data <= mem(to_integer(addr));
    end if;
  end process;
end main;

SLIDE 76

4.8.3 Using Memory

Pseudocode:

  M[i] = a;
  p = M[i+1];

FSM

S2 S1

Both vars are VHDL

u_mem : entity work.mem
  port map (
    clk    => clk,
    wr_en  => ____,
    addr   => ____,
    i_data => ____,
    o_data => ____
  );

mem_wr_en <= '1' when ____ else '0';
mem_addr  <= i when ____ else i + 1;

Hardware:

WE A DI DO

1 i ctrl p M

4.8.3.1 Writing from Multiple Vars

FSM

S1 S2 M’[i] = a M’[i+1] = b

Hardware

WE A DI DO

1 i ctrl p M a b

u_mem : entity work.mem
  port map (
    clk    => clk,
    wr_en  => mem_wr_en,
    addr   => mem_addr,
    i_data => mem_i_data,
    o_data => p
  );

mem_wr_en  <= '1' when state = S1 or state = S2
              else '0';
mem_addr   <= i when state = S1 else i + 1;
mem_i_data <= a when state = S1 else b;


4.8.3.2 Reading from Memory to Multiple Variables

Pseudocode:

  p = M[i]; q = a;
  ...
  p = b; q = M[i+1];

FSM:

  S1: p' = M[i]; q' = a
  S2: p' = b;    q' = M[i+1]

Hardware

WE A DI DO

1 i M p q a b ctrl

Question: How should we connect memory to p and q?

Multivar Reading (cont’d)

S2 mem_o_data’ = M[i] mem_o_data’ = M[i+1] p = mem_o_data q = a S1 S3 q = mem_o_data p = b

u_mem : entity work.mem
  port map (
    clk    => clk,
    wr_en  => mem_wr_en,
    addr   => mem_addr,
    i_data => mem_i_data,
    o_data => mem_o_data
  );

mem_wr_en <= '0';
mem_addr  <= i when state = S1 else i + 1;
p <= mem_o_data when state = S2 else b;
q <= a          when state = S2 else mem_o_data;

SLIDE 77

4.8.3.3 Example: Maximum Value Seen so Far

Design an FSM that iterates through a memory array, replacing each value with the maximum value seen so far. Example execution:

Initial value of M 4 3 2 6 7 3 5 Final value of M

Pseudocode #1:

  i = 0
  max = M[i]
  while i < 128 {
    i = i + 1
    b = M[i]
    if max < b { max = b }
    else       { M[i] = max }
  }

Pseudocode #2:

  i = 0
  while i < 128 {
    if max < b { max = b }
    else       { M[i] = max }
  }


FSM #1

S0 i’=0 max’ = M[i] i < 128 i ≥ 128 i’=i+1 S1 b’ = M[i] S2 max < b max ≥ b max’=b M’[i]=max

FSM #2

i’=0 i < 128 i ≥ 128 S2 max < b max ≥ b max’=b M’[i]=max S0

4.8.4 Build Larger Memory from Slices

This section reserved for your reading pleasure

SLIDE 78

4.8.5 Memory Arrays in High-Level Models

This section reserved for your reading pleasure

Chapter 5 Dataflow Diagrams


5.1 Dataflow Diagrams 5.1.1 Dataflow Diagrams Overview

  • Dataflow diagrams are data-dependency graphs where the computation is divided into clock cycles.
  • Purpose:
      – Provide a disciplined approach for designing datapath-centric circuits
      – Guide the design from algorithm, through high-level models, and finally to register-transfer-level code for the datapath and control circuitry
      – Estimate area and performance
      – Make tradeoffs between different design options
  • Background:
      – Based on techniques from high-level synthesis tools
      – Some similarity between high-level synthesis and software compilation
      – Each dataflow diagram corresponds to a basic block in software-compiler terminology

Data-Dependency Graphs and Dataflow Diagrams

Models for z = a + b + c + d + e + f

a b c d e f

+ + + + +

z

Data-dependency graph

a b c d e f

+ + + + +

z

Dataflow diagram

SLIDE 79

a b c d e f

+ + + + +

z

  • Horizontal lines mark clock cycle boundaries
  • Unconnected signal tails are inputs
  • Signals crossing clock boundaries are flip-flops
  • Blocks in clock cycles are datapath components
  • Unconnected signal heads are outputs

5.1.2 Dataflow Diagram Execution

a b c d e f

+ + + + +

x1 x2 x3 x4 z clk a x1 x2 x3 x4 x5 z

1 2 3 4 5 0 1 2 3 4 5 6

x5


Latency

Definition Latency: Number of clock cycles from inputs to outputs.

  • A combinational circuit has latency of zero.
  • A single register has a latency of one.
  • A chain of n registers has a latency of n.
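The register-chain case can be sketched directly in VHDL (the signal names are illustrative):

```vhdl
-- Sketch: a chain of two registers between d_in and r2 gives a latency of 2:
-- a value presented at d_in appears at r2 two rising clock edges later.
process (clk) begin
  if rising_edge(clk) then
    r1 <= d_in;  -- after one clock edge: latency 1
    r2 <= r1;    -- after two clock edges: latency 2
  end if;
end process;
```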

+ + + + +

Latency =

+ + + + +

Latency =

5.1.3 Dataflow Diagrams, Hardware, and Behaviour Primary Input

Dataflow Diagram

i x

Hardware

i x

Behaviour

clk i x

SLIDE 80

Register Signal

Dataflow Diagram

i1 x

+

i2

Hardware +

i2 x i1

Behaviour

clk i1 i2 x

Combinational-Component Output

Dataflow Diagram

i1 x

+

i2

Hardware +

i2 i1 x

Behaviour

clk i1 i2 x


Reuse a Component

Dataflow Diagram

i1

+

i2

+

r1 r2 r1 r2 r1 i2

  o1

Hardware

i2 i1

+

  o1

r1 r2

Behaviour

clk i1 i2

  o1

5.1.4 Performance Estimation Performance Equations

Performance ∝ 1 / TimeExec
TimeExec = Latency × ClockPeriod

Performance of Dataflow Diagrams

  • Latency: count horizontal lines in diagram
  • Min clock period (Max clock speed) limited by longest path in a clock cycle

SLIDE 81

5.1.5 Area Estimation

  • The maximum number of blocks in a clock cycle is the total number of that component that are needed
  • The maximum number of signals that cross a cycle boundary is the total number of registers that are needed
  • The maximum number of unconnected signal tails in a clock cycle is the total number of inputs that are needed
  • The maximum number of unconnected signal heads in a clock cycle is the total number of outputs that are needed
  • These estimates are just approximations. They do not take into account:
      – Area and delay of control circuitry
      – Multiplexers on registers and datapath components
      – Relative area and delay of different components
      – Technology-specific features, constraints, and costs
  • These estimates give lower bounds.
  • Other constraints or design goals might force you to use more components. Examples:
      – Decreasing latency =⇒ larger area
      – Constraint on max number of registers =⇒ more datapath components

Area Estimation

Implementation-technology factors, such as the relative size of registers, multiplexers, and datapath components, might force you to make tradeoffs that increase the number of datapath components to decrease the overall area of the circuit.

  • With some FPGA chips, a 2:1 multiplexer has the same area as an adder.
  • With some FPGA chips, a 2:1 multiplexer can be combined with an adder into one FPGA cell per bit.
  • In FPGAs, registers are usually "free", in that the area consumed by a circuit is limited by the amount of combinational logic, not the number of flip-flops.


5.1.6 Design Analysis

a b c d e f

+ + + + +

z

num inputs num outputs num registers num adders min clock period latency

Design Analysis 2

a b c d e f

+ + + + +

z

num inputs num outputs num registers num adders min clock period latency

SLIDE 82

Design Analysis 2 (Cont’d)

a b c d e f

+ + + + +

x1 x2 x3 x4 z

1 2

clk a x1 x2 x3 x4 x5 z

0 1 2 3 4 5 6 3

x5

Design Analysis 3

a b c d e f

+ + + + +

z

num inputs num outputs num registers num adders min clock period latency


Review: Dataflow Diagrams

For each of the diagrams below, calculate the latency, minimum clock period, and minimum number of adders required. Latency Clock period Adders

5.2 Design Example: Hnatyshyn DFD 5.2.1 Requirements

  • Functional requirement:
      – Compute the following formula: z = a + b + c
  • Performance requirements:
      – Max clock period: flop plus (1 add)
      – Max latency: 2
  • Cost requirements:
      – Maximum of two adders
      – Unlimited registers
      – Maximum of three inputs and one output
      – Maximum of 5000 student-minutes of design effort
  • Combinational inputs, registered outputs
  • Parcels arrive as-soon-as-possible (ASAP)

SLIDE 83

5.2.2 Data-Dependency Graph

Requirements and algorithm: z = a + b + c Create a data-dependency graph for the algorithm. Data-dependency graph

z a c b

5.2.3 Initial Dataflow Diagram

Schedule operations into clock cycles

z a c b

Area and performance analysis: latency, clock period, inputs, outputs, registers, adders

  • Best-case analysis for a theoretical design
  • No guarantee that we will achieve the best-case (optimal) design
  • Design process: a systematic method to try to come close to the optimal design
  • Start with a sub-optimal, but obviously correct, design
  • Series of optimizations to improve area and speed while avoiding bugs


5.2.4 Area Optimization

z a b clock cycle 1 2

latency, clock period, inputs, outputs, registers, adders

5.2.5 Assign Names to Registered Signals

We start our initial (sub-optimal) design. Before we can write VHDL code for our dataflow diagram, we must assign a name to each internal registered value. Optionally, we may assign names to combinational values.

z a c b clock cycle 1 2

SLIDE 84

Behaviour and Analysis

(waveform and annotated diagram: a, b, x1, c, x2, z over clock cycles 1 to 5)

latency, clock period, inputs, outputs, registers, adders

Use ASAP Parcel Schedule

S1 default z = x2; x1’ = a+b; x2’ = x1 + c; S2

Question: When to start parcel β?

(waveform: a, b, a1; c, x1, a2; x2, z; and state over clock cycles 1 to 8, showing parcel α)

Question: What is the maximum throughput that this system supports?


5.2.6 Allocation

Allocation is the area optimization of mapping a large number of objects in the current design to a smaller number of objects.

Design Analysis   Current   Optimum
Inputs               3         2
Registers            2         1
Adders               2         1
Outputs              1         1

  • Example: allocate both xi registers to the same register
  • Similar to register allocation in software
  • This design is so simple that allocation is trivial. For real designs, finding the best allocation is very difficult; there are many different heuristics for how to do allocation.

  • We will allocate inputs, outputs, registers, and datapath components.
  • We will work clock-cycle by clock-cycle.
  • Annotate dataflow diagram and fill in cells in I/O schedule and control table.

[Empty worksheet: I/O schedule (rows i1, i2, o1 by clock cycle) and control table (r1 ce, d; a1 src1, src2; const)]

Allocate Clock Cycle 0: Inputs and Datapath

[Worksheet: dataflow diagram, I/O schedule, and control table after allocating the clock-cycle 0 inputs and datapath]

SLIDE 85

Allocate Clock Cycle 0: Regs

[Worksheet: dataflow diagram, I/O schedule, and control table after allocating the clock-cycle 0 registers]

Allocate Clock-Cycle 1: Inputs and Datapath

[Worksheet: dataflow diagram, I/O schedule, and control table after allocating the clock-cycle 1 inputs and datapath]


Allocate Clock-Cycle 1: Regs

[Worksheet: dataflow diagram, I/O schedule, and control table after allocating the clock-cycle 1 registers]

Allocate Output

With registered outputs, each output port must be connected directly to a register.

[Worksheet: dataflow diagram, I/O schedule, and control table after allocating the output]

SLIDE 86

Behaviour post Allocation

[Waveform: behaviour of i1, i2, r1, a1, o1 for parcel α after allocation, beside the annotated dataflow diagram]

5.2.7 State Machine

  • Done with datapath design and optimization
  • Now build the control circuitry

Control-circuit optimizations:

  • Choose state encoding
  • Design state machine
  • Design control circuitry that drives datapath

– Multiplexer select lines
– Chip enables
– Operation selection


Control Table for Explicit State Machine

Transform control table:

  • Label rows by state
  • Add next-state column
  • Identify “don’t-care” values

Labeled by clock cycle

[Control table with rows labeled by clock cycle]

Labeled by state

[Control table with rows labeled by state, plus a next-state column]

Find Constants

If all of the cells in a column have the same value, then that column can be reduced to a constant.

[Control table with rows labeled S0, S1 and next states S1, S0; constant columns identified]

SLIDE 87

Control Table, State Machine, Hardware

[Control table for the entire system]

[State machine for the entire system: S0 ⇄ S1]

[Hardware for the entire system: state register and next-state logic driving the datapath control for i1, i2, r1, a1, o1]

5.2.8 VHDL Implementation

architecture main of hnatyshyn is
  signal r1, a1, a1_src1 : unsigned(7 downto 0);
  type state_ty is (S0, S1);
  signal state : state_ty;
begin

  -- control
  process (clk) begin
    if rising_edge(clk) then
      if reset = '1' then
        state <= S0;
      else
        case state is
          when S0 => state <= S1;
          when S1 => state <= S0;
        end case;
      end if;
    end if;
  end process;

  a1_src1 <= i1 when state = S0 else r1;

  -- registers
  process (clk) begin
    if rising_edge(clk) then
      r1 <= a1;
    end if;
  end process;

  -- datapath
  a1 <= a1_src1 + i2;
  o1 <= r1;

end architecture;

VHDL Implementation #2

  • One-hot encoding for state
  • Define constants for S0, S1
  • Replace state = S0 with state(0) = '1'.

[Control table rewritten for the one-hot state encoding]

SLIDE 88

architecture main of hnatyshyn is
  signal r1, a1, a1_src1 : unsigned(7 downto 0);
  subtype state_ty is std_logic_vector(1 downto 0);
  constant S0 : state_ty := "01";
  constant S1 : state_ty := "10";
  signal state : state_ty;
begin

  -- control
  process (clk) begin
    if rising_edge(clk) then
      if reset = '1' then
        state <= S0;
      else
        state <= state rol 1;
      end if;
    end if;
  end process;

  a1_src1 <= i1 when state(0) = '1' else r1;

  -- registers
  process (clk) begin
    if rising_edge(clk) then
      r1 <= a1;
    end if;
  end process;

  -- datapath
  a1 <= a1_src1 + i2;
  o1 <= r1;

end architecture;
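The one-hot rotation (`state <= state rol 1`) can be mirrored in a small Python sketch (hypothetical helper name `rol`):

```python
def rol(state, width):
    """Rotate a one-hot state vector left by one bit, like VHDL's `rol 1`."""
    msb = (state >> (width - 1)) & 1
    return ((state << 1) | msb) & ((1 << width) - 1)

s = 0b01            # S0 = "01"
s = rol(s, 2)       # S1 = "10"
s = rol(s, 2)       # wraps back to S0 = "01"
print(bin(s))
```

Because exactly one bit is ever set, the test state(0) = '1' identifies S0 without a full vector comparison.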


5.3 Design Example: Hnatyshyn with Bubbles

  • Section 5.2: Hnatyshyn with ASAP parcels
  • This section: Hnatyshyn with unpredictable number of bubbles
  • Key feature: valid bits for control circuitry

5.3.1 Adding Support for Bubbles

  • No change to dataflow diagram (dataflow diagrams are independent of parcel schedule)
  • Add i_valid and o_valid to denote whether input or output is parcel or bubble
  • Add idle state to state machine for when there is not a parcel in the system

SLIDE 89

Add Valid Bits

[Waveform: i_valid, i1, i2, r1, a1, o1, o_valid for parcels α, β, γ over clock cycles 1–12]

Use Valid Bits as Control

[Hardware: control block with i_valid and reset driving the valid bits and the datapath control for i1, i2, r1, a1, o1, o_valid]


Behaviour

[Waveform: states S0, S1, S2 and valid bits v0, v1, v2 tracking parcels α, β, γ through i1, i2, r1, a1, o1 over clock cycles 1–9]

5.3.2 Control Table with Valid Bits

Initial Table

  • Label the rows of the control table by valid bits, instead of by states.
  • Do not include a row for the last valid bit.

– We have registered outputs
– Therefore, no control decisions are made in the last clock cycle
– Therefore, the last valid bit does not affect the datapath

[Control table with rows labeled by valid bits, beside the dataflow diagram]

SLIDE 90

Constants

[Control table with rows labeled by valid bits v(0), v(1); constant columns identified]

5.3.3 VHDL

The only difference between the VHDL code for Hnatyshyn with bubbles and Hnatyshyn with ASAP parcels is the control circuitry. The datapath is exactly the same for both designs.

entity hnatyshyn_bubble is
  port (
    clk     : in  std_logic;
    reset   : in  std_logic;
    i_valid : in  std_logic;
    i1, i2  : in  unsigned(7 downto 0);
    o_valid : out std_logic;
    o1      : out unsigned(7 downto 0)
  );
end entity;

architecture main of hnatyshyn_bubble is
  signal r1, a1, a1_src1 : unsigned(7 downto 0);
  signal v : std_logic_vector(0 to 2);
begin

  -- control
  v(0) <= i_valid;
  process (clk) begin
    if rising_edge(clk) then
      if reset = '1' then
        v(1 to 2) <= (others => '0');
      else
        v(1 to 2) <= v(0 to 1);
      end if;
    end if;
  end process;

  a1_src1 <= i1 when v(0) = '1' else r1;

  -- registers
  process (clk) begin
    if rising_edge(clk) then
      r1 <= a1;
    end if;
  end process;

  -- datapath
  a1 <= a1_src1 + i2;
  o_valid <= v(2);
  o1 <= r1;

end architecture;
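The valid-bit chain is the only difference from the ASAP design, and its behaviour is easy to model in a hypothetical Python sketch:

```python
def valid_chain(i_valid_stream):
    """Model v(0 to 2): v(0) = i_valid combinationally, and on every clock
    edge v(1 to 2) <= v(0 to 1). o_valid = v(2), sampled before the edge."""
    v1 = v2 = 0
    o_valid = []
    for i_valid in i_valid_stream:
        v0 = i_valid
        o_valid.append(v2)
        v1, v2 = v0, v1      # clock edge: shift the valid bits
    return o_valid

# parcel, bubble, parcel, two bubbles:
print(valid_chain([1, 0, 1, 0, 0]))
```

o_valid is i_valid delayed by two clock cycles, matching the two-cycle latency; bubbles simply ripple through as zeros.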

SLIDE 91

5.4 Inter-Parcel Variables: Hnatyshyn with Internal State

Inter-parcel variables are used to communicate data between parcels. Previous systems computed z = a + b + c; the new system computes Sum = Sum + a + b, where "Sum" is an inter-parcel variable.

intra-parcel variables: the type of variables and signals that we have used until now

  • Also called "temporary values"
  • Store intermediate data from clock cycle to clock cycle
  • Each value is read only by the same parcel that wrote the value

inter-parcel variables: the new type of variables and signals

  • Also called "programmer-visible", "internal-state", or "visible-state" variables
  • Store data that is used to communicate between parcels
  • Each value is written by one parcel and then read by other parcels
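The distinction can be made concrete in software terms (a hypothetical Python sketch; `x1` and `total` are illustrative names):

```python
def process_parcels(parcels):
    total = 0                # inter-parcel: written by one parcel, read by the next
    for a, b in parcels:
        x1 = a + b           # intra-parcel: a temporary used only by this parcel
        total = total + x1   # Sum = Sum + a + b
    return total

print(process_parcels([(1, 2), (3, 4)]))
```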

5.4.1 Requirements and Goals

  • Functional requirements: compute the following formula: Sum = Sum + a + b
  • Performance requirement:

– Max clock period: flop plus (1 add)
– Max latency: 3

  • Cost requirements

– Maximum of two adders
– Unlimited registers
– Maximum of three inputs and one output
– Maximum of 5000 student-minutes of design effort

  • Combinational inputs
  • Registered outputs
  • Parcel schedule is “Unpredictable number of bubbles”


5.4.2 Dataflow Diagrams and Waveforms

[Figure: dataflow diagrams and waveform for Sum = Sum + a + b; registers r1, r2 carry Sum between parcels α, β, γ]

Bad DFD

Question: What is wrong with the dataflow diagram below?

[Dataflow diagram with a bug, for discussion]

SLIDE 92

States and Bubbles

Question: Label the states on the DFD and the execution. Complete the FSM.

DFD

[Dataflow diagram to label with states]

FSM

S0 S1 S2

Execution

[Execution waveform: parcels α, β, γ, δ with states S1, S2 over clock cycles 1–10]

Reset

[Execution waveform with a reset between parcels: α, β, γ, δ, ε over clock cycles 1–10]


5.4.3 Control Tables Initial Control Table

[Initial control table: states S0, S1, S2 with next-state and valid-bit (v0, v1, v2) columns, beside the execution waveform]

VHDL Code for Control Circuitry

The VHDL code for just the control circuitry is below. In section 5.4.4, we show the complete code.

a1_src1 <= i1 when v(0) = '1' else r1;
a1_src2 <= i2 when v(0) = '1' else r2;
r1_ce   <= v(0) or v(1);

SLIDE 93

5.4.4 VHDL Implementation

-- valid bits
v(0) <= i_valid;
process begin
  wait until rising_edge(clk);
  if reset = '1' then
    v(1 to 2) <= (others => '0');
  else
    v(1 to 2) <= v(0 to 1);
  end if;
end process;

-- a1
a1_src1 <= i1 when v(0) = '1' else r1;
a1_src2 <= i2 when v(0) = '1' else r2;
a1 <= a1_src1 + a1_src2;

-- r1
process begin
  wait until rising_edge(clk);
  if reset = '1' then
    r1 <= (others => '0');
  elsif v(0) = '1' or v(1) = '1' then
    r1 <= a1;
  end if;
end process;

-- r2
process begin
  wait until rising_edge(clk);
  r2 <= r1;
end process;

-- outputs
o_valid <= v(2);
o1 <= r1;


5.4.5 Summary of Bubbles and Inter-Parcel Variables

Options for state encoding (table to fill in), depending on whether the system has inter-parcel variables (No / Yes):

ASAP schedule:    state encoding; FSM has idle state?; control table has idle row?
Bubble schedule:  state encoding; FSM has idle state?; control table has idle row?

5.5 Design Example: Vanier Design Process

  • 1. Requirements
  • 2. Algorithm
  • 3. Data-dependency graph
  • 4. Schedule
  • 5. Allocate I/O ports, datapath components, registers

  • 6. Separate datapath and control
  • 7. Connect datapath, add muxes
  • 8. Block-diagram of datapath
  • 9. Control-table for state machine
  • 10. Don’t-care assignments
  • 11. VHDL code #1 (core)
  • 12. Parcel schedule
  • 13. State encoding
  • 14. VHDL code #2 (system)

SLIDE 94

5.5.1 Requirements

  • Functional requirements: compute the following formula:

z = (a × d) + c + (d × b) + b for sixteen-bit unsigned data.

  • Performance requirement:

– Max clock period: flop plus (2 adds or 1 multiply)
– Max latency: 4

  • Cost requirements

– Maximum of two adders
– Maximum of two multipliers
– Unlimited registers
– Maximum of three inputs and one output
– Maximum of 5000 student-minutes of design effort

  • Combinational inputs
  • Registered outputs
  • ASAP parcel schedule

5.5.2 Algorithm

z = (a × d) + c + (d × b) + b

Create a data-dependency graph for the algorithm.

[Figure: data-dependency graph with inputs a, b, c, d, two multiplications, three additions, and output z]


5.5.3 Initial Dataflow Diagram

Schedule operations into clock cycles. Requirement for max clock period: max(2 adds, 1 multiply) + flop.

[Figure: initial dataflow diagram]

Area and performance analysis: latency, clock period, inputs, outputs, registers, adders, multipliers

5.5.4 Reschedule to Meet Requirements

Requirement: no more than 3 inputs.

[Figure: dataflow diagram before and after rescheduling to use at most three inputs]

SLIDE 95

Fix Clock Period Violation

[Figure: dataflow diagram before and after fixing the clock-period violation]

5.5.5 Optimization: Reduce Inputs

Assume that inputs are much more expensive than other resources.

[Figure: dataflow diagram before and after reducing the number of inputs]


Analysis

[Figure: dataflow diagram over clock cycles 1–3]

Area and performance analysis: latency, clock period, inputs, outputs, registers, adders, multipliers

Question: Should we move the second addition from clock-cycle 2 up to clock-cycle 1?

SLIDE 96

5.5.6 Allocation

[Allocation worksheet: I/O schedule and control tables for registers r1–r3 (ce, d), adders a1, a2 (src1, src2), and multiplier m1 (src1, src2), clock cycles 1–3; annotations mark which components need a mux or a chip enable]

SLIDE 97

Alternative Allocation

[Alternative allocation: annotated dataflow diagram with I/O schedule and control tables (needs mux: 5/9; needs ce: 0/3)]

SLIDE 98

5.5.7 Explicit State Machine

From Clock Cycles to States

  • ASAP parcel schedule
  • Latency is 3, therefore 3 states (S0, S1, S2)
  • State machine iterates through states, with S2 looping back to S0.

5.5.8 VHDL #1: Explicit

architecture main of vanier is
  signal r1, r2, r3, a1, a1_src1, a1_src2,
         a2, m1, m1_src2 : unsigned(15 downto 0);
  type state_ty is (S0, S1, S2);
  signal state : state_ty;
begin

  -- control
  process (clk) begin
    if rising_edge(clk) then
      if reset = '1' then
        state <= S0;
      else
        case state is
          when S0 => state <= S1;
          when S1 => state <= S2;
          when S2 => state <= S0;
        end case;
      end if;
    end if;
  end process;

  -- datapath
  m1_src2 <= i2 when state = S0 else r1;
  m1      <= i1(7 downto 0) * m1_src2(7 downto 0);
  a1_src1 <= r3 when state = S1 else r2;
  a1_src2 <= i1 when state = S1 else a2;
  a1      <= a1_src1 + a1_src2;
  a2      <= r1 + r3;

  -- registers
  process (clk) begin
    if rising_edge(clk) then
      if state = S0 then r1 <= i1;
      else               r1 <= r2;
      end if;
    end if;
  end process;

  process (clk) begin
    if rising_edge(clk) then
      r2 <= m1;
    end if;
  end process;

  process (clk) begin
    if rising_edge(clk) then
      if state = S0 then r3 <= i2;
      else               r3 <= a1;
      end if;
    end if;
  end process;

  o1 <= r3;

end architecture;

State Encoding

Use a one-hot state encoding.

SLIDE 99

Don't Care: Encoding-Based Instantiations

For this simple example, the encoding-based instantiations are trivial.

[Worksheet: annotated dataflow diagram and control table with the don't-care entries filled in using the one-hot encoding (states S0, S1, S2)]

5.5.9 VHDL #2

architecture main of vanier is
  signal r1, r2, r3, a1, a1_src1, a1_src2,
         a2, m1, m1_src2 : unsigned(15 downto 0);
  subtype state_ty is std_logic_vector(2 downto 0);
  constant S0 : state_ty := "001";
  constant S1 : state_ty := "010";
  constant S2 : state_ty := "100";
  signal state : state_ty;
begin

  -- control
  process (clk) begin
    if rising_edge(clk) then
      if reset = '1' then
        state <= S0;
      else
        -- rotate 1 bit to the left
        state <= state(1 downto 0) & state(2);
      end if;
    end if;
  end process;

  -- datapath
  m1_src2 <= i2 when state = S0 else r1;
  m1      <= i1(7 downto 0) * m1_src2(7 downto 0);
  a1_src1 <= r3 when state = S1 else r2;
  a1_src2 <= i1 when state = S1 else a2;
  a1      <= a1_src1 + a1_src2;
  a2      <= r1 + r3;

  -- registers
  process (clk) begin
    if rising_edge(clk) then
      if state(0) = '1' then r1 <= i1;
      else                   r1 <= r2;
      end if;
    end if;
  end process;

  process (clk) begin
    if rising_edge(clk) then
      r2 <= m1;
    end if;
  end process;

  process (clk) begin
    if rising_edge(clk) then
      if state(0) = '1' then r3 <= i2;
      else                   r3 <= a1;
      end if;
    end if;
  end process;

  o1 <= r3;

end architecture;

5.5.10 Notes and Observations

Our functional requirement was written as:

  z = (a × d) + (d × b) + b + c

If we had been given the functional requirement:

  z = (a × d) + b + (d × b) + c

we could have used the same design, because the two equations are equivalent.

SLIDE 100

Data Dependency Graphs: Clean vs Ugly

The naive data-dependency graph for the second formulation is much messier than the data-dependency graph for the original formulation:

Original: (a × d) + (d × b) + b + c

[Figure: clean data-dependency graph]

Alternative: (a × d) + b + (d × b) + c

[Figure: messier data-dependency graph]

5.6 Memory Operations in Dataflow Diagrams

[Table to fill in: operation, location, inputs, output for memory read and memory write]


Memory Read

Hardware

[Hardware: memory M with ports WE, A, DI, DO; a read drives address a and returns do]

Behaviour

[Waveform: clk, a = αa, do = M(αa)]

[Dataflow-diagram notation and FSM for a memory read]

Memory Write

Hardware

[Hardware: memory M with ports WE, A, DI, DO; a write drives address a, data di, and we]

Behaviour

[Waveform: clk, a = αa, di = αd, we asserted; M(αa) becomes αd]

[Dataflow-diagram notation and FSM for a memory write]

SLIDE 101

Dual-Port Memory

Hardware

[Hardware: dual-port memory M with ports WE, A0, DI0, DO0, A1, DO1]

Behaviour

[Waveform: port 0 writes αd to address αa while port 1 reads address βa (do1 = M(βa))]

[Dataflow-diagram notation and FSM for dual-port memory operations]

Sequence of Memory Operations

Hardware

[Hardware: dual-port memory M]

Behaviour

[Waveform: a sequence of reads and writes for parcels α, β, γ, θ]

[Dataflow-diagram notation and FSM for a sequence of memory operations]


5.7 Data Dependencies

Definition of Three Types of Dependencies

Read after Write (true dependency):   M[i] := ...   followed by   ... := M[i]
Write after Write (load dependency):  M[i] := ...   followed by   M[i] := ...
Write after Read (anti dependency):   ... := M[i]   followed by   M[i] := ...

Instructions in a program can be reordered, so long as the data dependencies are preserved.
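A reordering is legal exactly when every RAW, WAW, and WAR dependency still points forward. A hypothetical Python sketch of that check (the `deps`/`reorder_is_legal` names and the (writes, reads) encoding are assumptions of this sketch):

```python
def deps(program):
    """Each instruction is (writes, reads), two sets of locations.
    Returns the set of dependencies (i, j, kind) with i before j."""
    edges = set()
    for j, (wj, rj) in enumerate(program):
        for i in range(j):
            wi, ri = program[i]
            if wi & rj: edges.add((i, j, "RAW"))
            if wi & wj: edges.add((i, j, "WAW"))
            if ri & wj: edges.add((i, j, "WAR"))
    return edges

def reorder_is_legal(program, order):
    """True iff every dependency (i, j) keeps i before j in the new order."""
    pos = {old: new for new, old in enumerate(order)}
    return all(pos[i] < pos[j] for i, j, _ in deps(program))

# M[2] := 21 ; A := M[2] ; M[2] := 32
prog = [({"M[2]"}, set()), ({"A"}, {"M[2]"}), ({"M[2]"}, set())]
print(reorder_is_legal(prog, [0, 1, 2]))  # original order: legal
print(reorder_is_legal(prog, [0, 2, 1]))  # read moved past the 2nd write
```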

Purpose of Dependencies

[Figure: producer/consumer example on register R3, with writes W0, W1, W2 and read R1]

  • RAW ordering prevents R1 from happening before W1
  • WAW ordering prevents W0 from happening after W1
  • WAR ordering prevents W2 from happening before R1

SLIDE 102

Ordering of Memory Operations

Data Dependencies

Initial Program:

  M[2] := 21
  M[3] := 31
  A := M[2]
  B := M[0]
  M[3] := 32
  M[0] := 01
  C := M[3]

[Figure: dependency arcs drawn over the program]

Data Dependencies (Cont'd)

Initial Program:

  M[2] := 21
  M[3] := 31
  A := M[2]
  B := M[0]
  M[3] := 32
  M[0] := 01
  C := M[3]

Valid Modification:

[Reordered version of the program; all dependencies preserved]

369 CHAPTER 5. DATAFLOW DIAGRAMS 5.7. DATA DEPENDENCIES 370

Data Dependencies (Cont'd)

[Initial program beside another reordered version of it]

Valid (or Bad?) Modification

5.8 Example of DFD and Memory

This section examines the implementation of the pseudocode specification:

  M[a+1] = b;
  M[a]   = M[a+1];
  M[c]   = M[c] - M[a];
  z      = M[c];
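Before drawing the dataflow diagram, it helps to see what the pseudocode computes. A hypothetical Python sketch executes it on a small memory and checks one candidate optimization: since M[a+1] == b right after the first write, the read in the second statement can be forwarded.

```python
def original(M, a, b, c):
    M = dict(M)
    M[a + 1] = b
    M[a] = M[a + 1]
    M[c] = M[c] - M[a]
    return M, M[c]                # final memory and z

def forwarded(M, a, b, c):
    M = dict(M)
    M[a + 1] = b
    M[a] = b                      # forwarded value: one memory read removed
    M[c] = M[c] - M[a]
    return M, M[c]

mem = {0: 5, 1: 7, 2: 9, 3: 11}
assert all(original(mem, a, 4, c) == forwarded(mem, a, 4, c)
           for a in (0, 1, 2) for c in (0, 1, 2, 3))
print("forwarding preserves z and M")
```

This is only one of the optimizations allowed by note 8; whether it helps depends on the memory-port budget.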

SLIDE 103

NOTES:

  • 1. Inputs shall be combinational
  • 2. Outputs shall be registered
  • 3. The system shall support an unpredictable number of bubbles
  • 4. Memory has combinational inputs and registered outputs (same as in class)
  • 5. The memory may be either dual-ported or single-ported.
  • 6. Optimization goals in order of decreasing importance:

(a) minimize latency to z (b) minimize clock period (c) minimize area

  • i. input ports
  • ii. adders and subtracters
  • iii. registers
  • iv. output ports
  • v. use single-ported memory instead of dual-ported memory
  • 7. Input values may be read in any clock cycle, but each input value shall be read exactly once.
  • 8. Optimizations to the pseudocode are allowed, as long as the final values of z and M are correct.

  • 9. You do not need to do allocation.

Pseudocode Optimization

Original:

  M[a+1] = b;
  M[a]   = M[a+1];
  M[c]   = M[c] - M[a];
  z      = M[c];

Optimization:


Dataflow Diagram

Memory Ports

How many ports does your memory have? Briefly justify that your choice of the number of memory ports produced the optimal design.

SLIDE 104

Chapter 6 Optimizations



6.1 Pipelining

  • Exploit “hardware runs in parallel”
  • Performance optimization at cost of increased area
  • Overlap the execution of multiple parcels
  • Divide design into stages
  • Maximum of one parcel executing per stage
  • No sharing of hardware between stages

6.1.1 Introduction to Pipelining

Unpipelined

[Figure: unpipelined dataflow diagram computing z = a + b + c + d + e + f, reusing one adder a1 and one register r1 over clock cycles 1–5; waveform shows parcel α producing one result every 5 cycles]

Pipelined

[Figure: pipelined dataflow diagram with stage registers r1–r5 (stages 1–5); waveform shows a new parcel α entering every clock cycle]

SLIDE 105

Unpipelined

[Figure: unpipelined dataflow diagram, one adder a1 and register r1 reused over clock cycles 1–5]

Comparison table (to fill in) for unpipelined vs pipelined: latency, bubbles, throughput, clock period, inputs, adders, registers

Pipelined

[Figure: pipelined dataflow diagram with adders a1–a5, registers r1–r5, inputs i1–i6, output o1, stages 1–5]

Pipelining does not change: ________
Pipelining does not change: ________
Pipelining does change: ________
Pipelining does change: ________

Sequential (Unpipelined) Hardware

[Figure: FSM with states State(0)–State(4) and reset, controlling the unpipelined hardware (adder a1, register r1, inputs i1, i2, output o1)]

Question: Parcel schedule?
Question: State encoding?
Question: Control circuitry?

Pipelined Hardware and VHDL Code

381 CHAPTER 6. OPTIMIZATIONS 6.1.1 Introduction to Pipelining 382

[Figure: pipelined hardware, stages 1–5 with adders a1–a5, registers r1–r5, inputs i1–i6, output o1]

-- stage 1
process begin
  wait until rising_edge(clk);
  r1 <= i1 + i2;
end process;

-- stage 2
process begin
  wait until rising_edge(clk);
  r2 <= r1 + i3;
end process;

-- stage 3
process begin
  wait until rising_edge(clk);
  r3 <= r2 + i4;
end process;

-- stage 4
process begin
  wait until rising_edge(clk);
  r4 <= r3 + i5;
end process;

-- stage 5
process begin
  wait until rising_edge(clk);
  r5 <= r4 + i6;
end process;

-- output
o1 <= r5;
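The stage-by-stage behaviour of this pipeline can be modeled in a hypothetical Python sketch (each parcel is a 6-tuple; `pipeline` and the tagging with a parcel index are assumptions of the sketch):

```python
def pipeline(parcels, drain=5):
    """Five-stage adder pipeline: r1 <= i1+i2, r2 <= r1+i3, ..., r5 <= r4+i6.
    Each stage register holds (partial_sum, parcel_index)."""
    stages = [None] * 5                 # r1 .. r5
    zs = []
    for t in range(len(parcels) + drain):
        zs.append(stages[4])            # registered output = r5
        new = [None] * 5
        for s in range(4, 0, -1):       # stages 2..5 update in parallel
            if stages[s - 1] is not None:
                total, p = stages[s - 1]
                new[s] = (total + parcels[p][s + 1], p)
        if t < len(parcels):            # stage 1: a new parcel every cycle
            new[0] = (parcels[t][0] + parcels[t][1], t)
        stages = new
    return [z for z in zs if z is not None]

print(pipeline([(1, 1, 1, 1, 1, 1), (2, 2, 2, 2, 2, 2)]))
```

Outputs appear back-to-back after a 5-cycle latency: throughput is one parcel per clock cycle.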

6.1.2 Partially Pipelined

  • Fully pipelined: throughput is one parcel per clock cycle
  • Partially pipelined: throughput is less than one parcel per clock cycle.
  • Superscalar: throughput is more than one parcel per clock cycle.

[Figure: dataflow diagram computing z = a + b + c + d + e + f, partially pipelined over clock cycles 1–5]

Question: How do we execute α followed by β?

[Waveform: clk, a, z with stage registers r1–r3 over clock cycles 0–13]

Comparison table (to fill in): latency, bubbles, throughput, clock period, registers, adders

SLIDE 106

Hardware for Partially Pipelined

[Figure: FSM with states State(0), State(1) and reset; partially pipelined hardware with stages 1–3 (adders a1–a3, registers r1–r3, inputs i1–i4, output o1)]

Question: How do we determine the number of states?

6.1.3 Terminology

Definition Depth: The depth of a pipeline is the number of stages on the longest path through the pipeline.

Definition Latency: The latency of a pipeline is measured the same as for an unpipelined circuit: the number of clock cycles from inputs to outputs.

Definition Throughput: The number of parcels consumed or produced per clock cycle.

Definition Upstream/downstream: Because parcels flow through the pipeline analogously to water in a stream, the terms upstream and downstream are used respectively to refer to earlier and later stages in the pipeline. For example, stage 1 is upstream from stage 2.

Definition Bubble: When a pipe stage is empty (contains invalid data), it is said to contain a "bubble".


6.1.4 Overlapping Pipeline Stages

  • A single parcel may be in multiple stages at the same time

Example: a store instruction in a microprocessor uses separate stages for address and data

  • Transferring a parcel between stages may require multiple clock cycles

Example: a 16×16 macroblock of pixels in video processing

We illustrate overlapping pipe stages with a simple example.

[Figure: dataflow diagram with four F operations and four G operations over clock cycles 1–6, sharing registers r1 and r2; inputs a–e, output z]

Externally visible behavior:

[Waveform: externally visible behaviour of input, system, and output for parcels α and β over clock cycles 1–13]

System totals: inputs 2, registers 2, F units 1, G units 1, total area 4, latency 6, throughput 1/6

Internal behaviour:

[Waveform: r1 and r2 contents for parcels α and β over clock cycles 1–13]

Unpipelined implementation

SLIDE 107

Design Space Exploration

Fully pipelined

[Figure: fully pipelined dataflow diagram; table to fill in: #regs, #F, #G, #r+F+G, throughput]

Throughput = 1/2

[Figure: dataflow diagram achieving throughput 1/2; same table to fill in]

Design Space Exploration (Cont'd)

Goal: maximum throughput using just 1 F and 1 G

[Figures: two candidate dataflow diagrams; tables to fill in: #regs, #F, #G, #r+F+G, throughput]


Design Comparison

[Figures: two implementations of the overlapped design (one using registers r1–r3, one using r1–r2), each with its dataflow diagram, stage assignment, and waveform for parcels α and β over clock cycles 1–13]

Implementation of Overlapping Stages

[Figure: dataflow diagram with stages 1–2, registers r1–r3, inputs i1–i3, output o1]

-- valid bits
v(0) <= i_valid;
process begin
  wait until rising_edge(clk);
  v(6 downto 1) <= v(5 downto 0);
end process;

-- stage 1
f1_src2 <= i2 when          else r1;
process begin
  wait until rising_edge(clk);
  r1 <= f( i1, f1_src2 );
  r2 <= r1;
end process;

SLIDE 108

Implementation of Overlapping (Cont'd)

[Figure: dataflow diagram with stages 1–2, registers r1–r3, inputs i1–i3, output o1]

-- stage 2
g1_src1 <= (others => '0') when          else r1;
g1_src2 <= r2 when          else r3;
process begin
  wait until rising_edge(clk);
  r3 <= g( g1_src1, g1_src2 );
end process;

o1 <= r3;

Review: Pipelining

Analyze the dataflow diagram below.

[Figure: dataflow diagram with F and G operations on inputs a, b, c, d, producing z]

Analysis (to fill in): stages, latency, clock period, throughput, inputs, registers, F, G


6.2 Staggering

This is an advanced section. It is not covered in the course and will not be tested.

6.3 Retiming

Goal: decrease the clock period without changing the input-to-output behaviour of the system.

Technique: move registers "through" gates to balance delay between registers.

[Figures: retime to balance delays; push a flop through an AND gate; push a flop through a wire fork]

SLIDE 109

Example

Question: Do the two circuits below have the same behaviour?

[Circuits: two versions computing z from a, b, c, d, e, with registers r1, r2, r3 placed differently]

Extra copy for scratch work:

[Blank copy of the circuit]

Example with State Machine

[Circuit and waveform: state machine (states S0–S3) driving select sel and signals x, y, z; path delays 10 ns, 2 ns, 7 ns]

process begin
  wait until rising_edge(clk);
  if state = S1 then
    z <= a + c;
  else
    z <= b + c;
  end if;
end process;


Retimed Circuit and Waveform

[Retimed circuit and waveform: sel is registered as sel_d, rebalancing the 10 ns / 2 ns / 7 ns paths]

Original behaviour

process (state) begin
  if state = S1 then
    sel <= '1';
  else
    sel <= '0';
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if sel = '1' then
    ...  -- code for z
  end if;
end process;

Retimed

process begin
  wait until rising_edge(clk);
  if state =          then
    sel <= '1';
  else
    sel <= '0';
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if sel = '1' then
    ...  -- code for z
  end if;
end process;

SLIDE 110

Review: Retiming

For each of the example circuits below, answer whether it is correct with respect to the specification circuit.

Specification circuit

[Circuit: a, b, c, d, e, f producing z]

Example circuit 1

[Retimed circuit with an extra register e2]

Specification circuit

[The same specification circuit, repeated]

Example circuit 2

[Retimed circuit with extra registers a2, a3, b2, c2]


6.4 General Optimizations

6.4.1 Strength Reduction

Strength reduction replaces one operation with another that is simpler.

6.4.1.1 Arithmetic Strength Reduction

Multiply by a constant power of two  →  wired shift logical left
Multiply by a power of two           →  shift logical left
Divide by a constant power of two    →  wired shift logical right
Divide by a power of two             →  shift logical right
Multiply by 3                        →  wired shift and addition
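These identities are easy to sanity-check in Python (a hypothetical sketch; for non-negative integers, >> matches a logical shift right):

```python
for x in range(256):
    assert x * 4 == x << 2         # multiply by a constant power of two
    assert x // 8 == x >> 3        # divide by a constant power of two
    assert x * 3 == (x << 1) + x   # multiply by 3: one wired shift + one add
print("ok")
```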

6.4.1.2 Boolean Strength Reduction

Boolean tests that can be implemented as wires

  • is odd, is even
  • is neg, is pos

By choosing your encodings carefully, you can sometimes reduce a vector comparison to a wire. For example, if your state uses a one-hot encoding, then the comparison state = S3 reduces to state(3) = '1'. You might expect a reasonable logic-synthesis tool to do this reduction automatically, but most tools do not.

When using encodings other than one-hot, Karnaugh maps can be useful tools for optimizing vector comparisons. By carefully choosing our state assignments, when we use a full binary encoding for 8 states, the comparison (state = S0 or state = S3 or state = S4) can be reduced from looking at 3 bits to looking at just 2 bits. If we have a condition that is true for four states, then we can find an encoding that looks at just 1 bit.
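The one-hot reduction can be illustrated with a hypothetical Python sketch (the 4-state encoding below is an assumption of the sketch):

```python
S0, S1, S2, S3 = 0b0001, 0b0010, 0b0100, 0b1000   # one-hot encoding

for state in (S0, S1, S2, S3):
    # full vector comparison vs. single-bit test: always the same answer
    assert (state == S3) == bool(state & 0b1000)  # "state(3) = '1'"
print("ok")
```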

SLIDE 111

6.4.2 Replication and Sharing

6.4.2.1 Mux-Pushing

Pushing multiplexors into the fanin of a signal can reduce area.

Before:
  z <= a + b when (w = '1') else a + c;

After:
  tmp <= b when (w = '1') else c;
  z   <= a + tmp;

The first circuit will have two adders, while the second will have one adder. Some synthesis tools will perform this optimization automatically, particularly if all of the signals are combinational.
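The rewrite is behaviour-preserving, which a hypothetical Python sketch can confirm exhaustively over a small range:

```python
def before(a, b, c, w):
    return a + b if w else a + c     # two adders, mux on the sums

def after(a, b, c, w):
    tmp = b if w else c              # mux pushed into the adder's fanin
    return a + tmp                   # one adder

assert all(before(a, b, c, w) == after(a, b, c, w)
           for w in (False, True)
           for a in range(8) for b in range(8) for c in range(8))
print("equivalent")
```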

6.4.2.2 Common Subexpression Elimination

Introduce new signals to capture subexpressions that occur in multiple places in the code.

Before:
  y <= a + b + c when (w = '1') else d;
  z <= a + c + d when (w = '1') else e;

After:
  tmp <= a + c;
  y   <= b + tmp when (w = '1') else d;
  z   <= d + tmp when (w = '1') else e;


Subexpression Elimination

Note: Clocked subexpressions. Care must be taken when doing common subexpression elimination in a clocked process. Putting the "temporary" signal in the clocked process will add a clock cycle to the latency of the computation, because the tmp signal will be a flip-flop. The tmp signal must be combinational to preserve the behaviour of the circuit.

6.4.2.3 Computation Replication

  • To improve performance
    – If the same result is needed at two very distant locations and wire delays are significant, it might improve performance (increase clock speed) to replicate the hardware.
  • To reduce area
    – If the same result is needed at two different times that are widely separated, it might be cheaper to reuse the hardware component to repeat the computation than to store the result in a register.

Note: Muxes are not free. Each time a component is reused, multiplexors are added to its inputs and/or outputs. Too much sharing of a component can cost more area in additional multiplexors than would be spent in replicating the component.


6.4.3 Arithmetic

Perform arithmetic on the minimum number of bits needed. If you only need the lower 12 bits of a result, but your input signals are 16 bits wide, trim your inputs to 12 bits. This results in a smaller and faster design than computing all 16 bits of the result and trimming the result to 12 bits.
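The trimming advice relies on modular arithmetic: the low bits of a sum depend only on the low bits of the inputs. A small check, with the 16-bit inputs and 12-bit result from the text:

```python
# If only the lower 12 bits of a sum are needed, adding the inputs
# trimmed to 12 bits gives the same low 12 bits as adding the full
# 16-bit inputs and trimming the result.
MASK12 = 0xFFF

def full_then_trim(a, b):
    return (a + b) & MASK12                          # 16-bit adder

def trim_then_add(a, b):
    return ((a & MASK12) + (b & MASK12)) & MASK12    # 12-bit adder

for a in range(0, 1 << 16, 997):
    for b in range(0, 1 << 16, 1009):
        assert full_then_trim(a, b) == trim_then_add(a, b)
```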

6.5 Customized State Encodings

This is an advanced section. It is not covered in the course and will not be tested.


Chapter 7 Performance Analysis


7.1 Introduction

Hennessy and Patterson’s Computer Architecture: A Quantitative Approach (the textbook for E&CE 429) has good information on performance. We will use some of the same definitions and formulas as Hennessy and Patterson, but we will move away from generic definitions of performance for computer systems and focus on performance for digital circuits.

7.2 Defining Performance

    Performance = Work / Time

You can double your performance by doing twice the work in the same amount of time, OR doing the same amount of work in half the time.


Benchmarking

    Performance = Work / Time

Measuring time is easy, but how do we accurately measure work? The game of benchmarketing is finding a definition of work that makes your system appear to get the most work done in the least amount of time.

    Measure of Work      Measure of Performance
    clock cycle          MHz
    instruction          MIPS
    synthetic program    Whetstone, Dhrystone, D-MIPS (Dhrystone MIPS)
    real program         SPEC (PCs), EEMBC (embedded)
    travel 1/4 mile      drag race

Throughput vs Latency

Two common measures of performance:

    Latency      Response time
    Throughput   Bandwidth

Often there is a tradeoff between latency and bandwidth.

  • For general-purpose systems, throughput is usually most important.
  • For real-time systems, latency is often most important.


7.3 Benchmarks

Historical Benchmarks

MIPS: millions of instructions per second (“My NOP instruction is faster than yours”).

Whetstone:

  • First general-purpose benchmark for computer performance.
  • Synthetic: an artificial program designed to be quick and easy to run, but to reflect the performance of a real program.
  • H. J. Curnow and B. A. Wichmann. A synthetic benchmark. The Computer Journal, 19(1):43–49, Feb. 1976.
  • Based on the Algol-60 compiler developed by the Atomic Power Division of the English Electric Company, Whetstone, Leicester, England, for the KDF9 computer.

Dhrystone: a pun on Whetstone.
D-MIPS: MIPS using the Dhrystone mix of instructions.

  • Synthetic benchmarks worked well for computers in the 1970s and 1980s.
  • As caches became larger, an entire synthetic program could fit in the first-level cache, which resulted in unrealistic performance.

SPEC Benchmarks

The SPEC benchmarks are among the most respected and accurate predictors of real-world performance for desktop PCs and servers.

Definition SPEC: Standard Performance Evaluation Corporation. Mission: “To establish, maintain, and endorse a standardized set of relevant benchmarks and metrics for performance evaluation of modern computer systems” (http://www.spec.org). The SPEC organization has different benchmarks for integer software, floating-point software, web-serving software, etc.


EEMBC Benchmarks

EEMBC: Embedded Microprocessor Benchmark Consortium. A variety of benchmarks (Android, web browsing, multicore, etc.) to evaluate microprocessors used in smartphones, tablets, and firewall appliances.

7.4 Comparing Performance

7.4.1 General Equations


Speedup

Example sentences:

  • A new system has n times the performance of the old system.
  • This optimization provides an n× speedup.

    Speedup(New, Old) = PerfNew / PerfOld

Using speedup to calculate performance: PerfNew = Speedup × PerfOld, or equivalently PerfOld = PerfNew / Speedup.

Using PerfHigh and PerfLow: Speedup = PerfHigh / PerfLow

Performance vs Time

Performance is inversely proportional to time: Perf = 1 / Time. Using time to measure performance, the equation for speedup is:

    Speedup(New, Old) = PerfNew / PerfOld = (1/TimeNew) / (1/TimeOld) = TimeOld / TimeNew

Using TimeSlow and TimeFast: Speedup = TimeSlow / TimeFast
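The two forms of the speedup equation can be sketched as small helpers; note how the ratio flips when times are used instead of performances.

```python
# The speedup identities above, as helper functions.
def speedup_from_perf(perf_new, perf_old):
    return perf_new / perf_old

def speedup_from_time(time_new, time_old):
    # Perf = 1 / Time, so the ratio flips: old time over new time.
    return time_old / time_new

# Halving the time doubles the performance, so both forms agree:
assert speedup_from_perf(2.0, 1.0) == speedup_from_time(5.0, 10.0) == 2.0
```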


Bigger Than and Smaller Than

Equation for “New is n% bigger than Old”:

    PctBigger = (New − Old) / Old

“New is n% smaller than Old”:

    PctSmaller = (Old − New) / Old

Derive n% bigger from speedup:

    PctBigger = Speedup − 1 = New/Old − 1 = New/Old − Old/Old = (New − Old) / Old

Bigger Than (Cont’d)

The performance of New is n% bigger than the performance of Old:

    PctBigger = (PerfNew − PerfOld) / PerfOld

Use percentage-bigger to write an equation for PerfNew in terms of PerfOld:

    PerfNew = (PctBigger + 100%) × PerfOld

or, equivalently:

    PerfNew = PctBigger × PerfOld + PerfOld


Converting between Bigger and Smaller Than

Question: If A is n% bigger than B, how much smaller is B than A?
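One way to work the conversion question, sketched in software; percentages are expressed as fractions (0.25 means 25%).

```python
# If A = (1 + b) * B, then B is smaller than A by
# (A - B) / A = b / (1 + b).
def pct_smaller_from_bigger(pct_bigger):
    return pct_bigger / (1.0 + pct_bigger)

# If A is 25% bigger than B, then B is 20% smaller than A.
assert abs(pct_smaller_from_bigger(0.25) - 0.20) < 1e-12
```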

Average Performance of Multiple Tasks

Another useful formula is the average time to do one of k different tasks, where task i happens %i of the time and takes an amount of time Ti each time it is done:

    TAvg = Σ_{i=1}^{k} (%i)(Ti)
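The weighted-average formula is a one-liner in software; the task mix used below is made up for illustration.

```python
# TAvg = sum over tasks of (fraction of the time) * (time per task).
def t_avg(mix):
    # mix: list of (fraction_of_time, task_time) pairs; fractions sum to 1.
    return sum(frac * t for frac, t in mix)

# 80% short tasks (1.0) and 20% long tasks (5.0) average to 1.8.
assert abs(t_avg([(0.8, 1.0), (0.2, 5.0)]) - 1.8) < 1e-12
```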


7.4.2 Example: Performance of Printers

This section reserved for your reading pleasure

7.5 Clock Speed, CPI, Program Length, and Performance

7.5.1 Mathematics

CPI: cycles per instruction
NumInsts: number of instructions
ClockSpeed: clock speed
ClockPeriod: clock period

    Time = NumInsts × CPI × ClockPeriod
    Time = NumInsts × CPI / ClockSpeed
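Both forms of the timing equation above can be sketched as helpers; the numbers in the assertion are illustrative only.

```python
# Time = NumInsts * CPI * ClockPeriod = NumInsts * CPI / ClockSpeed.
def exec_time_from_period(num_insts, cpi, clock_period_s):
    return num_insts * cpi * clock_period_s

def exec_time_from_speed(num_insts, cpi, clock_speed_hz):
    return num_insts * cpi / clock_speed_hz

# 1e9 instructions at CPI 1.25 on a 2 GHz clock (0.5 ns period):
assert abs(exec_time_from_period(1e9, 1.25, 0.5e-9)
           - exec_time_from_speed(1e9, 1.25, 2e9)) < 1e-12
```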


7.5.2 Example: CISC vs RISC and CPI

                     Clock Speed   SPECint
    AMD Athlon         1.1 GHz       409
    Fujitsu SPARC64    675 MHz       443

The AMD Athlon is a CISC microprocessor (it uses the IA-32 instruction set). The Fujitsu SPARC64 is a RISC microprocessor (it uses Sun’s SPARC instruction set). Assume that it requires 20% more instructions to write a program in the SPARC instruction set than the same program requires in IA-32.

SPECint and Performance

                     Clock Speed   SPECint
    AMD Athlon         1.1 GHz       409
    Fujitsu SPARC64    675 MHz       443

Question: Which of the two processors has higher performance?


Relative CPI

Question: What is the ratio between the CPIs of the two microprocessors?
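The slides leave this as an exercise; one way to set it up, treating the SPECint scores as proportional to performance (so time is proportional to 1/SPECint), is sketched below. Only the ratio of the CPIs can be recovered this way, not their absolute values.

```python
# Using Time = NumInsts * CPI / ClockSpeed and the data in the table.
spec_athlon, clock_athlon = 409, 1.1e9    # IA-32 (CISC)
spec_sparc,  clock_sparc  = 443, 675e6    # SPARC (RISC)
insts_ratio = 1.2    # the SPARC program has 20% more instructions

# Time_athlon / Time_sparc = spec_sparc / spec_athlon, so
#   (N * cpi_a / clk_a) / (1.2 N * cpi_s / clk_s) = spec_s / spec_a
#   => cpi_a / cpi_s = (spec_s / spec_a) * 1.2 * (clk_a / clk_s)
cpi_ratio = (spec_sparc / spec_athlon) * insts_ratio * (clock_athlon / clock_sparc)
print(f"CPI(Athlon) / CPI(SPARC64) = {cpi_ratio:.2f}")   # 2.12
```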

Absolute CPI

Question: Can you determine the absolute (actual) CPI of either microprocessor?


7.5.3 Effect of Instruction Set on Performance

In this section we examine how changing the instructions that a processor performs can affect its performance. Your group designs a microprocessor and you are considering adding a fused multiply-accumulate to the instruction set.

  • A fused multiply-accumulate instruction does a multiply and an add:

            src1 src2  dst
        MAC R1,  R2,   R4   =   MUL R1, R2, R3
                                ADD R3, R4, R4

  • Often used in digital signal processing; see the multiply-accumulate pattern in the finite-impulse-response filter below.

    [Figure: finite-impulse-response filter with coefficients C1–C4, input a, and output z]

  • First added to RISC instruction sets by IBM with its POWER processor family: “Performance Optimization With Enhanced RISC”.

Using MAC Instruction

Original program:
    MUL R1, R2, R3
    ADD R4, R3, R4
    SUB R5, R7, R9
    MUL R1, R2, R3
    ADD R5, R3, R5
    SUB R1, R2, R3
    MUL R2, R3, R5
    ADD R5, R2, R5

Using MAC: ________


Problem Statement

Your studies have shown that, on average, half of the multiply operations are followed by an add instruction that could be done with a fused multiply-add. Additionally, you know:

            CPI    % of instructions
    ADD     0.8    15%
    MUL     1.2     5%
    Other   1.0    80%

Options

You have three options:

  • Option 1: no change.
  • Option 2: add the MAC instruction and increase the clock period by 20%; MAC has the same CPI as MUL.
  • Option 3: add the MAC instruction and keep the clock period the same; the CPI of a MAC is 50% greater than that of a multiply.

Question: Which option will result in the highest overall performance?
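One way to set up the comparison, under the stated assumptions. Per 100 instructions of the original mix there are 15 ADD, 5 MUL, and 80 other; half of the MULs (2.5) pair with an ADD, and each such MUL+ADD pair becomes a single MAC.

```python
# Relative execution time = (sum of count * CPI) * clock period.
def exec_time(counts_cpi, period):
    # counts_cpi: list of (instruction count, CPI) pairs
    return sum(n * cpi for n, cpi in counts_cpi) * period

T = 1.0   # baseline clock period, arbitrary units

opt1 = exec_time([(15, 0.8), (5, 1.2), (80, 1.0)], T)
# Options 2 and 3 run 12.5 ADD, 2.5 MUL, 2.5 MAC, and 80 other.
opt2 = exec_time([(12.5, 0.8), (2.5, 1.2), (2.5, 1.2), (80, 1.0)], 1.2 * T)
opt3 = exec_time([(12.5, 0.8), (2.5, 1.2), (2.5, 1.8), (80, 1.0)], T)

print(f"option 1: {opt1:.1f}")   # 98.0
print(f"option 2: {opt2:.1f}")   # 115.2
print(f"option 3: {opt3:.1f}")   # 97.5, the lowest time, so option 3 wins
```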


Review: Performance of Programs

Which option is better:

  • 1. 90% performance improvement to 10% of instructions
  • 2. 10% performance improvement to 90% of instructions
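One way to work the review question, assuming the two groups of instructions initially take time in proportion to their fraction of the instruction count, and that an n% performance improvement makes the affected part (1 + n/100) times faster.

```python
# Amdahl's-law style: new time = unaffected fraction
#                                + affected fraction / speedup.
def new_time(frac_improved, pct_improvement):
    speedup = 1.0 + pct_improvement / 100.0
    return (1.0 - frac_improved) + frac_improved / speedup

opt1 = new_time(0.10, 90)   # 90% improvement to 10% of instructions
opt2 = new_time(0.90, 10)   # 10% improvement to 90% of instructions
print(f"option 1 time: {opt1:.3f}")   # 0.953
print(f"option 2 time: {opt2:.3f}")   # 0.918, so option 2 is better
```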


7.6 Effect of Time to Market on Relative Performance

The performance of digital-hardware-based systems has grown historically at an exponential rate. To illustrate this concept, imagine companies A and B release competing products (A1, B1, A2, B2, A3, B3, ...) over a series of years where the performance of the average product in this category doubles every year.

[Figure: products A1–A4 and B1–B3 plotted against the years 2010–2014, alongside a performance curve that doubles every year and the performance of the average system]

Equation for exponential growth, where P increases by a factor of n every k units of time:

    P(t1) = P(t0) × n^((t1 − t0) / k)
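The growth equation as a helper function:

```python
# P(t0 + dt) = P(t0) * n ** (dt / k): P grows by a factor of n
# every k units of time.
def perf_at(p0, n, k, dt):
    return p0 * n ** (dt / k)

# Doubling (n = 2) every 12 months: after 24 months, 4x performance.
assert abs(perf_at(1.0, 2.0, 12.0, 24.0) - 4.0) < 1e-12
```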


Example Problem

Assume that performance of the average product in your market segment doubles every 18 months. You are considering an optimization that will improve the performance of your product by 7%. Question: If you add the optimization, how much can you allow your schedule to slip before the delay hurts your relative performance compared to not doing the optimization and launching the product according to your current schedule?
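One way to work the schedule-slip question using the exponential-growth equation: the 7% improvement is worth the amount of market time t that satisfies 2^(t/18) = 1.07.

```python
import math

# Solve 2 ** (t / doubling_months) = 1 + improvement for t.
def allowed_slip_months(improvement, doubling_months=18.0):
    return doubling_months * math.log(1.0 + improvement, 2.0)

print(f"allowed slip: {allowed_slip_months(0.07):.2f} months")   # 1.76
```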


Chapter 8 Timing Analysis


8.1 Delays and Definitions

In this section we will look at the different timing parameters of circuits. Our focus will be on those parameters that limit the maximum clock speed at which a circuit will work correctly.

8.1.1 Background Definitions

This section reserved for your reading pleasure

8.1.2 Clock-Related Timing Definitions

At the register transfer level, we think of the system as having a single, global clock. On the chip, the single clk signal in our source code is implemented as a clock tree containing many buffers and individual wires. On the physical chip, each flip-flop has its own clock signal.

    process begin
      wait until rising_edge(clk);
      c <= a + b;
      e <= c - d;
    end process;

[Figure: clock tree distributing clk0 through buffers clk1 and clk2 down to leaf clocks such as clk1.2.1 and clk2.2.1]


8.1.2.1 Clock Latency

[Figure: clock tree highlighting the latency from clk0 to the leaf clock clk1.2.1]

Definition Clock Latency: The delay from the source (oscillator) to a point in the clock tree.

Note: Clock latency does not affect the limit on the minimum clock period.

8.1.2.2 Clock Skew

[Figure: clock tree highlighting the skew between the leaf clocks clk1.0.1 and clk2.1.1]

Definition Clock Skew: The difference in arrival times of the same clock edge at different flip-flops. Clock skew is caused by the difference in interconnect delays to different points on the chip.

    Skew(clk1.0.1, clk2.1.1) = |Latency(clk1.0.1) − Latency(clk2.1.1)|


Clock Skew (Cont’d)

[Figure: waveforms for all of the leaf clocks, showing the skew for the whole circuit]

The clock skew for a circuit is the maximum skew between any two flip-flops.

8.1.2.3 Clock Jitter

[Figure: an ideal clock with period 10 next to a clock with jitter, whose periods vary (10, 11, 8, 9)]

Definition Clock Jitter: Difference between actual clock period and ideal clock period. Clock jitter is caused by:

  • temperature and voltage variations over time
  • temperature and voltage variations across different locations on a chip
  • manufacturing variations between different parts


Clock Tree Design

Clock tree design is critical in high-performance designs to minimize clock skew. Sophisticated synthesis tools put lots of effort into clock tree design, and the techniques for clock tree design still generate PhD theses.

8.1.3 Storage-Related Timing Definitions

8.1.3.1 Flops and Latches

[Figure: flop symbol (d, clk, q) and flop behaviour; latch symbol (d, clk, q) and latch behaviour]

Storage devices have two modes: load mode and store mode. Flops are edge sensitive; they are in load mode just before the clock edge. Latches are level sensitive; they are in load mode while their enable signal is asserted high (low for active-low latches).


8.1.3.2 Timing Parameters

In the pictures below, the goal is to load the data α into the flip-flop or latch.

[Figure: waveforms for a flip-flop, an active-high latch, and an active-low latch, annotating the Setup and Hold window around the clock edge and the Clock-to-Q delay on the output]

Setup and hold define the window in which input data are required to be constant in order to guarantee that the storage device will store data correctly. Clock-to-Q defines the delay from the clock edge to when the output is guaranteed to be stable.

8.1.3.3 Timing Parameters for a Flop: Good, Slow, and Fast

[Figure: three waveform scenarios for a flip-flop:
  good timing: data stable through the TSU/THO window;
  too slow: data arrives late, a setup violation;
  too fast: data changes too soon after the edge, a hold violation]


8.1.4 Propagation Delays

Propagation delay: the time it takes a signal to travel from the source (driving) flop to the destination flop.

    propagation delay = load delay + interconnect delay

Load delay: combinational gates between the flops.
Interconnect delay: wires between gates and flops.

8.1.5 Timing Constraints

Minimum Clock Period

[Figure: a flop clocked by clk1 drives signal b into combinational logic that produces c at a flop clocked by clk2; waveforms show, within one clock period, when each signal may change, may rise, may fall, and is stable]

    ClockPeriod > ________
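The inequality above is left blank for lecture; assembled from the parameters defined in this chapter, one common single-clock form is clock-to-Q(max) plus propagation delay plus setup, padded by skew (and jitter, when it is modelled). A small sketch:

```python
# Minimum clock period as the sum of the worst-case delays between
# two flops; this assembly of the standard constraint is a sketch,
# not the blank's official answer.
def min_clock_period(t_clk2q_max, t_prop, t_setup, t_skew=0.0, t_jitter=0.0):
    return t_clk2q_max + t_prop + t_setup + t_skew + t_jitter

# e.g. 3 ns clock-to-Q, 10 ns of logic, 2 ns setup, 1 ns skew:
assert min_clock_period(3.0, 10.0, 2.0, 1.0) == 16.0
```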


Hold Constraint

[Figure: circuit with flop output b driving flop input c; waveforms for a simple clock-to-Q model, a realistic clock-to-Q model with tco.min and tco.max, a hold violation, and the resulting hold constraint]

Review: Timing Parameters

  1. Setup: time ________ the clock transition that the input data is ________ to be stable
  2. Hold: time ________ the clock transition that the input data is ________ to be stable
  3. Clock-to-Q-Min: time ________ when the output data is ________ to be stable with the old data
  4. Clock-to-Q-Max: time ________ when the output data is ________ to be stable with the new data

8.1.5 Timing Constraints 459

slide-125
SLIDE 125

460 CHAPTER 8. TIMING ANALYSIS

  5. How do you fix a setup violation?
  6. How do you fix a hold violation?
  7. Draw a timing diagram for a flip-flop with the following timing parameters: setup 2 ns, hold 1 ns, clock-to-Q 3 ns.

     [Blank waveform grid for clk, d, and q at 1 ns resolution]

  8. Draw a timing diagram for an active-high latch with the following timing parameters: setup 2 ns, hold 1 ns, clock-to-Q 3 ns.

     [Blank waveform grid for clk, d, and q at 1 ns resolution]

8.2 Timing Analysis of Simple Latches

In this section, each gate has a delay of 1 time unit.

8.2.1 Review: Active-High Latch Behaviour


8.2.2 Structure and Behaviour of Multiplexer Latch

[Figure: multiplexer symbol and its gate-level implementation (inputs a and b, select signal sel); the latch built from the multiplexer, with d on one input, clk as the select, and the output fed back to the other input; load mode when the select is ’1’, store mode when it is ’0’]

Latch Glitching

[Figure: correct latch (with a pair of inverters on the clk signal) vs buggy latch (without them)]

The functionality of storage devices depends on timing.

  • For functionality at the register transfer level, we ignore timing (combinational logic has zero delay).
  • For storage devices, functionality depends on timing (delays through combinational logic).
  • Ignoring delays, the circuits above are equivalent, but the circuit on the right is actually incorrect.
  • The pair of inverters on the clk signal are needed. Together, they prevent a glitch on the OR gate when clk is deasserted. If there were only one inverter, a glitch would occur. For more on this, see section 8.2.8.


Loading and Storing Values

[Figure: four snapshots of the multiplexer latch, with the logic value on each internal wire annotated: loading ’0’ and loading ’1’ with clk=’1’, then storing ’0’ and storing ’1’ with clk=’0’]

8.2.3 Strategy for Timing Analysis of Storage Devices

The key to calculating the setup and hold times of a latch, flop, etc. is to identify:

  1. how the data is stored when in storage mode (often a combinational loop with a pair of inverters)
  2. the gate(s) that the clock uses to turn on the load path (allow the input to affect the internals of the storage element)
  3. the gate(s) that the clock uses to turn on the storage loop (allow the stored data to continue to circulate through the storage loop)
  4. the gate where the load path and storage loop join


8.2.4 Clock-to-Q Time of a Latch

[Figure: six snapshots of the multiplexer latch (signals clk, d, cn, c2, l1, l2, s1, s2, qn, q) tracing the clock-to-Q delay]


8.2.5 From Load Mode to Store Mode

[Figure: snapshots of the latch as clk transitions from load mode to store mode, with the stable value α circulating:
  t=0: clk transitions from load to store
  t=1: the clk edge propagates through the inverter
  t=2: s1 propagates to s2, because cn turns on the AND gate
  t=3: l2 is set to 0, because c2 turns off the AND gate
  t=4: α from the store path propagates to q
  t=5: α from the store path completes the cycle]


8.2.6 Setup Time Analysis

  1. When the latch is in store mode, there must be consistent values in the storage loop. Otherwise, the loop will be metastable.
  2. As the circuit transitions from load mode to store mode, there must be consistent values at the point where the load and store paths join.
  3. We must saturate the storage loop with the current value (α) before we turn on the storage loop; otherwise some instances of the old value (ω) will remain in the loop.
  4. Setup time is the time before the clock edge that the d-input must be stable with the current value (α).
  5. The setup time must be sufficient to saturate the storage loop with the current value (α) and flush out all of the old values (ω) before the storage loop is turned on.
  6. Paths for this specific circuit:
     • Path to saturate storage loop = d → s2
     • Path to turn on storage loop = clk → s2
  7. Equation for this specific circuit:

         TSU = delay(d → s2) − delay(clk → s2) = 6 − 2 = 4

Setup is the ________ that ________ needs so that it can ________ before ________.


Setup Violation

[Figure: snapshots of the latch for a setup violation, where d transitions from ω to α only one time-unit before the clock edge:
  t=-1: d transitions from ω to α
  t=0: α propagates through the inverter; clk transitions from load to store
  t=1: α propagates through the AND gate on the load path; ω is still on the input of the AND gate for the storage loop; the clk edge propagates through the inverter
  t=2: the old ω propagates through the AND gate
  t=3: l2 is set to 0, because c2 turns off the AND gate
Trouble: inconsistent values on the load path and store path; the old value (ω) is still in the store path when the store path is enabled]

[Figure, continued:
  t=4: ω/α from the store path propagates to q
  t=5: ω/α from the store path completes the cycle; instability illustrated with ω=0, α=1]

[Waveform: d, clk, cn, c2, l1, l2, s1, s2, qn, q from t=-3 to t=6, showing setup with negative margin]


We now repeat the analysis of setup violation, but illustrate the minimum violation (input transitions from ω to α 3 time-units before the clock edge).

[Figure: snapshots of the latch when d transitions from ω to α 3 time-units before the clock edge:
  t=-3: d transitions from ω to α
  t=-2: α propagates through the inverter
  t=-1: α propagates through the AND gate
  t=0: clk transitions from load to store
  t=1: the clk edge propagates through the inverter]

[Figure, continued:
  t=2: the old ω propagates through the AND gate; trouble: inconsistent values on the load path and store path, since the old value (ω) is still in the store path when the store path is enabled
  t=3: l2 is set to 0, because c2 turns off the AND gate
  t=4: ω/α from the store path propagates to q
  t=5: ω/α from the store path completes the cycle; instability illustrated with ω=0, α=1]

[Waveform: d, clk, cn, c2, l1, l2, s1, s2, qn, q from t=-3 to t=6, showing setup with negative margin]



8.2.7 Hold Time of a Multiplexer Latch

Hold Time Behaviour

[Figure: six snapshots of the multiplexer latch (signals clk, d, cn, c2, l1, l2, s1, s2, qn, q) tracing the hold-time behaviour]


Analysis

  1. When the latch is in store mode, there must be consistent values in the storage loop. Otherwise, the loop will be metastable.
  2. As the circuit transitions from load mode to store mode, there must be consistent values at the point where the load and store paths join.
  3. We must turn off the load path before the next value (β) affects the storage loop.
  4. Hold time is the time after the clock edge that the d-input must be stable with the current value (α).
  5. The hold time must be sufficient to turn off the load path before the next data value (β) is able to affect the internal circuitry such that it would affect the storage loop.
  6. Paths for this specific circuit:
     • Path to turn off load path = clk → l2
     • Path for input to affect internals = d → l2
  7. Equation for this specific circuit:

         THO = delay(clk → l2) − delay(d → l2) = 2 − 1 = 1


8.2.8 Example of a Bad Latch

Build a Bad Latch

[Figure: transforming the latch in four steps:
  1: original latch
  2: push the inverter from clk through the wire-fork
  3: delete the pair of back-to-back inverters
  4: compress the figure]


Behaviour of a Bad Latch

[Figure: snapshots of the bad latch: the circuit is stable in load mode with α; at t=0, clk transitions from load to store, and the remaining snapshots trace the resulting behaviour]


Analysis of Bad Latch

  1. When the latch is in store mode, there must be consistent values in the storage loop. Otherwise, the loop will be metastable.
  2. As the circuit transitions from load mode to store mode, there must be consistent values at the point where the load and store paths join.
  3. The current value (α) must arrive at the join gate for the storage loop and load path before the constant “off” value from the load path arrives at the join gate.
  4. Paths for this specific circuit:
     • Path from clk to store-enable to join = ________
     • Path from clk to load-enable to join = ________
  5. Equation for this specific circuit: ________

8.2.9 Summary

  1. Test if the latch is correct:
     (a) Find the storage loop
     (b) Find the load path
     (c) Check that load mode and store mode are mutually exclusive
     (d) Find the gates for load-enable, store-enable, and paths-join
     (e) Check that there is an even number of inversions on the load path when in load mode
     (f) Check that there is an even number of inversions in the storage loop when in store mode
     (g) Check that the path from clk to store-enable to join is faster than the path from clk to load-enable to join
  2. Determine if the latch is active high or active low
  3. Find the clock-to-Q time: the delay along the path from clk to load-enable to the output
  4. Find the setup time:
     delay(path for input to saturate storage loop) − delay(path to turn on storage loop)
     = delay(d to store-enable) − delay(clk to store-enable)
  5. Find the hold time:
     delay(path to turn off load path) − delay(path for input to affect internals)
     = delay(clk to load-enable) − delay(d to load-enable)
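The two formulas from the summary, applied to the path delays worked out for the multiplexer latch earlier in this section (delay(d → s2) = 6, delay(clk → s2) = 2, delay(clk → l2) = 2, delay(d → l2) = 1, in gate-delay units):

```python
# Setup = delay(d to store-enable) - delay(clk to store-enable).
def setup_time(d_to_store_enable, clk_to_store_enable):
    return d_to_store_enable - clk_to_store_enable

# Hold = delay(clk to load-enable) - delay(d to load-enable).
def hold_time(clk_to_load_enable, d_to_load_enable):
    return clk_to_load_enable - d_to_load_enable

assert setup_time(6, 2) == 4   # TSU for the mux latch
assert hold_time(2, 1) == 1    # THO for the mux latch
```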


8.3 Advanced Timing Analysis of Storage Elements

This is an advanced section. It is not covered in the course and will not be tested.

8.4 Critical Path

The critical path of a circuit is used to determine the maximum propagation delay of the circuit, which in turn constrains the minimum clock period.

Scenario:

  • a purely combinational circuit
  • at t = 0, one or more inputs change value
  • record the time of the last value change (edge) on an output

The maximum of the times of the last edge is the maximum delay (sometimes, just “delay”) through the circuit.


Example of Max Delay

[Figure: combinational circuit with inputs a, b and outputs y, z]

Each input may be 0, 1, rising, or falling, so for n inputs there are 4^n − 2^n input vectors that contain at least one edge.

[Waveforms: four excitations of a and b, with the last output edge at delay = 5.5 ns, 6.5 ns, 6.0 ns, and 7.8 ns respectively]

8.4.1 Introduction to Critical and False Paths

Definition critical path: The slowest path on the chip between flops, or between flops and pins. The critical path limits the maximum clock speed.

Definition false path: A path along which an edge cannot travel from beginning to end.

Throughout our discussion of critical paths, we will use the delay values for gates shown in the table below.

    gate   delay
    NOT    2
    AND    4
    OR     4
    XOR    6


8.4.1.1 Example of Critical Path in Full Adder

Question: Find the longest path through the full-adder circuit shown below.

[Figure: full-adder circuit with inputs ci, a, b, internal signals i, j, k, and outputs co, s]

Question: Does the excitation ci=1, a= , b=0 exercise the longest path?


Alternative Excitation

Question: Does the excitation ci=0, a= , b=1 exercise the critical path?



Exercising the Critical Path

Not all transitions on the inputs will exercise the critical path. Using timing simulation to find the maximum delay of a circuit might underestimate the delay, because the input values that you simulate might not exercise the critical path.

8.4.1.2 Longest Path and Critical Path

The longest path through the circuit might not be the critical path, because the behaviour of the gates might prevent an edge from travelling along the path. Using the longest path to find the maximum delay of a circuit might overestimate the delay, because the longest path might be a false path.

Example False Path

Question: Determine whether the longest path in the circuit below is a false path.

[Figure: circuit with inputs a, b and output y, excited four ways: a = 0 and a = 1, each with a rising and a falling edge on b]


Analytic Approach

Question: How can we determine analytically that this is a false path?

[Figure: the same circuit, with inputs a, b and output y]

Note: False paths are an advanced topic and are covered in section 8.5.

8.4.1.3 Criteria for Critical Path Algorithms

Let Tr be the real maximum time of the last edge on an output, as measured by stimulating the circuit with all possible input vectors. Given an algorithm to calculate the critical path, let Ta be the time of the last edge as calculated by the algorithm. Three criteria for evaluating the merits of a critical-path algorithm:

  1. Correctness: Ta ≥ Tr. The delay given by the algorithm must be at least as long as the real maximum delay.
  2. Optimality: the goal is to minimize Ta − Tr. The closer the algorithm is to the real maximum delay, the more optimal the algorithm is.
  3. Complexity: the goal is to minimize the computational complexity (i.e., run time) of the algorithm.

Question: Is “longest path” a correct critical path algorithm?


8.4.2 Longest Path

8.4.2.1 Algorithm to Find Longest Path

The basic idea is to annotate each signal with the maximum delay from it to an output.

  • Start at the destination signals and traverse through the fanin to the source signals.
    – Destination signals have a delay of 0.
    – At each gate, annotate the inputs with the delay through the gate plus the delay of the output.
    – When a signal fans out to multiple gates, annotate the output of the source (driving) gate with the maximum delay of the destination signals.
  • The primary input signal with the maximum delay is the start of the longest path. The delay annotation of this signal is the delay of the longest path.
  • The longest path is found by working from the source signal to the destination signals, picking the fanout signal with the maximum delay at each step.

8.4.2.2 Longest Path Example

Question: Find the longest path through the circuit below.

[Figure: circuit with inputs a, b, c, internal signals d through k, and outputs l, m]


8.4.3 Monotone Speedup

Variability

  • The delay through a gate can change over time:
    – temperature
    – supply voltage
    – other effects
  • The delay through two “identical” gates can be different:
    – manufacturing variability
    – load

Example: measure the delay 1000 times for each of 1000 “identical” AND gates.

[Figure: histogram of delay vs population]

Timing Models

When you design a circuit, you do not know the precise delays through the physical gates that will be on the chips. The manufacturer will discard all chips whose gates are too fast or too slow. Manufacturers give min/max bounds on the delay for each gate in the cell library. Critical path analysis with min/max delays is very complex. Goal: do critical path analysis with just the max delay through a gate. Problem: using the maximum delay through each gate might not cause the maximum delay in the circuit.


Slow Gates, Fast Chip

Behaviour with maximum delay through each gate:
[Waveforms for signals a, b, c, d, e, f with gate delays 2, 4, 2 — rising edge excitation; delays 2, 4, 2, 8, 6 — falling edge excitation]

Behaviour with minimum delay through b and d:
[Waveforms with delays 0.5, 1, 2, 6, 10 — rising edge excitation; delays 0.5, 1, 2 — falling edge excitation]

Monotonicity

Definition monotonic: A function f is monotonic if increasing its input causes the output to increase or remain the same. Mathematically: x < y ⇒ f(x) ≤ f(y).

Definition monotonous: A lecture is monotonous if increasing the length of the lecture increases the number of people who are asleep.

Definition monotone speedup: The maximum clock speed of a circuit should be monotonic with respect to the speed of any gate or sub-circuit. That is, if we increase the speed of part of the circuit, we should either increase the clock speed of the circuit or leave it unchanged.

Definition monotonous speedup: A lecture has monotonous speedup if increasing the pace of the lecture increases the number of people who are awake.

Monotone speedup criterion for a critical path algorithm: if we decrease the delay through any part of the circuit (speedup), then the delay calculated by the critical path algorithm will decrease or stay the same.

496 CHAPTER 8. TIMING ANALYSIS 8.4.3 Monotone Speedup 497

Review: Critical Path Analysis

  • 1. If we say that the delay along the longest path is the delay of the circuit, will this algorithm be correct with respect to monotone speedup?

8.5 False Paths

This is an advanced section. It is not covered in the course and will not be tested.


8.6 Analog Timing Model

Goal: define how to compute the delay of a gate or FPGA cell.

  • section 8.6: precise differential equations; complex because there are no closed-form solutions for realistic circuits
  • section 8.7: Elmore's approximation to the precise differential equations

Objectives

  • How to define the delay through a gate or circuit. (section 8.6.1)
  • How to model a circuit as an RC-network. (section 8.6.2)
  • How to calculate delay of an RC-network. (section 8.6.3)

8.6.1 Defining Delay

Goal: define "the delay through a gate".

Easy: [Waveform: Va and Vb vs time, with the delay marked between them]

Reality:

  • 1. The slope of the output (Vb) is dependent upon the slope of the input (Va): [Waveform: several output slopes — which delay?]


Defining Delay (cont'd)

  • 2. The more gates that a gate drives (the larger the load), the slower the output voltage will rise.

    [Waveforms: load of 1 gate — short delay; load of 4 gates — long delay]

Defining Delay (cont'd)

  • 3. Because the output waveform is sloped, we must choose the voltage level for Vb at which we will measure the delay.

    [Waveform: Vb actual waveform vs Vb discretized waveform, with trip points at 0.65 Vdd and 0.35 Vdd — which delay?]

Definition Trip Points: A high or ’1’ trip point is the voltage level where an upwards transition means the signal represents a ’1’. A low or ’0’ trip point is the voltage level where a downwards transition means the signal represents a ’0’.


Summary: Analog Delay Model

The standard approach to define the delay through a gate is:

  Input waveform:     step function
  Load circuit:       4 copies of the gate
  Output trip points: 0.65 Vdd for a 0 to 1 transition
                      0.35 Vdd for a 1 to 0 transition

The delay through a circuit is defined identically, except that the load is either a standard load such as 4 NAND gates, a load specified by the user, or the module in which the circuit is used.

8.6.2 Modeling Circuits for Timing

  • Model the delay of wires, gates, and connections between wires (via, switch-box, or antifuse).
  • Dominant factors in delay are resistance and capacitance.
  • Resistance and capacitance are affected by different parameters.

                  Resistance              Capacitance
  Wires           • Material              • Material (of wire and di-electric)
                  • Cross section         • Cross section
                  • Length                • Length
                                          • Distance to nearest wire
  Gates           • Usually negligible    • Size
  Vias,           • Usually large         • Usually negligible
  anti-fuses,
  switch-boxes


Resistance and Capacitance of Wires

Wires

  • Material
    – Typically aluminum or copper
    – Copper has less resistance than aluminum
    – Aluminum is simpler to work with
    – Aluminum is used for long, fat wires; copper is used everywhere else
    – For capacitance, the material of the di-electric (the material surrounding the wire) is also important

  • Cross section
    – Larger cross section has lower resistance
    – Tall narrow wires require less area, but have higher capacitance

  • Length: longer length has greater resistance and capacitance

Components and Models

[Figure: a gate, switch-box, and wire, each shown as physical chip, physical model, and RC network]


8.6.2.1 Example: Two Buffers with Complex Wiring

Schematic: [G1 driving G2]

Physical chip (one of many possible layouts): [G1, S1, W1, S2, W2, S3, W3, S4, G2]

Physical model: [G1, S1, W1, S2, W2, S3, W3, S4, G2]

RC network: [driven by Vi, with elements CG1, RS1–RS4, CW1–CW3, RW1–RW3, CG2]

8.6.2.2 Example: Two Buffers with Simple Wiring

Schematic: [G1 driving G2]

Physical chip: [G1, S1, W1, S2, G2]

Physical model: [G1, S1, W1, S2, G2]

RC network:


8.6.3 Calculate Delay

Trim for timing analysis: [RC network with V0, RS1, CW1, RW1, RS2, CG2]

  • Even this simple example is too complex for a first analysis.
  • Simplify further by assuming that the capacitance of the wire is much less than the capacitance of the gate (CW1 ≪ CG2).

[Simplified RC network: VG1, RS1, RW1, RS2, CG2, VG2]

  • Another simplification: collapse the line of resistors into a single resistor (this just makes the algebra simpler; it does not affect the precision of the analysis).

      R = RS1 + RW1 + RS2

[Simplified RC network: VG1, R, C, VG2]

Two Bufs: Derivation of Delay Equation

Goal: calculate the delay from VG1 to VG2.

[RC network: VG1, R, C, VG2]  [Waveforms: VG1, VG2, with the delay measured at 0.65 VDD]

Strategy:

  • 1. Derive an equation for VG2 in terms of VG1.
  • 2. Solve for the time for VG2 to reach 0.65 VDD, assuming that VG1 is a step function.


Two Bufs Derivation (Cont'd)

Goal: calculate the delay from VG1 to VG2.

[RC network: VG1, R, C, VG2]  [Waveforms: VG1, VG2, with the delay measured at 0.65 VDD]

Derivation of the equation for VG2:

  VG2(t) = VG1(t) − voltage drop from VG1 to VG2
         = VG1(t) − I(t)R

Equation for the current through the capacitor:

  I(t) = C dVG2(t)/dt

  VG2(t) = VG1(t) − (C dVG2(t)/dt) R
         = VG1(t) − RC dVG2(t)/dt

512 CHAPTER 8. TIMING ANALYSIS 8.6.3 Calculate Delay 513

Delay Analysis

With initial condition VG2(0) = 0 and forcing function VG1(t) = VDD step function, the closed-form solution is:

  VG2(t) = VDD − VDD e^(−t/RC)

Find the delay for VG2 to reach 0.65 VDD:

  0.65 VDD = VDD − VDD e^(−t/RC)
      0.35 = e^(−t/RC)
   ln 0.35 = ln(e^(−t/RC))
     −1.05 = −t/RC
     −1.0 ≈ −t/RC
        t ≈ RC

With a step function input, the delay for VG2 to reach 0.65 VDD is (RS1 + RW1 + RS2) CG2, which is commonly known as the RC time constant.

[Waveforms: VG1, VG2, with the delay measured at 0.65 VDD]
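The derivation above can be checked numerically. The component values below are made up for illustration (they are not from the notes); the point is that the exact trip-point delay is RC·ln(1/0.35) ≈ 1.05 RC, so "t ≈ RC" is a good approximation.

```python
import math

# Illustrative (made-up) values for the lumped RC model:
R = 100.0      # ohms: the lumped RS1 + RW1 + RS2
C = 20e-12     # farads: CG2
trip = 0.65    # trip point for a 0 -> 1 transition, as a fraction of VDD

# VG2(t) = VDD * (1 - e^(-t/RC)); solve VG2(t) = trip * VDD for t:
t_exact = -R * C * math.log(1.0 - trip)   # = RC * ln(1/0.35), about 1.05 RC
t_approx = R * C                          # the "t ≈ RC" approximation
print(t_exact, t_approx)
```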


8.6.4 Ex: Two Bufs with Both Caps

To calculate voltages and currents correctly, we number the nodes of the circuit consistently.

  • The voltage source and the top of each capacitor is a node.
  • We number the nodes, capacitors, and resistors.
  • Resistors are numbered according to the capacitor to their right.
  • Multiple resistors in series without an intervening capacitor are lumped into a single resistor.
  • Vi is the voltage at node i.
  • IRi is the current flowing through Ri.
  • ICi is the current flowing into/out of Ci.

[Original RC network: V0, RS1, CW1, RW1, RS2, CG2]
[With node numbers: V0, R1, C1 (node 1), R2, C2 (node 2), and currents IR1, IR2, IC1, IC2]
[Waveforms: VG1, VG2, with the delay measured at 0.65 VDD]

  • Calculate the delay from V0 to V2.
  • Derive the equation for V2:

      V2(t) = V0(t) − voltage drop from V0 to V2

    The voltage drop is the sum of the voltage drops across the resistors on the path from V0 to V2:

      V2(t) = V0(t) − IR1(t) R1 − IR2(t) R2


The current through a resistor is the sum of the currents through the downstream capacitors:

  IR1(t) = IC1(t) + IC2(t)
  IR2(t) = IC2(t)

  V2(t) = V0(t) − (IC1(t) + IC2(t)) R1 − IC2(t) R2

Group by currents:

  V2(t) = V0(t) − R1 IC1(t) − (R1 + R2) IC2(t)

Equation for the current through a capacitor:

  IC(t) = C dV(t)/dt

  V2(t) = V0(t) − R1 C1 dV1(t)/dt − (R1 + R2) C2 dV2(t)/dt

Problem: no closed-form solution!

Summary: Precise Equations

Measure the delay from the source node (node 0) to a particular destination (node i):

  • Initial condition: node i is at GND: Vi(0) = 0
  • Voltage at the source node is a step function to VDD
  • Measure the time for Vi to reach 0.65 VDD
  • Vi(t) = V0(t) − voltage drop across the resistors on the path from 0 to i
  • Voltage drop across a resistor: V = IR R
  • Current through a resistor is the sum of the currents through the downstream capacitors

The result is a system of coupled differential equations without a closed-form solution. To calculate Vi(t) precisely:

  • Write one equation for each node
  • This gives a set of n coupled differential equations in n variables
  • Use numerical methods to calculate Vi(t)
  • Note: to calculate the voltage at one node, we need to calculate the voltage at every node


8.7 Elmore Delay Model

8.7.1 Elmore Delay as an Approximation

To avoid solving a set of coupled differential equations, Elmore proposed a simple but effective approximation.

[RC network: V0, R1, C1 (node 1), R2, C2 (node 2), with currents IR1, IR2, IC1, IC2]

Exact equation:

  V2(t) = V0(t) − R1 C1 dV1(t)/dt − (R1 + R2) C2 dV2(t)/dt

Template with a closed-form solution:

  V(t) = V0(t) − k dV(t)/dt

Elmore's approximation: dV1(t)/dt = dV2(t)/dt

  V2(t) = V0(t) − R1 C1 dV2(t)/dt − (R1 + R2) C2 dV2(t)/dt
        = V0(t) − (R1 C1 + (R1 + R2) C2) dV2(t)/dt

With initial condition V2(0) = 0 and forcing function V0(t) = VDD step function:

  V2(t) = VDD − VDD e^(−t/(R1 C1 + (R1 + R2) C2))

Time for V2 to go from GND to 0.65 VDD:

  t = R1 C1 + (R1 + R2) C2


Plots

[Plot: V2(t) for the exact solution and the Elmore approximation, voltage 0.0–1.0 vs time 10–50]

8.7.2 A More Complicated Example

[RC tree: R1 C1, R2 C2, R3 C3, R4 C4]

Definition path: The path from the source node to a node i is the set of all resistors between the source and i. Example: path(2) =

Definition down: The set of capacitors downstream from a resistor is the set of all capacitors where current would flow through the resistor to charge the capacitor. You can think of this as the set of capacitors that are between the node and ground. Example: down(R2) =   Example: down(R4) =


Definition Elmore time constant:

Simple formula:

  TDi = Σ_{r ∈ path(i)} Rr × ( Σ_{c ∈ down(r)} Cc )

The conventional formula is more complex syntactically, but equivalent mathematically:

  TDi = Σ_{k ∈ Nodes} Ck × ( Σ_{r ∈ path(i) ∩ path(k)} Rr )

The equivalence of the two formulations can be shown by observing that if c is downstream from r, then r is on the path to c:

  c ∈ down(r)  ⇔  r ∈ path(c)
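The simple formula can be sketched directly in Python. The RC tree below is hypothetical (component values invented for illustration): each node maps to (resistance into the node, capacitance at the node, parent node), with node 0 as the source.

```python
# Hypothetical RC tree: node -> (R into node, C at node, parent node).
tree = {
    1: (100.0, 1e-12, 0),   # R1, C1 hang directly off the source (node 0)
    2: (200.0, 2e-12, 1),   # R2, C2 downstream of node 1
    3: (300.0, 3e-12, 1),   # R3, C3 on a side branch from node 1
}

def path(tree, i):
    """Nodes whose resistors lie between the source and node i."""
    p = []
    while i != 0:
        p.append(i)
        i = tree[i][2]
    return p

def down(tree, r):
    """Capacitors charged through resistor r: node r and everything below."""
    return [k for k in tree if r in path(tree, k)]

def elmore(tree, i):
    """TD_i = sum over r in path(i) of R_r * sum of C_c for c in down(r)."""
    return sum(tree[r][0] * sum(tree[c][1] for c in down(tree, r))
               for r in path(tree, i))

print(elmore(tree, 2))   # R2*C2 + R1*(C1 + C2 + C3)
```

For node 2 this gives 200·2pF + 100·(1+2+3)pF = 1.0 ns, and the side branch C3 contributes only through the shared resistor R1, as the formula says.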

Calculate Elmore Delay

Question: Calculate the Elmore delay to node 2.

[RC tree: R1 C1, R2 C2, R3 C3, R4 C4]

Question: You must increase the size of one resistor while minimizing the increase of the delay to node 2. Which resistor should you increase? Question: Which resistor should you decrease to maximally reduce the delay to node 2?

524 CHAPTER 8. TIMING ANALYSIS 8.7.2 A More Complicated Example 525

Analyze Lower Bound on Voltage

Summary of Analog and Elmore Delay

  • 1. The voltage drop from the source to a node is:
  • 2. The voltage drop across a resistor is:
  • 3. The current through a capacitor is:
  • 4. The Elmore approximation is:


8.8 Practical Usage of Timing Analysis

This is an advanced section. It is not covered in the course and will not be tested.


Chapter 9 Power Analysis and Power-Aware Design


9.1 Overview

9.1.1 Importance of Power and Energy

  • Laptops, PDAs, cell phones, etc. — obvious!
  • For microprocessors in personal computers, every watt above 40 W adds $1 to manufacturing cost
  • Approximately 25% of the operating expense of a server farm goes to energy bills
  • (Dis)comfort of the Unix labs in E2
  • Sandia Labs had to build a special sub-station when they took delivery of the Teraflops massively parallel supercomputer (over 9000 Pentium Pros)
  • High-speed microprocessors today can run so hot that they will damage themselves — Athlon reliability problems, Pentium 4 processor thermal throttling
  • In 2000, information technology consumed 8% of the total power in the US
  • Future power viruses: cell-phone viruses that cause the phone to run in full-power mode and drain the battery very quickly; PC viruses that cause the CPU to melt down


9.1.2 Power vs. Energy

Most people talk about "power" reduction, but sometimes they mean "power" and sometimes "energy."

  • Energy is stored and transmitted.
    – Stored (in a battery or capacitor), or transferred
    – Consumed to perform an operation
    – Goals of minimization:
      ∗ Reduce energy costs
      ∗ Increase battery life

  • Power is the rate of energy consumption or transmission (Energy/time).
    – Goals of minimization:
      ∗ Reduce the cost of heat-removal equipment
      ∗ Passive devices receive their energy continuously over the air:
        reduce size and cost; increase operational distance from the energy source

Power vs. Energy

  Type    Units   Equivalent types  Equations
  Energy  Joules  Work              = Volts × Coulombs = ½ × C × Volts²
  Power   Watts   Energy / Time     = Joules/sec = Volts × Coulombs/sec = Volts × Current

Memory refresh: Capacitors store energy. Capacitance = Coulombs/Volts. Current = Coulombs/sec.


9.1.3 Batteries, Power and Energy

9.1.3.1 Do Batteries Store Energy or Power?

  Energy = Volts × Coulombs
  Power  = Energy / Time

Batteries are rated in Amp-hours at a voltage:

  battery = Amps × Seconds × Volts
          = (Coulombs / Seconds) × Seconds × Volts
          = Coulombs × Volts
          = Energy

Batteries store energy.

9.1.3.2 Battery Life and Efficiency

To extend battery life, we want to increase the amount of work done and/or decrease the energy consumed. Work and energy have the same units; therefore, to extend battery life, we truly want to improve efficiency. "Power efficiency" of microprocessors is normally measured in MIPS/Watt. Is this a real measure of efficiency?

  MIPS / Watts = (millions of instructions / Seconds) × (Seconds / Energy)
               = millions of instructions / Energy

Both instructions executed and energy are measures of work, so MIPS/Watt is a measure of efficiency.

Question: What is the weakness of this analysis?


9.1.3.3 Battery Life and Power

Question: Running a VHDL simulation requires executing an average of 1 million instructions per simulation step. My computer runs at 1.5GHz, has a CPI of 0.67, and burns 18W of power. My battery is rated at 11V and 5.6Ah. Assuming all of my computer’s clock cycles go towards running VHDL simulations, how many simulation steps can I run on one battery charge?
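A worked sketch of this question, using only the numbers given in the text. Note the notes later quote 27.7 million steps for this mode, which corresponds to taking CPI as exactly 2/3; with CPI = 0.67 the answer comes out slightly lower.

```python
# All inputs are from the question above.
volts, amp_hours = 11.0, 5.6    # battery rating
power_w = 18.0                  # power drawn by the computer
clock_hz = 1.5e9                # clock speed
cpi = 0.67                      # cycles per instruction
instr_per_step = 1e6            # instructions per simulation step

energy_j = volts * amp_hours * 3600     # V * A * s = Joules in the battery
seconds = energy_j / power_w            # uptime on one charge
instructions = clock_hz * seconds / cpi # cycles executed / cycles-per-instr
steps = instructions / instr_per_step
print(seconds, steps)                   # about 12320 s, about 27.6 million
```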

Battery Life and Power

Question: In low-power mode, the clock speed is reduced to 1.0 GHz and the power consumption is 10 W. In low-power mode, how much longer can I run the computer on one battery charge?


Battery Life and Power

Question: In low-power mode, how many more simulation steps can I run on one battery?

9.2 Power Equations

  Power = SwitchPower + ShortPower + LeakagePower
          \________________________/  \__________/
                DynamicPower           StaticPower

Dynamic Power — dependent upon clock speed
  Switching Power      useful — charges up transistors
  Short-Circuit Power  not useful — both N and P transistors are on

Static Power — independent of clock speed
  Leakage Power        not useful — leaks around the transistor


Dynamic Power

Dynamic power is proportional to how often signals change their value (switch).

  • Roughly 20% of signals switch during a clock cycle.
  • Need to take glitches into account when calculating the activity factor. Glitches increase the activity factor.
  • Equations for dynamic power contain clock speed and activity factor.

  Activity factor = number of value changes / (number of signals × number of clock cycles)

9.2.1 Switching Power

[Figures: charging a capacitor (input 1→0, output 0→1, load CapLoad) and discharging a capacitor (input 0→1, output 1→0, load CapLoad)]

  energy to (dis)charge capacitor = ½ × CapLoad × VoltSup²


9.2.2 Short-Circuited Power

[Figure: inverter input Vi vs output Vo; short-circuit current IShort flows for TimeShort while VoltThresh < Vi < VoltSup − VoltThresh, when both the P- and N-transistors are on]

  PwrShort = ActFact × ClockSpeed × TimeShort × IShort × VoltSup

9.2.3 Leakage Power

[Figure: cross section of an inverter showing the parasitic diode; I–V curve showing the leakage current ILeak through the parasitic diode]

  PwrLk = ILeak × VoltSup

  ILeak ∝ e^(−q × VoltThresh / (k × T))


9.2.4 Glossary

This section reserved for your reading pleasure

9.2.5 Note on Power Equations

This section reserved for your reading pleasure

9.3 Overview of Power Reduction Techniques

We can divide power reduction techniques into two classes: analog and digital.

Analog Parameters

Power-reduction parameters at the analog level:

  capacitance  for example: Silicon on Insulator (SOI) and high-K dielectrics
  resistance   for example: copper wires rather than aluminum
  voltage      low-voltage circuits


Analog Techniques

Power-reduction techniques at the analog level:

  dual-VDD  Two different supply voltages: high voltage for performance-critical portions of the design, low voltage for the remainder of the circuit. Alternatively, can vary the voltage over time: high voltage when running performance-critical software and low voltage when running software that is less sensitive to performance.

  dual-Vt  Two different threshold voltages: transistors with low threshold voltage for performance-critical portions of the design (can switch more quickly, but more leakage power), transistors with high threshold voltage for the remainder of the circuit (switch more slowly, but reduce leakage power).

  exotic circuits  Special flops, latches, and combinational circuitry that run at a high frequency while minimizing power.

  adiabatic circuits  Special circuitry that consumes power on 0 → 1 transitions, but not 1 → 0 transitions. These sacrifice performance for reduced power.

  clock trees  Up to 30% of total power can be consumed in clock generation and the clock tree.

Digital Parameters

Power-reduction parameters at the digital level:

  capacitance (number of gates)
  activity factor
  clock frequency


Digital Techniques

Power-reduction techniques at the digital level:

  multiple clocks  Put a high-speed clock in performance-critical parts of the design and a low-speed clock in the remainder of the circuit.

  clock gating  Turn off the clock to portions of a chip when they are not being used.

  data encoding  Gray coding vs one-hot vs fully encoded vs ...

  glitch reduction  Adjust circuit delays or add redundant circuitry to reduce or eliminate glitches.

  asynchronous circuits  Get rid of clocks altogether...

9.4 Voltage, Power, and Delay

If our goal is to reduce power, the most promising approach is to reduce the supply voltage, because, from:

  Power = (ActFact × ClockSpeed × ½ CapLoad × VoltSup²)
        + (ActFact × ClockSpeed × TimeShort × IShort × VoltSup)
        + (ILeak × VoltSup)

we observe:

  Power ∝ VoltSup²


Reducing the Difference Between Supply and Threshold Voltage

When the supply voltage decreases:

  • supply current decreases
  • time to charge the capacitive load increases
  • load delay of the circuit increases

More precisely, the load delay depends on both the supply voltage and the difference between the supply and threshold voltages:

  LoadDelay ∝ VoltSup / (VoltSup − VoltThresh)²

Effect of Decreasing Supply Voltage on Delay

[Waveforms for a circuit with inputs a, b and outputs co, sum: clk and co with the original VDD, and clk and co with a reduced VDD]


Effect of Decreasing Supply Voltage on Delay

Question: If the delay through a circuit is 20 ns, the supply voltage is 2.8 V, and the threshold voltage is 0.7 V, calculate the delay if the supply voltage is dropped to 2.2 V.
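A worked sketch of this question, scaling the measured delay by the ratio of the LoadDelay ∝ VoltSup/(VoltSup − VoltThresh)² factors at the two supply voltages:

```python
# All numbers are from the question above.
def load_delay_factor(vdd, vt):
    return vdd / (vdd - vt) ** 2

d1 = 20e-9                 # measured delay at the original supply voltage
v1, v2, vt = 2.8, 2.2, 0.7 # old supply, new supply, threshold (volts)

d2 = d1 * load_delay_factor(v2, vt) / load_delay_factor(v1, vt)
print(d2)   # 30.8 ns: dropping VDD from 2.8 V to 2.2 V slows the circuit 1.54x
```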

Sacrifice or Optimization?

  • Decreasing the supply voltage increases the delay through the circuit
  • Increasing the clock period allows us to:


Clock Speed and Power Consumption

Question: In the question on high-performance / low-power for VHDL simulation earlier in the chapter, the laptop was able to execute 27.7 million simulation steps in high power mode and 33.2 million simulation steps in low-power mode. What percentage of the additional simulation steps was due to reducing the clock speed and what percentage was due to reducing the supply voltage?

Reducing Threshold Voltage Increases Leakage Current

If we reduce the supply voltage, we want to also reduce the threshold voltage, so that we do not increase the delay through the circuit. However, as the threshold voltage drops, the leakage current increases:

  ILeak ∝ e^(−q × VoltThresh / (k × T))

And increasing the leakage current increases the power:

  Power ∝ ILeak

So, we need to strike a balance between reducing VoltSup (which has a quadratic effect on reducing power) and increasing ILeak (which has a linear effect on increasing power).


Summary of Supply Voltage and Clock Speed

  Pix     pixel (an example of a unit of work)
  Perf    performance
  Pwr     power
  Uptime  time that the system can run on one battery charge
  Bat     energy in a battery

[Table: effect of changing VDD (V) on clock speed (cyc/sec), circuit delay (sec), slack (sec), Perf (pix/sec), Pwr (J/sec), Uptime (sec/bat), and Work (pix/bat)]

9.5 Data Encoding for Power Reduction

9.5.1 How Data Encoding Can Reduce Power

Data encoding is a technique that chooses data values so that normal execution will have a low activity factor. The most common example is “Gray coding” where exactly one bit changes value each clock cycle when counting.


  Decimal  Gray  Binary
     0     0000   0000
     1     0001   0001
     2     0011   0010
     3     0010   0011
     4     0110   0100
     5     0111   0101
     6     0101   0110
     7     0100   0111
     8     1100   1000
     9     1101   1001
    10     1111   1010
    11     1110   1011
    12     1010   1100
    13     1011   1101
    14     1001   1110
    15     1000   1111

8-bit Counter

Question: For an eight-bit counter, how much more power will a binary counter consume than a Gray-code counter?
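The activity-factor comparison behind this question can be sketched by simply counting bit transitions over one full period of each counter (a Gray counter flips exactly one bit per increment; a binary counter flips about two on average):

```python
def gray(n):
    """Standard binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def transitions(encode, bits):
    """Total bit flips over one full wrap-around period of the counter."""
    period = 1 << bits
    return sum(bin(encode(i) ^ encode((i + 1) % period)).count("1")
               for i in range(period))

bits = 8
b = transitions(lambda n: n, bits)   # binary counter: 2^(bits+1) - 2 flips
g = transitions(gray, bits)          # Gray counter: 1 flip per increment
print(b, g, b / g)                   # 510 256 1.9921875
```

So, counting flip-flop activity alone, an 8-bit binary counter toggles just under 2x as many bits as a Gray-code counter.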


Random Data

Question: For completely random eight-bit data, how much more power will a binary circuit consume than a Gray-code circuit?

9.5.2 Example Problem: Sixteen Pulser

9.5.2.1 Problem Statement

Your task is to do the power analysis for a circuit that should send out a one-clock-cycle pulse on the done signal once every 16 clock cycles. (That is, done is '0' for 15 clock cycles, then '1' for one cycle, then repeat with 15 cycles of '0' followed by a '1', etc.)

[Required behaviour — waveform: clk and done, with done pulsing at cycles 16 and 32]

You have been asked to consider three different types of counters: a binary counter, a Gray-code counter, and a one-hot counter. (The table below shows the values from 0 to 15 for the different encodings.)

Question: What is the relative amount of power consumption for the different options?


9.5.2.2 Additional Information

Your implementation technology is an FPGA where each cell has a programmable combinational circuit (PLA) and a flip-flop. The combinational circuit has 4 inputs and 1 output. The capacitive load of the combinational circuit is twice that of the flip-flop.

  • 1. You may neglect power associated with clocks.
  • 2. You may assume that all counters:
       (a) are implemented on the same fabrication process
       (b) run at the same clock speed
       (c) have negligible leakage and short-circuit currents

Data Encoding

  Decimal  Gray  One-Hot           Binary
     0     0000  0000000000000001  0000
     1     0001  0000000000000010  0001
     2     0011  0000000000000100  0010
     3     0010  0000000000001000  0011
     4     0110  0000000000010000  0100
     5     0111  0000000000100000  0101
     6     0101  0000000001000000  0110
     7     0100  0000000010000000  0111
     8     1100  0000000100000000  1000
     9     1101  0000001000000000  1001
    10     1111  0000010000000000  1010
    11     1110  0000100000000000  1011
    12     1010  0001000000000000  1100
    13     1011  0010000000000000  1101
    14     1001  0100000000000000  1110
    15     1000  1000000000000000  1111


9.5.2.3 Answer

Sketch the Circuitry

Name the output "done" and the count digits "d()".

Capacitance

[Fill-in table: for each encoding (Gray, 1-Hot, Binary), the cap, number, and subtotal cap of the PLAs and flops implementing d() and done]


Activity Factors

Gray Coding Activity Factor

[Waveforms for Gray coding: clk, d(0)–d(3), done, annotated with activity factors 4/16, 2/16, 2/16, 2/16, 8/16]

One-Hot Activity Factor

[Waveforms for one-hot coding: clk, d(0), d(1), d(2), ..., done, annotated with activity factor 2/16 for each signal]


Binary Coding Activity Factor

[Waveforms for binary coding: clk, d(0)–d(3), done, annotated with activity factors 8/16, 4/16, 2/16, 2/16, 16/16]

Putting it all Together

[Fill-in table: for each encoding (Gray, 1-Hot, Binary), subtotal cap × act fact = power for the d() and done PLAs and flops, and the total]


9.6 Clock Gating

The basic idea of clock gating is to reduce power by turning off the clock when a circuit isn't needed. This reduces the activity factor. Related to clock gating:

  Chip enable     Use the chip-enable on a flip-flop to hold the output constant when not needed.
  Operand gating  Use AND gates and an enable signal to set data (operand) values to zero when a datapath circuit is not needed.
  Power gating    Turn off the supply voltage to part of the chip.

9.6.1 Introduction to Clock Gating

Examples of Clock Gating

  Condition                                           Circuitry turned off
  O/S in standby mode                                 Everything except "core" state (PC, registers, caches, etc.)
  No floating-point instructions for k clock cycles   Floating-point circuitry
  Instruction cache miss                              Instruction decode circuitry
  No instruction in pipe stage i                      Pipe stage i


9.6.2 Implementing Clock Gating

Clock gating is implemented by adding a component that disables the clock when the circuit isn't needed.

Without clock gating: [schematic: circuit clocked by clk, with i_data and i_valid in and o_data and o_valid out]

With clock gating: [schematic: a Clock Enable State Machine takes clk and i_wakeup and produces clk_en; clk_en gates clk to produce cool_clk, which clocks the circuit with i_data and i_valid in and o_data and o_valid out]

9.6.3 Design Process

This section reserved for your reading pleasure


9.6.4 Effectiveness of Clock Gating

  PctClk  = percentage of clock cycles on which the clock toggles
  PctBusy = percentage of clock cycles when busy doing useful work
  A       = activity factor without clock gating
  A′      = activity factor with clock gating
  A′min   = activity factor if the clock is on only when busy
  Eff     = effectiveness of clock gating

[Graph: activity factor vs PctClk, ranging from A′min at PctClk = PctBusy up to A at PctClk = 100%, with Eff marked]

  Eff = 0%   ⇒ A′ =
  Eff = 100% ⇒ A′ =

Effectiveness measures the percentage of the clock cycles when the circuit is idle (contains only bubbles) that the clock is turned off.

Effectiveness (Cont'd)

[Graphs: activity factor vs PctClk between PctBusy and 100%, and PctClk vs Eff]

  Eff =
  PctClk =
  A′ =


Clock Gating Effectiveness Questions

Question: What is the effectiveness if the clock toggles only when the circuit contains a parcel? Question: What is the effectiveness of a clock that always toggles?

Clock Gating Effectiveness Questions

Question: What does it mean for a clock gating scheme to be 75% effective? Question: What happens if PctClk < PctBusy?


9.6.5 Example: Reduced Activity Factor with Clock Gating

Question: How much power will be saved in the following clock-gating scheme?

  • 70% of the time the main circuit contains at least one parcel
  • clock gating circuit is 90% effective
  • clock gating circuit has 10% of the area of the main circuit
  • clock gating circuit has same activity factor as main circuit
  • neglect short-circuiting and leakage power
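The arithmetic behind this question can be sketched as follows, under one plausible reading of this section's model (an assumption, not stated verbatim in the notes): gated dynamic power scales with the fraction of cycles the clock toggles, power is proportional to activity × capacitance (area), and the clock-gating circuit itself is never gated.

```python
# Numbers from the question; the power model is an assumption (see lead-in).
pct_busy = 0.70      # fraction of cycles with at least one parcel
eff = 0.90           # effectiveness of the clock-gating circuit
gating_area = 0.10   # gating-circuit area relative to main circuit area 1.0

# Effectiveness = fraction of idle cycles on which the clock is off, so:
pct_clk = 1.0 - eff * (1.0 - pct_busy)      # clock toggles on 73% of cycles

power_without = 1.0                          # A * C, normalized
power_with = pct_clk * 1.0 + gating_area     # gated main circuit + gating logic
saving = 1.0 - power_with / power_without
print(saving)                                # about 0.17, i.e. 17% saved
```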


9.6.6 Calculating PctBusy

9.6.6.1 Valid Bits and Busy

Use valid bits to determine when a circuit is busy.

[Schematics and waveforms: a circuit with clk, i_valid, i_data in and o_valid, o_data out, showing parcels α, β, γ flowing from input to output]

Microscopic Analysis

Which clock edges are needed?

[Waveforms: clk, i_valid, o_valid, clk_en, cool_clk, with clock edges 1–5 marked]


9.6.6.2 Calculating LenBusy

For Throughput = 1:

[Waveforms: i_valid, o_valid, clk_en over cycles 1–7 for several cases, showing how LenBusy depends on Latency and NumPcls]

LenBusy when Tput < 1:

[Waveform: i_valid, o_valid, clk_en over cycles 1–19]


9.6.6.3 From LenBusy to PctBusy

Find the core of a repeating pattern of parcels and bubbles.

  LenCore =
  LenBusy =
  PctBusy =

Question: What happens if Lat > NumBubbles?

9.6.7 Example: Pipelined Circuit with Clock-Gating

Design a “clock enable state machine” for the pipelined component described below.

  • area of pipelined component = 100
  • latency varies from 5 to 10 clock cycles, uniform distribution of latencies
  • contains a maximum of 6 parcels
  • 60% of clock cycles have a parcel on the inputs
  • average length of continuous sequence of valid parcels is 80
  • area of clock-enable state machine = 13
  • use input and output valid bits for wakeup
  • leakage current is negligible
  • short-circuit current is negligible


Waveforms for Parcel Count

[Waveforms over cycles 1–24: i_valid, i_data (parcels α, β, γ, δ, ε), o_valid, o_data, parcel_count, parcel_clk_en]

Waveforms for Cycle Count

[Waveforms over cycles 1–24: i_valid, i_data (parcels α, β, γ, δ, ε), o_valid, o_data, cycle_count (counting 1–10), cycle_clk_en]

Behavioural Analysis

Question: Without further detailed analysis, can we determine which design is the better option?


Question: Which design option has lower power and how much lower is it?


9.6.8 Clock Gating in ASICs

[schematic: clock-gating cell, where en and clk are combined to produce en_clk, which clocks register q]

A register with a chip-enable can be synthesized into a register with a gated clock:

process (clk) begin
    if rising_edge( clk ) and en = '1' then
        a <= i_a;
        b <= i_b;
    end if;
end process;

Synthesis tools have commands and flags that cause this type of code to be synthesized into a circuit with a gated clock. FPGAs do not support clock gating — use the alternatives in Section 9.6.9.


9.6.9 Alternatives to Clock Gating

9.6.9.1 Use Chip Enables

Use the same coding style as with clock gating, but do not enable clock gating in the synthesis tool. This technique is used with both ASICs and FPGAs.

9.6.9.2 Operand Gating

[schematic: operand gating, where enable en gates the n-bit operands a and b feeding the combinational logic that produces z]

Advantages Disadvantages


Chapter 10 Review

This chapter lists the major topics of the term. The “Topics List” section for each major area is meant to be relatively complete.

10.1 Overview of the Term

  • The purely digital world
    – VHDL
    – design and optimization methods
    – performance analysis
  • Analog effects in the digital world
    – timing analysis
    – power


10.2 VHDL

10.2.1 VHDL Topics

  • simple syntax and semantics — things that you should know simply by having done the labs and project

  • behavioural semantics of VHDL
  • synthesis semantics of VHDL
  • VHDL code as legal, synthesizable, and good-practice

10.2.2 VHDL Example Problems

  • identify whether a particular signal will be the output of combinational circuitry or a flop

  • identify whether a particular process is combinational or clocked
  • legal, synthesizable, and good code
  • perform delta-cycle simulation of VHDL
  • perform RTL simulation of VHDL
  • identify whether two VHDL fragments have same behaviour
  • analyze area, approximate clock period, latency, throughput, etc. of VHDL code


10.3 RTL Design Techniques

10.3.1 Design Topics

  • coding guidelines
  • generic FPGA hardware
  • area estimation
  • finite state machines

    – implicit
    – explicit-current
    – explicit-current+next

  • from algorithm to hardware

    – dependency graph
    – dataflow diagram
    – scheduling
    – allocation
    – hardware block diagram
    – state machine

  • memory dependencies
  • memory arrays and dataflow diagrams
  • pipelining
  • retiming
  • area and performance optimizations

10.3.2 Design Example Problems

  • estimate area to implement a circuit in an FPGA
  • calculate resource usage for a dataflow diagram
  • calculate performance data for a dataflow diagram
  • given an algorithm, design a dataflow diagram
  • given a dataflow diagram, draw a control table and do resource allocation
  • optimize a dataflow diagram to improve performance or reduce area
  • analyze and compare the functionality, area, and performance of
    – VHDL code
    – pseudocode
    – state machine
    – dataflow diagram
    – waveforms
    – schematics

  • use retiming to improve clock speed or area
  • use retiming to determine if two circuits have the same behaviour


10.4 Performance Analysis and Optimization

10.4.1 Performance Topics

  • time to execute a program
  • definition of performance
  • speedup
  • n% bigger, smaller
  • calculating performance of different tasks and of the average task
  • changing frequency of task and overall performance
  • choosing which task to optimize to best improve overall performance
  • performance increase over time
  • design tradeoffs (CPI vs NumInsts vs ClockSpeed vs time-to-market)
  • Clock speed vs. performance
  • Optimality — performance / area tradeoffs

10.4.2 Performance Example Problems

  • calculate tradeoffs between performance, area, schedule, and power
  • evaluate performance criteria


10.5 Timing Analysis

10.5.1 Timing Topics

  • circuit parameters that affect delay

    – clock period
    – clock skew
    – clock jitter
    – propagation delay
    – load delay
    – setup time
    – hold time
    – clock-to-Q time

  • timing analysis of latch
  • concepts of critical path vs false path, timing models, monotonic speedup

  • Elmore timing model

10.5.2 Timing Example Problems

  • timing parameters for minimum clock period
  • timing parameters for hold constraint
  • determine if a latch will work correctly
  • compute timing parameters of a latch
  • identify timing violation, suggest remedy
  • find the longest path
  • test if an excitation excites a particular path
  • compute the Elmore delay constant
  • use concepts of Elmore delay to compare delay of two circuits
  • compare accuracy of different timing models
  • suggest design change to increase clock speed
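One of the review problems above, computing the Elmore delay constant, can be sketched as a small calculation. The helper below and its RC values are illustrative, not the course's notation:

```python
# Elmore delay of an RC ladder: each node contributes
# (total resistance on the path from the source to that node) * (node capacitance).
# The resistance and capacitance values are hypothetical illustration numbers.

def elmore_delay(rs, cs):
    """rs[i]: resistance of segment i; cs[i]: capacitance at the node after segment i."""
    delay = 0.0
    r_path = 0.0
    for r, c in zip(rs, cs):
        r_path += r          # resistance accumulated from the source
        delay += r_path * c  # each node sees all upstream resistance
    return delay

# Two-segment ladder: R1=100, C1=1e-12, R2=200, C2=2e-12
# delay = 100*1e-12 + (100+200)*2e-12 = 7e-10 s
print(elmore_delay([100, 200], [1e-12, 2e-12]))
```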


10.6 Power

10.6.1 Power Topics

  • power vs energy
  • equations for power

    – dynamic power
    – static power
    – switching power
    – short circuit power
    – leakage power
    – activity factor
    – leakage current
    – threshold voltage
    – supply voltage

  • analog power reduction techniques
  • RTL power reduction techniques

– clock gating

10.6.2 Power Example Problems

  • predict effect of new fabrication process (supply voltage, threshold voltage, capacitance, circuit delay) on power

  • predict effect of environment change (temp, supply voltage, etc.) on power consumption

  • predict effect of design change on power consumption (capacitance, activity factor, clock speed)

  • design clock gating scheme for a circuit, predict effect on power consumption
  • assess validity of various power- or energy-consumption metrics


10.7 Formulas to be Given on Final Exam

P = ½×(A×C×V²×F) + (τ×A×V×ISh×F) + (V×IL)

T = (Ins×C) / F

F ∝ (V−Vt)² / V

P = V×I

P = W / T

IL ∝ e^(−q×Vt / (k×T))
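As a numeric sanity check, the power equation above can be evaluated term by term. The parameter values below are hypothetical, chosen only to illustrate the relative sizes of the switching, short-circuit, and leakage terms:

```python
# Evaluate P = 1/2*A*C*V^2*F + tau*A*V*Ish*F + V*IL term by term.
# All numbers are hypothetical illustration values, not from the course.
A = 0.1          # activity factor
C = 1e-9         # switched capacitance (F)
V = 1.2          # supply voltage (V)
F = 100e6        # clock frequency (Hz)
tau = 50e-12     # short-circuit conduction time (s)
Ish = 1e-3       # short-circuit current (A)
IL = 1e-6        # leakage current (A)

p_switch = 0.5 * A * C * V**2 * F   # switching (dynamic) power
p_short = tau * A * V * Ish * F     # short-circuit power
p_leak = V * IL                     # leakage (static) power
p_total = p_switch + p_short + p_leak

print(f"switching:     {p_switch:.3e} W")
print(f"short-circuit: {p_short:.3e} W")
print(f"leakage:       {p_leak:.3e} W")
print(f"total:         {p_total:.3e} W")
```

With these numbers the switching term dominates, which is the typical situation the RTL power-reduction techniques in Chapter 9 target.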


S = T1 / T2

M = F / (10⁶ × Σ(i=0..n) PIi×Ci)

A′ = (1 − E×(1−Pv)) × A

q = 1.60218×10⁻¹⁹ C

k = 1.38066×10⁻²³ J/K

logx(y) = (log y) / (log x)

(xʸ)ᶻ = x^(y×z)

(xʸ)×(xᶻ) = x^(y+z)

a = bᶜ is equivalent to: a^(1/c) = b
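A small worked example of the speedup and MIPS formulas above; the clock frequency and instruction mix are hypothetical:

```python
# Speedup S = T1/T2 and MIPS M = F / (1e6 * sum(PI_i * C_i)).
# The frequency and instruction mix below are hypothetical illustrations.

def speedup(t1, t2):
    """Speedup of design 2 over design 1 from their execution times."""
    return t1 / t2

def mips(freq_hz, mix):
    """mix: list of (fraction of instructions, cycles per instruction)."""
    avg_cpi = sum(pi * ci for pi, ci in mix)
    return freq_hz / (1e6 * avg_cpi)

# 50% 1-cycle ALU ops, 30% 2-cycle loads, 20% 3-cycle branches at 100 MHz:
mix = [(0.5, 1), (0.3, 2), (0.2, 3)]
print(speedup(10.0, 5.0))   # 2.0: the new design is twice as fast
print(mips(100e6, mix))     # average CPI = 1.7, so about 58.8 MIPS
```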
