
SLIDE 1

Constructible sheaves and their cohomology for asynchronous logic and computation

14 January 2010

Michael Robinson

SLIDE 2

Acknowledgements

 This is a preliminary report on progress in a larger project on applied sheaf theory
 More substantial results are to come!
 It's joint work with
   Robert Ghrist (Penn)
   Yasu Hiraoka (Hiroshima)
 The focus is on logic here, but is part of
   AFOSR MURI on Information Dynamics in Networks
   PI: Rob Calderbank (Princeton)

SLIDE 3

Logic gates

[Figure: gate symbols for AND, OR, NAND, NOR, NOT; a "bubble" indicates negation]

SLIDE 4

Logic gates

[Figure: gate symbols for AND, OR, NAND, NOR, NOT]

A change occurs...
SLIDE 5

Logic gates

[Figure: a NOT gate with input 1]

... eventually changes the output

Propagation delay varies from device to device

SLIDE 6

Problem: time-bound logic

 Propagation delays along connections and within gates!
 Feedback – can hold state
 Race conditions:
   Hazards
   Glitches
   Oscillations
   Lock-ups

SLIDE 7

Example of time-bound logic

[Figure: E flip-flop circuit with Enable, Data, and Output lines]

This is an E flip-flop circuit, a basic memory element. It's initially storing the value 0
SLIDE 8

Example of time-bound logic

[Figure: the Data input raised to 1]

If we change the Data input to 1, nothing exciting happens...

SLIDE 9

Example of time-bound logic

[Figure: the Enable input pulsed to 1]

Pulsing the Enable input to 1 causes the Data input to be “read” and “stored”...

SLIDE 10

Example of time-bound logic

[Figure: the new value propagating through the circuit]

... but it takes time... t=1

SLIDE 11

Example of time-bound logic

[Figure: the new value propagating through the circuit]

... but it takes time... t=2

Can de-enable at this time

SLIDE 12

Example of time-bound logic

[Figure: the new value propagating through the circuit]

... but it takes time... t=3

SLIDE 13

Example of time-bound logic

[Figure: the Output now holding the new value]

... and will hold the new value!

Data is now ignored

SLIDE 14

Synchronous design

 Can avoid race conditions by polling after transients are finished
 Unavoidable limitation: the whole system is limited by the slowest circuit
 Synchronous solution: circuits poll their inputs only at specific points in time – a global clock
 But...
   The biggest single drain of power in modern CPUs is the clock
   Clock distribution and skew are a major problem
   Correcting clock skew requires additional circuitry and power usage

SLIDE 15

Example logic timeline (synchronous)

[Figure: synchronous timeline. Var 1 (A) and Var 2 (B) are read from memory over the BUS, A+B is computed, and the output is written back; every transfer happens on a clock edge]

SLIDE 16

Example logic timeline (synchronous)

[Figure: the synchronous timeline, advanced one clock step]

SLIDE 17

Example logic timeline (synchronous)

[Figure: the synchronous timeline, advanced one clock step]

SLIDE 18

Example logic timeline (synchronous)

[Figure: the synchronous timeline, advanced one clock step]

SLIDE 19

Example logic timeline (synchronous)

[Figure: the synchronous timeline, advanced one clock step]

SLIDE 20

Example logic timeline (synchronous)

[Figure: the synchronous timeline, advanced one clock step]

SLIDE 21

Example logic timeline (synchronous)

[Figure: the synchronous timeline, with the result A+B written back over the BUS]

SLIDE 22

Example logic timeline (asynchronous)

[Figure: asynchronous timeline. The same read, compute, and write sequence, now coordinated by the handshake signals Mem TX, CPU Ack, CPU TX, and Done instead of a clock]

SLIDE 23

Example logic timeline (asynchronous)

[Figure: the asynchronous timeline, advanced one handshake step]

SLIDE 24

Example logic timeline (asynchronous)

[Figure: the asynchronous timeline, advanced one handshake step]

SLIDE 25

Example logic timeline (asynchronous)

[Figure: the asynchronous timeline, advanced one handshake step]

SLIDE 26

Example logic timeline (asynchronous)

[Figure: the asynchronous timeline, advanced one handshake step]

SLIDE 27

Example logic timeline (asynchronous)

[Figure: the asynchronous timeline, advanced one handshake step]

SLIDE 28

Example logic timeline (asynchronous)

[Figure: the asynchronous timeline, advanced one handshake step]

SLIDE 29

Example logic timeline (asynchronous)

[Figure: the asynchronous timeline, with the result A+B written back over the BUS]

SLIDE 30

Example logic timeline (asynchronous)

[Figure: the asynchronous timeline, with the write acknowledged via the handshake signals]

SLIDE 31

Asynchronous design

 Typical of older bus architectures and of networks
 Potential for significant power savings, space on die, and speed in certain areas
 Potential for better distribution of computation
 Design elegance: fewer transistors needed, less to break
 Network communication becomes more natural
   Especially when latency is highly variable

SLIDE 32

Problems!

 Asynchronous circuits are hard to design!
 If you mistake a transient for the “final answer” of a circuit, you're faced with
   Hazards (uncertainties in output value)
   Glitches (very short pulses, which might confuse the underlying electronic technology)
   Lock-ups (finite state machines getting stuck in a state they cannot exit)
 Generally, all are the result of race conditions

SLIDE 33

Example of a glitch

[Figure: circuit with inputs A, B and output C, with input and output waveforms]

The glitch is one propagation delay wide

A race condition between A and B causes the glitch!

SLIDE 34

Limitations in current methods

 Traditional asynchronous design requires either
   Very careful and exhaustive reasoning (time-dependent theorem-provers, concurrency theory), or
   Detailed high-fidelity simulation (at a sampling rate determined by the “GCD” of the propagation speeds)
 Bookkeeping is difficult, but essential
   Difficult to test in stages, especially in testing the response of circuitry to glitches
   Exhaustive simulation is essentially impossible for large designs (e.g. CPUs)

SLIDE 35

Sheaf theory in logic circuits

 Provides some computational and conceptual tools
   It's primarily a bookkeeping mechanism
 Building up local models (gates and wires) into global ones (computational units)
   The primary tool for this local-to-global transition is called cohomology
 Sheaf cohomology organizes the computations effectively, and extracts lots of information!
 Hierarchical design can be examined by local sheaf cohomology and sheaf direct image functors

SLIDE 36

Past work

A decidedly non-exhaustive list of some highlights:

 Sheaves over categories of interacting objects
   Bacławski, Goguen (1970s)
 Concurrency & sheaf theory (not cohomological)
   Lillius (1993), Van Glabbeek (2006)
 Constructible sheaves
   Rota, Shapira, MacPherson (1960s)
 Quantum graphs (original motivating example)
   Gutkin, Smilanski (2001), Kuchment (2003)
 Our focus is more strongly on cohomology

SLIDE 37

Sheaves: definition

A sheaf on a topological space X consists of

 A contravariant functor F from Open(X) to some subcategory of Set; this is a “sheaf of sets”
   F(U) for open U is called the space of sections over U
   The inclusion map U⊆V is sent to a restriction map F(V)→F(U). Usually it is the restriction of functions.
   Given a point p∈X, the direct limit of F(U), over all open U satisfying p∈U, is called the stalk at p. It's a generalization of the germ of a smooth function
 And a gluing rule...

SLIDE 38

Sheaves: gluing

 The gluing rule: if U and V are open sets, then two sections defined on U and V that agree on U∩V come from a unique section defined on U∪V
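To make the gluing rule concrete, here is a small illustrative Python sketch (my own, not from the talk): sections of the sheaf of functions on a finite set of points are stored as dicts, restriction shrinks the domain, and gluing succeeds exactly when the two sections agree on the overlap.

```python
# Illustrative model (not from the talk): open sets of a finite point
# set, with a section over U stored as a dict {point: value}.

def restrict(section, subset):
    """Restriction map F(V) -> F(U) for U ⊆ V: forget points outside U."""
    return {p: v for p, v in section.items() if p in subset}

def glue(sec_U, sec_V):
    """Return the unique section over U∪V if sec_U and sec_V agree on
    the overlap U∩V, and None otherwise (the gluing rule)."""
    overlap = sec_U.keys() & sec_V.keys()
    if any(sec_U[p] != sec_V[p] for p in overlap):
        return None
    return {**sec_U, **sec_V}

s_U = {"a": 0, "b": 1}             # a section over U = {a, b}
s_V = {"b": 1, "c": 1}             # a section over V = {b, c}
assert glue(s_U, s_V) == {"a": 0, "b": 1, "c": 1}   # agree on {b}: glue
assert glue({"b": 0}, {"b": 1}) is None             # disagree: no gluing
```

The non-examples on a later slide fail exactly this test: two constant functions with different values agree (vacuously) on an empty overlap, yet no constant function restricts to both.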

SLIDE 39

Sheaves: gluing

 The gluing rule: if U and V are open sets, then two sections defined on U and V that agree on U∩V come from a unique section defined on U∪V

[Figure: the base topological space X]

SLIDE 40

Sheaves: gluing

 The gluing rule: if U and V are open sets, then two sections defined on U and V that agree on U∩V come from a unique section defined on U∪V

[Figure: an open set U in the base space X, sent by F to its section space F(U)]

SLIDE 41

Sheaves: gluing

 The gluing rule: if U and V are open sets, then two sections defined on U and V that agree on U∩V come from a unique section defined on U∪V

[Figure: open sets U and V in X, with section spaces F(U) and F(V)]

SLIDE 42

Sheaves: gluing

 The gluing rule: if U and V are open sets, then two sections defined on U and V that agree on U∩V come from a unique section defined on U∪V

[Figure: the union U∪V and its section space F(U∪V)]

SLIDE 43

Examples and non-examples

Examples of sheaves:

 Locally constant functions on a topological space
 Continuous functions
 Analytic functions on a manifold

Non-examples (they violate the gluing rule):

 Constant functions
 L² functions on unbounded domains

SLIDE 44

Constructible sheaves

 Suppose X has a filtration X0⊂X1⊂...⊂Xk in which each Xi is “tame”
 A sheaf F on X is constructible (with respect to the filtration) if it is locally constant on each stratum Xi\Xi-1
 Constructible sheaves have constrained structure, especially if the filtration is finite
 In the case of topological graphs, we'll use the natural filtration structure induced by the graph

SLIDE 45

Cohomology

 The cohomology functor is a tool for extracting global information from a sheaf
   Provided it's a sheaf of abelian groups
 It is homotopy invariant
 It tells you all of the global sections, and the obstructions to extending local sections to global ones
   For instance H0(X;F)≅F(X) (all global sections)

SLIDE 46

Čech cohomology

 Select a cover {Ui} of X and form the sequence of spaces and maps (the Čech cochain complex):
0 → ⊕F(Ui) → ⊕F(Ui∩Uj) → ⊕F(Ui∩Uj∩Uk) → ...
 The maps are called “coboundaries” and come from the differences between restriction maps
 The cohomology of this sequence is the Čech cohomology of F with respect to {Ui}.
 Theorem: (Leray) If the cover is “good”, then the Čech cohomology is a homotopy invariant, and therefore independent of the choice of cover
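For the 1-dimensional base spaces used later in the talk, the complex stops after a single coboundary map, so Čech cohomology reduces to rank computations over GF(2). A minimal illustrative sketch; the example cover and matrix (a circle covered by two arcs, with the constant ℤ2 sheaf) are my own, not from the talk.

```python
# Illustrative sketch: over a 1-dimensional base the Čech complex is
# 0 -> C0 -> C1 -> 0, so dim H0 = dim ker d and dim H1 = dim C1 - rank d.

def rank_gf2(matrix, ncols):
    """Rank over GF(2) by Gaussian elimination; rows are 0/1 lists."""
    rows = [r[:] for r in matrix]
    rank, col = 0, 0
    while rank < len(rows) and col < ncols:
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank, col = rank + 1, col + 1
    return rank

def cech_dims(d, dim_c0, dim_c1):
    """dim H0 and dim H1 for the coboundary matrix d: C0 -> C1."""
    r = rank_gf2(d, dim_c0)
    return dim_c0 - r, dim_c1 - r

# Two arcs U1, U2 covering a circle; U1∩U2 has two components.
d = [[1, 1],   # each overlap component sees the difference of the
     [1, 1]]   # two arc sections (signs vanish mod 2)
assert cech_dims(d, 2, 2) == (1, 1)   # H0 ≅ ℤ2 and H1 ≅ ℤ2, as expected
```

The flip-flop and glitch-generator complexes on later slides have exactly this two-term shape, just with larger matrices.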

SLIDE 47

Problems with logic and sheaves

 If we use binary-valued (ℤ2-valued) sheaves in the obvious way, we run into a problem: most logical operations don't support the functoriality of any sheaf in a way that's compatible with cohomology
 Put another way, logical operations aren't all ℤ2-linear!

[Figure: a logic circuit with inputs A, B and output C, its connection graph, and a truth table showing the operation is not linear]
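The non-linearity is easy to verify exhaustively. A quick illustrative Python check (my own, assuming 1-bit signals): a ℤ2-linear map f must satisfy f(x⊕y) = f(x)⊕f(y), which AND fails and XOR satisfies.

```python
# Illustrative check (not from the slides): test the ℤ2-linearity
# condition f(x ⊕ y) = f(x) ⊕ f(y) over all pairs of input vectors,
# with each vector packed into the low bits of an integer.

def is_z2_linear(f, nbits):
    def apply(v):  # unpack the integer v into nbits arguments
        return f(*[(v >> i) & 1 for i in range(nbits)])
    vecs = range(2 ** nbits)
    return all(apply(x ^ y) == apply(x) ^ apply(y)
               for x in vecs for y in vecs)

AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b
assert not is_z2_linear(AND, 2)  # fails, e.g. at x=(1,1), y=(1,0)
assert is_z2_linear(XOR, 2)      # XOR is exactly addition in ℤ2
```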

SLIDE 48

A (standard) algebraic trick!

 Instead, consider any function between sets f: A→B
 Let R be a ring with unit, and R(A) be the R-module generated by A
   That is, the generators of R(A) are the elements of A
 Then f lifts uniquely to an R-module homomorphism Rf: R(A)→R(B)

[Diagram: a square with f: A→B on the sets and Rf: R(A)→R(B) on the modules]

Notice that generally we cannot recover a unique element of B from an element of R(B), but we can if we've applied Rf to a single generator (1×a).
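The lift is mechanical, so a short illustrative Python sketch may help (my own, with R = ℤ2): elements of R(A) are stored as dicts from generators to coefficients, and Rf pushes coefficients through f, summing mod 2.

```python
# Illustrative sketch of the lifting trick (R = ℤ2): an element of R(A)
# is a dict {generator: coefficient}, and Rf acts by linearity.

def lift(f):
    """Lift a set map f: A -> B to the ℤ2-module map Rf: R(A) -> R(B)."""
    def Rf(element):
        out = {}
        for a, coeff in element.items():
            b = f(a)
            out[b] = (out.get(b, 0) + coeff) % 2
        return {k: v for k, v in out.items() if v}   # drop zero terms
    return Rf

NOT = {0: 1, 1: 0}.get       # an ordinary set map {0,1} -> {0,1}
Rnot = lift(NOT)
assert Rnot({0: 1}) == {1: 1}                # one generator -> one generator
assert Rnot({0: 1, 1: 1}) == {0: 1, 1: 1}    # a general element does not
                                             # pick out one element of B
```

Only in the first case, where the image is a single generator, can a unique element of B be read back off, matching the caveat above.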

SLIDE 49

Lifted logic values

 Our logical value is represented by an element of ℤ2² (or R² where R is a ring with unit):
{(1 0), (0 1), (0 0), (1 1)}
 Put another way, a logical value is aq+bQ, where
   q=(1 0) represents a logic 0
   Q=(0 1) represents a logic 1
   a,b∈ℤ2 can be interpreted as flags of whether Q or its inverted copy q is a possible realization of this value

[Figure: (1 0) = logic 0, (0 1) = logic 1, (1 1) = uncertain truth value, (0 0) = error state]
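A tiny illustrative Python table (my own) of the four lifted values and their module addition, following the slide's convention q = (1 0) for logic 0 and Q = (0 1) for logic 1:

```python
# The four lifted logic values as pairs (a, b) ∈ ℤ2².  The meanings
# follow the slide's flag interpretation of the coefficients a and b.

MEANING = {
    (1, 0): "logic 0",                # only q is a possible realization
    (0, 1): "logic 1",                # only Q is a possible realization
    (1, 1): "uncertain truth value",  # both realizations possible
    (0, 0): "error state",            # no realization possible
}

def add(u, v):
    """Module addition in ℤ2² (componentwise XOR)."""
    return (u[0] ^ v[0], u[1] ^ v[1])

# Adding the two definite values gives the uncertain value...
assert MEANING[add((1, 0), (0, 1))] == "uncertain truth value"
# ...while adding a value to itself cancels to the error state.
assert MEANING[add((1, 0), (1, 0))] == "error state"
```

Sums of lifted values like these reappear later as generators of H0 for the flip-flop example.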

SLIDE 50

Switching sheaves

 A switching sheaf over a directed graph is constructible with respect to the stratification by the graph structure, and
   Stalks over points in an edge are ℤ2²
   A stalk over a vertex is the tensor product of n copies of ℤ2², where n is the incoming degree
 Restriction maps from an open set containing a single vertex to a connected set in the interior of an edge are given by the diagram at right

[Figure: a vertex with edges A, B, C. Restriction toward the input edges is contraction of A, C; restriction toward the output edge is the lift into ℤ2⁸ → ℤ2² of a logic function]

SLIDE 51

Edge collapse

 The benefit of the sheaf formalism is that useful sheaf functors are already well-known.
 The direct image functor (pushforward) relates to hierarchical design:
   Consider a continuous map f: X→Y that collapses an edge with distinct ends. This takes a constructible sheaf F on X to a constructible sheaf f*F on Y.
   For switching sheaves, this also induces an isomorphism on cohomology (by the Vietoris mapping theorem)
 Big win conceptually and computationally!

SLIDE 52

Collapsed graphs

 We construct a spanning tree T for X, and a sequence of trees T1, T2, ..., TN = T such that Ti+1 \ Ti consists of exactly one edge
 We can work with the collapsed graphs X/Ti, on which the cohomology is easier to compute
 Vietoris mapping theorem: the cohomology is isomorphic

[Figure: the sequence of collapses X → X/T1 → X/T2 → X/T3 → X/T]
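The collapse sequence itself is simple to mechanize; here is an illustrative Python sketch (names and example graph are mine) that contracts the spanning-tree edges one at a time with a union-find and yields the edges of each quotient graph X/Ti:

```python
# Illustrative sketch: contract tree edges one at a time.  Edges not
# yet contracted survive in the quotient, possibly as loops.

def collapse_sequence(vertices, edges, tree_edges):
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v
    contracted = set()
    for (u, w) in tree_edges:  # T1 ⊂ T2 ⊂ ... ⊂ T, one new edge each
        parent[find(u)] = find(w)          # merge the endpoint classes
        contracted.add((u, w))
        yield [(find(a), find(b)) for (a, b) in edges
               if (a, b) not in contracted]

V = {"p", "q", "r"}
E = [("p", "q"), ("q", "r"), ("r", "p")]   # a triangle: one loop in X
steps = list(collapse_sequence(V, E, [("p", "q"), ("q", "r")]))
assert steps[0] == [("q", "r"), ("r", "q")]  # X/T1: two vertices, two edges
assert steps[1] == [("r", "r")]              # X/T: one vertex with one loop
```

The surviving loop in X/T reflects the fact that collapsing a spanning tree preserves H1, which is the content of the isomorphism above.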

SLIDE 53

Cohomology of switching sheaves

 As noted earlier, H0(X;F)≅F(X), so H0 is generated by all of the allowable states of the logic circuit
   Switching sheaves don't incorporate time explicitly, but one can still extract time-dependent information from H0...
   Appears to track hazard-related transitions between states
 Hk(X;F)=0 for k>1, since dim X = 1
 H1(X;F) appears to describe the states related to hazards

SLIDE 54

Example: flip-flop

[Figure: flip-flop circuit with signals C, A, B, T, Q, and its truth table. Traditional analysis gives 5 possible states: Set, Reset, Hold, and a Hazard state]

A transition out of the hazard state to a hold state causes a race condition

SLIDE 55

Conversion to graph

[Figure: the flip-flop circuit with signals C, A, B, T, Q, redrawn as a graph covered by open sets U, V, W]

Čech cochain complex:
0 → F(U)⊕F(V)⊕F(W) → F(U∩V)⊕F(U∩W)⊕F(V∩W) → 0
0 → ℤ2⁸⊕ℤ2²⊕ℤ2² → ℤ2²⊕ℤ2²⊕ℤ2² → 0

SLIDE 56

Flip-flop cohomology

[Figure: the flip-flop circuit with signals C, A, B, T, Q]

H0(X;F) ≅ ℤ2⁷ and H1(X;F) ≅ ℤ2

H0 is generated by: aBc, abc+aBC, ABc, abc+Abc, abC, AbC, ABC (upper case means the generator corresponding to logical 1)

The sum generators (abc+aBC and abc+Abc) describe the possible transitions out of the hazard state, something that takes a bit more trouble to obtain traditionally; the remaining generators are the states from the traditional approach.

SLIDE 57

Example: glitch generator

[Figure: glitch-generator circuit with input A, output F, and internal signals C, D, E, covered by open sets U, V, W]

Čech cochain complex:
0 → F(U)⊕F(V)⊕F(W) → F(U∩V)⊕F(U∩W)⊕F(V∩W) → 0
0 → ℤ2²⊕ℤ2²⊕ℤ2⁴ → ℤ2²⊕ℤ2²⊕ℤ2² → 0

H0(X;F) is generated by:
A+C+D⊗e
a+c+d⊗E
A+a+C+c+d⊗e+D⊗E (the hazard transition state)

H1(X;F) ≅ ℤ2

SLIDE 58

Computational aspects

 It's not immediately clear how one might store a (representation of a) constructible sheaf in a computer
 One needs to specify a vector space for each open set; there are various ways of doing this
   The most obvious way to do this is to write the sheaf as a function, but then how does one store a vector space?
   Possibly use type-level programming in Haskell? We could instantiate the sheaf as a type of Functor...
 Seriously, though, it seems to be an impediment to automating computation in constructible sheaves

SLIDE 59

Category theory to the rescue!

 It turns out that there's a different way:
 Theorem: (MacPherson) The category of constructible sheaves on an abstract simplicial complex K is isomorphic to the category of presheaves over a certain category associated to K
   By presheaf, we mean a contravariant functor from a category to a subcategory of Set
 The category in question here is the face category: objects are simplices, and morphisms describe boundaries (i.e. A→B if B is a face of A)

SLIDE 60

Simplicial complexes and the face category

[Figure: a simplicial complex on vertices A, B, C, D, and its face category with objects A, B, C, D, {A,B}, {B,C}, {B,D}, {C,D}, {B,C,D}]
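The face category is small enough to enumerate by machine. An illustrative Python sketch (my own) that builds it for the complex in the figure, a triangle {B,C,D} with an extra edge {A,B}:

```python
# Illustrative sketch: build the face category of a simplicial complex.
# Objects are simplices (as frozensets); there is a morphism A -> B
# whenever B is a proper face of A (identity morphisms omitted).

from itertools import combinations

def face_category(maximal_simplices):
    objects = set()
    for s in maximal_simplices:
        for k in range(1, len(s) + 1):
            objects.update(frozenset(c) for c in combinations(s, k))
    morphisms = {(A, B) for A in objects for B in objects if B < A}
    return objects, morphisms

# The complex from the figure: the triangle {B,C,D} plus the edge {A,B}.
objs, mors = face_category([("B", "C", "D"), ("A", "B")])
assert len(objs) == 9                                # 4 vertices, 4 edges, 1 triangle
assert (frozenset("BCD"), frozenset("B")) in mors    # {B} is a face of {B,C,D}
assert (frozenset("AB"), frozenset("C")) not in mors # {C} is not a face of {A,B}
```

A presheaf on this category, as on the next slide, then needs only a stalk per object and a restriction map per morphism, which is exactly the finite data one can store.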

SLIDE 61

Presheaves on a face category

 If our graph is a cell complex, we therefore only need to know the restriction maps and the stalks over each cell.
   This seems like a minimal amount of information
 Further, the construction is functorial, so we can transfer computation of sheaf cohomology to this context
 This relates to higher-dimensional automata (HDA) in concurrency theory!

SLIDE 62

What's next?

 Theoretical directions
   Figure out how exactly glitches and hazards are represented in the cohomology of a switching sheaf
   Related: what is the physical meaning of H1(X;F)?
   Extend the edge collapse methodology to other direct images, aiming towards a hierarchical approach to sheaf cohomology computation
 Computational directions
   Run some more complicated examples of cohomology computations
   Implement the cohomology computation on a computer