Turing Networks: Complexity Theory for Message-Passing Parallelism

Jack Romo
University of York
jr1161@york.ac.uk
Overview
1 Introduction
2 So What's a Turing Network?
3 Lower Bounds
4 Upper Bounds
5 Simulation
6 Conclusions
Complexity Theory
- The study of resource requirements to solve problems, e.g. time, memory, ...
- Alternatively, the study of complexity classes, i.e. sets of problems with the same computational overhead
- How are they related? e.g. P ⊆ NP
- Usually studied in the context of Turing Machines
- A large amount of theory has been developed here over the last century; it would be ideal to reuse it for other problems
- Parallel complexity theory?
Parallel Complexity Theory
- The complexity of parallel computations is extremely nontrivial
- Turing Machines to parallelism: Parallel Turing Machines
  - The same, but can create new identical read-write heads at any step
- However, this models shared memory only, not message passing
- Other models can emulate message-passing parallelism, but do not connect to classical complexity theory
- This is a problem!
Turing Networks Jack Romo Introduction So What’s a Turing Network? Lower Bounds Upper Bounds Simulation Conclusions
Modelling Message-Passing Parallelism
We would like a model that...
1 Intuitively emulates a network of communicating processors.
2 Allows for substantial complexity analysis.
3 Relates classical complexity classes to its own parallel ones.
4 Can simulate other models of parallelism with good complexity.
Network topologies
- Take a simple undirected graph G = ⟨V, E⟩ of constant degree, with 1 ∈ V ⊆ ℕ and E ⊆ V × V a symmetric relation
- Choose a vertex and index its neighbors from 1 to n; do this for every vertex
- Call this the graph's orientation, φ : V × ℕ → V
- Call a pair 𝒢 = ⟨G, φ⟩ a network topology
- e.g. the linked list (see the sketch below):
  - V = ℕ
  - E = the symmetric closure of {(v, v + 1) | v ∈ V}
  - φ(1, 1) = 2
  - φ(n + 1, 1) = n, φ(n + 1, 2) = n + 2, n ∈ ℕ
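To make the orientation concrete, here is a minimal sketch (hypothetical code, not from the talk) of the linked-list topology in Python, with φ represented as a partial function returning None where undefined:

```python
# Hypothetical sketch: the orientation phi of the linked-list topology
# 1 - 2 - 3 - ..., mapping (vertex, local port index) -> neighbor.

def phi_linked_list(v: int, i: int):
    """phi(v, i): the i-th neighbor of vertex v, or None if undefined.

    Vertex 1 has a single neighbor (vertex 2) at port 1; every other
    vertex v = n + 1 sees n at port 1 and n + 2 at port 2.
    """
    if v == 1:
        return 2 if i == 1 else None
    if i == 1:
        return v - 1
    if i == 2:
        return v + 1
    return None

# Example: vertex 5 reaches vertex 4 on port 1 and vertex 6 on port 2.
assert phi_linked_list(5, 1) == 4 and phi_linked_list(5, 2) == 6
assert phi_linked_list(1, 1) == 2
```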
Communicative Turing Machines
- Each vertex is a Turing Machine
- However, we need some capacity to communicate
- Add 'special transitions' to send/receive a character
- Index the neighbors to communicate with via the orientation
- Two start states: a 'master' state for vertex 1 and a 'slave' state for the rest
- Slaves must start by waiting for a message
Communicative Turing Machines
Definition (Communicative Turing Machine)
A Communicative Turing Machine, or CTM, is a 10-tuple T = ⟨Q, Σ, Γ, qm, qs, ha, hr, δt, δs, δr⟩ where Q, Σ are nonempty and finite, Σ ∪ {Λ} ⊂ Γ, qm, qs, ha, hr ∈ Q, and

δt : Q × Γ → Q × Γ × {L, R, S}
δs : Q × Γ → ℕ × Γ × Q²
δr : Q → ℕ × Q

are partial functions, where δt(qs, x), δs(qs, x) are undefined ∀ x ∈ Γ.
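As a concrete illustration (our own hypothetical encoding, not the talk's), the tuple can be written down directly, with the partial transition functions as dictionaries whose missing keys model undefinedness:

```python
from dataclasses import dataclass
from typing import Dict, Set, Tuple

@dataclass
class CTM:
    """Hypothetical encoding of the CTM 10-tuple; all names are ours."""
    Q: Set[str]        # states
    Sigma: Set[str]    # input alphabet
    Gamma: Set[str]    # tape alphabet, with Sigma and the blank inside
    qm: str            # master start state (vertex 1)
    qs: str            # slave start state (every other vertex)
    ha: str            # accepting halt state
    hr: str            # rejecting halt state
    # delta_t: (state, read char) -> (state, written char, head move L/R/S)
    delta_t: Dict[Tuple[str, str], Tuple[str, str, str]]
    # delta_s: (state, read char) -> (neighbor port, sent char, two successor
    # states in Q^2 -- plausibly for send-succeeded vs. no-such-neighbor,
    # though the slide does not spell this out)
    delta_s: Dict[Tuple[str, str], Tuple[int, str, str, str]]
    # delta_r: state -> (neighbor port to receive from, successor state)
    delta_r: Dict[str, Tuple[int, str]]

    def __post_init__(self):
        # Slaves must begin by waiting for a message: delta_t and delta_s
        # are undefined (have no entries) on the slave start state qs.
        assert all(q != self.qs for (q, _) in self.delta_t)
        assert all(q != self.qs for (q, _) in self.delta_s)
```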
Turing Networks
Definition (Turing Network)
A Turing Network is a pair 𝒯 = ⟨𝒢, T⟩, such that 𝒢 is a network topology and T is a Communicative Turing Machine.
Defining Computations
- Define a configuration of a single CTM, and a transition between configurations
- Extend these to configurations/transitions of a whole TN
- Derivation sequences
- Computations as terminating derivation sequences
Configurations
Definition (CTM Configuration)
A CTM configuration of a CTM T is a 4-tuple of the form C ∈ Γ* × Γ × Q × Γ*. We name the set of all CTM configurations of the CTM T as C(T). We say, for CTM configurations Cn = ⟨rn, sn, qn, tn⟩, n ∈ ℕ, a network topology G = ⟨V, E⟩ and v1, v2 ∈ V:

C1 ⊢ C2 ⇔ C1 transitions to C2 as TM configurations
C1, C2 ⊢_{v1}^{v2} C3, C4 ⇔ v1 in config C1 sends a char to v2 in config C2, transitioning to C3 and C4
C1 ⊬_{v1} C2 ⇔ v1 in config C1 sends to a nonexistent neighbor, transitioning to C2
Configurations
Definition (TN Configuration)
A TN configuration of a TN 𝒯 is a function of the form Ω : V → C(T). We say, for TN configurations Ωn, n ∈ ℕ, of a Turing network 𝒯 and v1, v2 ∈ V:

Ω1 ⊢_{v1} Ω2 ⇔ Ω1|_{V∖{v1}} = Ω2|_{V∖{v1}} ∧ (Ω1(v1) ⊢ Ω2(v1) ∨ Ω1(v1) ⊬_{v1} Ω2(v1))
Ω1 ⊢_{v1}^{v2} Ω2 ⇔ Ω1|_{V∖{v1,v2}} = Ω2|_{V∖{v1,v2}} ∧ Ω1(v1), Ω1(v2) ⊢_{v1}^{v2} Ω2(v1), Ω2(v2)
Ω1 ⊢ Ω2 ⇔ (∃ v ∈ V • Ω1 ⊢_v Ω2) ∨ (∃ v1, v2 ∈ V • Ω1 ⊢_{v1}^{v2} Ω2)
Initial and Final States
Definition (Initial State)
An initial state of a TN 𝒯 is a configuration ΩS for some S ∈ Σ* where

ΩS(1) = ⟨λ, Λ, qm, S⟩
ΩS(n + 1) = ⟨λ, Λ, qs, λ⟩ ∀ n ∈ ℕ
Definition (Final State)
A final state of 𝒯 is a configuration Ωh where Ωh(1) = ⟨A, b, q, C⟩ with q ∈ {ha, hr}. The output string is AbC with all characters not in Σ deleted. We say Ωh is accepting if q = ha and rejecting otherwise.
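Since V may be infinite, an initial configuration is naturally represented lazily as a function from vertices to CTM configurations. A minimal sketch (hypothetical, ours):

```python
from typing import Callable, Tuple

# A CTM configuration: (tape left of head, scanned char, state, tape right).
Config = Tuple[str, str, str, str]

BLANK = "_"  # stand-in for the blank symbol Lambda

def initial_state(S: str) -> Callable[[int], Config]:
    """Omega_S as a lazy map V -> C(T): the master (vertex 1) scans a
    blank with the input S to its right; every slave has an empty tape."""
    def omega(v: int) -> Config:
        if v == 1:
            return ("", BLANK, "qm", S)
        return ("", BLANK, "qs", "")
    return omega

# Example: vertex 1 holds the input, vertex 7 is an idle slave.
omega = initial_state("abba")
assert omega(1) == ("", BLANK, "qm", "abba")
assert omega(7) == ("", BLANK, "qs", "")
```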
Computations
Definition (Derivation Sequence)
A derivation sequence Ψ = {Ωn}n∈X is a sequence of indexed configurations of 𝒯 where X ⊆ ℕ and, for any n, m ∈ X, Ωn ⊢ Ωm if m is the least element of X greater than n. Say that, for two derivation sequences Ψ1, Ψ2 of 𝒯, Ψ1 < Ψ2 if the former is a prefix of the latter as a sequence.
Definition (Computation)
We say a derivation sequence Ψ is a computation if it starts with an initial state and ends with a final state.
Acceptance, Rejection and Computing Functions
Definition (Acceptance and Rejection)
We say 𝒯 accepts a string S ∈ Σ* if every derivation sequence starting with ΩS is less than an accepting computation and all rejecting computations are greater than an accepting computation. We say it rejects if there exists a rejecting computation not greater than some accepting computation.
Definition (Computing Functions)
Say 𝒯 computes a function f : Σ* → Σ* if, for every input string s ∈ dom(f), every computation of 𝒯 with input string s has a final state with output string f(s).
Our Time Function
- It is insufficient to analyze the length of computations; things happen in parallel!
- Define parallel derivation sequences
- Analyze the number of parallel sequences a computation is itself a concatenation of
- τ : Σ* → ℕ gives the longest parallel time of any computation starting with a given input string
Our Time Function
Definition (Parallel Derivation Sequence)
A parallel derivation sequence of a Turing network 𝒯 is a derivation sequence Ψ = {Ωn}n∈X where, for all i, j ∈ X with i ≠ j and vn ∈ V:

Ωi ⊢_{v1} Ωx ∧ Ωj ⊢_{v2} Ωy ⇒ v1 ≠ v2
Ωi ⊢_{v1}^{v2} Ωx ∧ Ωj ⊢_{v3} Ωy ⇒ v1, v2 ≠ v3
Ωi ⊢_{v1}^{v2} Ωx ∧ Ωj ⊢_{v3}^{v4} Ωy ⇒ v1, v2 ≠ v3, v4

where x, y ∈ X are the successors of i, j in X, if they exist. (That is, distinct steps in the sequence touch disjoint sets of vertices, so they could all occur simultaneously.)
Our Time Function
Definition (Parallel Time Function)
The parallel time of a derivation sequence is the smallest number of parallel derivation sequences it is a concatenation of. The time function τ : Σ* → ℕ of a Turing network 𝒯 maps each input string to the longest parallel time of any accepting computation of 𝒯 starting with that string as input.
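To illustrate both definitions at once, the following hypothetical sketch (ours) abstracts each transition of a derivation sequence as the set of vertices it touches: one vertex for a solo step, two for a send. The parallel time is then found by greedily splitting the sequence into maximal blocks of vertex-disjoint steps:

```python
from typing import FrozenSet, List

Step = FrozenSet[int]  # vertices touched by one transition: {v} or {v1, v2}

def parallel_time(steps: List[Step]) -> int:
    """Minimum number of parallel derivation sequences the given
    derivation sequence is a concatenation of.

    Consecutive steps share a block exactly while their vertex sets stay
    pairwise disjoint; reusing a vertex closes the block. The greedy
    left-to-right split attains the minimum block count.
    """
    blocks, used = 0, set()
    for step in steps:
        if used & step:        # vertex reused: start a new parallel block
            blocks += 1
            used = set(step)
        else:                  # still disjoint: same parallel block
            used |= step
    return blocks + (1 if used else 0)

# Example: vertex 1 sends to 2, then vertex 3 steps (parallel with the
# send), then vertex 2 steps again -- two parallel blocks overall.
assert parallel_time([frozenset({1, 2}), frozenset({3}), frozenset({2})]) == 2
```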
Complexity Theory
We now have a time function; time to begin analyzing complexities!
Definition (Parallel Complexity)
For any given function f : ℕ0 → ℕ0 and network topology G,

PARTIME_G(f(n)) = {g | g : Σ* → Σ* computable by some TN 𝒯 with topology G and τ_𝒯(n) = O(f(n))}
Definition (Polynomial Parallel Time)
For a given network topology G,

Ppar_G = ⋃_{d ∈ ℕ} PARTIME_G(n^d)
Simple Results
Theorem
For any given function f : ℕ0 → ℕ0 and network topologies G, H,

DTIME(f(n)) ⊆ PARTIME_G(f(n))    (1)
H ≃ G ⇒ PARTIME_H(f(n)) = PARTIME_G(f(n))    (2)
The Parallel Computation Thesis
- The time taken to compute a function in parallel is polynomially related to the space taken sequentially.
- Not all models satisfy this! (e.g. Parallel Turing Machines)
- Inspiration for a complexity-class relation?
- Investigate when PSPACE ⊆ Ppar_G
- Prove it by deciding QBF, a PSPACE-complete problem, in polynomial parallel time
Definition
For the alphabet Σ = {∀, ∃, •, ∧, ∨, ¬, (, ), ⊤, ⊥, a}, QBF = {α ∈ Σ∗ : α is a satisfiable Boolean formula}
Deciding QBF in Binary Trees
Assume a single CTM is capable of computing each of the following in polynomial time (these primitives are used in the sketch after the theorem):
- Checking whether a string is a valid Boolean formula.
- Converting a formula to Prenex Normal Form.
- Replacing all occurrences of a variable with 'true'/'false'.
- Checking whether any variables are left in a formula.
- Evaluating a formula with no variables.
Theorem (Parallel Computation Thesis, Binary Trees)
PSPACE ⊆ Ppar_B
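The proof idea, reconstructed from the primitives above (the code and its recursion are our own hypothetical sketch, not the talk's): each CTM in the tree strips the outermost quantifier of its formula, substitutes both truth values, and delegates the two subformulas to its children, so ∀ becomes a conjunction of the children's answers and ∃ a disjunction. Run sequentially:

```python
# Hypothetical sketch of the binary-tree QBF strategy. Each recursive
# call plays the role of one CTM; the two sub-calls would execute in
# parallel on the node's two children, giving polynomial parallel time.

def substitute(formula: str, var: str, value: bool) -> str:
    # Stand-in for the assumed primitive that replaces every occurrence
    # of `var` with true/false. Naive string replacement for the sketch.
    return formula.replace(var, "1" if value else "0")

def evaluate_closed(formula: str) -> bool:
    # Stand-in for the assumed primitive: evaluate a variable-free
    # formula. eval() suffices for a sketch; a real CTM would parse.
    return bool(eval(formula.replace("1", "True").replace("0", "False")))

def qbf(prenex: list, matrix: str) -> bool:
    """Decide a prenex QBF; prenex is a list of (quantifier, variable)."""
    if not prenex:
        return evaluate_closed(matrix)
    (q, x), rest = prenex[0], prenex[1:]
    left = qbf(rest, substitute(matrix, x, True))    # sent to child 1
    right = qbf(rest, substitute(matrix, x, False))  # sent to child 2
    return (left and right) if q == "∀" else (left or right)

# ∀x ∃y . (x ∧ ¬y) ∨ (y ∧ ¬x) -- true.
assert qbf([("∀", "x"), ("∃", "y")], "(x and not y) or (y and not x)")
```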
Deciding QBF Generally
Definition (Neighborhood and Growth)
For a given network topology G, the neighborhood function N_G : ℕ → P(V) is defined as

N_G(n) = {v ∈ V : ∃ a path in G from 1 to v of length ≤ n}

and the growth function γ_G : ℕ → ℕ is defined as γ_G(n) = |N_G(n)|.
Theorem (Parallel Computation Thesis)
For any network topology G where γ_G(n) = Ω(2^n), PSPACE ⊆ Ppar_G.
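As a concrete illustration (hypothetical code, ours), γ_G is computable for any finitely-branching topology by breadth-first search from vertex 1. For the infinite binary tree under heap numbering it shows exactly the γ_G(n) = Ω(2^n) growth the theorem asks for:

```python
from collections import deque
from typing import Callable, Iterable, Set

def neighborhood(neighbors: Callable[[int], Iterable[int]], n: int) -> Set[int]:
    """N_G(n): vertices reachable from vertex 1 by a path of length <= n."""
    seen, frontier = {1}, deque([(1, 0)])
    while frontier:
        v, d = frontier.popleft()
        if d == n:
            continue
        for u in neighbors(v):
            if u not in seen:
                seen.add(u)
                frontier.append((u, d + 1))
    return seen

def gamma(neighbors: Callable[[int], Iterable[int]], n: int) -> int:
    """Growth function gamma_G(n) = |N_G(n)|."""
    return len(neighborhood(neighbors, n))

# Infinite binary tree, heap numbering: children of v are 2v and 2v + 1.
def tree_neighbors(v: int):
    return ([v // 2] if v > 1 else []) + [2 * v, 2 * v + 1]

# gamma_B(n) = 2^(n+1) - 1, i.e. Omega(2^n), as the theorem requires.
assert [gamma(tree_neighbors, n) for n in range(4)] == [1, 3, 7, 15]
```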
Lower Bounds
- We have a substantial lower bound, i.e. a set of problems that can definitely be made tractable by parallelism
- What about an upper bound, i.e. problems which can definitely NOT be made tractable?
- Solution: simulate a TN on a sequential Turing Machine!
- See whiteboard for explanation
Analyzing Our Simulation
Before we go further, we need to capture name size:
Definition (Name Complexity)
For any network topology G, the name growth function κ_G : ℕ → ℕ is defined as

κ_G(n) = max(N_G(n))

The first tape, with an input string of length x, has size O(x + γ_G(n)(κ_G(n) + n)) after n simulated transitions. The second tape similarly has size O(γ_G(n)(κ_G(n) + Δκ_G(n + 1))), where the topology has maximum degree Δ. (κ_G is computed for the binary tree below.)
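Continuing the same hypothetical sketch (and reusing `neighborhood` and `tree_neighbors` from it), κ_G is simply the largest vertex name occurring in the n-neighborhood; for the heap-numbered binary tree it is exponential but well within the bound used later:

```python
def kappa(neighbors, n: int) -> int:
    """Name growth function kappa_G(n) = max(N_G(n))."""
    return max(neighborhood(neighbors, n))

# Heap numbering makes kappa_B(n) = 2^(n+1) - 1: exponential in n, and
# comfortably inside the O(2^(n^d)) condition of the upper-bound theorem.
assert [kappa(tree_neighbors, n) for n in range(4)] == [1, 3, 7, 15]
```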
Analyzing Our Simulation
After some magic, we have that transition n takes time

t(n) = O((h(n) + k(n))^2 + max_{v ∈ N_G(n+1), m ∈ ℕ} t_φ(v, m))

where t_φ : V × ℕ → ℕ is the time taken to compute φ. Adding together all the transitions, we have (by more magic!)

t_T(x) = O(γ_G^2(τ_T(x)) τ_T^2(x) ((x + γ_G(τ_T(x)) κ_G(τ_T(x) + 1))^2 + γ_G^2(τ_T(x)) Φ(τ_T(x))))

where Φ(n) = max_{v ∈ N_G(n+1), m ∈ ℕ} t_φ(v, m).
What About Ppar_G?
- What upper bound can we get on this with our t_T?
- Now τ_T(n) = O(n^d)
- Note also γ_G(n) = O(2^n), as network topologies have constant degree
- So t_T(n) = O(2^{O(n^d)} (κ_G(O(n^d)) + Φ(O(n^d)))^2), as spelled out below
- The contributing factors left are κ_G and t_φ
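One way to see that simplification (our own working; polynomial factors get absorbed into the exponential):

```latex
\begin{align*}
t_T(n) &= O\!\left(\gamma_G^2(\tau_T(n))\,\tau_T^2(n)\left(
          (n + \gamma_G(\tau_T(n))\,\kappa_G(\tau_T(n)+1))^2
          + \gamma_G^2(\tau_T(n))\,\Phi(\tau_T(n))\right)\right)\\
       &= O\!\left(2^{O(n^d)}\, n^{2d}\left(
          (n + 2^{O(n^d)}\,\kappa_G(O(n^d)))^2
          + 2^{O(n^d)}\,\Phi(O(n^d))\right)\right)
          \quad\text{by } \tau_T(n)=O(n^d),\ \gamma_G(n)=O(2^n)\\
       &= O\!\left(2^{O(n^d)}\left(\kappa_G(O(n^d))^2 + \Phi(O(n^d))\right)\right)
          \quad\text{absorbing } n^{2d} \text{ and } n \text{ into } 2^{O(n^d)}\\
       &= O\!\left(2^{O(n^d)}\,(\kappa_G(O(n^d)) + \Phi(O(n^d)))^2\right).
\end{align*}
```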
What About Ppar_G?
- The best we can hope for is an EXPTIME upper bound, since 2^{O(n^d)} appears in the expression
- We need κ_G(n) = O(2^{n^k}); this is possible!
- Note that t_φ(n, m) = O(f(n)) ∀ m means Φ(n) = O(f(κ_G(n)))
- Hence, we need f(O(2^{n^d})) = O(2^{n^h})
- It is nontrivial to find the biggest possible f; however, quasi-polynomials, i.e. f(n) = O(2^{log^c n}), work! (See the check below.)
- f(n) = O(2^n) is in fact too big and would give a 2-EXPTIME upper bound instead; the true threshold lies between these two...
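A quick check (ours) that quasi-polynomial t_φ keeps the bound exponential: if f(n) = O(2^{log^c n}), then

```latex
f\!\left(O\!\left(2^{n^d}\right)\right)
  = O\!\left(2^{\log^c O(2^{n^d})}\right)
  = O\!\left(2^{O(n^d)^c}\right)
  = O\!\left(2^{O(n^{dc})}\right)
  = O\!\left(2^{n^h}\right)\ \text{for any } h > dc,
```

which is exactly the form required above.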
Our Upper Bound
Theorem
For any network topology G such that κ_G(n) = O(2^{n^d}) and t_φ(v, n) = O(2^{log^c v}), Ppar_G ⊆ EXPTIME.
Implications
- This means a problem outside of EXPTIME can only be made tractable by encoding hard computations in the network topology itself
- e.g. Presburger arithmetic cannot be made tractable by any polynomial-time-computable network topology
Simulation
We have covered the first three goals of our model; the last was to "simulate other models of parallelism with good complexity." We will try to simulate BSP and Boolean circuits.
- BSP is a practical model of parallelism
- Boolean circuits connect to parallelism in a theoretically important way
Conclusions
- Overall, we have created a model of message-passing parallelism which meets all of our requirements
- We have shown how to connect Turing machines to this form of parallelism
- Our model is limited by a static network of constant degree