Interactions of Computability, Complexity and Group Theory




Computability from the Point of View of Geometric Group Theory
Paul Schupp, University of Illinois at Urbana-Champaign
December 2014, Shanghai Jiao Tong University

Paul Schupp (UIUC) December, 2014 1 / 1


Some History

The 1912 paper of Max Dehn on finitely presented groups is remarkable. First, he posed the word, conjugacy and isomorphism problems for finitely presented groups, realizing the importance of questions of computability for group theory. Second, for the fundamental groups

Sg = ⟨ x1, ..., x2g ; x1 ⋯ x2g x1⁻¹ ⋯ x2g⁻¹ ⟩

of compact surfaces with g ≥ 2, Dehn gave a linear time algorithm for the word problem, now called Dehn’s Algorithm: if w = 1 in Sg then w contains more than half of a cyclic permutation of the defining relator or its inverse.

Note that this paper is thirty-four years before the development of the theory of computability and sixty-nine years before the theory of computational complexity. It is very far ahead of its time.
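Dehn’s Algorithm can be sketched directly from this description. The following is a minimal Python illustration (all names are hypothetical, and words are encoded as strings with uppercase letters standing for inverse generators): repeatedly free-reduce, and whenever the word contains more than half of a cyclic permutation of the relator or its inverse, replace that long piece by the inverse of the short remainder. This is only an illustration of the rewriting process, not an optimized linear-time implementation.

```python
def inv(w):
    # formal inverse of a word: reverse it and flip the case of each
    # letter (uppercase letters stand for inverse generators)
    return w[::-1].swapcase()

def free_reduce(w):
    # cancel adjacent inverse pairs such as "aA" or "Aa"
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def dehn_reduce(w, relator):
    # Dehn's Algorithm sketch: while the word contains more than half
    # of a cyclic permutation of the relator or of its inverse, replace
    # that long piece u by the inverse of the short remainder v (where
    # uv is the cyclic permutation), then free-reduce again. Each step
    # strictly shortens the word, so the loop terminates.
    w = free_reduce(w)
    n = len(relator)
    cycled = [r[i:] + r[:i] for r in (relator, inv(relator)) for i in range(n)]
    changed = True
    while changed:
        changed = False
        for r in cycled:
            for k in range(n, n // 2, -1):  # piece lengths > n/2
                u = r[:k]
                if u in w:
                    w = free_reduce(w.replace(u, inv(r[k:]), 1))
                    changed = True
                    break
            if changed:
                break
    return w

# The genus-2 defining relator from the slide, uppercase = inverse:
R = "abcdABCD"
print(dehn_reduce(R, R))     # "" : the relator represents the identity
print(dehn_reduce("ab", R))  # "ab" : no Dehn reduction applies
```

A word equals 1 in Sg exactly when this process ends in the empty word.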


Turing Machines and the Halting Problem

One of the great accomplishments of twentieth century mathematics was to give a precise definition of what it means to be computable. The model usually used is “computable by a Turing machine”. One can just think of a Turing machine as an idealized computer: the machine has an infinite memory and we do not care how many operations a computation takes. A machine started on a particular input may or may not eventually halt.

The Halting Problem for Turing machines is the following decision problem: Given a Turing machine M and a word w on the input alphabet of M, does M halt when started with input w? Turing showed in 1936 that the Halting Problem is not computable.

In the 1930’s the Halting Problem was of course a very abstract mathematical problem. Now it is a daily problem faced by hundreds of people: if one finally gets a long computer program to compile and then starts it, what does it do?


The Undecidability of the Word Problem

In the 1950’s P.S. Novikov and W.W. Boone independently constructed finitely presented groups with unsolvable word problem by coding the Halting Problem into the groups. This fundamental result is the basis of all undecidability results in group theory and topology. Almost all sufficiently general problems are unsolvable. Isomorphism to any fixed finitely presented group is undecidable. In particular, there is no algorithm deciding whether or not finite presentations are presentations of the trivial group.


Another basic problem about finitely presented groups is the Membership Problem: fix a finitely presented group G. Given a finite set S of words and a word w, does w belong to the subgroup H generated by S? This problem is already unsolvable in F2 × F2, the direct product of two free groups of rank 2 (Mikhailova’s Theorem), and in many hyperbolic groups (the Rips construction). A.A. Markov used the undecidability of the Isomorphism Problem to show that for n ≥ 4, there is no algorithm which decides, when given two n-manifolds, whether or not they are homeomorphic.


Computational Complexity

A basic decision problem is the Satisfiability Problem for Boolean Expressions: given a Boolean expression E using variables and the logical connectives ∧, ∨, ¬, is there an assignment of truth-values to the variables which makes the expression E true? One realizes that there may be a difficulty with the naive algorithm of checking the truth-table. If there are n variables then there are 2ⁿ rows in the truth-table, so that procedure might take exponential time.
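The naive truth-table procedure can be sketched in a few lines of Python. This is a hedged illustration (the encoding of expressions as Python Boolean strings is an assumption made for the example); note the loop really does visit all 2ⁿ assignments in the worst case.

```python
from itertools import product

def satisfiable(expr, variables):
    # Naive truth-table check: try all 2**n assignments of the n
    # variables. `expr` is a Python Boolean expression over the given
    # variable names, e.g. "(x or y) and not z".
    for values in product([False, True], repeat=len(variables)):
        if eval(expr, dict(zip(variables, values))):
            return True   # found a satisfying assignment
    return False          # all 2**n rows checked, none satisfies E

print(satisfiable("(x or y) and not x", ["x", "y"]))  # True
print(satisfiable("x and not x", ["x"]))              # False
```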


Motivated by the existence of real computers, investigation of time complexity began in the 1960’s. Cobham (1964) defined the class P of decision problems for which there is a polynomial time algorithm and suggested P as a model of feasible problems. Cook (1971) and Levin (1973) independently proved that the Satisfiability Problem is NP-complete. That is, it is maximally hard in a very wide class of combinatorial problems. This has led to the famous P = NP problem: Hundreds of very different problems have been proved to be NP-complete. Can one prove that there is no polynomial time algorithm for some NP-complete problem?


The Simplex Algorithm

In 1947, Dantzig developed his Simplex Algorithm for linear programming problems. This algorithm now runs many hundreds of times every day for scheduling and transportation problems, almost always very quickly. There are clever examples of Klee and Minty showing that there exist problems for which the Simplex Algorithm must take exponential time. But one never encounters such examples in practice.


Is Worst-case Complexity a Good Measure of Difficulty?

The classes P, NP and the general concept of being computable are worst-case measures: how hard are the most difficult instances? The example of the Simplex Algorithm shows that hard instances may be very rare. There is now an awareness in complexity theory that worst-case measures may not give a good over-all picture of a particular algorithm or problem. In a sense, the behavior of the Simplex Algorithm is perhaps the world’s longest running computer experiment. What does one “see” in computer experiments? One sees “generic behavior”. So let’s try to explain what is going on.


Density and Generic Sets

Fix a finite alphabet Σ and let Σ∗ denote the set of all words over Σ. If w ∈ Σ∗, the length |w| of w is the number of letters in w. Since we have a length function we can copy the classical definition of asymptotic density from number theory. If A ⊆ Σ∗ then, for n ≥ 1, the density of A at n is

ρn(A) = |{w ∈ A : |w| ≤ n}| / |{w ∈ Σ∗ : |w| ≤ n}|

If ρ(A) = limn→∞ ρn(A) exists then ρ(A) is the asymptotic density of A. The limit does not exist in general, but the upper density lim supn ρn(A) and the lower density lim infn ρn(A) always exist.
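For small n the density ρn(A) can be computed by brute-force enumeration. The following Python sketch (function names are hypothetical, chosen for the example) illustrates the definition over Σ = {a, b}:

```python
from itertools import product

def density_at(n, sigma, in_A):
    # rho_n(A) = #{w in A : |w| <= n} / #{w in Sigma* : |w| <= n},
    # computed by enumerating every word of length <= n (including the
    # empty word) and testing membership with the predicate in_A.
    count_A = total = 0
    for length in range(n + 1):
        for tup in product(sigma, repeat=length):
            w = "".join(tup)
            total += 1
            count_A += in_A(w)
    return count_A / total

# A = words over {a, b} that start with 'a'; rho_n(A) tends to 1/2.
print(density_at(10, "ab", lambda w: w.startswith("a")))
```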

Definition


A set A is generic if it has density 1. For example, in N the set of powers of 2 has density 0; its complement has density 1 and is generic.


Generic Computability and Complexity

Gurevich and Levin independently introduced the idea of average-case complexity as a better measure than worst-case complexity: some problems with hard worst cases may be much better on average. But average-case complexity is very hard to work with, since it is very sensitive to the probability measure used and one must still consider the hardest cases. Kapovich, Miasnikov, Schupp and Shpilrain introduced generic-case complexity as a measure which is easier to work with than average-case complexity but still allows a nuanced analysis of problems where hard instances are very rare.


Definition

Let S be a subset of Σ∗ with characteristic function χS. A set S is generically computable if there exists a partial computable function Φ such that Φ(x) = χS(x) whenever Φ(x) is defined (written Φ(x) ↓) and the domain of Φ is generic in Σ∗. We stress that all answers given by Φ must be correct, but Φ need not be everywhere defined; indeed, we do not require the domain of Φ to be computable.

We can “clock” Φ by requiring that on input w, Φ answers within f(|w|) steps; otherwise we count Φ as not answering. In this case we say that Φ generically computes S in time f. We say that Φ strongly generically computes S in time f if the density of the domain of Φ converges to 1 exponentially fast.
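As a toy illustration of the definition (the set S and the partial algorithm Φ below are invented for this example, not taken from the talk): over the alphabet {0, 1}, let S be the strings whose binary value is a perfect square, and let Φ decline to answer on the all-zero strings, a set of density 0. Then Φ is correct wherever it is defined, and its domain is generic.

```python
import math

def in_S(w):
    # S = binary strings whose value is a perfect square
    n = int(w, 2) if w else 0
    r = math.isqrt(n)
    return r * r == n

def phi(w):
    # A partial algorithm: on the density-0 set of all-zero strings
    # (including the empty string) it gives no answer (None); on the
    # generic complement it answers correctly.
    if set(w) <= {"0"}:
        return None          # undefined: outside the generic domain
    return in_S(w)

print(phi("0000"))   # None: no answer on this input
print(phi("100"))    # True: 4 is a perfect square
print(phi("101"))    # False: 5 is not
```

The all-zero strings of length ≤ n number only n + 1 among the 2ⁿ⁺¹ − 1 strings of length ≤ n, so the domain of phi indeed has density 1.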


Examples from Group Theory

In general, the word problem for finitely generated groups can be quite difficult, or indeed, not computable. The following “Quotient Theorem” is from Kapovich, Miasnikov, Schupp and Shpilrain.

Theorem

Let G be a finitely generated group which has a subgroup H of finite index such that there is a homomorphism from H onto a nonamenable group K whose word problem is solvable in time C. Then the word problem for G is strongly generically solvable in time C.


Magnus solved the word problem for one-relator groups in the 1930’s. We have no idea of the possible worst-case complexities over the whole class of one-relator groups. A beautiful theorem of Benjamin Baumslag and Steve Pride shows that every one-relator group with at least three generators has a subgroup of finite index which maps onto the free group of rank 2. Hence, the word problem for any one-relator group with at least three generators is strongly generically linear time.
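The generic linear-time algorithm behind such results can be caricatured as follows (a hedged sketch; the projection map below is hypothetical, standing in for a homomorphism onto a free quotient): map w into the free quotient and free-reduce. A nontrivial image certifies w ≠ 1 in G; a trivial image yields no answer, and for a free quotient of a nonamenable (here, free) group this happens only on an exponentially negligible set of words.

```python
def free_reduce(w):
    # free reduction in F2: letters a, b with inverses A, B;
    # cancel adjacent inverse pairs such as "aA" or "Aa"
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def generic_word_problem(w, to_free_image):
    # Partial, one-sided linear-time test: a nontrivial image in the
    # free quotient proves w != 1 in G; a trivial image gives no answer.
    if free_reduce(to_free_image(w)) != "":
        return False         # definitely w != 1 in G
    return None              # no answer on this input

# Hypothetical quotient map for illustration: kill every generator
# except a and b (keeping a, b and their inverses A, B).
proj = lambda w: "".join(c for c in w if c in "abAB")

print(generic_word_problem("abc", proj))  # False: image "ab" is nontrivial
print(generic_word_problem("acA", proj))  # None: image "aA" reduces away
```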


Subsequent investigations have shown that classical decision problems in group theory often exhibit exactly the same behavior as the Simplex Algorithm. Note that even undecidable problems may have very low generic-case complexity. In particular, although the groups of Novikov and Boone have undecidable word problem, their word problems are strongly generically linear time. (They have large free quotients.) We do not know whether or not the isomorphism problem for one-relator presentations is solvable. However, Kapovich and Schupp showed that it is generically single exponential time. This again illustrates that it is possible to prove good generic-case results without knowing the worst-case complexity.


Miasnikov and Osin have given a beautiful example of a finitely generated, computably presented group whose word problem is not generically computable. This is a striking application of the Golod-Shafarevich inequality. Whether or not there exists a finitely presented group whose word problem is not generically computable is a difficult open question.


slide-52
SLIDE 52

The Asymptotic-Generic Point of View

The asymptotic-generic point of view is now standard in geometric group theory. This starts with Gromov’s remark that “most” finitely presented groups are hyperbolic; a precise proof was given by Ol’shanskii. There has since been a great deal of research on generic properties of groups. The idea of generic-case complexity is an application of this point of view to complexity theory. Jockusch and Schupp began a study of how this point of view interacts with the general theory of computability, and a subsequent series of papers shows that the generic-asymptotic point of view interacts with classical computability theory in a very deep way.


slide-55
SLIDE 55

Coarse computability

Jockusch and Schupp introduced another possible model of less than perfect computability. Two sets A and B are coarsely similar if their symmetric difference A▽B has density 0. It is not difficult to check that this is indeed an equivalence relation. Any set D which is coarsely similar to A is called a coarse description of A. Say that a set A is coarsely computable if A is coarsely similar to a computable set. That is, A has a computable coarse description. We can think of this in the following way. We now have a total computable function which may make mistakes about membership in A, but which is correct on a set of density 1. In particular, all sets of density 0 and all sets of density 1 are coarsely computable.
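The definitions above can be made concrete on finite initial segments. The following sketch is illustrative only (the sets A and C are hypothetical examples, not from the slides): A is taken to be the multiples of 3 perturbed on a density-0 set, so that C = "multiples of 3" is a computable coarse description of A.

```python
# Density of a set on the initial segment {0, ..., n-1}.
def density_up_to(S, n):
    """Fraction of {0, ..., n-1} satisfying the membership predicate S."""
    return sum(1 for k in range(n) if S(k)) / n

def sym_diff(A, B):
    """Membership predicate for the symmetric difference A ▽ B."""
    return lambda k: A(k) != B(k)

# Hypothetical example: A is "multiples of 3" with errors on the powers of 2.
A = lambda k: (k % 3 == 0) or (k > 0 and k & (k - 1) == 0)
C = lambda k: k % 3 == 0   # a computable coarse description of A

# A ▽ C consists only of powers of 2, a set of density 0, so the
# estimated density shrinks as n grows: C coarsely describes A.
for n in (10**3, 10**4, 10**5):
    print(n, density_up_to(sym_diff(A, C), n))
```

The density of the powers of 2 below n is roughly log2(n)/n, so these estimates tend to 0, matching the requirement that a coarse description differ from A only on a set of density 0.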


slide-59
SLIDE 59

The coarse computability bound

It is natural to consider both partial and coarse computability at densities less than one.

Definition

A set A is coarsely computable at density r if there is a computable set C such that the lower density of the set on which A and C agree is at least r, that is, ρ̲({w : A(w) = C(w)}) ≥ r. So we can define:

Definition

If A ⊆ Σ∗, the coarse computability bound of A is γ(A) = sup{r : A is coarsely computable at density r}.
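For intuition, the agreement density with one fixed computable C can be estimated on initial segments; γ(A) is the supremum of the achievable lower agreement densities over all computable C. A toy sketch follows (the sets here are hypothetical illustrations, not from the slides):

```python
def agreement_density(A, C, n):
    """Fraction of {0, ..., n-1} on which the predicates A and C agree."""
    return sum(1 for k in range(n) if A(k) == C(k)) / n

# Hypothetical A: contains every even number, and contains an odd k
# exactly when k has an even number of 1-bits in binary.
A = lambda k: (k % 2 == 0) or (bin(k).count("1") % 2 == 0)
C = lambda k: k % 2 == 0   # one candidate computable description of A

# C is right on every even number and on about half the odd numbers,
# so the agreement density is near 3/4, witnessing that this C
# coarsely computes A at every density below 3/4.
for n in (10**3, 10**4, 10**5):
    print(n, agreement_density(A, C, n))
```

Computing γ(A) itself requires a supremum over all computable sets, which no finite experiment can carry out; the sketch only exhibits the density achieved by one candidate C.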


slide-62
SLIDE 62

Note that A is coarsely computable if and only if A is coarsely computable at density 1. But there are many sets A with γ(A) = 1 which are not coarsely computable. The coarse computability bound of a “random set” is 1/2.

Either of the constant algorithms, always answer “No” or always answer “Yes”, is correct on a set of density 1/2. If one of them did better, that would itself be a demonstration that the set is not random.
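This heuristic can be simulated; the sketch below uses Python’s pseudorandom generator as a stand-in for an algorithmically random set, which is an assumption of this writeup rather than part of the slides.

```python
import random

random.seed(0)
n = 10**5
# Pseudorandom set: each of the first n points is a member with prob. 1/2.
A = [random.random() < 0.5 for _ in range(n)]

# The constant algorithm "always No" is correct exactly off A;
# the constant algorithm "always Yes" is correct exactly on A.
agree_no = sum(1 for a in A if not a) / n
agree_yes = sum(1 for a in A if a) / n

# Both fractions hover near 1/2; for a genuinely random set no
# algorithm can do better, so the coarse computability bound is 1/2.
print(agree_no, agree_yes)
```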


slide-65
SLIDE 65

The distance D

If A, B ⊆ Σ∗, define D(A, B) = ρ̄(A▽B), where A▽B denotes the symmetric difference of A and B and ρ̄ is upper density. A Venn diagram argument shows that D satisfies the triangle inequality, so D is a pseudo-metric on subsets of Σ∗. Since D(A, B) = 0 exactly if A and B are coarsely similar, D is actually a metric on the space of coarse similarity classes. Note that γ is an invariant of coarse similarity classes.

  • Lemma. ρ̲(A) = 1 − ρ̄(Ā), where Ā denotes the complement of A.

As a corollary, ρ̲({n : A(n) = C(n)}) = 1 − D(A, C), since the set on which A and C agree is the complement of A▽C.
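On any finite initial segment the corollary is an exact counting identity, because the points where A and C agree are precisely the points outside A ▽ C. A quick check, with hypothetical example sets:

```python
def frac(S, n):
    """Fraction of {0, ..., n-1} satisfying the predicate S."""
    return sum(1 for k in range(n) if S(k)) / n

# Hypothetical example sets on the natural numbers.
A = lambda k: k % 2 == 0   # even numbers
C = lambda k: k % 3 == 0   # multiples of 3
n = 6000                   # a multiple of 6, so the fractions are exact

d_sym = frac(lambda k: A(k) != C(k), n)    # finite stand-in for D(A, C)
d_agree = frac(lambda k: A(k) == C(k), n)  # finite agreement density

# Agreement and symmetric difference partition {0, ..., n-1}.
assert abs(d_agree - (1 - d_sym)) < 1e-12
print(d_sym, d_agree)   # each period of 6 has 3 agreements, so both are 0.5
```

The lemma and corollary say that this partition identity survives the passage to lower and upper densities in the limit.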


slide-70
SLIDE 70

So, γ(A) = 1 if and only if A is a limit of computable sets in the pseudo-metric D. Note that γ(A) = 1 does not necessarily mean that A is coarsely computable, since the supremum need not be attained. In general, γ(A) = r means that the distance from A to the family C of computable sets is 1 − r. It remains to be seen to what degree one can connect the pseudo-metric D with specific computability properties, and this is currently an interesting area of investigation. The following theorem was proved by Hirschfeldt, Jockusch, McNicholl and Schupp.

Theorem

For every r ∈ (0, 1] there is a set A with γ(A) = r and such that A is not coarsely computable at density r, and a set B such that γ(B) = r and B is coarsely computable at density r.


slide-74
SLIDE 74

What is really interesting is how these new ideas of “imperfect computability” interact with ideas from classical computability theory which seem very abstract to someone not familiar with the subject. So far, the “most interesting” theorem about the coarse computability bound is the following.

Theorem

If A is computable from a 1-generic set which is in turn computable from the Halting Problem and γ(A) = 1 then A is coarsely computable.


slide-78
SLIDE 78

Thank you!
