
SLIDE 1

Synchronizing Finite Automata

Lecture IV. Synchronizing Automata and Markov Chains

Mikhail Volkov

Ural Federal University

Mikhail Volkov Synchronizing Finite Automata

SLIDE 2
  • 1. Linearization

We associate a natural linear structure with each automaton A = ⟨Q, Σ⟩. Assume that Q = {1, 2, . . . , n} and assign to each subset K ⊆ Q its characteristic vector [K] ∈ Rⁿ (the space of n-dimensional column vectors): the i-th entry of [K] is 1 if i ∈ K, and 0 otherwise.

For each word w ∈ Σ∗, the action [i] ↦ [i . w] gives rise to a linear transformation of Rⁿ; we denote by [w] the matrix of this transformation in the standard basis [1], . . . , [n] of Rⁿ. Clearly, the matrix [w] has exactly one non-zero entry in each column, and this entry is equal to 1. For K ⊆ Q and v ∈ Σ∗, let K . v⁻¹ = {q | q . v ∈ K}. Then [K . v⁻¹] = [v]ᵀ[K], where [v]ᵀ stands for the usual transpose of the matrix [v]. A word w is a reset word for A iff q . w⁻¹ = Q for some state q; in linear terms, [w]ᵀ[q] = [Q].
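The linearization above is easy to make concrete. A minimal sketch, assuming a hypothetical four-state automaton over Σ = {a, b} (the transition table `delta` and the 0-indexed state numbering are illustrative, not taken from the slides):

```python
import numpy as np

# Hypothetical 4-state automaton over {a, b} (0-indexed states):
# a fixes states 0, 1, 2 and maps 3 -> 0; b is the cyclic shift.
delta = {"a": [0, 1, 2, 0], "b": [1, 2, 3, 0]}
n = 4

def letter_matrix(c):
    """[c]: column i is the characteristic vector of {i . c}."""
    M = np.zeros((n, n), dtype=int)
    for i, j in enumerate(delta[c]):
        M[j, i] = 1
    return M

def word_matrix(w):
    """[w] for w = c1 c2 ... ck; since [uv] = [v][u], letters multiply on the left."""
    M = np.eye(n, dtype=int)
    for c in w:
        M = letter_matrix(c) @ M
    return M

def is_reset_word(w):
    """w is a reset word iff [w]^T [q] = [Q] for some state q."""
    Mt = word_matrix(w).T
    return any((Mt[:, q] == 1).all() for q in range(n))
```

On this particular automaton `is_reset_word("abbbabbba")` returns True: a word of length (n − 1)² = 9, in line with the Černý series.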

SLIDE 9
  • 2. Extensibility in Linear Terms

For g1, g2 ∈ Rⁿ, we denote their usual inner product by ⟨g1, g2⟩. Let 1ₙ := [Q]/n be the uniform stochastic vector in Rⁿ, that is, the vector with all entries equal to 1/n. Then the fact that a word w extends a subset K ⊂ Q (that is, the inequality |K| < |K . w⁻¹|) can be rewritten as

|K|/n = ⟨[K], 1ₙ⟩ < ⟨[w]ᵀ[K], 1ₙ⟩ = |K . w⁻¹|/n.

Thus, the extension method amounts to finding a state q, a letter a, and a sequence of words w1, w2, . . . , wd such that

1/n = ⟨[q], 1ₙ⟩ < ⟨[a]ᵀ[q], 1ₙ⟩ < ⟨[w1a]ᵀ[q], 1ₙ⟩ < · · · < ⟨[wd · · · w2w1a]ᵀ[q], 1ₙ⟩ = 1.

Here d ≤ n − 2, because at each step the inner product increases by at least 1/n. The problem is that in general there is no linear bound for the lengths of the wi's.
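The extension method can be sketched as a breadth-first search over preimages: grow a singleton {q} by repeatedly prepending a shortest extending word. An illustrative implementation, assuming the same hypothetical four-state automaton as in the earlier sketch (not from the slides):

```python
from collections import deque

# Hypothetical 4-state automaton: a fixes 0,1,2 and maps 3 -> 0; b is a cyclic shift.
delta = {"a": [0, 1, 2, 0], "b": [1, 2, 3, 0]}
n, letters = 4, "ab"

def preimage(K, v):
    """K . v^{-1} = {q | q . v in K}; letters are peeled off from the right."""
    for c in reversed(v):
        K = {i for i in range(n) if delta[c][i] in K}
    return K

def shortest_extending_word(K):
    """BFS for a shortest w with |K . w^{-1}| > |K|."""
    start = frozenset(K)
    seen = {start}
    queue = deque([("", start)])
    while queue:
        w, cur = queue.popleft()
        for c in letters:
            nxt = frozenset(i for i in range(n) if delta[c][i] in cur)
            if len(nxt) > len(K):
                return c + w          # cur = K.w^{-1}, so nxt = K.(cw)^{-1}
            if nxt not in seen:
                seen.add(nxt)
                queue.append((c + w, nxt))
    return None                       # K is not extensible

# Chain the extensions: grow {q} = {0} until its preimage is all of Q.
K, steps = {0}, []
while len(K) < n:
    w = shortest_extending_word(K)
    steps.append(w)
    K = preimage(K, w)

# Concatenating the steps right-to-left (w_d ... w_2 w_1) yields a reset word.
reset = "".join(reversed(steps))

def image(K, v):
    for c in v:
        K = {delta[c][i] for i in K}
    return K
```

For this automaton the chain is "a", "abbb", "abbb", giving the reset word "abbbabbba", which sends every state to state 0.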

SLIDE 15
  • 3. Jungers's Dualization

Raphaël Jungers (The synchronizing probability function of an automaton, SIAM J. Discrete Math. 26 (2011) 177–192) has suggested an interesting idea that in our notation can be described as follows: one should substitute the uniform stochastic vector 1ₙ with an adaptive positive stochastic vector p, which may depend on both the automaton A and the given proper subset K ⊂ Q, but has the property that there exists a word v of length at most |Q| such that ⟨[v]ᵀ[K], p⟩ > ⟨[K], p⟩. Jungers explored this idea using techniques from linear programming and proved that such a positive stochastic vector indeed exists for every strongly connected synchronizing automaton and every proper subset. Warning: here we encounter a dual problem, since it is not clear how to find a linear upper bound on the number of extension steps.
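Jungers's statement (though not his linear-programming argument) can be illustrated by brute force: for a given K, search for a short word v whose preimage gains a state, then put most of the mass of p on that state. A sketch, again assuming the hypothetical four-state automaton from the earlier sketches:

```python
import numpy as np
from itertools import product

# Hypothetical 4-state automaton; brute-force illustration only.
delta = {"a": [0, 1, 2, 0], "b": [1, 2, 3, 0]}
n, letters = 4, "ab"

def word_matrix(w):
    """[w] as before: [uv] = [v][u]."""
    M = np.eye(n)
    for c in w:
        L = np.zeros((n, n))
        for i, j in enumerate(delta[c]):
            L[j, i] = 1.0
        M = L @ M
    return M

def find_witness(K):
    """Find v (|v| <= n) and a positive stochastic p with
    <[v]^T [K], p> > <[K], p>."""
    k_vec = np.array([1.0 if i in K else 0.0 for i in range(n)])
    for length in range(1, n + 1):
        for v in map("".join, product(letters, repeat=length)):
            d = word_matrix(v).T @ k_vec - k_vec
            if d.max() > 0:                  # some state enters the preimage
                i = int(d.argmax())
                eps = 1.0 / (2 * n * n)      # small mass keeps p strictly positive
                p = np.full(n, eps)
                p[i] = 1.0 - (n - 1) * eps
                return v, p
    return None
```

The vector p built here is concentrated near one gained state; Jungers's actual construction via linear programming is more delicate, and this is only a witness to the inequality, not his method.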

SLIDE 19
  • 4. Comparison

‘Classic’ extension:

1/n = ⟨[q], 1ₙ⟩ < ⟨[a]ᵀ[q], 1ₙ⟩ < ⟨[w1a]ᵀ[q], 1ₙ⟩ < · · · < ⟨[wd · · · w2w1a]ᵀ[q], 1ₙ⟩ = 1.

Constant vector (1ₙ); constant increment (at least 1/n), whence d ≤ n − 2; no upper bound on |wi|.

Jungers's extension:

1/n = ⟨[q], 1ₙ⟩ < ⟨[a]ᵀ[q], 1ₙ⟩,
⟨[a]ᵀ[q], p1⟩ < ⟨[w1a]ᵀ[q], p1⟩,
· · ·
⟨[wd−1 · · · w2w1a]ᵀ[q], pd⟩ < ⟨[wd · · · w2w1a]ᵀ[q], pd⟩ = 1.

Adaptive vector (pi); unpredictable increment, whence no upper bound on d; however, |wi| ≤ n.

SLIDE 29
  • 5. Primitivity

As we have shown in Lecture II, it suffices to consider strongly connected automata when studying the Černý conjecture. Recall that a (directed) graph is called primitive if the gcd of the lengths of its cycles is equal to 1.

Observe that the underlying graph of a synchronizing automaton must be primitive. To see this, suppose that Γ = (V, E) is a strongly connected graph and k > 1 is a common divisor of the lengths of its cycles. Take a vertex v0 ∈ V and, for i = 0, 1, . . . , k − 1, let Vi := {v ∈ V | ∃ a path from v0 to v of length ≡ i (mod k)}. Clearly, V = V0 ∪ V1 ∪ · · · ∪ Vk−1. We claim that Vi ∩ Vj = ∅ if i ≠ j.
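The gcd of cycle lengths (the period) of a strongly connected digraph can be computed with a standard BFS trick: for BFS distances from v0, every edge u → v contributes dist(u) + 1 − dist(v) to the gcd, and the classes Vi are exactly the distances taken mod k. A sketch on two hypothetical graphs (a plain 4-cycle, and a Černý-style graph with self-loops):

```python
import math
from collections import deque

def period(adj, v0=0):
    """Period (gcd of cycle lengths) of a strongly connected digraph.
    Runs BFS from v0; each edge u -> v contributes dist[u] + 1 - dist[v]."""
    dist = {v0: 0}
    g = 0
    queue = deque([v0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
            g = math.gcd(g, abs(dist[u] + 1 - dist[v]))
    return g, dist

# A plain 4-cycle: every cycle has length 4, so the graph is not primitive.
cycle4 = {0: [1], 1: [2], 2: [3], 3: [0]}
k, dist = period(cycle4)

# The partition V_0, ..., V_{k-1} from the slide: distances mod k.
V = {i: sorted(v for v, dv in dist.items() if dv % k == i) for i in range(k)}

# Adding self-loops (as the letter a does in a Cerny-style automaton)
# introduces cycles of length 1, making the gcd 1, i.e. the graph primitive.
cerny_like = {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [0]}
```

For the 4-cycle, k = 4 and each Vi is a single vertex, so every edge indeed leads from Vi to Vi+1 (mod k).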

SLIDE 36
  • 6. Primitivity

Let v ∈ Vi ∩ Vj, where i ≠ j. This means that in Γ there are two paths from v0 to v: one of length ℓ ≡ i (mod k) and one of length m ≡ j (mod k). There is also a path from v to v0 of length, say, n. Combining it with the two paths above, we get a cycle of length ℓ + n and a cycle of length m + n.


SLIDE 39
  • 7. Primitivity

Since k divides the length of every cycle in Γ, we have ℓ + n ≡ i + n ≡ 0 (mod k) and m + n ≡ j + n ≡ 0 (mod k), whence i ≡ j (mod k), a contradiction. Thus, V is a disjoint union of V0, V1, . . . , Vk−1, and by definition each arrow of Γ leads from some Vi to Vi+1 (mod k). Then Γ definitely cannot underlie any synchronizing automaton: for instance, no two paths of the same length ℓ originating in V0 and V1 can terminate at the same vertex, because they end in Vℓ (mod k) and Vℓ+1 (mod k), respectively.


SLIDE 42
  • 8. Markov Chains

Assume that Σ = {a1, a2, . . . , ak}. Each positive stochastic vector π ∈ Rᵏ₊ defines a probability distribution on Σ. Consider a process in which an agent randomly walks on the underlying graph of A, choosing for each move an edge labeled ai with probability p(ai). This is a Markov chain with the transition matrix

S = S(A, π) := p(a1)[a1] + p(a2)[a2] + · · · + p(ak)[ak].
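Assembling S(A, π) is a direct sum of scaled letter matrices. A sketch assuming the four-state example that appears later in the lecture (re-indexed from 0) with p(a) = 1/3, p(b) = 2/3:

```python
import numpy as np

# The four-state example, re-indexed from 0: a fixes 0, 1, 2 and maps 3 -> 0;
# b is the cyclic shift 0 -> 1 -> 2 -> 3 -> 0.
delta = {"a": [0, 1, 2, 0], "b": [1, 2, 3, 0]}
n = 4
p = {"a": 1 / 3, "b": 2 / 3}

def letter_matrix(c):
    """[c]: column i is the characteristic vector of {i . c}."""
    M = np.zeros((n, n))
    for i, j in enumerate(delta[c]):
        M[j, i] = 1.0
    return M

# S(A, pi) = p(a)[a] + p(b)[b]; column i gives the distribution of the
# next state when the walk currently sits at state i.
S = sum(p[c] * letter_matrix(c) for c in delta)
```

Each column of S sums to 1 (S is column-stochastic), and column 3 is the unit vector e0, since both letters send state 3 to state 0.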


slide-48
SLIDE 48
  • 8. Markov Chains

Assume that Σ = {a1, a2, . . . , ak}. Each positive stochastic vector π ∈ R^k_+ defines a probability distribution on Σ. Consider a process in which an agent randomly walks on the underlying graph of A, choosing for each move an edge labeled ai with probability p(ai). This is a Markov chain with the transition matrix

S = S(A, π) := p(a1)[a1] + p(a2)[a2] + · · · + p(ak)[ak].

[Diagram: the 4-state example automaton on the states 1, 2, 3, 4 over the letters a, b, together with the 0-1 matrices [a] and [b] of its two letters.]

If p(a) = 1/3 and p(b) = 2/3, then S = (1/3)[a] + (2/3)[b], a column-stochastic 4 × 4 matrix whose non-zero entries are 1/3, 2/3, and a single 1.
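The construction is easy to experiment with. Below is a minimal sketch; the concrete transition functions are an assumption (one common drawing of the Černý automaton C4, renumbered 0–3, with b the cyclic shift and a agreeing with b on state 0 while fixing the rest), not taken from the slide's picture. It builds the 0-1 matrices [a], [b] and the transition matrix S.

```python
import numpy as np

n = 4  # states, renumbered 0..3

# Assumed drawing of C4: a sends state 0 to 1 and fixes 1, 2, 3; b is the cyclic shift.
a = [1, 1, 2, 3]
b = [1, 2, 3, 0]

def letter_matrix(f):
    """0-1 matrix [c] of a letter: column q carries a single 1 in row f(q)."""
    M = np.zeros((n, n))
    for q in range(n):
        M[f[q], q] = 1.0
    return M

A, B = letter_matrix(a), letter_matrix(b)

# Random walk with p(a) = 1/3, p(b) = 2/3: the transition matrix of the chain.
S = (1/3) * A + (2/3) * B
print(S)  # column-stochastic: every column sums to 1
```

Where a and b move a state to the same place, the two contributions 1/3 and 2/3 merge into a single entry 1, as on the slide.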

slide-49
SLIDE 49
  • 9. Stationary Distributions

If A is synchronizing, then the matrix S = S(A, π) is primitive, since its underlying digraph is primitive. By basic properties of Markov chains, this chain has a stationary distribution α ∈ R^n_+, that is, a unique positive stochastic vector satisfying Sα = α. Indeed, since S is column-stochastic, we have S^T 1_n = 1_n. Thus 1 is an eigenvalue of S^T, but then 1 is also an eigenvalue of S. Hence S has an eigenvector α for the eigenvalue 1. Since S is primitive, the Perron–Frobenius theorem applies, telling us that this eigenvector α is positive and unique up to a positive scalar, so it can be normalized to be stochastic. It is known that for every initial distribution β ∈ R^n_+, the sequence S^t β tends to α as t → +∞ (the ergodic theorem).
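This slide can be checked numerically. The sketch below assumes the same hypothetical C4 drawing as before (b cyclic, a fixing all states but 0): it extracts α as the eigenvector of S for the eigenvalue 1, normalizes it to be stochastic, and verifies the ergodic convergence S^t β → α.

```python
import numpy as np

n = 4
A = np.zeros((n, n)); B = np.zeros((n, n))
for q in range(n):
    A[1 if q == 0 else q, q] = 1.0   # letter a (assumed C4 drawing)
    B[(q + 1) % n, q] = 1.0          # letter b: cyclic shift
S = (1/3) * A + (2/3) * B            # column-stochastic and primitive

# Stationary distribution: eigenvector of S for the eigenvalue 1,
# normalized to be stochastic (positive by Perron-Frobenius).
vals, vecs = np.linalg.eig(S)
alpha = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
alpha /= alpha.sum()
assert np.allclose(S @ alpha, alpha)

# Ergodic theorem: S^t beta -> alpha for any initial distribution beta.
beta = np.eye(n)[0]
assert np.allclose(np.linalg.matrix_power(S, 200) @ beta, alpha)
print(alpha)  # (2/11, 3/11, 3/11, 3/11) for this assumed drawing
```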

slide-57
SLIDE 57
  • 10. Berlinkov’s Result

It turns out that the stationary distribution of the Markov chain with the matrix S = S(A, π) yields a strong extension property.

Theorem (Berlinkov, 2012). Let A = (Q, Σ) be a strongly connected synchronizing automaton with n states and k letters, let π ∈ R^k_+ be a positive stochastic vector, and let α be the stationary distribution of the Markov chain with the transition matrix S(A, π) = p(a1)[a1] + · · · + p(ak)[ak]. Then for each state q ∈ Q, there exists a sequence of words w1, w2, . . . , wd ∈ Σ∗, each of length at most n − 1, such that

⟨[q], α⟩ < ⟨[w1]^T [q], α⟩ < · · · < ⟨[wd · · · w2 w1]^T [q], α⟩ = 1.
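The theorem can be demonstrated by brute force on the running example (again under the assumed C4 drawing, with α = (2/11, 3/11, 3/11, 3/11)): starting from a single state, repeatedly search for a word of length at most n − 1 that strictly increases the α-mass of the preimage, until the preimage is all of Q.

```python
import numpy as np
from itertools import product

n = 4
trans = {'a': [1, 1, 2, 3], 'b': [1, 2, 3, 0]}   # assumed C4 drawing
alpha = np.array([2, 3, 3, 3]) / 11              # its stationary distribution

def preimage(K, word):
    """K . w^-1: the states that reading word drives into K."""
    for c in reversed(word):                     # pull back letter by letter
        K = {q for q in range(n) if trans[c][q] in K}
    return K

def mass(K):
    return sum(alpha[q] for q in K)

q = 0
K, history = {q}, [alpha[q]]
while len(K) < n:
    # shortest word (searched up to length n-1) strictly increasing the mass;
    # Berlinkov's theorem guarantees one exists at every step
    K = next(preimage(K, w)
             for l in range(1, n)
             for w in product('ab', repeat=l)
             if mass(preimage(K, w)) > mass(K) + 1e-12)
    history.append(mass(K))
print(history)  # strictly increasing, ends at 1
```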

slide-62
SLIDE 62
  • 11. Another Comparison

Comparing the three approaches (‘Classic’, Jungers, Berlinkov) row by row:
  • Vector. ‘Classic’: constant (1_n). Jungers: depends on both A and the subset to be extended. Berlinkov: depends on A and π but not on the subset.
  • Increment. ‘Classic’: at least 1/n. Jungers: hard to predict. Berlinkov: at least 1/m.
  • ‘Height’. ‘Classic’: at most n − 1. Jungers: hard to predict. Berlinkov: at most m − 1.
  • |wi|. ‘Classic’: hard to predict. Jungers: at most n. Berlinkov: at most n − 1.

Here m is the lcm of the denominators of the entries of α (if π ∈ Q^k_+, then α ∈ Q^n_+).
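For the random walk used earlier (p(a) = 1/3, p(b) = 2/3, same assumed C4 drawing), α and hence m can be computed exactly over the rationals with the standard library. The sketch solves Sα = α together with the normalization ∑ αi = 1 by Gauss–Jordan elimination over fractions.

```python
from fractions import Fraction as F
from math import lcm

n = 4
a = [1, 1, 2, 3]                 # assumed C4 drawing: a fixes all but state 0
b = [1, 2, 3, 0]                 # b: cyclic shift

# Column-stochastic S with p(a) = 1/3, p(b) = 2/3, over exact rationals.
S = [[F(0)] * n for _ in range(n)]
for q in range(n):
    S[a[q]][q] += F(1, 3)
    S[b[q]][q] += F(2, 3)

# Solve (S - I) alpha = 0; one row of S - I is redundant, so replace the
# last row by the constraint sum(alpha) = 1.
M = [[S[i][j] - (F(1) if i == j else F(0)) for j in range(n)] for i in range(n)]
M[-1] = [F(1)] * n
rhs = [F(0)] * (n - 1) + [F(1)]

# Gauss-Jordan elimination with pivot search.
for col in range(n):
    piv = next(r for r in range(col, n) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(n):
        if r != col and M[r][col] != 0:
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
            rhs[r] -= f * rhs[col]

alpha = [rhs[i] / M[i][i] for i in range(n)]
m = lcm(*(x.denominator for x in alpha))
print(alpha, m)  # alpha = (2/11, 3/11, 3/11, 3/11), so m = 11
```

So for this walk Berlinkov's bounds read: increments at least 1/11 and 'height' at most 10.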

slide-63
SLIDE 63
  • 12. Consequences and Prospects

An immediate application: a new proof of the Černý conjecture for automata with Eulerian digraphs. In this case the matrix S(A, π) is doubly stochastic, whence the uniform distribution (the normalized vector 1_n) is its stationary distribution and d ≤ n − 2.

Steinberg’s result about pseudo-Eulerian automata follows as well; here an automaton A = (Q, Σ) is called pseudo-Eulerian if there is a probability distribution π on Σ such that the matrix S(A, π) is doubly stochastic.

There are also several new results in which a quadratic bound on the reset threshold is achieved; see Mikhail Berlinkov, Marek Szykuła, Algebraic synchronization criterion and computing reset words, Inf. Sci. 369: 718–730 (2016).

There remains one extra degree of freedom: the choice of the probability distribution π. An ‘obvious’ choice: the distribution under which the random walk on A synchronizes in the shortest expected time.

slide-69
SLIDE 69
  • 13. Proof

Recall the setting:
  • A = (Q, Σ), a synchronizing automaton with |Q| = n and Σ = {a1, a2, . . . , ak};
  • π ∈ R^k_+, a stochastic vector (a probability distribution on Σ);
  • α ∈ R^n_+, the stationary distribution of the Markov chain with the transition matrix S = S(A, π) = p(a1)[a1] + · · · + p(ak)[ak].

Take a vector x ∈ R^n with ⟨x, α⟩ = 0 and let v ∈ Σ∗ be a word of minimum length such that ⟨[v]^T x, α⟩ > 0. We claim that:
  • 1. ∑_{u ∈ Σ∗, |u| = r} p(u)⟨[u]^T x, α⟩ = 0 for every r > 0.
  • 2. If u ∈ Σ∗ is a word with |u| < |v|, then ⟨[u]^T x, α⟩ = 0.
  • 3. For i = 1, 2, . . . , let U_i be the subspace spanned by all vectors [u]α with |u| ≤ i − 1. Then |v| ≤ dim U_n − 1 ≤ n − 1.

slide-77
SLIDE 77
  • 14. Claims 1 and 2

Since Sα = α, we have S^r α = α for every positive integer r. Since S^r = ∑_{|u| = r} p(u)[u], we have ∑_{|u| = r} p(u)[u]α = α.

Taking the inner product with x, we obtain

0 = ⟨α, x⟩ = ⟨∑_{|u| = r} p(u)[u]α, x⟩ = ∑_{|u| = r} p(u)⟨[u]α, x⟩ = ∑_{|u| = r} p(u)⟨[u]^T x, α⟩.

This proves claim 1.

By the choice of v, if |u| < |v|, then ⟨[u]^T x, α⟩ ≤ 0. But for each r < |v|, we have ∑_{|u| = r} p(u)⟨[u]^T x, α⟩ = 0 by claim 1. If non-positive summands sum up to 0, each of them must be 0. This proves claim 2.
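Claim 1 is easy to test numerically on the running example (assumed C4 drawing and α as before). Note the composition convention: since [i . w] = [w][i], a word u = c1 c2 · · · cl has matrix [u] = [cl] · · · [c1].

```python
import numpy as np
from itertools import product

n = 4

def mat(f):
    """0-1 matrix of a letter: column q has a single 1 in row f(q)."""
    M = np.zeros((n, n))
    for q in range(n):
        M[f[q], q] = 1.0
    return M

mats = {'a': mat([1, 1, 2, 3]), 'b': mat([1, 2, 3, 0])}  # assumed C4 drawing
p = {'a': 1/3, 'b': 2/3}
alpha = np.array([2, 3, 3, 3]) / 11

# A vector orthogonal to alpha, built as at the end of the proof:
# x = [K] - <[K], alpha>[Q] for K = {0}.
K = np.array([1.0, 0.0, 0.0, 0.0])
x = K - (K @ alpha) * np.ones(n)
assert abs(x @ alpha) < 1e-12

# Claim 1: for each r, the p-weighted sum of <[u]^T x, alpha> over all words
# u of length r vanishes, because sum_{|u|=r} p(u)[u] = S^r and S^r alpha = alpha.
for r in range(1, 5):
    total = 0.0
    for word in product('ab', repeat=r):
        M, pu = np.eye(n), 1.0
        for c in word:
            M = mats[c] @ M          # extend the word on the right: [uc] = [c][u]
            pu *= p[c]
        total += pu * ((M.T @ x) @ alpha)
    assert abs(total) < 1e-10
print("claim 1 verified for r = 1..4")
```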

slide-83
SLIDE 83
  • 15. Claim 3

Recall that U_i is the subspace spanned by all vectors [u]α with |u| ≤ i − 1, for i = 1, 2, . . . . Suppose that |v| ≥ dim U_n. Then claim 2 implies that ⟨[u]^T x, α⟩ = ⟨x, [u]α⟩ = 0 for every word u such that |u| < dim U_n. The chain U_1 ⊆ U_2 ⊆ · · · ⊆ U_n ⊆ · · · starts with the 1-dimensional subspace U_1 spanned by α. Therefore the number of strict inclusions in this chain cannot exceed n − 1, whence U_j = U_{j+1} for some j ≤ n. The subspace U_j is then invariant with respect to all the letters in Σ, whence, in particular, [v]α belongs to U_j. Since ⟨x, [u]α⟩ = 0 for every u with |u| < dim U_n = dim U_j, we conclude that ⟨x, g⟩ = 0 for each g ∈ U_j. Since [v]α belongs to U_j, we must have ⟨x, [v]α⟩ = ⟨[v]^T x, α⟩ = 0, and this contradicts the condition ⟨[v]^T x, α⟩ > 0. This proves claim 3.

slide-91
SLIDE 91
  • 16. End of the Proof

To complete the proof of Berlinkov’s theorem, it suffices to show that for every proper non-empty subset K ⊂ Q there exists a word v ∈ Σ∗ of length at most n − 1 such that ⟨[v]^T [K], α⟩ > ⟨[K], α⟩, that is, ⟨[v]^T [K], α⟩ − ⟨[K], α⟩ > 0.

Let x := [K] − ⟨[K], α⟩[Q]. Then ⟨x, α⟩ = 0 because ⟨[Q], α⟩ = 1. Since A is synchronizing and strongly connected, there exists a word w ∈ Σ∗ such that K . w⁻¹ = Q, that is, [w]^T [K] = [Q]. As [w]^T [Q] = [Q], we have [w]^T x = (1 − ⟨[K], α⟩)[Q], whence ⟨[w]^T x, α⟩ = 1 − ⟨[K], α⟩ > 0, since α is a positive stochastic vector. Now by claim 3, if v ∈ Σ∗ is a word of minimum length such that ⟨[v]^T x, α⟩ > 0, then the length of v is at most n − 1. Finally,

[v]^T x = [v]^T ([K] − ⟨[K], α⟩[Q]) = [v]^T [K] − ⟨[K], α⟩[Q],

whence ⟨[v]^T [K], α⟩ − ⟨[K], α⟩ = ⟨[v]^T [K] − ⟨[K], α⟩[Q], α⟩ = ⟨[v]^T x, α⟩ > 0, as required.
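The extension step just proved can be checked exhaustively on the running example (assumed C4 drawing and α as before): every proper non-empty K ⊂ Q should admit a word of length at most n − 1 = 3 that strictly increases the α-mass of its preimage.

```python
import numpy as np
from itertools import combinations, product

n = 4
trans = {'a': [1, 1, 2, 3], 'b': [1, 2, 3, 0]}   # assumed C4 drawing
alpha = np.array([2, 3, 3, 3]) / 11

def pre(K, word):
    """K . w^-1, computed letter by letter from the right."""
    for c in reversed(word):
        K = {q for q in range(n) if trans[c][q] in K}
    return K

def mass(K):
    return sum(alpha[q] for q in K)

worst = 0  # longest extension word needed over all proper non-empty subsets
for size in range(1, n):
    for K in combinations(range(n), size):
        K = set(K)
        # shortest extending word; the proof guarantees one of length <= n-1
        v = next(w for l in range(1, n)
                   for w in product('ab', repeat=l)
                   if mass(pre(K, w)) > mass(K) + 1e-12)
        worst = max(worst, len(v))
print("longest extension word needed:", worst)  # within the bound n - 1 = 3
```

For this automaton the bound n − 1 is actually attained: the subsets {3} and {0, 1, 2} need a word of length 3.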


slide-95
SLIDE 95
  • 16. End of the Proof

To complete the proof of Berlinkov’s theorem, it suffices to show that for every proper non-empty subset K ⊂ Q, there exists a word v ∈ Σ∗ of length at most n − 1 such that [v]T [K], α > [K], α, that is, [v]T [K], α − [K], α > 0. Let x := [K] − [K], α[Q]. Then x, α = 0 because [Q], α = 1. Since A is synchronizing and strongly connected, there exists a word w ∈ Σ∗ such that K . w−1 = Q, that is, [w]T [K] = [Q]. As [w]T [Q] = [Q], we have [w]Tx = (1 − [K], α)[Q], whence w]T x, α = 1 − [K], α > 0 since α is a positive stochastic

  • vector. Now by claim 3, if v ∈ Σ∗ is a word of minimum length

such that [v]T x, α > 0, then the length of v is at most n − 1. Finally, [v]Tx = [v]T [K] − [K], α[Q]

  • = [v]T[K] − [K], α[Q],

whence [v]T [K], α − [K], α = [v]T [K] − [K], α[Q], α = [v]T x, α > 0, as required.

Mikhail Volkov Synchronizing Finite Automata

slide-96
SLIDE 96
  • 16. End of the Proof

To complete the proof of Berlinkov’s theorem, it suffices to show that for every proper non-empty subset K ⊂ Q, there exists a word v ∈ Σ∗ of length at most n − 1 such that [v]T [K], α > [K], α, that is, [v]T [K], α − [K], α > 0. Let x := [K] − [K], α[Q]. Then x, α = 0 because [Q], α = 1. Since A is synchronizing and strongly connected, there exists a word w ∈ Σ∗ such that K . w−1 = Q, that is, [w]T [K] = [Q]. As [w]T [Q] = [Q], we have [w]Tx = (1 − [K], α)[Q], whence w]T x, α = 1 − [K], α > 0 since α is a positive stochastic

  • vector. Now by claim 3, if v ∈ Σ∗ is a word of minimum length

such that [v]T x, α > 0, then the length of v is at most n − 1. Finally, [v]Tx = [v]T [K] − [K], α[Q]

  • = [v]T[K] − [K], α[Q],

whence [v]T [K], α − [K], α = [v]T [K] − [K], α[Q], α = [v]T x, α > 0, as required.

Mikhail Volkov Synchronizing Finite Automata

slide-97
SLIDE 97
  • 16. End of the Proof

To complete the proof of Berlinkov’s theorem, it suffices to show that for every proper non-empty subset K ⊂ Q, there exists a word v ∈ Σ∗ of length at most n − 1 such that [v]T [K], α > [K], α, that is, [v]T [K], α − [K], α > 0. Let x := [K] − [K], α[Q]. Then x, α = 0 because [Q], α = 1. Since A is synchronizing and strongly connected, there exists a word w ∈ Σ∗ such that K . w−1 = Q, that is, [w]T [K] = [Q]. As [w]T [Q] = [Q], we have [w]Tx = (1 − [K], α)[Q], whence w]T x, α = 1 − [K], α > 0 since α is a positive stochastic

  • vector. Now by claim 3, if v ∈ Σ∗ is a word of minimum length

such that [v]T x, α > 0, then the length of v is at most n − 1. Finally, [v]Tx = [v]T [K] − [K], α[Q]

  • = [v]T[K] − [K], α[Q],

whence [v]T [K], α − [K], α = [v]T [K] − [K], α[Q], α = [v]T x, α > 0, as required.

Mikhail Volkov Synchronizing Finite Automata