SLIDE 1

INF2080 Repetition

Daniel Lupp

Universitetet i Oslo

9th March 2018

Department of Informatics, University of Oslo

SLIDE 2

Today

wrap-up of last week's Friday lecture
repetition of the course so far

SLIDE 3

Wrap-up: Reducibility

Idea: Convert a problem A into a second problem B in such a way that a solution for B gives us a solution for A.

Example: Last week's bad mathematician joke: A mathematician and an engineer go camping. After setting up camp, they go back to their car to get their pots, go to the river to fetch water, and put the water on the fire to boil. The next morning, they both need to boil water again. The engineer fills up one pot with water and begins heating it. The mathematician instead reduces the current problem ("boil water after a night of camping") to a problem with a known solution: he (1) brings the pot back to the car and (2) goes back to camp. The problem has now been reduced to the problem solved the day before: how to get boiling water when the pot is in the car.

SLIDE 4

Wrap-up: Reducibility

We used this to show various decidability and undecidability results, e.g.:

HALT_TM = {⟨M, w⟩ | M is a TM that halts on input w} is undecidable (reduction from A_TM)
E_TM = {⟨M⟩ | M is a TM that accepts no input} is undecidable (reduction from A_TM)
REGULAR_TM = {⟨M⟩ | M is a TM that accepts a regular language} is undecidable (reduction from A_TM)
Rice's theorem: checking any nontrivial property of the language of a Turing machine (whether it is regular, context-free, etc.) is undecidable!
...
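
A minimal sketch in Python of how the HALT_TM reduction is used; the helpers halts and simulate are hypothetical stand-ins for a HALT_TM decider and a universal TM:

    def decide_A_TM(M, w, halts, simulate):
        # If HALT_TM were decidable, A_TM would be too; contrapositive:
        # since A_TM is undecidable, so is HALT_TM.
        if not halts(M, w):        # M loops on w, hence does not accept w
            return False
        return simulate(M, w)      # safe to simulate: M is guaranteed to halt on w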

SLIDE 5

Wrap-up: Reducibility

We formalized reducibility as follows:

Definition: Language A is mapping reducible to language B, written A ≤m B, if there exists a computable function f : Σ* → Σ* such that for every w,

w ∈ A ⟺ f(w) ∈ B.

Recall: a function f : Σ* → Σ* is computable if there exists a Turing machine M that, for every input w, halts with just f(w) on its tape.
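
A toy illustration of the definition (the languages and the map f are made up for illustration): with A = {w | |w| is even} and B = {"0"}, the computable function below satisfies w ∈ A ⟺ f(w) ∈ B, so A ≤m B.

    def f(w: str) -> str:
        # computable: realizable by an always-halting Turing machine
        return "0" if len(w) % 2 == 0 else "1"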

SLIDE 6

Wrap-up: Reducibility

Definition: Language A is mapping reducible to language B, written A ≤m B, if there exists a computable function f : Σ* → Σ* such that for every w, w ∈ A ⟺ f(w) ∈ B.

By this definition: A ≤m B ⟺ Ā ≤m B̄, where Ā and B̄ denote the complements of A and B; the same function f witnesses both reductions (a useful tool we will use soon).

SLIDE 7

Wrap-up: Reducibility

Theorem: If A ≤m B and B is decidable [Turing-recognizable], then A is decidable [Turing-recognizable].

Theorem: If A ≤m B and A is undecidable [non-Turing-recognizable], then B is undecidable [non-Turing-recognizable].
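
The first theorem is just composition; a one-line Python sketch, where decide_B and the reduction f are assumed given:

    def decide_A(w, f, decide_B):
        # a decider for B composed with the computable reduction f decides A
        return decide_B(f(w))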

SLIDE 8

Wrap-up: Reducibility

We can use this to show:

Theorem: The language EQ_TM = {⟨M1, M2⟩ | M1, M2 are TMs with L(M1) = L(M2)} is neither Turing-recognizable nor co-Turing-recognizable.

Proof: Let's first show that EQ_TM is not Turing-recognizable. We show a mapping reduction from A_TM to the complement of EQ_TM. By the complementation fact above, this is the same as showing that the complement of A_TM is mapping reducible to EQ_TM. The computable function is described by the following Turing machine:

F = on input ⟨M, w⟩:
1. Construct the following two machines M1 and M2:
   M1: on any input, reject.
   M2: on any input, run M on w. If it accepts, accept.
2. Output ⟨M1, M2⟩.

SLIDE 9

Wrap-up: Reducibility

Theorem: The language EQ_TM = {⟨M1, M2⟩ | M1, M2 are TMs with L(M1) = L(M2)} is neither Turing-recognizable nor co-Turing-recognizable.

Proof (continued): F = on input ⟨M, w⟩:
1. Construct the following two machines M1 and M2:
   M1: on any input, reject.
   M2: on any input, run M on w. If it accepts, accept.
2. Output ⟨M1, M2⟩.

L(M1) = ∅ always.
L(M2) = ∅ if M does not accept w.
L(M2) = Σ* if M accepts w.
Thus L(M1) ≠ L(M2) iff M accepts w, so ⟨M, w⟩ ∈ A_TM ⟺ F(⟨M, w⟩) ∉ EQ_TM. This shows that A_TM is mapping reducible to the complement of EQ_TM, i.e., the complement of A_TM ≤m EQ_TM; since the complement of A_TM is not Turing-recognizable, neither is EQ_TM.
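
A sketch of F as Python closures; simulate is a hypothetical universal-TM helper (and, like M2 itself, it may run forever when M loops on w):

    def F(M, w, simulate):
        def M1(x):
            return False              # rejects everything: L(M1) = ∅
        def M2(x):
            return simulate(M, w)     # ignores x: L(M2) is ∅ or Σ*
        return M1, M2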

SLIDE 10

Wrap-up: Reducibility

Theorem: The language EQ_TM = {⟨M1, M2⟩ | M1, M2 are TMs with L(M1) = L(M2)} is neither Turing-recognizable nor co-Turing-recognizable.

Proof (continued): Next we show that EQ_TM is not co-Turing-recognizable. We show a mapping reduction from A_TM to EQ_TM, i.e., A_TM ≤m EQ_TM. This is the same as showing that the complement of A_TM is mapping reducible to the complement of EQ_TM. The computable function is described by the following Turing machine:

F = on input ⟨M, w⟩:
1. Construct the following two machines M1 and M2:
   M1: on any input, accept.
   M2: on any input, run M on w. If it accepts, accept.
2. Output ⟨M1, M2⟩.

SLIDE 11

Wrap-up: Reducibility

Theorem: The language EQ_TM = {⟨M1, M2⟩ | M1, M2 are TMs with L(M1) = L(M2)} is neither Turing-recognizable nor co-Turing-recognizable.

Proof (continued): F = on input ⟨M, w⟩:
1. Construct the following two machines M1 and M2:
   M1: on any input, accept.
   M2: on any input, run M on w. If it accepts, accept.
2. Output ⟨M1, M2⟩.

L(M1) = Σ* always.
L(M2) = ∅ if M does not accept w.
L(M2) = Σ* if M accepts w.
Thus L(M1) = L(M2) iff M accepts w, and A_TM ≤m EQ_TM. Hence the complement of A_TM is mapping reducible to the complement of EQ_TM; since the complement of A_TM is not Turing-recognizable, EQ_TM is not co-Turing-recognizable.

SLIDE 12

Wrap-up: Reducibility

Let's consider the implications:

We have seen that Turing machines capture the expressivity of any computational model that has unlimited access to infinite memory and that is allowed to do finite work per step.
There exist languages that are not algorithmically solvable, i.e., membership and non-membership cannot both be determined after a finite number of steps (undecidable, e.g., HALT_TM).
There exist languages that are not recognizable, i.e., no Turing machine can confirm membership after finitely many steps (non-Turing-recognizable, e.g., the complement of A_TM).
There exist languages that are neither recognizable nor co-recognizable, i.e., no such computational model can confirm membership or non-membership! (e.g., EQ_TM)

SLIDE 13

Regular Languages

Deterministic Finite Automaton (DFA): an automaton with a finite number of states where for every state and input symbol there is precisely one transition leading to another state.

[Figure: a small example DFA with start state 1.]

DFAs contain a start state and possibly multiple accepting states. If, after starting in the start state, parsing the input and following the corresponding transitions, the automaton ends in an accept state, the input is accepted.
The set of inputs accepted by a DFA is called a regular language.
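
A minimal DFA simulator sketch in Python; the example automaton, which accepts binary strings with an even number of 1s, is made up for illustration:

    def run_dfa(w, delta, start, accepting):
        state = start
        for symbol in w:
            state = delta[(state, symbol)]  # exactly one transition per (state, symbol)
        return state in accepting

    delta = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}
    print(run_dfa("1001", delta, "even", {"even"}))  # True: two 1s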

SLIDE 14

Regular Languages

We can add nondeterminism: given a state and a current input symbol, there may be multiple possible following states.

[Figure: a small example NFA over the alphabet {0, 1}.]

NFAs accept the same languages as DFAs, i.e., a language is regular iff some NFA accepts it.
Proof idea: Given an NFA N with state set Q, we define a DFA D with state set P(Q), where a state S ∈ P(Q) of D represents that N could currently be in any of the states in S.
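
A sketch of one step of this subset construction, assuming delta maps (state, symbol) to a set of successor states (ε-moves omitted for brevity):

    def dfa_successor(S, symbol, delta):
        # the DFA state reached from set-state S on `symbol` is the union
        # of the NFA successors of all states in S
        out = set()
        for q in S:
            out |= delta.get((q, symbol), set())
        return frozenset(out)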

SLIDE 15

Regular Languages

Another way of encoding regular languages is regular expressions: strings constructed from symbols of the alphabet Σ and the operations Kleene star (*), union (∪), and concatenation.
Order of operations: Kleene star binds more strongly than concatenation, which binds more strongly than union. Example: 0 ∪ 10* = (0) ∪ (1(0*)) → remember to use parentheses when necessary!
The expressivity of regular expressions is precisely that of DFAs/NFAs. To show this, we introduced GNFAs (generalized nondeterministic finite automata): NFAs with regular expressions as transition labels instead of single symbols.

[Figure: a small GNFA with a transition labeled 01*.]

SLIDE 16

Regular Languages

Proof idea that RE = DFA: Consider the DFA as a GNFA. Then iteratively remove states, and encode the paths through each removed state in the labels of the other edges:

[Figure: a state X with an incoming edge labeled R2, a self-loop labeled R3, an outgoing edge labeled R4, and a parallel direct edge labeled R1 is removed; the direct edge is relabeled R1 ∪ (R2 R3* R4).]

SLIDE 17

Pumping Lemma - Regular Languages

Lemma (Pumping Lemma): If A is a regular language, then there is a number p, called the pumping length, such that if w is a word in A of length ≥ p, then w can be divided into three parts, w = xyz, satisfying:
1. x y^i z ∈ A for every i ≥ 0,
2. |y| > 0,
3. |xy| ≤ p.

Proof idea: use the fact that regular languages only have finite memory.
An automaton's memory is represented by its states, i.e., if a word is longer than the number of states (= the available memory), some state must be repeated on the accepting path → a cycle!
The accepting path can then be divided into three parts: x (leading to the cycle), y (the cycle), and z (the path from the cycle to an accept state).
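
A sketch of how such a decomposition is extracted from an accepting run; delta is a DFA transition dict, and the first repeated state on the run marks the cycle:

    def pump_split(w, delta, start):
        path = [start]
        for symbol in w:
            path.append(delta[(path[-1], symbol)])
        seen = {}
        for i, state in enumerate(path):
            if state in seen:                    # repeated state: a cycle
                j = seen[state]
                return w[:j], w[j:i], w[i:]      # x, y, z with |y| > 0
            seen[state] = i
        return None  # w is shorter than the number of states: no repeat forced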

SLIDE 18

Pumping Lemma - Regular Languages

A useful tool for showing that a language is nonregular. Example: {a^n b^n | n ≥ 0}.
NOT useful for showing that a language is regular: {c a^n b^n | n ≥ 0} ∪ {c^k w | k ≠ 1, w ∈ Σ* does not start with c} is a language that is nonregular, yet every word in it can be pumped according to the pumping lemma! → sometimes other tools are required (see, e.g., oblig 2)

SLIDE 19

Context-free languages

We defined context-free grammars: essentially, a set of rules of the form A → w, where A is a variable and w is a string of variables and terminals.
A grammar G generates a word w if, starting with the start variable S, the word w can be obtained by sequential application of rules in G.
A word w is ambiguously generated if there are two or more leftmost derivations of w.

[Figure: two parse trees for the word a + a × a; the first intuitively corresponds to a + (a × a), the second to (a + a) × a.]
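
As a concrete instance, the grammar S → aSb | ε generates {a^n b^n | n ≥ 0}; a short Python membership check mirroring its two rules:

    def in_anbn(w):
        if w == "":
            return True                    # rule S -> ε
        return w[:1] == "a" and w[-1:] == "b" and in_anbn(w[1:-1])  # rule S -> aSb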

SLIDE 20

Context-free languages

Context-free languages are accepted by pushdown automata: an NFA with an additional stack. In each transition, we are allowed to pop a symbol off and/or push a symbol onto the stack.

[Figure: a PDA for {0^n 1^n | n ≥ 0} with states q1-q4; transitions q1 → q2 on ε, ε → $ (mark the stack bottom), a loop at q2 on 0, ε → 0 (push the 0s), q2 → q3 and a loop at q3 on 1, 0 → ε (pop one 0 per 1), and q3 → q4 on ε, $ → ε (accept on the bottom marker).]
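
A deterministic Python sketch of the same stack discipline, with an explicit stack standing in for the PDA's:

    def pda_0n1n(w):
        stack = ["$"]                      # ε, ε -> $ : mark the stack bottom
        i = 0
        while i < len(w) and w[i] == "0":
            stack.append("0"); i += 1      # 0, ε -> 0 : push a 0
        while i < len(w) and w[i] == "1":
            if stack[-1] != "0":
                return False
            stack.pop(); i += 1            # 1, 0 -> ε : pop one 0 per 1
        return i == len(w) and stack == ["$"]  # ε, $ -> ε : accept on bottom marker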

SLIDE 21

Context-free languages

Converting a CFG to a PDA: use the stack to store the intermediate strings of a derivation. The PDA nondeterministically guesses which rule to apply next.
Converting a PDA to a CFG: much more involved. General idea: for each pair of states p, q in the PDA, add a variable Apq to G that generates all strings that take the PDA from p to q with equal stacks (i.e., the stack when arriving at p is equal to the stack when arriving at q). Add certain rules according to the transition function δ.
So CFG = PDA.
Noteworthy: deterministic PDAs (DPDAs) are strictly weaker than PDAs, though we haven't covered this in the lecture.

SLIDE 22

Context-free languages

Every CFG can be rewritten into a grammar in Chomsky normal form:

Definition: A grammar is in Chomsky normal form if every rule is of the form
A → BC
A → a
where a is any terminal, A is any variable, and B, C are any variables that are not the start variable. In addition, the rule S → ε is permitted.

SLIDE 23

Pumping Lemma - CFLs

Lemma (Pumping Lemma for CFLs): For every context-free language A there exists a number p (called the pumping length) such that, if s is a word in A of length ≥ p, then s can be divided into five parts, s = uvxyz, satisfying the following conditions:
1. u v^i x y^i z ∈ A for all i ≥ 0,
2. |vy| > 0,
3. |vxy| ≤ p.

Similar to regular languages, we exploit the limited memory of CFLs: if a word is "long enough", the smallest parse tree will contain two occurrences of the same variable along some root-to-leaf path.

SLIDES 24-27

Pumping Lemma - CFLs

[Figures: a parse tree T for s in which a variable R repeats along a root-to-leaf path splits s into u v x y z. Replacing the subtree under the upper R by the subtree under the lower R yields a parse tree for u x z = u v^0 x y^0 z; nesting the upper R-subtree into itself yields parse trees for u v^2 x y^2 z, and so on. All of these are valid parse trees in G.]

SLIDE 28

Pumping Lemma - CFLs

Once again, a useful tool for determining that a language is not context-free.
However, just like in the regular case, there exist languages that are not context-free but can still be pumped (analogous to the regular case).
Thus, we have so far seen RL ⊊ CFL, and that there exist non-context-free languages.

SLIDE 29

Turing Machines

We defined Turing machines: a finite state machine with access to an infinite tape, modelled by a read/write head that can move left or right over the tape.
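
A minimal single-tape Turing machine simulator sketch; delta maps (state, symbol) to (new state, written symbol, move), and the loop may run forever, exactly like a real TM:

    def run_tm(w, delta, start, accept, reject, blank="_"):
        tape, state, head = list(w) or [blank], start, 0
        while state not in (accept, reject):
            state, tape[head], move = delta[(state, tape[head])]
            head += 1 if move == "R" else -1
            head = max(head, 0)            # tape is bounded on the left...
            if head == len(tape):
                tape.append(blank)         # ...and infinite to the right
        return state == accept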

SLIDE 30

Turing Machines

Each of the computational models we had seen so far is a special case of a Turing machine.
Different description levels of Turing machines:
high-level ("algorithmic" description, no fine-grained detail on tape operations),
low-level (description of how the head operates on the tape),
implementation level (formal definition of the Turing machine).
It is important to remember how high-level operations can be implemented by tape manipulation; however, formal definitions of Turing machines can be cumbersome.

SLIDE 31

Turing Machines

Turing machines are a bit different from the other automata.
DFA/PDA:
could only read the input once (and never move backwards over the input)
would only accept after having read the entire input (reject if no computational branch accepts)
either finite memory (DFA) or restricted access to memory (PDA)
TM:
can move left and right across its tape
if it enters the accept/reject state, it immediately stops computing
unrestricted access to infinite memory

SLIDE 32

Turing Machines

A language accepted by a Turing machine is called Turing-recognizable. If the machine halts on every input, then the language it recognizes is called decidable.

SLIDE 33

Turing Machines

We have looked at Turing machine variants and seen that they are all equivalent:
the LRS Turing machine (the head can move left, right, or stay put)
the multitape Turing machine (multiple tapes, multiple heads)
the nondeterministic Turing machine
the enumerator
the NFA with two stacks
...
All computational models with unlimited access to infinite memory that can perform finite work in one step are equivalent to a Turing machine!

SLIDE 34

Church-Turing Thesis

Church and Turing independently formalized the notion of an algorithm.
Previous, intuitive notion: a method according to which, after a finite number of operations, an answer is given (paraphrased; many formulations exist).
Formal: an algorithm is a decider, i.e., a Turing machine that halts on all inputs.
Church-Turing thesis: every intuitive definition of algorithm can be described by a decider.

SLIDE 35

Decidability

We considered acceptance, emptiness, and equivalence problems for computational models, e.g.:

A_TM = {⟨M, w⟩ | M is a Turing machine that accepts w}

We showed various decidability/undecidability results for languages (✓ = decidable, ✗ = undecidable):

                x ∈ L    L = ∅    L = Σ*    L = K
regular           ✓        ✓        ✓         ✓
CFL               ✓        ✓        ✗         ✗
LBA               ✓        ✗        ✗         ✗
decidable         ✓        ✗        ✗         ✗
Turing-rec.       ✗        ✗        ✗         ✗

SLIDE 36

Undecidability

We considered the halting problem: HALT_TM = {⟨M, w⟩ | M is a TM that halts on input w}.
HALT_TM is undecidable. Thus, it is algorithmically unsolvable to determine whether a given program will terminate!
We saw PCP (the Post correspondence problem) yesterday: given a set of dominoes, does there exist a match? Also undecidable.
→ decidability relates to more things than just Turing machines!
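
A brute-force PCP search sketch; since PCP is undecidable, such a search is only a semi-decision procedure, and the bound max_len is an artificial cutoff added for illustration. The domino set below is the standard textbook example, which has a match of length 5:

    from itertools import product

    def pcp_match(dominoes, max_len=6):
        for n in range(1, max_len + 1):
            for seq in product(range(len(dominoes)), repeat=n):
                top = "".join(dominoes[i][0] for i in seq)
                bot = "".join(dominoes[i][1] for i in seq)
                if top == bot:
                    return [dominoes[i] for i in seq]
        return None  # no match up to max_len (proves nothing in general!)

    dominoes = [("b", "ca"), ("a", "ab"), ("ca", "a"), ("abc", "c")]
    print(pcp_match(dominoes))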

SLIDE 37

Wrap-up

Connecting Chomsky and Turing, the Chomsky hierarchy:
Type-0: recursively enumerable, i.e., Turing-recognizable, languages.
Type-1: context-sensitive languages.
Type-2: context-free languages.
Type-3: regular languages.
We haven't gone through Type-1 (extra lecture at the end of the semester, if desired); however, we have seen the computational model that accepts them, linear bounded automata (LBAs), and seen that the languages they accept are decidable.

SLIDE 38

What's next?

Complexity! Not so much about decidability vs. undecidability: most of what we'll consider will be decidable, i.e., algorithmically solvable.
...but how hard are these problems? How can they be compared with one another?
Related to reducibility and computable functions.
Highly relevant for anything within computer science, be it crypto/security, programming, or theoretical work (AI, databases).