CS453 Lecture: Top-Down Predictive Parsers (Augmenting the grammar with End of File)



  1. Plan for Today
     - Ambiguous Grammars
     - Disambiguating ambiguous grammars
     - Predictive parsing
     - FIRST and FOLLOW sets
     - Predictive Parsing table

     Ambiguous Grammars
     Ambiguous grammar: >1 parse tree for 1 sentence. Expression grammar:
       E → E * E
       E → E + E
       E → E - E
       E → ( E )
       E → ID
       E → NUM
     The string 42 + 7 * 6 has two parse trees: parse tree 1 groups 7 * 6 under the +,
     parse tree 2 groups 42 + 7 under the *. What about 42 - 7 - 6?

     Goal: disambiguate the grammar
     Cause: the grammar did not specify the precedence nor the associativity of the
     operators +, -, *. Two options:
     - Keep the ambiguous grammar, but add extra directives to the parser so that only
       one tree is formed (see PA0.cup for the Simple Expression Language).
     - Rewrite the grammar, making the precedence and associativity explicit in the
       grammar.

     Unambiguous grammar for simple expressions
       E → E + T | E - T | T
       T → T * F | F
       F → ( E ) | ID | NUM
     Parse tree for 42 + 7 * 6: the + sits at the root and 7 * 6 is built inside a
     single T.
     How is the precedence encoded? How is the associativity encoded?
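A quick worked answer to those two questions (not spelled out on the slides, but it follows directly from the unambiguous grammar above): only the left operand of + and - can expand to another + or -, so 42 - 7 - 6 has exactly one derivation,

  E ⇒ E - T ⇒ E - T - T ⇒* 42 - 7 - 6

where the leftmost E - T covers 42 - 7, i.e. the grouping is (42 - 7) - 6. Left recursion in E therefore encodes left associativity. Precedence is encoded by the level at which an operator is introduced: * appears one level lower (in T), so in 42 + 7 * 6 the phrase 7 * 6 must be reduced to a single T before it can become an operand of +, giving * higher precedence than + and -.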

  2. Predictive Parsing
     Predictive parsing, such as recursive descent parsing, creates the parse tree
     TOP DOWN, starting at the start symbol.
     - The grammar defines the syntactically valid strings; the parser recognizes them
       (same as reg. exp. and scanner).
     - For each non-terminal N there is a method recognizing the strings that can be
       produced by N, with one (case) clause for each production.
     This worked great for a slightly changed version of our example from last lecture,
     because each clause could be uniquely identified by looking ahead one token:
       start     → stmts EOF
       stmts     → ε | stmt stmts
       stmt      → ifStmt | whileStmt | ID = NUM
       ifStmt    → IF id { stmts }
       whileStmt → WHILE id { stmts }

     Augmenting the grammar with End of File
     To deal with end-of-file we augment the grammar with an end-of-file symbol ($) and
     create a new start symbol: S' → S $. In the grammar above, start → stmts EOF is
     this augmentation.
     Let's predictively build the parse tree for: if t { while b { x = 6 } } $
     (A recursive-descent sketch of this grammar follows this slide group.)

     First
     Given a phrase γ of terminals and non-terminals (a rhs of a production), FIRST(γ)
     is the set of all terminals that can begin a string derived from γ.
     FIRST(T * F) = ?   FIRST(F) = ?
     FIRST(XYZ) = FIRST(X)?  NO! X could produce ε and then FIRST(Y) comes into play,
     so we must keep track of which non-terminals are NULLABLE.

     When Predictive Parsing works, when it does not
     What about our expression grammar?
       E → E + T | E - T | T
       T → T * F | F
       F → ( E ) | ID | NUM
     The E method cannot decide, looking one token ahead, whether to predict E + T,
     E - T, or T. Same problem for T. Predictive parsing works for grammars where the
     first terminal symbol of each sub-expression provides enough information to decide
     which production to use.
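Below is a minimal recursive-descent sketch of the augmented statement grammar, as referenced on the Predictive Parsing slide above. The token names (IF, WHILE, ID, NUM, ASSIGN, LBRACE, RBRACE, EOF), the list-of-strings token stream, and the class and method names are illustrative assumptions, not part of the course code.

import java.util.*;

// Minimal recursive-descent parser for the augmented statement grammar:
//   start -> stmts EOF        stmts -> epsilon | stmt stmts
//   stmt  -> ifStmt | whileStmt | ID = NUM
//   ifStmt -> IF id { stmts }   whileStmt -> WHILE id { stmts }
// One method per non-terminal, one case per production, one token of lookahead.
public class PredictiveParser {
    private final List<String> tokens;   // token names only, e.g. "IF", "ID", "LBRACE", ...
    private int pos = 0;

    PredictiveParser(List<String> tokens) { this.tokens = tokens; }

    private String lookahead() { return tokens.get(pos); }
    private void eat(String expected) {   // match one terminal
        if (!lookahead().equals(expected))
            throw new RuntimeException("expected " + expected + " but saw " + lookahead());
        pos++;
    }

    void start() { stmts(); eat("EOF"); }                    // start -> stmts EOF

    void stmts() {                                           // stmts -> epsilon | stmt stmts
        String t = lookahead();
        if (t.equals("IF") || t.equals("WHILE") || t.equals("ID")) { stmt(); stmts(); }
        // otherwise predict epsilon: FOLLOW(stmts) = { RBRACE, EOF }, so just return
    }

    void stmt() {                             // each alternative starts with a distinct token
        switch (lookahead()) {
            case "IF":    ifStmt();                             break;
            case "WHILE": whileStmt();                          break;
            case "ID":    eat("ID"); eat("ASSIGN"); eat("NUM"); break;   // ID = NUM
            default: throw new RuntimeException("unexpected token " + lookahead());
        }
    }

    void ifStmt()    { eat("IF");    eat("ID"); eat("LBRACE"); stmts(); eat("RBRACE"); }
    void whileStmt() { eat("WHILE"); eat("ID"); eat("LBRACE"); stmts(); eat("RBRACE"); }

    public static void main(String[] args) {
        // Token stream for:  if t { while b { x = 6 } } $
        List<String> input = Arrays.asList("IF", "ID", "LBRACE", "WHILE", "ID", "LBRACE",
                                           "ID", "ASSIGN", "NUM", "RBRACE", "RBRACE", "EOF");
        new PredictiveParser(input).start();
        System.out.println("accepted");
    }
}

Each method corresponds to one non-terminal, and each case in stmt() is selected by a single token of lookahead, which is exactly the property the slide describes as "uniquely identified by looking ahead one token".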

  3. Follow
     It also turns out to be useful to determine which terminals can directly follow a
     non-terminal X (to decide that parsing X is finished). A terminal t is in FOLLOW(X)
     if there is any derivation containing Xt. This can occur if the derivation contains
     XYZt and Y and Z are nullable.

     FIRST and FOLLOW sets
     NULLABLE
     - X is a nonterminal; nullable(X) is true if X can derive the empty string.
     FIRST
     - FIRST(z) = {z}, where z is a terminal.
     - FIRST(X) = union of all FIRST(rhs_i), where X is a nonterminal and X → rhs_i.
     - FIRST(rhs_i) = union of all FIRST(sym) for symbols on the rhs up to and including
       the first non-nullable one.
     FOLLOW(Y), only relevant when Y is a nonterminal
     - Look for Y in the rhs of rules (lhs → rhs) and union all FIRST sets for symbols
       after Y, up to and including the first non-nullable one.
     - If all symbols after Y are nullable, then also union in FOLLOW(lhs).

     Constructive Definition of nullable, first and follow
     For each terminal t, FIRST(t) = {t}.
     Another transitive closure algorithm: keep doing STEP until nothing changes.
     STEP: for each production X → Y1 Y2 … Yk
       if Y1 … Yk nullable (or k = 0): nullable(X) = true
       for each i from 1 to k, each j from i+1 to k:
         1: if Y1 … Yi-1 nullable (or i = 1):     FIRST(X)   += FIRST(Yi)    // +: union
         2: if Yi+1 … Yk nullable (or i = k):     FOLLOW(Yi) += FOLLOW(X)
         3: if Yi+1 … Yj-1 nullable (or i+1 = j): FOLLOW(Yi) += FIRST(Yj)
     We can compute nullable, then FIRST, and then FOLLOW.
     (A code sketch of this closure follows this slide group.)

     Class Exercise
     Compute nullable, FIRST and FOLLOW for
       Z → d | X Y Z
       X → a | Y
       Y → c | ε
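Below is a sketch of the transitive-closure STEP above, applied to the class-exercise grammar. The encoding of productions as string arrays and the class name are assumptions; the three update rules are taken directly from the slide.

import java.util.*;

// Fixed-point (transitive closure) computation of nullable, FIRST and FOLLOW for
// the class-exercise grammar: Z -> d | X Y Z,  X -> a | Y,  Y -> c | epsilon.
// A production is stored as { lhs, rhs-symbol, ... }; an entry of length 1 encodes
// an epsilon production.
public class FirstFollow {
    static String[][] prods = {
        {"Z", "d"}, {"Z", "X", "Y", "Z"},
        {"X", "a"}, {"X", "Y"},
        {"Y", "c"}, {"Y"}
    };
    static Set<String> nonterms = new HashSet<>(Arrays.asList("Z", "X", "Y"));

    public static void main(String[] args) {
        Set<String> nullable = new HashSet<>();
        Map<String, Set<String>> first = new HashMap<>(), follow = new HashMap<>();
        for (String n : nonterms) { first.put(n, new HashSet<>()); follow.put(n, new HashSet<>()); }

        boolean changed = true;
        while (changed) {                         // keep doing STEP until nothing changes
            changed = false;
            for (String[] p : prods) {
                String X = p[0];
                int k = p.length - 1;             // rhs symbols are p[1] .. p[k]
                if (allNullable(p, 1, k, nullable))             // Y1..Yk nullable (or k = 0)
                    changed |= nullable.add(X);
                for (int i = 1; i <= k; i++) {
                    if (allNullable(p, 1, i - 1, nullable))     // rule 1: FIRST(X) += FIRST(Yi)
                        changed |= first.get(X).addAll(firstOf(p[i], first));
                    if (nonterms.contains(p[i]) && allNullable(p, i + 1, k, nullable))
                        changed |= follow.get(p[i]).addAll(follow.get(X));            // rule 2
                    for (int j = i + 1; j <= k; j++)
                        if (nonterms.contains(p[i]) && allNullable(p, i + 1, j - 1, nullable))
                            changed |= follow.get(p[i]).addAll(firstOf(p[j], first)); // rule 3
                }
            }
        }
        System.out.println("nullable: " + nullable);
        System.out.println("FIRST:    " + first);
        System.out.println("FOLLOW:   " + follow);
    }

    // true if every symbol p[from..to] is nullable (an empty range counts as nullable)
    static boolean allNullable(String[] p, int from, int to, Set<String> nullable) {
        for (int i = from; i <= to; i++)
            if (!nullable.contains(p[i])) return false;   // terminals are never nullable
        return true;
    }

    // FIRST of a single symbol: the symbol itself if it is a terminal
    static Set<String> firstOf(String sym, Map<String, Set<String>> first) {
        return nonterms.contains(sym) ? first.get(sym) : Collections.singleton(sym);
    }
}

Running it reports X and Y as nullable, FIRST(Y) = {c}, FIRST(X) = {a, c}, FIRST(Z) = {a, c, d}, and FOLLOW(X) = FOLLOW(Y) = {a, c, d}; FOLLOW(Z) stays empty because this exercise grammar has not been augmented with $.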

  4. Constructing the Predictive Parser Table
     A predictive parse table has a row for each non-terminal X and a column for each
     input token t. Entries table[X, t] contain productions:
       for each X → γ
         for each t in FIRST(γ):                    table[X, t] = X → γ
         if γ is nullable, for each t in FOLLOW(X): table[X, t] = X → γ
     For grammars with no multiple entries in the table, we can use the table to produce
     one parse tree for each valid sentence. We call these grammars LL(1): Left-to-right
     parse, Left-most derivation, 1 symbol of lookahead. A recursive descent parser
     examines input left to right, expands non-terminals leftmost first, and looks ahead
     1 token.
     Compute the predictive parse table for
       Z → d | X Y Z
       X → a | Y
       Y → c | ε
     (A code sketch of the fill rule follows this slide group.)

     Multiple entries in the Predictive parse table: Ambiguity
     An ambiguous grammar will lead to multiple entries in the parse table. Our grammar
     IS ambiguous: Z ⇒ d, but also Z ⇒ X Y Z ⇒* Y Z ⇒* d (X and Y both derive ε).
     Table for the exercise grammar:
                a                  c                  d
       X    X → a,  X → Y      X → Y              X → Y
       Y    Y → ε              Y → c,  Y → ε      Y → ε
       Z    Z → XYZ            Z → XYZ            Z → XYZ,  Z → d

     Left recursion and Predictive parsing
     What happens to the recursive descent parser if we have a left-recursive production
     rule, e.g. E → E + T | T?  E calls E calls E forever. To eliminate left recursion we
     rewrite the grammar:
       from:  E → E + T | E - T | T
              T → T * F | F
              F → ( E ) | ID | NUM
       to:    E  → T E'
              E' → + T E' | - T E' | ε
              T  → F T'
              T' → * F T' | ε
              F  → ( E ) | ID | NUM
     In general we replace left recursion X → X γ | α (where α does not start with X) by
     right recursion, since X produces α γ* and that can be produced right-recursively.
     Now we can augment the grammar (S → E $), compute nullable, FIRST and FOLLOW, and
     produce an LL(1) predictive parse table; see Tiger Section 3.2.

     Left Factoring
     Left recursion does not work for predictive parsing. Neither does a grammar that has
     a non-terminal with two productions that start with a common phrase, so we left
     factor the grammar:
       from:  S → α β1 | α β2
       to:    S  → α S'
              S' → β1 | β2
     E.g., the if statement
       S → IF t THEN S ELSE S | IF t THEN S | o
     becomes
       S → IF t THEN S X | o
       X → ELSE S | ε
     When building the predictive parse table, there will be multiple entries. WHY?
     (See the dangling-else slides below.)
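Below is a sketch of the table-filling rule from the first slide in this group, reusing the FIRST/FOLLOW/nullable results computed earlier; the nested-map table representation and class name are assumptions. A cell that ends up holding more than one production is exactly a "multiple entry".

import java.util.*;

// Table construction rule:
//   for each X -> gamma:  for t in FIRST(gamma)                  table[X, t] += X -> gamma
//                         if gamma nullable, for t in FOLLOW(X)  table[X, t] += X -> gamma
public class ParseTable {
    static boolean nullableRhs(List<String> rhs, Set<String> nullable) {
        for (String s : rhs) if (!nullable.contains(s)) return false;
        return true;
    }

    static Set<String> firstOfRhs(List<String> rhs, Map<String, Set<String>> first,
                                  Set<String> nullable, Set<String> nonterms) {
        Set<String> out = new HashSet<>();
        for (String s : rhs) {
            out.addAll(nonterms.contains(s) ? first.get(s) : Collections.singleton(s));
            if (!nullable.contains(s)) break;   // stop at the first non-nullable symbol
        }
        return out;
    }

    // table[X][t] collects every production X -> rhs predicted on lookahead t
    static Map<String, Map<String, List<List<String>>>> build(
            Map<String, List<List<String>>> prods,               // lhs -> list of rhs's
            Map<String, Set<String>> first, Map<String, Set<String>> follow,
            Set<String> nullable, Set<String> nonterms) {
        Map<String, Map<String, List<List<String>>>> table = new HashMap<>();
        for (String X : prods.keySet()) {
            table.put(X, new HashMap<>());
            for (List<String> rhs : prods.get(X)) {
                Set<String> predict = firstOfRhs(rhs, first, nullable, nonterms);
                if (nullableRhs(rhs, nullable)) predict.addAll(follow.get(X));
                for (String t : predict)
                    table.get(X).computeIfAbsent(t, k -> new ArrayList<>()).add(rhs);
            }
        }
        return table;
    }

    public static void main(String[] args) {
        // nullable, FIRST and FOLLOW for the class-exercise grammar (computed earlier)
        Set<String> nonterms = new HashSet<>(Arrays.asList("Z", "X", "Y"));
        Set<String> nullable = new HashSet<>(Arrays.asList("X", "Y"));
        Map<String, Set<String>> first = new HashMap<>(), follow = new HashMap<>();
        first.put("X", new HashSet<>(Arrays.asList("a", "c")));
        first.put("Y", new HashSet<>(Arrays.asList("c")));
        first.put("Z", new HashSet<>(Arrays.asList("a", "c", "d")));
        follow.put("X", new HashSet<>(Arrays.asList("a", "c", "d")));
        follow.put("Y", new HashSet<>(Arrays.asList("a", "c", "d")));
        follow.put("Z", new HashSet<>());
        Map<String, List<List<String>>> prods = new HashMap<>();
        prods.put("Z", Arrays.asList(Arrays.asList("d"), Arrays.asList("X", "Y", "Z")));
        prods.put("X", Arrays.asList(Arrays.asList("a"), Arrays.asList("Y")));
        prods.put("Y", Arrays.asList(Arrays.asList("c"), Collections.<String>emptyList()));
        System.out.println(build(prods, first, follow, nullable, nonterms));
    }
}

Its main method runs the class-exercise grammar and reproduces the table shown above, including the double entries at [X, a], [Y, c], and [Z, d].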

  5. Dangling else problem: ambiguity
     Given
       S → IF t THEN S X | o
       X → ELSE S | ε
     construct two parse trees for  IF t THEN IF t THEN o ELSE o
     - In one tree the outer X derives ε and the inner X derives ELSE S: the ELSE
       belongs to the inner IF.
     - In the other tree the inner X derives ε and the outer X derives ELSE S: the ELSE
       belongs to the outer IF.
     Which is the correct parse tree? (C, Java rules)

     Dangling else disambiguation
     The correct parse tree is the first one: the ELSE binds to the nearest IF. We can
     get this parse tree by removing the X → ε rule from the multiple-entry slot in the
     parse table. See written homework 2. (The FIRST/FOLLOW reasoning behind that
     multiple entry is worked out after this slide group.)

     One more time
     Balanced parentheses grammar 1:
       S → ( S ) | S S | ε
     1. Augment the grammar
     2. Construct Nullable, First and Follow
     3. Build the predictive parse table: what happens?

     One more time, but this time with feeling …
     Balanced parentheses grammar 2:
       S → ( S ) S | ε
     1. Augment the grammar
     2. Construct Nullable, First and Follow
     3. Build the predictive parse table
     4. Using the predictive parse table, construct the parse tree for ( ) ( ( ) ) $
        and ( ) ( ) ( ) $
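Why that slot has two entries (a short working, not on the slides): after augmenting with S' → S $, X is nullable and FIRST(X) = {ELSE}. FOLLOW(S) = {ELSE, $}, since S is followed by X in S → IF t THEN S X and by FOLLOW(X) in X → ELSE S. Because X ends the production S → IF t THEN S X, FOLLOW(X) ⊇ FOLLOW(S), so FOLLOW(X) = {ELSE, $} as well. The slot [X, ELSE] therefore receives both X → ELSE S (ELSE is in FIRST(ELSE S)) and X → ε (ELSE is in FOLLOW(X)). Keeping only X → ELSE S in that slot makes the parser consume an ELSE as soon as it sees one, which attaches each ELSE to the nearest enclosing IF, exactly the C/Java behavior described above.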
