
Using Frames in Spoken Language Understanding
Renato De Mori, LUNA (IST contract no. 33549)
LangTech 2008, Rome, Feb 27th, 2008

Summary: THE LUNA EU PROJECT; SIGN TO MEANING PROCESS


  1. Meaning representation
Semantic theories have inspired the conception of Meaning Representation Languages (MRLs). MRLs have a syntax and a semantics (Woods, 1975) and should, among other things: represent intension and extension, with defining and asserting properties; use quantifiers as higher operators and lambda abstraction; and make it possible to perform inference. Frame languages define computational structures (Kifer et al., JACM, 1995) and can be seen as cognitive structuring devices (Fillmore, 1968, 1985) in a semantic construction theory.

  2. Frames as computational structures (intension)
A frame schema with defining properties represents types of conceptual structures (intension) as well as instances of them (extension). Relations with signs can be established by attached procedures (S. Young et al., 1989).

{address
  loc      [TOWN]                             ... attached procedures
  area     [DEPARTMENT OR PROVINCE OR STATE]  ... attached procedures
  country  [NATION]                           ... attached procedures
  street   [NUMBER AND NAME]                  ... attached procedures
  zip      [ORDINAL NUMBER]                   ... attached procedures
}
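A minimal sketch of such a frame schema as a data structure; everything here (the Slot/FrameSchema classes, detect_town, the toy town list) is hypothetical and only stands in for the attached sign-grounding procedures of (S. Young et al., 1989):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Slot:
    """A typed slot; the attached procedure grounds the slot in signs."""
    name: str
    filler_type: str
    attached_procedure: Optional[Callable[[str], Optional[str]]] = None

@dataclass
class FrameSchema:
    """A frame schema (intension): a named collection of typed slots."""
    name: str
    slots: list[Slot] = field(default_factory=list)

def detect_town(text: str) -> Optional[str]:
    """Toy attached procedure: spot a known town name in the input signs."""
    towns = {"Avignon", "Paris", "Rome"}
    return next((t for t in towns if t in text), None)

address = FrameSchema("address", [
    Slot("loc", "TOWN", detect_town),
    Slot("area", "DEPARTMENT OR PROVINCE OR STATE"),
    Slot("country", "NATION"),
    Slot("street", "NUMBER AND NAME"),
    Slot("zip", "ORDINAL NUMBER"),
])
print(address.slots[0].attached_procedure("near Avignon"))  # Avignon
```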

  3. Frame instances (extension)
A convenient way of asserting properties and reasoning about semantic knowledge is to represent it as a set of logic formulas:

∃x [ instance_of(x, address) ∧ loc(x, Avignon) ∧ area(x, Vaucluse) ∧ country(x, France) ∧ street(x, 1 avenue Pascal) ∧ zip(x, 84000) ]

A frame instance (extension) can be obtained from predicates that are related and composed into a computational structure. Frame schemata can be derived from knowledge obtained by applying semantic theories; interesting theories can be found, for example, in (Jackendoff, 1990, 2002) or in (Brachman, 1978, reviewed by Woods, 1985).

  4. Frame instance
Schemata contain collections of properties and values expressing relations. A property or role is represented by a slot filled by a value:

{a0001
  instance_of  address
  loc          Avignon
  area         Vaucluse
  country      France
  street       1, avenue Pascal
  zip          84000
}
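To tie this slide to slide 3, here is a hedged sketch that renders such an instance as the conjunction of logic atoms; the dict layout and the helper name to_atoms are illustrative only:

```python
# Hypothetical frame instance as a plain mapping, mirroring the slide.
instance = {
    "id": "a0001",
    "instance_of": "address",
    "loc": "Avignon",
    "area": "Vaucluse",
    "country": "France",
    "street": "1, avenue Pascal",
    "zip": "84000",
}

def to_atoms(inst: dict) -> list[str]:
    """Render a frame instance as the logic atoms of slide 3."""
    x = inst["id"]
    return [f"{slot}({x}, {value})" for slot, value in inst.items() if slot != "id"]

print(" ∧ ".join(to_atoms(instance)))
# instance_of(a0001, address) ∧ loc(a0001, Avignon) ∧ ...
```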

  5. Process overview
An integrated solution: the blackboard architecture (Erman et al., ACM Computing Surveys, 1980). A Long Term Memory holds the acoustic model (AM), the language model (LM), the interpretation knowledge sources (KSs) and the learning procedures; a Short Term Memory holds the hypotheses exchanged between speech-to-conceptual-structures-and-MRL processing and the dialogue: speech signs, words, concept tags, concept structures, MRL description.
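As a rough illustration of the blackboard idea (not the LUNA implementation), the sketch below has two knowledge sources reading and writing a shared short-term memory under a trivial control loop; all names and the toy tag set are invented:

```python
# Short-term memory holds the evolving hypotheses; each knowledge source
# (KS) fires when its input level is filled and its output level is not.
class Blackboard:
    def __init__(self, signs: str):
        self.stm = {"signs": signs, "words": None, "concepts": None}

def asr_ks(bb: Blackboard) -> None:
    """Speech signs -> words (a stand-in for the AM/LM decoding KS)."""
    if bb.stm["words"] is None and bb.stm["signs"]:
        bb.stm["words"] = bb.stm["signs"].split()

def concept_ks(bb: Blackboard) -> None:
    """Words -> concept tags (a stand-in for the interpretation KSs)."""
    if bb.stm["concepts"] is None and bb.stm["words"]:
        tags = {"boston": "CITY", "flights": "FLIGHT"}
        bb.stm["concepts"] = [tags.get(w, "null") for w in bb.stm["words"]]

bb = Blackboard("flights to boston")
for ks in (asr_ks, concept_ks):   # the control loop fires applicable KSs
    ks(bb)
print(bb.stm)
```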

  6. Interpretation problem decomposition
From speech signs to meaning: acoustic features, words (1-best, n-best, lattices), constituents, structures, features for interpretation. The problem-reduction representation is context-sensitive. Interpretation is a composite decision process; many decompositions are possible, involving a variety of methods and KSs, which suggests a modular approach to process design. Robustness is obtained by evaluating and possibly integrating different KSs and methods used for the same sub-task.

  7. Levels of processes and application complexity
Translation from words to basic conceptual constituents; semantic composition on basic constituents; context-sensitive validation. The combination of level processes may depend on the application.

  8. From signs to constituents
Hypothesize a lattice of concept tags for semantic constituents and compose them into structures (detection vs. translation). An ASR stage (AM, LM) maps speech signs to a word lattice; a translation stage (translation KSs) maps the word lattice to a concept tag lattice.

  9. WORDS TO CONCEPTS (SEMANTIC CONSTITUENTS) TRANSLATION

  10. History
Systems developed in the seventies (reviewed in Klatt, 1977) and in the eighties and early nineties (EVAR, SUNDIAL) mostly performed syntactic analysis on the best sequence of words hypothesized by an ASR system, and used non-probabilistic rules, semantic networks, and pragmatic and semantic grammars for mapping syntactic structures into semantic ones expressed in logic form. In the nineties, the need emerged to test SLU processes on large corpora that could also be used for automatically estimating some model parameters. Probabilistic finite-state interpretation models and grammars were also introduced for dealing with the ambiguities introduced by model imprecision.

  11. Probabilistic interpretation in the Chronus system
Words are aligned with a sequence of concept states (e.g. Origin, Destination, Date). The joint probability P(C,W) is computed using Markov models as P(C,W) = P(W|C)P(C) (Pieraccini et al., 1991; Pieraccini, Levin and Vidal, 1993).
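The toy Viterbi decoder below illustrates the Chronus-style model: it maximizes P(W|C)P(C) over concept sequences under a first-order Markov assumption. All probabilities are invented for the sketch, not estimated from data:

```python
import math

CONCEPTS = ["Origin", "Dest", "Date"]
P_START = {"Origin": 0.4, "Dest": 0.4, "Date": 0.2}
P_TRANS = {c: {d: 1.0 / len(CONCEPTS) for d in CONCEPTS} for c in CONCEPTS}
P_EMIT = {
    "Origin": {"from": 0.5, "boston": 0.3},
    "Dest": {"to": 0.5, "denver": 0.3},
    "Date": {"on": 0.3, "monday": 0.6},
}

def viterbi(words):
    """Most likely concept sequence C for words W, maximizing P(W|C)P(C)."""
    def emit(c, w):
        return math.log(P_EMIT[c].get(w, 1e-6))   # smoothed emission
    trellis = [{c: (math.log(P_START[c]) + emit(c, words[0]), [c]) for c in CONCEPTS}]
    for w in words[1:]:
        prev = trellis[-1]
        col = {}
        for c in CONCEPTS:
            score, path = max(
                (prev[p][0] + math.log(P_TRANS[p][c]) + emit(c, w), prev[p][1])
                for p in CONCEPTS
            )
            col[c] = (score, path + [c])
        trellis.append(col)
    return max(trellis[-1].values())[1]

print(viterbi(["from", "boston", "to", "denver", "on", "monday"]))
```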

  12. Semantic classification trees
A semantic classification tree classifies a detected value by asking questions about its word context, for example: is there a City? If it follows "from", label it Origin; if it follows "to", label it Dest. (Kuhn and De Mori, 1995).
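A hand-written stand-in for such a tree (real SCTs are learned from data) might look as follows; the regular-expression questions and the city examples are illustrative:

```python
import re

def classify_city_role(utterance: str, city: str) -> str:
    """Each node asks a question about the word context of the city mention."""
    if not re.search(rf"\b{city}\b", utterance):
        return "no-city"
    if re.search(rf"\bfrom\s+{city}\b", utterance):
        return "Origin"
    if re.search(rf"\bto\s+{city}\b", utterance):
        return "Dest"
    return "unknown-role"

print(classify_city_role("show flights from boston to denver", "boston"))  # Origin
print(classify_city_role("show flights from boston to denver", "denver"))  # Dest
```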

  13. SEMANTIC GRAMMARS

  14. Interpretation as a translation process
Interpretation of written text can be seen as a process that uses procedures for translating a sequence of words in natural language into a set of semantic hypotheses (just constituents, or structures) described by a semantic language.

W: [S[VP [V give, PR me] NP [ART a, N restaurant] PP[PREP near, NP [N Montparnasse, N station]]]]
Γ: [Action REQUEST ([Thing RESTAURANT], [Path NEAR ([Place IN ([Thing MONTPARNASSE])])])]

An interesting discussion is in (Jackendoff, 1990): each major syntactic constituent of a sentence maps into a conceptual constituent, but the inverse is not true.

  15. Using grammars for NLU
Adding semantic structure-building operations to CFGs:
• Categorial grammars (Lambek, 1958)
• Montague grammars (Montague, 1974)
• Augmented Transition Network grammars (Woods, 1970)
• Semantic grammars for SLU (Woods, 1976)
• Tree Adjoining Grammars (TAG) integrate syntax and logic form (LF) semantics; links can be established between the two representations and operations carried out synchronously (Schabes and Joshi, 1990).

  16. Robust parsing (early ATIS)
A robust fallback module was incorporated in successive versions of Delphi (Bates et al., 1994). The system developed at SRI consists of two semantic modules yoked together: a unification-grammar-based module called "Gemini", and the "Template Matcher", which acts as a fallback if Gemini can't produce an acceptable database query (Appelt, 1996). When a sentence parser fails, constraints on the parser are relaxed to permit the recovery of parsable phrases and clauses (TINA, Seneff, 1990); fragments are then fused together. Local parsing (Abney, 1991).

  17. Stochastic semantic context-free grammars
The linguistic analyzer TINA (MIT, Seneff, 1989) has a grammar written as a set of probabilistic context-free rewrite rules with constraints. The grammar is converted automatically at run-time to a network form in which each node represents a syntactic or semantic category. The probabilities associated with rules are calculated from training data and serve to constrain search during recognition (without them, all possible parses would have to be considered). See also history grammars (Black et al., 1993) and robust partial parsers.

  18. Parsing with ATIS stochastic semantic grammars
[Parse tree for "Please show me the flights to Boston on Monday": non-terminal nodes Show, Flight, Dest., Date; terminal nodes show indicator, flight indicator, Dest. indicator, City name, Date indicator, Day name.]

  19. Stochastic semantic context-free grammars
The Hidden Understanding Model (HUM) system, developed at BBN, is based on Hidden Markov Models (Miller et al., 1994). In the HUM system, after a parse tree is obtained, bigram probabilities of a partial path towards the root, given another partial path, are used. Interpretation is guided by a strategy represented by a stochastic decision tree. The semantic language model employs tree-structured meaning representations: concepts are represented as nodes in a tree, with sub-concepts represented as child nodes.

Pr(M|W) = Pr(W|M) Pr(M) / Pr(W), where M is the meaning.

  20. Hidden vector state model
Each vector state is viewed as a hidden variable and represents the state of a push-down automaton. Such a vector is the result of pushing non-terminal symbols, starting from the root symbol and ending with the pre-terminal symbol. Non-terminal symbols correspond to semantic compositions like FLIGHTS, while pre-terminal symbols correspond to semantic constituents like CITY (He and Young, 2006). An example of a state vector representing a path from a constituent up to the start symbol S is the stack:

[ CITY, FROM_LOCATION, FLIGHTS, S ]
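A minimal sketch of the vector-state idea: each hidden state is a stack with the root symbol S at the bottom, and each transition pops some symbols and pushes new ones. The symbols and the transition shown are invented examples:

```python
stack = ["S", "FLIGHTS", "FROM_LOCATION", "CITY"]   # state for an origin city word

def next_state(stack, n_pop, pushes):
    """Pop n_pop symbols, then push the new non-terminal/pre-terminal symbols."""
    assert n_pop < len(stack), "the root symbol S is never popped"
    return stack[: len(stack) - n_pop] + pushes

# "... from Boston to Denver": after the origin CITY, move to the destination.
print(next_state(stack, 2, ["TO_LOCATION", "CITY"]))
# ['S', 'FLIGHTS', 'TO_LOCATION', 'CITY']
```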

  21. Microsoft stochastic grammar
Semantic structures are defined by schemata; each schema is an object (Y.-Y. Wang, A. Acero, 2003). Object structures are defined by an XML schema. Given a semantic schema, a semantic CFG is derived using templates; details of the schemata are learned automatically. An entity is the basic component of a schema, which defines relations among entities. An entity consists of a head, optional modifiers and optional properties, defined recursively so that they finally incorporate a different sequence of schema slots. Each slot is bracketed by an optional preamble and postamble, which are originally placeholders.

  22. Concurrent or sequential use of syntactic and semantic knowledge
Semantic parsing is discussed in (Tait, 1983). A semantics-first parser is described in (Lytinen, 1992); a race-based parser is described in (McRoy and Hirst, 1990). The Delphi system (Bobrow et al., 1990) contains a number of levels, namely syntactic (using Definite Clause Grammar, DCG), general semantics, domain semantics and action; rules transform syntactic representations into semantic ones. Recent work introduces actions in parsers for generating predicate/argument hypotheses. Strategies for parsing actions are obtained by automatic learning from annotated corpora (FrameNet, VerbNet, …).

  23. Predicate/argument structures and parsers
Recently, classifiers were proposed for detecting concepts and roles. Such a detection process was integrated with a stochastic parser (e.g. Charniak, 2001). A solution using this parser and tree-kernel-based classifiers for predicate argument detection in SLU is proposed in (Moschitti et al., ASRU 2007). Other relevant contributions on stochastic semantic parsing can be found in (Goddeau and Zue, 1992; Goodman, 1996; Chelba and Jelinek, 2000; Roark, 2002; Collins, 2003). Lattice-based parsers are reviewed in (Hall, 2005).

  24. Semantic building actions in parsing
[S [NP(agent) [det the] [N customer]] [VP(action) [V accepts] [NP(theme) [det the] [N contract]]]]
Use tree kernel methods for learning argument matching (Moschitti, Raymond, Riccardi, ASRU 2007).

  25. Important questions
There is no evidence yet that one approach is superior to all others. Where are the signs? Are they only words? Many system architectures are ASR + NLU. How effective is the use of syntactic structures with spoken language and ASR? How important are inference and composition? Relevant NLU literature exists on these topics; to what extent can it be used?
PROPOSED SOLUTION: COMBINE DIFFERENT SHALLOW PARSING METHODS TO IMPROVE ROBUSTNESS.

  26. Generation of semantic constituent hypotheses

  27. Finite-state conceptual language models
ASR algorithms compute probabilities of word hypotheses using finite-state language models. It is important to perform interpretation from a lattice of scored words and to take (possibly redundant) word contexts into account (Drenth and Ruber, 1997; Nasr et al., 1999). Other interesting contributions are in (Prieto et al., 1993; Kawahara et al., 1999). See also finite-state approximations of context-free or context-sensitive grammars (Pereira, 1990, reviewed in Erdogan, 2005) and a finite-state parser (TAG) with application semantics (Rambow et al., 2002).

  28. Conceptual language models
A set of conceptual language models CLM_0, CLM_1, …, CLM_j, …, CLM_J is applied in parallel. This architecture is also used for separating in-domain from out-of-domain message segments (Damnati, 2007) and for spoken opinion analysis (Camelin et al., 2006). In this way, the whole ASR knowledge models a relation between signal features and meaning.

  29. Hypothesis generation from lattices
An initial ASR activity generates a word graph (WG) of scored word hypotheses with a generic LM. The conceptual language model network is composed with WG, resulting in the assignment of semantic tags to paths in WG:

SEMG = WG ∘ ( ∪_{c=0}^{C} CLM_c )
SWG = OUTPROJ(SEMG)

(Special issue of Speech Communication, 3, 2006, Béchet et al., Furui)
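A toy rendering of these operations on plain edge lists rather than real weighted FSTs, so that composition and output projection become list comprehensions; the lattice, the scores and the unigram CLM are invented:

```python
WG = [  # word graph: (from_state, to_state, word, score)
    (0, 1, "to", -0.1),
    (1, 2, "boston", -0.4),
    (1, 2, "austin", -0.9),
]
CLM = {"boston": "CITY", "austin": "CITY"}  # word -> concept tag transducer

# "Composition": pair each word edge with its concept tag (null if untagged).
SEMG = [(s, d, w, CLM.get(w, "null"), sc) for (s, d, w, sc) in WG]
# Output projection keeps only the tag side: SWG = OUTPROJ(SEMG).
SWG = [(s, d, tag, sc) for (s, d, _w, tag, sc) in SEMG]

print(SWG)  # [(0, 1, 'null', -0.1), (1, 2, 'CITY', -0.4), (1, 2, 'CITY', -0.9)]
```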

  30. NL - MRL translation
In (Papineni et al., 1998) statistical translation models are used to translate a source sentence S into a target, artificial language T by maximizing the probability

Pr(T|S) = Pr(S|T) Pr(T) / Pr(S)

The central task in training is to determine correlations between groups of words in one language and groups of words in the other. The source channel fails in capturing such correlations, so a direct model has been built to compute the posterior probability Pr(T|S) directly. Interesting solutions also in (Macherey et al., 2001; Sudoh and Tsukada, 2005, for attribute/value pairs; LUNA).

  31. CRF
CRFs offer the possibility of using features from long-term dependencies. Results for LUNA from Riccardi, Raymond, Ney, Hahn.

p(y|x) = (1/Z(x)) exp( Σ_{c∈C} Σ_k λ_k f_k(y_{i−1}, y_i, x, i) )
Z(x) = Σ_y exp( Σ_{c∈C} Σ_k λ_k f_k(y_{i−1}, y_i, x, i) )
f_k(y_{i−1}, y_i, x, i) = 1 if y_i = ARRIVE.CITY and x_{i−1}, … contain {arrive | to}; 0 otherwise
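A hedged sketch of such a feature function using the third-party sklearn-crfsuite package (an assumption: this is not the LUNA toolchain); the tag set and the single training sentence are invented:

```python
import sklearn_crfsuite

def token_features(words, i):
    """Features for token i, including the {arrive | to} context of the slide."""
    return {
        "word": words[i].lower(),
        "prev_is_arrive_or_to": i > 0 and words[i - 1].lower() in {"arrive", "to"},
    }

train_sents = [["I", "want", "to", "fly", "to", "Boston"]]
train_tags = [["null", "null", "null", "null", "null", "ARRIVE.CITY"]]

X = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_tags)
print(crf.predict(X)[0])
```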

  32. Method comparison and combination
Results on the French MEDIA corpus, LUNA project; NLU results from RWTH Aachen. Approaches:
• Linear-chain CRF
• FST
• SVM
• Log-linear model on positional level
• MT
• SVM with tree kernel
Comparison: incremental oracle performance.
References: Raymond C., Riccardi G., "Generative and Discriminative Algorithms for Spoken Language Understanding", Proc. INTERSPEECH, Antwerp, 2007; Moschitti A., Riccardi G., Raymond C., "Spoken language understanding with kernels for syntactic/semantic structures", Proc. IEEE ASRU, Kyoto, 2007.

  33. Sequential approach with 1-best ASR
Comparison of interpretation results obtained on the MEDIA corpus (1-best ASR output), concept error rate (CER):
• Conditional Random Fields: 25.2%
• Finite State Transducers: 29.5%
• Support Vector Machines: 29.6%
CER is close to 20% when N-best concept hypotheses (N < 10) are obtained with FSMs. Further improvement is possible by combination with CRFs and by using dialogue constraints.

  34. Demo LUNAVIZ

  35. SEMANTIC COMPOSITION AND INFERENCE

  36. Frame structures and slot chains
Instances of semantic structures are represented by slot chains with facets (Koller, Pfeffer, 1998):

F_j [ r_jk G_x [ r_xk (v_xkh) ] ]

σ(F_j, v_xkh) = { (F_j, v_xkh) / r_jk(F_j, G_x) } ∧ σ(G_x, v_xkh)

  37. Composition
Γ: REQUEST [agent(speaker), recipient(system), theme(KNOW_j [theme ITEM [theme(LODGING [])]])]
G_x: LODGING [ldg_structure(HOTEL []), ldg_room(ROOM []), ldg_lux(good)]
Obtained by inference after constituent detection:
Speaker(user) ∧ chambre-standing[bon] ⊃ LODGING [ldg_structure(HOTEL []), ldg_room(ROOM []), ldg_lux(good)]

  38. Support for composition
REQUEST [agent(speaker), recipient(system), theme(KNOW [theme ITEM [theme(LODGING [ldg_structure(HOTEL []), ldg_room(ROOM []), ldg_lux(good)])]])]
Composition is performed only if there is support in the data for the relation between Γ and G_x: sup(R_j [Γ, G_x]) = { sup(Γ), sup(G_x) }. Relation supports are general word patterns (e.g. specification, inclusion, …) which are often independent of the application domain.

  39. Demo FRIZ

  40. Simple frame probabilistic model
In (Thompson et al., 2003) it is suggested that a frame F is instantiated by a predictor word S, and that roles R are related to phrases (chunks) C:

S (predictor)  F (frame)  [R_1 … R_n] (roles)  [C_1 … C_n] (chunks)

Probability model with Markov assumptions:
P(C, R, F, S) = P(S) P(F|S) P(R|F,S) P(C|R,F,S)
P(R|F,S) ≈ ∏_i P(R_i | R_{i−1}, F)
P(C|R,F,S) ≈ ∏_i P(C_i | R_i)
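A toy computation of the factorized probability; all the component distributions below are invented for illustration:

```python
import math

P_S = {"book": 0.2}
P_F_given_S = {("Reservation", "book"): 0.9}
P_R_bigram = {("agent", "<s>", "Reservation"): 0.6,
              ("theme", "agent", "Reservation"): 0.7}
P_C_given_R = {("the customer", "agent"): 0.3, ("a room", "theme"): 0.4}

def frame_log_prob(S, F, roles, chunks):
    """log P(C,R,F,S) = log P(S) + log P(F|S)
       + sum_i log P(R_i|R_{i-1},F) + sum_i log P(C_i|R_i)."""
    lp = math.log(P_S[S]) + math.log(P_F_given_S[(F, S)])
    prev = "<s>"
    for r, c in zip(roles, chunks):
        lp += math.log(P_R_bigram[(r, prev, F)]) + math.log(P_C_given_R[(c, r)])
        prev = r
    return lp

print(frame_log_prob("book", "Reservation", ["agent", "theme"],
                     ["the customer", "a room"]))
```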

  41. Logic-based approaches to interpretation
Logic-based approaches to NLU were proposed for representing semantic knowledge and performing inference on it. In (Norvig, 1987) inferences are considered for asserting the implicit meaning of a sentence or implicit connections between sentences. In (Palmer, 1983) it is suggested to detect relationships between semantic roles by inference. In (Koller and Pfeffer, 1998) it is noticed that one of the limits of the expressive power of frames is the inability to represent and reason about uncertain and noisy information. Probability distributions were introduced in slot facets to represent constraints on possible role values, and an algorithm was proposed for obtaining a Bayesian Network (BN) from a list of dependences between frame slots.

  42. Probabilistic frame-based systems
In probabilistic frame-based systems (Koller, 1998), a frame slot S of a frame F is associated with a facet Q with value Y: Q(F, S, Y). A probability model is part of a facet, as it represents a restriction on the values Y. It is possible to have a probability model for a slot value which depends on a slot chain. It is also possible to inherit probability models from classes to subclasses, to use probability models in multiple instances, and to have probability distributions representing structural uncertainty about a set of entities.

  43. Dependency graph with cycles
Y_k → W_k → C_k → γ_{i,j,k}
Y_l → W_l → C_l → γ_{i,j,l}
Y_m → W_m → C_m → γ_{i,j,m}
(acoustic evidence → support → concept → filled slot)
If the dependency graph has cycles, then possible worlds can be considered. The computation of probabilities of possible worlds is discussed in (Nilsson, 1986). A general method for computing probabilities of possible worlds based on Markov logic networks (MLN) is proposed in (Richardson, 2006).

  44. Probabilistic models of relational data
Probabilities of relational data can be estimated in various ways, depending on the data available and on the complexity of the domain. For simple domains it is possible to use a naïve Bayes approach; otherwise, it is possible to use the disjunctive interaction model (Pearl, 1988) or relational Markov networks (RMN) (Taskar, 2002). Methods for probabilistic logic learning are reviewed in (De Raedt, 2003).

  45. OTHER MODULAR SYSTEMS

  46. Combinations of approaches to NLU
Rule-based approaches to interpretation suffer from their brittleness and from the significant cost of authoring and maintaining complex rule sets. Data-driven approaches are robust; however, the reliance on domain-specific data is also one of their significant bottlenecks. Combining different approaches makes it possible to get the best out of them. Simple grammars are used for detecting possible clauses; then classification-based parsing completes the analysis with inference (Kasper and Hovy, 1990). Shallow semantic parsing was proposed in (Gildea and Jurafsky, 2002; Hacioglu and Ward, 2003; Pradhan et al., 2004).

  47. Microsoft SLU
In (Wang et al., 2002), stochastic semantic grammars are combined with classifiers for recognizing concepts, and their outputs are combined with ROVER (the hypothesis which gets the majority of votes wins). SVM alone turned out to be the best, even when ROVER was applied. An important improvement was found by replacing certain words with the semantic categories found by the parser. Concepts detected in this way are used to filter the rules of the semantic grammar applied to find slot fillers.

  48. Colorado
A parser based on tagging actions producing non-overlapping shallow tree structures at the lexical, syntactic and semantic levels is proposed in (Hacioglu, 2004). The goal is to improve the portability of semantic processing to other applications, domains and languages. The new structure is complex enough to capture crucial (non-exclusive) semantic knowledge for the intended applications and simple enough to allow flat, easier and faster annotation.

  49. ATT
The use of a grammar alone is not sufficient (Bangalore et al.), because recognition needs to be robust to extragrammaticality and language variation in users' utterances, and interpretation needs to be robust to speech recognition errors. For this reason, a class-based trigram LM is built with in-domain data. In order to improve recognition rates, sentences are generated with the grammar to provide data for training the classifiers. In (Schapire et al., 2005), the authors explore the use of human-crafted knowledge to compensate for the lack of data in building robust classifiers.

  50. IBM
In (Sarikaya et al., 2004), a system is proposed which generates an N-best (N=34) list of word hypotheses with a dialogue-state-dependent trigram LM and rescores them with two semantic models:
1. An embedded context-free semantic grammar (EG), defined for each of 17 concepts, performs concept spotting by searching for phrase patterns corresponding to concepts.
2. A second LM, the Maximum Entropy LM (MELM), computes probabilities of a word, given the history, using an ME model.

  51. SPEECH ACTS

  52. Speech acts
Negotiation dialogues are characterized by a hierarchy of illocutionary (speech) acts (Chang, 2004). They are discourse actions identified by verbs or other lexical units, or implied by other concepts expressed in a sentence. These speech acts (SAs) determine the sentence type. Various attempts have been made to identify SAs which are domain-independent; a possible taxonomy of them is formulated in the Dialogue Act Markup in Several Layers (DAMSL).

  53. Speech acts
In (Cohen and Perrault, 1979), a notation for formulating dialogue acts as plan operators is proposed. A negotiation dialogue follows a partially ordered plan represented by a Hierarchy of Tasks (HT) (Sacerdoti, IJCAI 1975). Each task is characterized by an SA whose effect is the instantiation, modification or finalization of the conceptual structures required for performing transactions. The HT is a generative structure of the possible sequences of SAs characterizing the sentences of a dialogue with which a system and a user negotiate for defining a possible transaction.

  54. Speech acts
The main purpose of a service is to satisfy a user goal. If a service can satisfy many goals, it has to hypothesize/identify actual user goals and, for each goal, consider a means to achieve it. Such a means can be a plan whose actions are executed following a policy and have the objective of gathering all the necessary details for specifying an instance of a goal which corresponds to a user intention. In the considered applications the goals are performing transactions, and the dialogue involves negotiations represented by non-linear, partially ordered hierarchies of tasks whose possible sequences can be generated by rules.

  55. Negotiation dialogues
N_Dialogue := Open - Negotiation - Commit - Close
Negotiation := Formulation (Formulation | Repair)*
Formulation := (Assert | Request | Propose) (Assert | Request | Propose)*
Request := (Know | Reserve | Confirm) (Know | Reserve | Confirm)*
Repair := (Repeat + Hold + Correct)* (Repeat + Hold + Correct + Reject)
Commit := Accept
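As an illustration (not part of the original system), the simplified regular expression below checks a flat sequence of act labels against one reading of these rules, with Request expanded into its sub-acts and each repetition consuming one act token:

```python
import re

FORM = r"(?:Assert|Know|Reserve|Confirm|Propose)"
REPAIR = r"(?:Repeat|Hold|Correct|Reject)"
DIALOGUE = re.compile(rf"^Open(?: {FORM})+(?: (?:{FORM}|{REPAIR}))* Accept Close$")

def is_valid(acts: list[str]) -> bool:
    """True if the act sequence is generated by the simplified grammar."""
    return bool(DIALOGUE.match(" ".join(acts)))

print(is_valid(["Open", "Know", "Repeat", "Correct", "Assert", "Accept", "Close"]))  # True
print(is_valid(["Open", "Accept", "Close"]))  # False: no Formulation before Commit
```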

  56. PROPOSED APPROACH
Compose semantic structures headed by speech acts. Use these structures for composing/modifying instances of transaction models based on understanding actions. Use transaction model instances for deciding system actions. Use instances of speech acts and their roles for obtaining summaries of dialogue histories and their probabilities; these will be used by the Dialogue Manager (POMDP, ASRU 2007).

  57. Speech and dialogue acts (history)
A speech act is a dialogue fact expressing an action. Speech acts and other dialogue facts to be used in reasoning activities have to be hypothesized from discourse analysis:
• Semantic classification trees [Mast et al., 1996], (Wiebe et al., 1997)
• Decision trees [Stolcke et al., 1998; Ang et al., 2005]
• HMMs [Stolcke et al., 1998]
• Classification trees (Tanigaki and Sagisaka, 1999)
• Neural networks [Stolcke et al., 1998; Wang et al., 1999]
• Fuzzy fragment-class Markov models [Wu et al., 2002]
• Maximum entropy models [Stolcke et al., 1998; Ang et al., 2005]
• Bayesian belief networks (Bilmes et al., 2005)
• Bayesian belief model (BBM) (Li and Chou, 2002)

  58. Dialogue event tagging
In (Zimmermann et al., 2005), prosodic features (pause durations) are used in addition to word-dependent events. A Hidden-Event Language Model (HELM) is used in a process of simultaneous segmentation and classification: after each word, the HELM predicts either a non-boundary event or the boundary event corresponding to any of the DA types under consideration. Mapping words into actions (Potamianos et al., 1999; Meng et al., 1999). Latent Semantic Analysis is proposed in (Bellegarda, 2002; Zhang and Rudnicky, 2002).

  59. Sentence boundary detection
Using prosody (Shriberg et al., 2000). Approaches to boundary detection have used finite-state sequence modeling, including Hidden Markov Models (HMM) and Conditional Random Fields (CRF) (Roark et al., 2006). Sentences are often short, providing relatively impoverished state sequence information; a Maximum Entropy (MaxEnt) model that did not use state sequence information was able to outperform an HMM by including additional rich information. Features from the (Charniak, 2000) parser were used.

  60. Sentence classification
Call routing is an important and practical example of spoken message categorization. In applications of this type, the dialogue act expressed by one or more sentences is classified to generate a semantic primitive action belonging to a well-defined set:
• Connectionist models (Gorin et al., 1995)
• SVD (Chu-Carroll and Carpenter, 1999)
• Latent Semantic Analysis (LSA) (Bellegarda, 2002)
• SVM, cosine similarity metric (used in IR) and Beta-classifier (IBM, 2005, 2006)
• Clustering of sentences is proposed in (He and Young, 2006)
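For illustration, here is a minimal cosine-similarity router in the IR style mentioned above, using scikit-learn; the routes and the utterance are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

routes = {  # one IR-style prototype document per routing destination
    "billing": "question about my bill charge payment",
    "technical": "my line is broken no dial tone repair",
}
vec = TfidfVectorizer().fit(routes.values())
R = vec.transform(routes.values())

def route(utterance: str) -> str:
    """Send the utterance to the route with the most similar prototype."""
    sims = cosine_similarity(vec.transform([utterance]), R)[0]
    return list(routes)[sims.argmax()]

print(route("there is a problem with my last payment"))  # billing
```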

  61. FT/LIA System 3000 (Béchet et al., ICASSP 2007)
Processing cascade: word lattice → concept lattice → interpretation lattice → dialogue state lattice; each interpretation Γ_k is a composition.

  62. CONFIDENCE AND LEARNING

  63. Unsupervised semantic role labelling
Interpretation modules have parameters estimated by automatic learning (Chronus, Chanel, HUM and successor systems). Semantic annotation is time consuming; the process should be semi-automatic, starting with bootstrapping (e.g., Hindle and Rooth, 1993; Yarowsky, 1995; Jones et al., 1999). Initially, make only the role assignments that are unambiguous according to a verb lexicon (Kate and Mooney, 2007); a probability model is then created based on the currently annotated semantic roles. When unlabeled test examples are also available during training, a transductive framework for learning can further improve the performance on the test examples.

  64. Active Learning (Hakkani-Tür, Riccardi and Gorin, 2002)

  65. Certainty-Based Active Learning for SLU

  66. Sequential decision using different feature sets
Confidence is used to define reliability situations based on which dialogue actions can be decided. [Figure: decision unit DU1 applies when P(Γ|F1) is high; when P(Γ|F1) is low, re-scoring unit RU1 uses feature set F2, and when P(Γ|F2) is also low, DU2 applies; hypotheses are then validated, corrected, or otherwise handled.]

  67. Confidence
Evaluate the confidence of components and of compositions: P(Γ | Φ_conf), where Φ_conf represents the confidence indicators or a function of them. Notice that it is difficult to compare competing interpretation hypotheses based on the probability P(Γ | Y), where Y is a time sequence of acoustic features, because different semantic constituents may have been hypothesized on different time segments of the stream Y.

  68. Confidence measures
Two basic steps: 1) generate as many features as possible from the speech recognition and/or natural language understanding process, and 2) estimate correctness probabilities from these features using a combination model.
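A hedged sketch of step 2 as a logistic-regression combination model; the three features (acoustic posterior, LM back-off rate, parse score) and the labels are invented placeholders for the feature inventories of the next slides:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([          # one row of features per interpretation hypothesis
    [0.92, 0.05, 0.8],
    [0.41, 0.60, 0.2],
    [0.88, 0.10, 0.7],
    [0.35, 0.70, 0.1],
])
y = np.array([1, 0, 1, 0])   # 1 = the interpretation was correct

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0.75, 0.20, 0.5]])[0, 1])  # correctness probability
```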

  69. Features for confidence
Many features are based on empirical considerations: semantic weights assigned to words, uncovered word percentage, gap number, slot number, word, word-pair and word-triplet occurrence counts.

  70. Features for confidence
Word counts in an N-best list, lattice density, phone perplexity, language model back-off behaviour, and posterior probabilities. Also measures related to the fact that sentences that are grammatically correct and free of recognition errors tend to be easier to parse, so the corresponding scores in the parse tree are higher than those of ungrammatical sentences containing errors generated by the speech recognizer (IBM).

  71. Other features for confidence
In (Lieb, 2005), during slot-value pair extraction, semantic tree node confidences are translated into corresponding slot and value confidences using a rule-based policy. In (Higashinaka et al., 2006) it is proposed to incorporate discourse features into the confidence scoring of intention recognition results. Lin and Wang (2001) propose a concept-based probabilistic verification model, which exploits concept N-grams. A confidence model is a kind of classifier that scores or classifies words/concepts based on training data (Hazen, 2002).

  72. Other features for confidence
Use of pragmatic analysis to score concepts uttered by the user (Ammicht et al., 2001): when an already recognized concept seems to have been implicitly confirmed, the confidence of that concept is augmented. Hirschberg et al. (2004) introduce a number of prosodic features, such as F0, the length of the pause preceding the turn, and the speaking rate. Combining confidence scores with contextual features (Purver et al., 2006).

  73. Define confidence-related situations
Consensus among classifiers and an SFST is used to produce confidence indicators in a sequential interpretation strategy (Raymond et al., 2005, 2007); the classifiers used are SCT, SVM and AdaBoost, and a fusion strategy combines the FSM, SCT, SVM and AdaBoost outputs. Committee-Based Active Learning uses multiple classifiers to select samples (Seung et al., 1992).

  74. Committee-Based Active Learning
Call classification (Tur, Schapire and Hakkani-Tür, 2003)

  75. Unsupervised Learning
(Tur and Hakkani-Tür; Riccardi and Hakkani-Tür, 2003)

  76. Co-Training
Assume there are multiple views for classification:
1. Train multiple models, one per view
2. Classify unlabeled data
3. Enlarge the training set of the other view using each classifier's predictions
4. Go to step 1
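A minimal co-training sketch under the slide's assumptions (two feature views of the same utterances, one classifier per view, each labeling the most confident unlabeled sample for the shared training set); the data is synthetic and for shape only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(100, 5)), rng.normal(size=(100, 3))  # two views
y = np.array([0, 1] * 50)        # alternating labels keep the sketch deterministic
labeled, unlabeled = list(range(10)), list(range(10, 100))

for _ in range(3):                                   # a few co-training rounds
    clf1 = LogisticRegression().fit(X1[labeled], y[labeled])
    clf2 = LogisticRegression().fit(X2[labeled], y[labeled])
    for clf, X in ((clf1, X1), (clf2, X2)):          # each view labels one sample
        proba = clf.predict_proba(X[unlabeled])
        best = int(np.argmax(proba.max(axis=1)))     # most confident sample
        idx = unlabeled.pop(best)
        y[idx] = int(clf.classes_[proba[best].argmax()])
        labeled.append(idx)

print(len(labeled), "labeled samples after co-training")
```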

  77. Combining Active and Unsupervised Learning
Train a classifier using the initial training data.
While (labelers/data available) do:
  select k samples for labeling using active learning;
  label these selected samples, add them to the training data, and retrain;
  exploit the unselected data using unsupervised learning;
  update the pool.
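The loop above, sketched with certainty-based sample selection and self-training as a stand-in for the unsupervised step; the data and the labeling oracle are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
true_y = (X[:, 0] > 0).astype(int)       # stands in for the human labelers
y = true_y.copy()                        # labels as they become known/predicted

# Seed set with both classes present; the rest is the unlabeled pool.
train_idx = list(np.where(true_y == 0)[0][:5]) + list(np.where(true_y == 1)[0][:5])
pool = [i for i in range(200) if i not in set(train_idx)]
k = 5

for _ in range(4):                                   # while labelers are available
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    conf = clf.predict_proba(X[pool]).max(axis=1)
    order = np.argsort(conf)
    picked = [pool[i] for i in order[:k]]            # least confident: human labels
    sure = [pool[i] for i in order[-k:]]             # most confident: machine labels
    y[sure] = clf.predict(X[sure])                   # self-training stand-in
    train_idx += picked + sure
    pool = [pool[i] for i in order[k:-k]]            # update the pool

print("accuracy on all data:", clf.score(X, true_y))
```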

  78. Adaptive Learning in Practice (Riccardi et al., 2005)

  79. Solutions for applications
The simple use of semantic constituents is sufficient for applications such as call routing, utterance classification with a mapping to disjoint categories, and perhaps speech-to-speech translation and speech information retrieval. Semantic composition is useful for applications like spoken opinion analysis, call routing with utterance characterization (finer-grain comprehension), question answering, and inquiry qualification. A broad context is taken into account for context-sensitive validation in complex spoken dialogue applications and inquiry qualification, where an utterance is considered as a set of sub-utterances and the interpretation of one sub-utterance is context-sensitive to the others.

  80. Conclusions
A modular SLU architecture can exploit the benefits of the combined use of CRFs, classifiers and stochastic FSMs, which are approximations of more complex grammars. Grammars should perhaps be used in conjunction with processes having inference capabilities. Recent results and applications of probabilistic logic appear interesting, but their effective use for SLU still has to be demonstrated. Annotating corpora for these tasks is time consuming, which suggests using a combination of knowledge acquired by machine learning procedures and human knowledge.
