Using Parallel Features in Parsing of Machine-Translated Sentences (PowerPoint presentation)


SLIDE 1

Rudolf Rosa, Ondřej Dušek, David Mareček, Martin Popel {rosa,odusek,marecek,popel}@ufal.mff.cuni.cz

Using Parallel Features in Parsing of Machine-Translated Sentences
for Correction of Grammatical Errors

Charles University in Prague Faculty of Mathematics and Physics Institute of Formal and Applied Linguistics SSST, Jeju, 12th July 2012

SLIDE 2

Parsing of SMT Outputs

• can be useful in many applications:
  • automatic classification of translation errors
  • automatic correction of translation errors (Depfix)
  • confidence estimation, multilingual question answering...

✔ we have the source sentence available
  • Can we use it to help parsing?

✗ SMT outputs are noisy (errors in fluency, grammar...)
  • parsers are trained on gold-standard treebanks
  • Can we adapt the parser to noisy sentences?

SLIDE 3

MST Parser

• Maximum Spanning Tree dependency parser
• by Ryan McDonald

SLIDE 4

(1) Words and Tags

(figure: example sentence Rudolph/NNP relaxes/VBZ abroad/RB, plus an artificial #/root node)

words = nodes

SLIDE 5

(2) (Nearly) Complete Graph

(figure: the same sentence with all possible directed edges drawn between the nodes)

all possible edges = directed edges

SLIDE 6

(3) Assign Edge Weights

(figure: the example graph with a numeric weight assigned to each candidate edge)

edge weight = sum of the weights of the edge's features
feature weights trained with the Margin Infused Relaxed Algorithm (MIRA)
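The scoring scheme above can be sketched in a few lines: each candidate edge fires a set of sparse binary features, and the edge weight is the sum of the learned weights of those features. The feature names below are illustrative stand-ins, not the parser's actual feature templates.

```python
def edge_features(words, tags, head, dep):
    """Sparse binary features for a candidate edge head -> dep (illustrative)."""
    return [
        f"h_tag={tags[head]}|d_tag={tags[dep]}",   # head tag / dependent tag pair
        f"h_word={words[head]}|d_tag={tags[dep]}", # head word / dependent tag pair
        f"dist={dep - head}",                      # signed distance between the nodes
    ]

def edge_weight(weights, words, tags, head, dep):
    """Edge weight = sum of the weights of the edge's features."""
    return sum(weights.get(f, 0.0) for f in edge_features(words, tags, head, dep))

# toy weight vector, as it might look after training with an online
# algorithm such as MIRA
weights = {"h_tag=VBZ|d_tag=NNP": 2.0, "dist=1": 0.5}
words = ["#root", "Rudolph", "relaxes", "abroad"]
tags = ["root", "NNP", "VBZ", "RB"]
print(edge_weight(weights, words, tags, head=2, dep=1))  # 2.0
```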

SLIDE 7

(4) Maximum Spanning Tree

(figure: the weighted graph with the maximum spanning tree edges highlighted)

non-projective trees: Chu-Liu-Edmonds algorithm (projective trees: Eisner algorithm)
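The Chu-Liu-Edmonds step can be sketched as follows: greedily pick the best incoming edge for every word, and if the picks form a cycle, contract it into a single node and recurse. This is a compact illustrative implementation over a dense score matrix, not the parser's actual code.

```python
def _find_cycle(head):
    """Return a list of nodes forming a cycle in the head assignment, or None."""
    for start in range(1, len(head)):
        seen, v = set(), start
        while v != 0 and v not in seen:
            seen.add(v)
            v = head[v]
        if v != 0:                     # we revisited v, so v lies on a cycle
            cycle, u = [v], head[v]
            while u != v:
                cycle.append(u)
                u = head[u]
            return cycle
    return None

def chu_liu_edmonds(score):
    """score[h][d] = weight of edge h -> d; node 0 is the artificial root.
    Returns head[d] for each node (head[0] is unused)."""
    n = len(score)
    head = [0] * n
    for d in range(1, n):              # best incoming edge per node
        head[d] = max((h for h in range(n) if h != d), key=lambda h: score[h][d])
    cycle = _find_cycle(head)
    if cycle is None:
        return head
    in_cycle = set(cycle)
    rest = [v for v in range(n) if v not in in_cycle]   # keeps root at index 0
    idx = {v: i for i, v in enumerate(rest)}
    c = len(rest)                      # index of the contracted cycle node
    new = [[float("-inf")] * (c + 1) for _ in range(c + 1)]
    enter, leave = {}, {}
    for u in rest:
        for w in rest:
            if u != w:
                new[idx[u]][idx[w]] = score[u][w]
        # edge u -> cycle: gain of replacing v's cycle edge with u -> v
        v = max(in_cycle, key=lambda x: score[u][x] - score[head[x]][x])
        new[idx[u]][c] = score[u][v] - score[head[v]][v]
        enter[u] = v
        if u != 0:                     # edge cycle -> u
            v2 = max(in_cycle, key=lambda x: score[x][u])
            new[c][idx[u]] = score[v2][u]
            leave[u] = v2
    sub = chu_liu_edmonds(new)         # solve the contracted problem
    result = [0] * n
    for v in in_cycle:                 # keep cycle edges by default...
        result[v] = head[v]
    for d in range(1, c + 1):
        if d == c:                     # ...but break the cycle where it is entered
            u = rest[sub[d]]
            result[enter[u]] = u
        else:
            w = rest[d]
            result[w] = leave[w] if sub[d] == c else rest[sub[d]]
    return result

# toy graph: root 0, words 1 and 2; the greedy picks 1<->2 form a cycle,
# which the contraction step breaks in favour of root -> 1 -> 2
weights = [[0, 5, 1],
           [0, 0, 10],
           [0, 10, 0]]
print(chu_liu_edmonds(weights))  # [0, 0, 1]
```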

SLIDE 8

(5) Unlabeled Dependency Tree

(figure: the resulting unlabeled dependency tree over Rudolph/NNP relaxes/VBZ abroad/RB)

dependency tree = maximum spanning tree

SLIDE 9

(6) Labeled Dependency Tree

(figure: the tree with edge labels Predicate, Subject, Adverbial)

labels assigned by a second-stage labeler

SLIDE 10

RUR Parser

• reimplementation of the MST Parser
  • (so far only) first-order, non-projective
• adapted for parsing SMT outputs:
  • parallel features
  • "worsening" the training treebank

SLIDE 11

English-to-Czech SMT

• Czech language:
  • highly inflected
  • 4 genders, 2 numbers, 7 cases, 3 persons...
  • Czech grammar requires agreement between related words
  • word order relatively free: word order errors not crucial
• phrase-based SMT often makes inflection errors:

➔ Rudolph's car is black.

✗ Rudolfova/fem auto/neut je černý/masc.

✔ Rudolfovo/neut auto/neut je černé/neut.

SLIDE 12

Parser Training Data

• Prague Czech-English Dependency Treebank:
  • parallel treebank
  • 50k sentences, 1.2M words
  • morphological tags, surface syntax, deep syntax
  • word alignment

SLIDE 13

Parallel Features

• word alignment (using GIZA++)
• additional features (if an aligned node exists):
  • aligned tag (NNS, VBD...)
  • aligned dependency label (Subject, Attribute...)
  • aligned edge existence (0/1)
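The three parallel features above can be sketched as follows, assuming a word alignment from target (Czech) nodes to source (English) nodes. The data layout and feature names are illustrative, not the parser's own.

```python
def parallel_features(node, align, src_tags, src_labels):
    """Extra features for a target node, if an aligned source node exists."""
    s = align.get(node)
    if s is None:
        return []
    return [f"src_tag={src_tags[s]}",        # aligned tag (NNS, VBD...)
            f"src_label={src_labels[s]}"]    # aligned dependency label

def aligned_edge_exists(head, dep, align, src_heads):
    """0/1 feature: are the aligned source nodes also head and dependent?"""
    sh, sd = align.get(head), align.get(dep)
    return int(sh is not None and sd is not None and src_heads[sd] == sh)

# English source: Rudolph(0) relaxes(1) abroad(2); relaxes is the root (-1)
src_tags = ["NNP", "VBZ", "RB"]
src_labels = ["Subj", "Pred", "Adv"]
src_heads = [1, -1, 1]
# Czech target: Rudolf(0) relaxuje(1) v(2) zahraničí(3); "v" is unaligned
align = {0: 0, 1: 1, 3: 2}

print(parallel_features(0, align, src_tags, src_labels))  # ['src_tag=NNP', 'src_label=Subj']
print(aligned_edge_exists(1, 0, align, src_heads))        # 1
```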

SLIDE 14

Parallel Features Example

(figure: aligned dependency trees of Czech "Rudolf relaxuje v zahraničí" and English "Rudolph relaxes abroad", with morphological tags, dependency labels Pred/Subj/Adv/AuxP, and word-alignment links)

SLIDE 15

Worsening the Treebank

• the treebank used for training contains correct sentences
• SMT output is noisy:
  • grammatical errors
  • incorrect word order
  • missing/superfluous words
  • ...
• let's introduce similar errors into the treebank!
• so far, we have only tried inflection errors

SLIDE 16

Worsen (1): Apply SMT

• translate the English side of PCEDT to Czech
  • by an SMT system (we used Moses)
• now we have (e.g.):
  • Gold English: Rudolph's car is black.
  • Gold Czech: Rudolfovo/neut auto/neut je černé/neut.
  • SMT Czech: Rudolfova/fem auto/neut je černý/masc.

SLIDE 17

Worsen (2): Align SMT to Gold

• align SMT Czech to Gold Czech
• Monolingual Greedy Aligner:
  • alignment link score = linear combination of:
    • similarity of word forms (or lemmas)
    • similarity of morphological tags (fine-grained)
    • similarity of positions in the sentence
    • indication whether the preceding/following words are aligned
  • repeat: align the best-scoring pair until the score drops below a threshold
  • no training: weights and threshold set manually
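The greedy loop can be sketched as follows. For brevity this sketch scores pairs with only form and position similarity (the real aligner also uses tag similarity and neighbour-alignment indicators), and the weights and threshold are illustrative hand-set values.

```python
from difflib import SequenceMatcher

def form_sim(a, b):
    """String similarity of two word forms, in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def greedy_align(smt, gold, w_form=0.7, w_pos=0.3, threshold=0.4):
    """Greedily link best-scoring (smt, gold) word pairs above a threshold."""
    scores = {}
    for i, a in enumerate(smt):
        for j, b in enumerate(gold):
            # similarity of relative positions in the two sentences
            pos_sim = 1.0 - abs(i / max(len(smt) - 1, 1) - j / max(len(gold) - 1, 1))
            scores[(i, j)] = w_form * form_sim(a, b) + w_pos * pos_sim
    links = []
    while scores:
        (i, j), s = max(scores.items(), key=lambda kv: kv[1])
        if s < threshold:
            break
        links.append((i, j))
        # each word may be linked at most once
        scores = {k: v for k, v in scores.items() if k[0] != i and k[1] != j}
    return sorted(links)

smt_cz = ["Rudolfova", "auto", "je", "černý"]
gold_cz = ["Rudolfovo", "auto", "je", "černé"]
print(greedy_align(smt_cz, gold_cz))  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```

Note that the misinflected forms (Rudolfova/Rudolfovo, černý/černé) still align, because their stems and positions match; this is exactly what the worsening pipeline needs.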

SLIDE 18

Worsen (3): Create Error Model

• for each tag:
  • estimate the probabilities of the SMT system using an incorrect tag instead of the correct tag (Maximum Likelihood Estimate)
• Czech tagset: fine-grained morphological tags
  • part-of-speech, gender, number, case, person, tense, voice...
  • 1500 different tags in the training data
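The Maximum Likelihood Estimate amounts to relative counts over aligned (gold tag, SMT tag) pairs, which can be sketched as:

```python
from collections import Counter, defaultdict

def build_error_model(aligned_tag_pairs):
    """Estimate P(smt_tag | gold_tag) by relative counts (MLE)."""
    counts = defaultdict(Counter)
    for gold_tag, smt_tag in aligned_tag_pairs:
        counts[gold_tag][smt_tag] += 1
    return {gold_tag: {t: n / sum(c.values()) for t, n in c.items()}
            for gold_tag, c in counts.items()}

# toy aligned pairs: AAMP7 kept 6 times, changed to AAMS1 twice, AAMP1 twice
pairs = ([("AAMP7", "AAMP7")] * 6
         + [("AAMP7", "AAMS1")] * 2
         + [("AAMP7", "AAMP1")] * 2)
model = build_error_model(pairs)
print(model["AAMP7"]["AAMS1"])  # 0.2
```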

SLIDE 19

Worsen (3): Error Model

• Adjective, Masculine, Plural, Instrumental case (AAMP7), e.g. lingvistickými (linguistic):
  ➔ 0.2 Adjective, Masculine, Singular, Nominative case, e.g. lingvistický
  ➔ 0.1 Adjective, Masculine, Plural, Nominative case, e.g. lingvističtí
  ➔ 0.1 Adjective, Neuter, Singular, Accusative case, e.g. lingvistické
• ... altogether 2000 such change rules

SLIDE 20

Worsen (4): Apply Error Model

• take Gold Czech
• for each word:
  • assign a new tag, randomly sampled according to the Tag Error Model
  • generate a new word form:
    • rule-based generator, generates even unseen forms
    • new_form = generate_form(lemma, tag) || old_form
• → get Worsened Czech
• use the resulting Gold English-Worsened Czech parallel treebank to train the parser
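The sampling step above can be sketched as follows. The toy error model and `generate_form` are hypothetical stand-ins for the real tag error model and rule-based morphological generator; falling back to the old form mirrors new_form = generate_form(lemma, tag) || old_form.

```python
import random

def worsen(words, tags, error_model, generate_form, rng=None):
    """Sample a (possibly incorrect) tag per word and regenerate its form."""
    rng = rng or random.Random(0)
    out = []
    for word, tag in zip(words, tags):
        # tags unseen in the error model are kept unchanged
        dist = error_model.get(tag, {tag: 1.0})
        new_tag = rng.choices(list(dist), weights=list(dist.values()))[0]
        # fall back to the original form if the generator has no output
        out.append(generate_form(word, new_tag) or word)
    return out

# toy error model and generator (hypothetical tags and forms)
model = {"AANS1": {"AANS1": 0.8, "AAMS1": 0.2}}
forms = {("černé", "AANS1"): "černé", ("černé", "AAMS1"): "černý"}
generate = lambda lemma, tag: forms.get((lemma, tag))

print(worsen(["auto", "černé"], ["NNNS1", "AANS1"], model, generate))
```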

SLIDE 21

Direct Evaluation by Inspection

• manual inspection of several parse trees
  • comparing baseline and adapted parser outputs
• examples of improvements:
  • subject identified even if not in the nominative case
  • adjective-noun dependency identified even if agreement is violated (gender, number, case)
• hard to do reliably
  • trying to find a correct parse tree for an (often) incorrect sentence – not well defined

SLIDE 22

Indirect Evaluation: in Depfix

• rule-based grammar correction of SMT outputs
• input = aligned, tagged and parsed sentences:
  • target (Czech) sentence – to be corrected
  • source (English) sentence – additional information
• applies 20 correction rules:
  • noun – adjective agreement (gender, number, case)
  • subject – predicate agreement (gender, number)
  • preposition – noun agreement (case)
  • ...
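One such rule can be sketched as follows: when an adjective depends on a noun, project the noun's gender/number/case into the adjective's tag and regenerate its form. The 4-character tags (POS, gender, number, case) and the toy generator are simplified stand-ins, not Depfix's actual representation.

```python
def fix_adj_noun_agreement(tags, heads, generate_form, forms):
    """Enforce gender/number/case agreement of adjectives with their noun heads."""
    fixed = list(forms)
    for dep, head in enumerate(heads):
        if head < 0:                  # skip the root
            continue
        if tags[dep][0] == "A" and tags[head][0] == "N":
            new_tag = "A" + tags[head][1:4]   # copy gender, number, case
            if new_tag != tags[dep]:
                fixed[dep] = generate_form(forms[dep], new_tag) or forms[dep]
                tags[dep] = new_tag
    return fixed

# "Rudolfova auto" (fem adjective under neut noun) -> "Rudolfovo auto"
gen_forms = {("Rudolfova", "ANS1"): "Rudolfovo"}
gen = lambda form, tag: gen_forms.get((form, tag))
print(fix_adj_noun_agreement(["AFS1", "NNS1"], [1, -1], gen, ["Rudolfova", "auto"]))
# ['Rudolfovo', 'auto']
```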

SLIDE 23

Depfix: Rudolph's Car

(figure: adjective – noun agreement correction; "Rudolfova/AA-fem auto/NN-neut" is corrected to "Rudolfovo/AA-neut auto/NN-neut", using the Atr edges of the parsed English "Rudolph's car")

SLIDE 24

Indirect Evaluation Results

• differences in Depfix corrections evaluated by humans: better / worse / indefinite
• three different parsers:
  • RUR + parallel features + worsened treebank
  • original McDonald's MST Parser
  • RUR – our baseline setup

                                               better  worse  indefinite
RUR + parallel features + worsened treebank      51%    30%     18%
RUR                                              54%    28%     18%

SLIDE 25

Conclusion

• SMT outputs are often hard to parse
• RUR parser – adapted to parsing SMT outputs:
  • parallel features (tag, dependency label, edge existence)
  • worsening the training treebank (tag error model)
• outputs of English-to-Czech translation
• evaluated in Depfix:
  • an SMT error correction system

SLIDE 26

Future Work

• more sophisticated parallel features
• more experiments on worsening
• more languages
• parallel tagging

SLIDE 27

Thank you for your attention

For this presentation and other information, visit: http://ufal.mff.cuni.cz/~rosa/depfix/

Rudolf Rosa, Ondřej Dušek, David Mareček, Martin Popel {rosa,odusek,marecek,popel}@ufal.mff.cuni.cz Charles University in Prague Faculty of Mathematics and Physics Institute of Formal and Applied Linguistics