
SLIDE 1

Recursive Neural Structural Correspondence Network for Cross-domain Aspect and Opinion Co-extraction

Wenya Wang†‡ and Sinno Jialin Pan†

†Nanyang Technological University, Singapore ‡SAP Innovation Center Singapore

{wa0001ya, sinnopan}@ntu.edu.sg

July 18, 2018

SLIDE 2

Outline

1. Introduction: Background, Definition & Motivation, Overview & Contribution
2. Model Architecture
3. Experiments
4. Conclusion

SLIDE 3

Outline

1. Introduction: Background, Definition & Motivation, Overview & Contribution
2. Model Architecture
3. Experiments
4. Conclusion

SLIDE 4

Background: What is Aspect/Opinion Extraction

Fine-grained Opinion Mining

Figure 1: An example of review outputs.

◮ Our focus: Aspect and Opinion Terms Co-extraction
◮ Challenge: Limited resources for fine-grained annotations

SLIDE 5

Background: What is Aspect/Opinion Extraction

Fine-grained Opinion Mining

Figure 1: An example of review outputs.

◮ Our focus: Aspect and Opinion Terms Co-extraction
◮ Challenge: Limited resources for fine-grained annotations

⇒ Cross-domain extraction

SLIDE 6

Outline

1. Introduction: Background, Definition & Motivation, Overview & Contribution
2. Model Architecture
3. Experiments
4. Conclusion

SLIDE 7

Problem Definition

1 Task formulation: Sequence labeling

Example input: "The phone has a good screen size", labeled token by token as N N N N BO B I.
Label set: B (beginning of aspect), I (inside of aspect), BO (beginning of opinion), IO (inside of opinion), N (none).

Figure 2: A deep learning model for sequence labeling.

2 Domain Adaptation
◮ Given: labeled data in the source domain $\mathcal{D}_S = \{(\mathbf{x}_i^S, \mathbf{y}_i^S)\}_{i=1}^{n_S}$ and unlabeled data in the target domain $\mathcal{D}_T = \{\mathbf{x}_j^T\}_{j=1}^{n_T}$
◮ Idea: Build bridges across domains, learn a shared space
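To make the tag scheme concrete, here is a small illustrative sketch (not from the paper; the helper and variable names are our own) that decodes aspect and opinion spans from the tag sequence of the Figure 2 example:

```python
# Illustrative only: the B/I/BO/IO/N tag scheme applied to the Figure 2 example,
# plus a tiny helper that recovers aspect/opinion term spans from the tags.
tokens = ["The", "phone", "has", "a", "good", "screen", "size"]
labels = ["N",   "N",     "N",   "N", "BO",   "B",      "I"]   # "good" = opinion, "screen size" = aspect

def spans(tokens, labels):
    """Recover (type, term) spans from a B/I/BO/IO/N tag sequence."""
    out, cur, kind = [], [], None
    for tok, lab in zip(tokens, labels):
        if lab in ("B", "BO"):                 # a new aspect/opinion span starts
            if cur:
                out.append((kind, " ".join(cur)))
            cur, kind = [tok], ("aspect" if lab == "B" else "opinion")
        elif lab in ("I", "IO") and cur:       # continue the current span
            cur.append(tok)
        else:                                  # "N" closes any open span
            if cur:
                out.append((kind, " ".join(cur)))
            cur, kind = [], None
    if cur:
        out.append((kind, " ".join(cur)))
    return out

print(spans(tokens, labels))   # [('opinion', 'good'), ('aspect', 'screen size')]
```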

SLIDE 8

Motivation: Domain Adaptation

1 Domain shift & bridges

Figure 3: Domain shift for different domains.
Figure 4: Syntactic patterns.

SLIDE 9

Motivation: Domain Adaptation

1 Domain shift & bridges

Figure 3: Domain shift for different domains.
Figure 4: Syntactic patterns.

2 Related work
◮ Adaptive bootstrapping [Li et al., 2012]
◮ Auxiliary task with a recurrent neural network [Ding et al., 2017]

SLIDE 10

Outline

1. Introduction: Background, Definition & Motivation, Overview & Contribution
2. Model Architecture
3. Experiments
4. Conclusion

SLIDE 11

Overview & Contribution

Recursive Neural Structural Correspondence Network (RNSCN)

◮ Structural correspondences are built based on common syntactic structures
◮ Use relation vectors with auxiliary labels to learn a shared space across domains

Label denoising auto-encoder
◮ Deal with auxiliary label noise
◮ Group relation vectors into their intrinsic clusters in an unsupervised manner

A joint deep model

SLIDE 12

Outline

1. Introduction: Background, Definition & Motivation, Overview & Contribution
2. Model Architecture
3. Experiments
4. Conclusion

SLIDE 13

Model Architecture: Recursive Neural Network

Figure 5: A recursive neural network built over a dependency parse tree (relations: root, nsubj, dobj, amod), with word vectors x_n, hidden vectors h_n, and relation vectors r_nm on the arcs.

Domain Adaptation

Relation vectors: relations as embeddings in the feature space

$\mathbf{r}_{43} = \tanh(W_h \mathbf{h}_3 + W_x \mathbf{x}_4)$
$\mathbf{h}_4 = \tanh(W_{\mathrm{amod}} \mathbf{r}_{43} + W_x \mathbf{x}_4 + \mathbf{b})$

SLIDE 14

Model Architecture: Recursive Neural Network

Figure 5: A recursive neural network over a dependency parse tree, now with an auxiliary relation label y_nm attached to each relation vector r_nm.

Domain Adaptation

Relation vectors: relations as embeddings in the feature space

$\mathbf{r}_{43} = \tanh(W_h \mathbf{h}_3 + W_x \mathbf{x}_4)$
$\mathbf{h}_4 = \tanh(W_{\mathrm{amod}} \mathbf{r}_{43} + W_x \mathbf{x}_4 + \mathbf{b})$

Auxiliary task: dependency relation prediction

$\hat{\mathbf{y}}^{R}_{43} = \mathrm{softmax}(W_R \mathbf{r}_{43} + \mathbf{b}_R)$
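A minimal numpy sketch of this composition step, with assumed dimensions and randomly initialized weights (not the authors' implementation): it builds r_43 from the child hidden state h_3 and the parent word vector x_4, predicts the dependency relation from r_43, and composes h_4 through a relation-specific matrix.

```python
import numpy as np

d = 50                                    # word/hidden dimensionality (assumption)
relations = ["amod", "nsubj", "dobj", "det", "root"]
rng = np.random.default_rng(0)

W_h = rng.normal(size=(d, d)) * 0.1       # maps a child's hidden state
W_x = rng.normal(size=(d, d)) * 0.1       # maps a word embedding
b = np.zeros(d)
W_rel = {rel: rng.normal(size=(d, d)) * 0.1 for rel in relations}  # one matrix per relation type
W_R = rng.normal(size=(len(relations), d)) * 0.1
b_R = np.zeros(len(relations))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Child node 3 ("good") modifies parent node 4 ("appetizers") via an amod arc.
h3 = np.tanh(rng.normal(size=d))          # child hidden state (already computed)
x4 = rng.normal(size=d)                   # parent word embedding

r43 = np.tanh(W_h @ h3 + W_x @ x4)        # relation vector r_43
y43_hat = softmax(W_R @ r43 + b_R)        # auxiliary prediction of the dependency relation
h4 = np.tanh(W_rel["amod"] @ r43 + W_x @ x4 + b)   # parent hidden state h_4

print(relations[int(y43_hat.argmax())], h4.shape)
```

Because the relation matrices and the auxiliary classifier are shared across domains, the relation vectors are pushed into a space that is comparable between source and target trees.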

SLIDE 15

Model Architecture: Learn Shared Representations

Recursive Neural Structural Correspondence Network (RNSCN)

Figure 6: An example of how RNSCN learns the correspondences. A source dependency tree (over words such as "good", "appetizers") and a target dependency tree (over "The laptop has a nice screen") share dependency relations such as amod, nsubj, dobj, and det.

SLIDE 16

Model Architecture: Learn Shared Representations

Recursive Neural Structural Correspondence Network (RNSCN)

Figure 6: An example of how RNSCN learns the correspondences. Relation vectors (e.g. r_43 in the source, r_65 in the target) carry auxiliary relation labels (y_43, y_65, y_64, ...) that tie corresponding syntactic substructures in the two domains together.

SLIDE 17

Model Architecture: Learn Shared Representations

Recursive Neural Structural Correspondence Network (RNSCN)

Figure 6: An example of how RNSCN learns the correspondences, with relation vectors and auxiliary relation labels attached to every dependency arc in both the source and target trees.

SLIDE 18

Model Architecture: Learn Shared Representations

Figure 6: An example of how RNSCN learns the correspondences. A GRU layer is stacked on top of the RNSCN hidden states h_n of the source and target sentences to produce the final token representations (RNSCN-GRU).
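A sketch of the RNSCN-GRU idea, assuming a unidirectional PyTorch GRU and illustrative sizes (the actual tagger may differ, e.g. a CRF output layer as in the RNSCN-CRF variant): a GRU runs over the per-token RNSCN states and a linear layer scores the five tags for each token.

```python
import torch
import torch.nn as nn

class GRUTagger(nn.Module):
    """GRU over per-token RNSCN states, followed by a per-token tag classifier."""
    def __init__(self, rnscn_dim=50, gru_dim=50, num_tags=5):
        super().__init__()
        self.gru = nn.GRU(rnscn_dim, gru_dim, batch_first=True)
        self.out = nn.Linear(gru_dim, num_tags)   # 5 tags: B, I, BO, IO, N

    def forward(self, h_rnscn):                   # h_rnscn: (batch, seq_len, rnscn_dim)
        seq, _ = self.gru(h_rnscn)                # contextualized token states
        return self.out(seq)                      # per-token tag logits

tagger = GRUTagger()
h = torch.randn(2, 7, 50)                         # e.g. RNSCN states for 2 sentences of 7 tokens
logits = tagger(h)                                # (2, 7, 5)
targets = torch.randint(0, 5, (2, 7))             # placeholder gold tags
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5), targets.reshape(-1))
print(logits.shape, float(loss))
```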

SLIDE 19

Model Architecture: Auxiliary Label Denoising

Figure 7: An autoencoder for label denoising. The source arc between "good" and "appetizers" carries a correct relation label (amod), while the corresponding target arc between "nice" and "screen" carries a noisy parser label (dobj), which motivates the denoising step.

SLIDE 20

Model Architecture: Auxiliary Label Denoising

Figure 7: An autoencoder for label denoising. Each relation vector (source and target) is mapped by an autoencoder to an intrinsic group embedding, from which the auxiliary relation label is predicted.

Reduce label noise with auto-encoders:

Encoding: $\mathbf{g}_{nm} = f_{\mathrm{enc}}(W_{\mathrm{enc}}, \mathbf{r}_{nm})$
Decoding: $\mathbf{r}'_{nm} = f_{\mathrm{dec}}(W_{\mathrm{dec}}, \mathbf{g}_{nm})$

Auxiliary task: $\hat{\mathbf{y}}^{R}_{nm} = \mathrm{softmax}(W_R \mathbf{g}_{nm})$
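A rough sketch of this encode/decode/predict pipeline (the concrete form of f_enc and f_dec is specified on the next slide; here we simply assume tanh-linear maps and illustrative dimensions): the relation vector r_nm is encoded into a group representation g_nm, decoded back to r'_nm, and the auxiliary relation label is predicted from g_nm rather than from r_nm.

```python
import numpy as np

d, d_g, K = 50, 20, 42          # relation dim, group dim, number of relation labels (assumed)
rng = np.random.default_rng(1)
W_enc = rng.normal(size=(d_g, d)) * 0.1
W_dec = rng.normal(size=(d, d_g)) * 0.1
W_R   = rng.normal(size=(K, d_g)) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

r_nm = rng.normal(size=d)                 # relation vector produced by the RNSCN
g_nm = np.tanh(W_enc @ r_nm)              # encoding: g_nm = f_enc(W_enc, r_nm)
r_rec = np.tanh(W_dec @ g_nm)             # decoding: r'_nm = f_dec(W_dec, g_nm)
y_hat = softmax(W_R @ g_nm)               # auxiliary task: predict the relation label from g_nm
print(r_rec.shape, int(y_hat.argmax()))
```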

SLIDE 21

Model Architecture: Auxiliary Label Denoising

Figure 8: An autoencoder for relation grouping. A relation vector r_nm is softly assigned to group embeddings g_1, ..., g_|G| through W_enc and decoded back through W_dec.

$$p(G_{nm} = i \mid \mathbf{r}_{nm}) = \frac{\exp(\mathbf{r}_{nm}^{\top} W_{\mathrm{enc}} \mathbf{g}_i)}{\sum_{j \in G} \exp(\mathbf{r}_{nm}^{\top} W_{\mathrm{enc}} \mathbf{g}_j)} \quad (1)$$

$$\mathbf{g}_{nm} = \sum_{i=1}^{|G|} p(G_{nm} = i \mid \mathbf{r}_{nm})\, \mathbf{g}_i \quad (2)$$

$$\ell_R = \ell_{R_1} + \alpha \ell_{R_2} + \beta \ell_{R_3} \quad (3)$$

$$\ell_{R_1} = \|\mathbf{r}_{nm} - W_{\mathrm{dec}} \mathbf{g}_{nm}\|_2^2, \qquad
\ell_{R_2} = -\sum_{k=1}^{K} \mathbf{y}^{R}_{nm}[k] \log \hat{\mathbf{y}}^{R}_{nm}[k], \qquad
\ell_{R_3} = \|I - \bar{G}^{\top} \bar{G}\|_F^2$$
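A numpy sketch of equations (1)-(3) with assumed sizes, trade-off values, and random initial parameters (illustrative only, not the released implementation): soft assignment of a relation vector to the group embeddings, the reconstruction loss, the auxiliary cross-entropy on the (possibly noisy) relation label, and the orthogonality penalty on the normalized group embeddings.

```python
import numpy as np

d, n_groups, K = 50, 20, 42                # relation dim, |G|, number of relation labels (assumed)
alpha, beta = 0.5, 0.1                     # trade-off weights (assumed values)
rng = np.random.default_rng(2)

G = rng.normal(size=(n_groups, d))         # group embeddings g_1..g_|G| as rows
W_enc = rng.normal(size=(d, d)) * 0.1      # encoder bilinear map
W_dec = rng.normal(size=(d, d)) * 0.1      # decoder: r'_nm = W_dec g_nm
W_R = rng.normal(size=(K, d)) * 0.1        # auxiliary relation classifier

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

r_nm = rng.normal(size=d)                  # relation vector produced by the RNSCN
y_R = np.zeros(K); y_R[7] = 1.0            # (possibly noisy) parser relation label, one-hot

p = softmax(r_nm @ W_enc @ G.T)            # eq. (1): p(G_nm = i | r_nm)
g_nm = p @ G                               # eq. (2): expected group embedding
y_hat = softmax(W_R @ g_nm)                # auxiliary prediction from the group embedding

l_R1 = np.sum((r_nm - W_dec @ g_nm) ** 2)                  # reconstruction loss
l_R2 = -np.sum(y_R * np.log(y_hat + 1e-12))                # cross-entropy on the relation label
G_bar = G / np.linalg.norm(G, axis=1, keepdims=True)       # normalized group embeddings
l_R3 = np.sum((np.eye(n_groups) - G_bar @ G_bar.T) ** 2)   # orthogonality penalty (Frobenius)
l_R = l_R1 + alpha * l_R2 + beta * l_R3                    # eq. (3)
print(l_R)
```

The orthogonality term encourages the group embeddings to stay distinct, so that relation vectors are pulled toward genuinely different clusters rather than collapsing onto one group.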

SLIDE 22

Outline

1. Introduction: Background, Definition & Motivation, Overview & Contribution
2. Model Architecture
3. Experiments
4. Conclusion

SLIDE 23

Experiments

Dataset  Description  # Sentences  Training  Testing
R        Restaurant   5,841        4,381     1,460
L        Laptop       3,845        2,884     961
D        Device       3,836        2,877     959

Table 1: Data statistics with number of sentences.

Table 2: Comparisons with different baselines.

SLIDE 24

Experiments

Injecting noise into syntactic relations

Model            |  R→L          |  R→D          |  L→R          |  L→D          |  D→R          |  D→L
                 |  AS     OP    |  AS     OP    |  AS     OP    |  AS     OP    |  AS     OP    |  AS     OP
RNSCN-GRU        | 37.77  62.35  | 33.02  57.54  | 53.18  71.44  | 35.65  60.02  | 49.62  69.42  | 45.92  63.85
RNSCN-GRU (r)    | 32.97  50.18  | 26.21  53.58  | 35.88  65.73  | 32.87  57.57  | 40.03  67.34  | 40.06  59.18
RNSCN+-GRU       | 40.43  65.85  | 35.10  60.17  | 52.91  72.51  | 40.42  61.15  | 48.36  73.75  | 51.14  71.18
RNSCN+-GRU (r)   | 39.27  59.41  | 33.42  57.24  | 45.79  69.96  | 38.21  59.12  | 45.36  72.84  | 50.45  68.05

Table 3: Effect of auto-encoders for auxiliary label denoising.

Word groups learned by the auto-encoders

Group 1: this, the, their, my, here, it, I, our, not
Group 2: quality, jukebox, maitre-d, sauces, portions, volume, friend, noodles, calamari
Group 3: in, slightly, often, overall, regularly, since, back, much, ago
Group 4: handy, tastier, white, salty, right, vibrant, first, ok
Group 5: get, went, impressed, had, try, said, recommended, call, love
Group 6: is, are, feels, believes, seems, like, will, would

Table 4: Case studies on word clustering.

SLIDE 25

Experiments

Figure 9: Sensitivity studies for L→D. (a) Opinion and aspect F1 vs. the trade-off parameter γ (0.1-1.0). (b) Opinion and aspect F1 vs. the number of groups |G| (5-40).

SLIDE 26

Domain Adaptation: Experiments

Figure 10: F1 vs. proportion of unlabeled target data (0/7 to 7/7), comparing Hier-Joint with our model, for (a) R→L and (b) D→L.

SLIDE 27

Outline

1. Introduction: Background, Definition & Motivation, Overview & Contribution
2. Model Architecture
3. Experiments
4. Conclusion

SLIDE 28

Conclusion

◮ A novel deep learning framework for cross-domain aspect and opinion term co-extraction.
◮ Embeds syntactic structure into a deep model to bridge the gap between different domains.
◮ Applies an auxiliary task to assist knowledge transfer.
◮ Addresses the negative effect of auxiliary label noise.
◮ Achieves promising results.

SLIDE 29

References

Ding, Y., Yu, J., and Jiang, J. (2017). Recurrent neural networks with auxiliary labels for cross-domain opinion target extraction. In AAAI.

Li, F., Pan, S. J., Jin, O., Yang, Q., and Zhu, X. (2012). Cross-domain co-extraction of sentiment and topic lexicons. In ACL.

SLIDE 30

Appendix: Domain Adaptation

Model         |  R→L          |  R→D          |  L→R          |  L→D          |  D→R          |  D→L
              |  AS     OP    |  AS     OP    |  AS     OP    |  AS     OP    |  AS     OP    |  AS     OP
CrossCRF      | 19.72  59.20  | 21.07  52.05  | 28.19  65.52  | 29.96  56.17  |  6.59  39.38  | 24.22  46.67
              | (1.82) (1.34) | (0.44) (1.67) | (0.58) (0.89) | (1.69) (1.49) | (0.49) (3.06) | (2.54) (2.43)
RAP           | 25.92  62.72  | 22.63  54.44  | 46.90  67.98  | 34.54  54.25  | 45.44  60.67  | 28.22  59.79
              | (2.75) (0.49) | (0.52) (2.20) | (1.64) (1.05) | (0.64) (1.65) | (1.61) (2.15) | (2.42) (4.18)
Hier-Joint    | 33.66  --     | 33.20  --     | 48.10  --     | 31.25  --     | 47.97  --     | 34.74  --
              | (1.47)        | (0.52)        | (1.45)        | (0.49)        | (0.46)        | (2.27)
RNCRF         | 24.26  60.86  | 24.31  51.28  | 40.88  66.50  | 31.52  55.85  | 34.59  63.89  | 40.59  60.17
              | (3.97) (3.35) | (2.57) (1.78) | (2.09) (1.48) | (1.40) (1.09) | (1.34) (1.59) | (0.80) (1.20)
RNGRU         | 24.23  60.65  | 20.49  52.28  | 39.78  62.99  | 32.51  52.24  | 38.15  64.21  | 39.44  60.85
              | (2.41) (1.04) | (2.68) (2.69) | (0.61) (0.95) | (1.12) (2.37) | (2.82) (1.11) | (2.79) (1.25)
RNSCN-CRF     | 35.26  61.67  | 32.00  52.81  | 53.38  67.60  | 34.63  56.22  | 48.13  65.06  | 46.71  61.88
              | (1.31) (1.35) | (1.48) (1.29) | (1.49) (0.99) | (1.38) (1.10) | (0.71) (0.66) | (1.16) (1.52)
RNSCN-GRU     | 37.77  62.35  | 33.02  57.54  | 53.18  71.44  | 35.65  60.02  | 49.62  69.42  | 45.92  63.85
              | (0.45) (1.85) | (0.58) (1.27) | (0.75) (0.97) | (0.77) (0.80) | (0.34) (2.27) | (1.14) (1.97)
RNSCNh-GRU    | 39.13  63.65  | 33.97  59.24  | 55.74  75.20  | 40.30  60.57  | 51.23  71.93  | 48.35  68.20
              | (1.23) (1.36) | (1.49) (1.59) | (2.27) (1.03) | (0.50) (0.93) | (0.42) (1.55) | (1.00) (1.11)
RNSCN+-GRU    | 40.43  65.85  | 35.10  60.17  | 52.91  72.51  | 40.42  61.15  | 48.36  73.75  | 51.14  71.18
              | (0.96) (1.50) | (0.62) (0.75) | (1.82) (1.03) | (0.70) (0.60) | (1.14) (1.76) | (1.68) (1.58)

Table 5: Comparisons with different baselines: mean F1 with standard deviations in parentheses (only aspect F1 is listed for Hier-Joint).