SLIDE 1

Introduction Methodology Evaluation Results Conclusion and Further Research

Towards an Efficient Combination of Similarity Measures for Semantic Relation Extraction

Alexander Panchenko

alexander.panchenko@student.uclouvain.be Université catholique de Louvain & Bauman Moscow State Technical University

5th December 2011 / CLAIM Seminar, BMSTU

Alexander Panchenko 1/30

SLIDE 2

Plan

1. Introduction
2. Methodology
3. Evaluation
4. Results
5. Conclusion and Further Research

SLIDE 3

Reference Papers

Panchenko A. Method for Automatic Construction of Semantic Relations Between Concepts of an Information Retrieval Thesaurus. // In Herald of the Voronezh State University. Series “Systems Analysis and Information Technologies”, vol.2, pages 131–139, 2011.

http://www.vestnik.vsu.ru/program/view/view.asp?sec=analiz&year=2010&num=02&f_name=2010-02-26

Panchenko A. Comparison of the Knowledge-, Corpus-, and Web-based Similarity Measures for Semantic Relations Extraction // Proceedings of the GEMS 2011 Workshop on Geometrical Models of Natural Language Semantics, EMNLP 2011, pages 11-21, 2011.

http://aclweb.org/anthology/W/W11/W11-2502.pdf

Panchenko A. Towards an Efficient Combination of Similarity Measures for Semantic Relation Extraction // Submitted to the Student Workshop of EACL 2012.

SLIDE 4

Semantic Relations

- r = ⟨ci, t, cj⟩ – a semantic relation, where ci, cj ∈ C and t ∈ T
- C – terms, e.g. radio or receiver operating characteristic
- T – semantic relation types, e.g. hyponymy or synonymy
- R ⊆ C × T × C – a set of semantic relations

SLIDE 5

Semantic Relations Example: Thesaurus

Figure: A part of the information retrieval thesaurus EuroVoc.

R = { ⟨energy-generating product, NT, energy industry⟩,
      ⟨energy technology, NT, energy industry⟩,
      ⟨petroleum, RT, fossil fuel⟩,
      ⟨energy technology, RT, oil technology⟩, ... }

SLIDE 7

General Problem: Automatic Thesaurus Construction

Figure: A pipeline for automatic thesaurus construction.

How is a thesaurus used?
- Query expansion and query suggestion
- Navigation and browsing over the corpus
- Visualization of the corpus

SLIDE 10

The Problem

Semantic Relations Extraction
Input: terms C, semantic relation types T
Output: lexico-semantic relations R̂ ∼ R

Pattern-based relation extraction, where patterns are built manually (Hearst, 1992) or semi-automatically (Snow, 2004):
(+) High precision
(–) Complexity and cost of pattern construction
(–) Patterns are highly task- and domain-dependent

Similarity-based relation extraction (Philippovich and Prokhorov, 2002; Grefenstette, 1994; Curran and Moens, 2002):
(–) Less precise
(+) Little or no manual work
(+) More adaptive across domains

SLIDE 15

Similarity-based Relation Extraction

State of the Art:
- There exist many heterogeneous similarity measures based on corpora, knowledge, the web, definitions, etc.
- Various measures provide complementary types of semantic information.
- This suggests their combination.

Research Questions:
- Which similarity measure is the best for relation extraction?
- How can similarity measures be efficiently combined so as to improve relation extraction?

SLIDE 16

The Key Contributions Up To Now

- A protocol for the evaluation of similarity-based relation extraction
- A comparison of 34 single measures
- Two combination methods – similarity fusion and relation fusion
- Six best combinations that outperform the single measures

SLIDE 20

Similarity-based Semantic Relations Extraction

Semantic Relations Extraction Algorithm
Input: terms C, similarity measure parameters P, kNN threshold k, min. similarity value γ
Output: semantic relations R̂ (unlabeled)

1. S ← sim(C, P)
2. S ← normalize(S)
3. R̂ ← threshold(S, k, γ)
4. return R̂

where:
- sim – a similarity measure
- normalize – similarity score normalization
- threshold – kNN thresholding: R̂ = ∪_{i=1}^{|C|} { ⟨ci, t, cj⟩ : cj ∈ top k% of terms most similar to ci ∧ sij ≥ γ }
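The extraction loop above can be sketched in Python. This is an illustrative toy, not the author's implementation: the function name, the normalization by the maximum score, and the example terms and similarity matrix are all assumptions.

```python
import numpy as np

def extract_relations(S, terms, k=20, gamma=0.1):
    """Return unlabeled relations (ci, cj) via kNN thresholding of a
    similarity matrix S: keep the top k% most similar terms to each term,
    provided their (normalized) similarity is at least gamma."""
    S = S / S.max()                         # simple score normalization
    n = len(terms)
    top = max(1, round(n * k / 100))        # number of neighbours to keep
    relations = set()
    for i in range(n):
        # the top-k% most similar terms to terms[i], excluding itself
        order = [j for j in np.argsort(-S[i]) if j != i][:top]
        for j in order:
            if S[i, j] >= gamma:
                relations.add((terms[i], terms[j]))
    return relations

# toy symmetric similarity matrix over three terms
terms = ["radio", "receiver", "sugar"]
S = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
rels = extract_relations(S, terms, k=50, gamma=0.3)
```

With k = 50% and γ = 0.3 only the strongly similar pair survives in both directions.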

SLIDE 23

Knowledge-based Measures (6)

Data: semantic network WordNet 3.0, corpus SemCor.

Variables:
- len(ci, cj) – length of the shortest path between terms ci and cj
- len(ci, lcs(ci, cj)) – length of the shortest path from ci to the lowest common subsumer (LCS) of ci and cj
- len(croot, lcs(ci, cj)) – length of the shortest path from the root term croot to the LCS of ci and cj
- P(c) – probability of the term c, estimated from a corpus
- P(lcs(ci, cj)) – probability of the LCS of ci and cj

Measures: Inverted Edge Count (Jurafsky and Martin, 2009), Leacock-Chodorow (1998), Wu-Palmer (1994), Resnik (1995), Jiang-Conrath (1997), Lin (1998).
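A few of the listed measures can be written directly from these variables. The functions below use common textbook forms of each formula (variants differ, e.g. in how path length is counted), and the path lengths, depths, and probabilities passed in are made-up toy values, not WordNet statistics.

```python
import math

def inverted_edge_count(length):
    """One common variant: sim = 1 / (len(ci, cj) + 1)."""
    return 1.0 / (length + 1)

def leacock_chodorow(length, max_depth):
    """sim = -log( len(ci, cj) / (2 * D) ), D = depth of the taxonomy."""
    return -math.log(length / (2.0 * max_depth))

def wu_palmer(len_i_lcs, len_j_lcs, depth_lcs):
    """sim = 2*depth(lcs) / (len(ci,lcs) + len(cj,lcs) + 2*depth(lcs))."""
    return 2.0 * depth_lcs / (len_i_lcs + len_j_lcs + 2.0 * depth_lcs)

def resnik(p_lcs):
    """sim = information content of the LCS: -log P(lcs(ci, cj))."""
    return -math.log(p_lcs)

def lin(p_i, p_j, p_lcs):
    """sim = 2 * log P(lcs) / (log P(ci) + log P(cj))."""
    return 2.0 * math.log(p_lcs) / (math.log(p_i) + math.log(p_j))
```

For instance, two terms one edge apart under an LCS at depth 3 get a Wu-Palmer score of 2·3 / (1 + 1 + 2·3) = 0.75.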

SLIDE 26

Web-based Measures (9)

Data: number of hits returned by an information retrieval system (GOOGLE, YAHOO, YAHOO BOSS, BING).

Variables:
- hi – number of hits returned by the query "ci"
- hij – number of hits returned by the query "ci AND cj"

Measures: NGD (Cilibrasi and Vitanyi, 2007), PMI-IR (Turney, 2001)
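Both measures are simple functions of the hit counts. The sketch below uses the standard published formulas; the counts and the index size N in the example call are made-up values, not real search-engine statistics.

```python
import math

def ngd(h_i, h_j, h_ij, n):
    """Normalized Google Distance (Cilibrasi and Vitanyi, 2007):
    (max(log hi, log hj) - log hij) / (log N - min(log hi, log hj))."""
    log_hi, log_hj, log_hij = math.log(h_i), math.log(h_j), math.log(h_ij)
    return (max(log_hi, log_hj) - log_hij) / (math.log(n) - min(log_hi, log_hj))

def pmi_ir(h_i, h_j, h_ij, n):
    """PMI-IR (Turney, 2001): pointwise mutual information estimated from
    hit counts, log2( P(ci, cj) / (P(ci) * P(cj)) )."""
    return math.log((h_ij / n) / ((h_i / n) * (h_j / n)), 2)

# related terms: frequent co-occurrence gives a small NGD (distance)
d = ngd(10_000, 20_000, 8_000, 10**10)
```

A sanity check: a term compared with itself has NGD 0, and statistically independent terms have PMI-IR 0.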

SLIDE 29

Corpus-based Measures (13)

Data: corpora WACKYPEDIA (800M tokens) and UKWAC (2000M tokens)

Variables:
- fi – context window feature vector of term ci
- fsi – syntactic feature vector of term ci

Measures: BDA (Sahlgren, 2006), SDA (Curran, 2003), LSA on the TASA corpus (Landauer and Dumais, 1997), NGD and PMI-IR on the Factiva corpus (Veksler et al., 2008).

SLIDE 30

Corpus-based Measures: Distributional Analysis

Distributional Similarity Measure
Input: terms C, corpus D, number of features β, min. term frequency θ, feature matrix construction parameters P
Output: similarity matrix S [C × C]

1. F ← construct_fmatrix(C, D, β, θ, P)
2. F ← pmi(F)
3. S ← cos(F)
4. return S

PMI normalization: fij ← log ( P(ci, fj) / (P(ci) P(fj)) )

Cosine similarity: sij = cos(fi, fj) = fi · fj / (‖fi‖ ‖fj‖)
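The PMI and cosine steps can be sketched on a toy term-feature count matrix. This is an illustrative assumption of how the pipeline might look (zero counts are simply left at zero rather than log-transformed); it is not the author's code.

```python
import numpy as np

def pmi(F):
    """Re-weight a term-feature count matrix F with pointwise mutual
    information: log( P(ci, fj) / (P(ci) * P(fj)) )."""
    total = F.sum()
    p_ij = F / total
    p_i = F.sum(axis=1, keepdims=True) / total   # term marginals
    p_j = F.sum(axis=0, keepdims=True) / total   # feature marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        M = np.log(p_ij / (p_i * p_j))
    M[~np.isfinite(M)] = 0.0                     # zero counts stay zero
    return M

def cosine_matrix(F):
    """Pairwise cosine similarities between the rows of F."""
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                      # avoid division by zero
    Fn = F / norms
    return Fn @ Fn.T

# toy counts: 3 terms x 3 context features
F = np.array([[4.0, 0.0, 1.0],
              [3.0, 1.0, 0.0],
              [0.0, 5.0, 2.0]])
S = cosine_matrix(pmi(F))
```

The resulting matrix is symmetric with ones on the diagonal, as expected of a similarity matrix.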

SLIDE 33

Definition-based Measures (6)

Data: definitions from WordNet, Wikipedia, and Wiktionary.

Variables:
- gloss(c) – definition of the term c
- sim(gloss(ci), gloss(cj)) – similarity of the terms' glosses
- fi – context vector of ci, calculated on the corpus of all glosses
- fi – bag-of-words vector derived from the definition of ci
- exist(ci, cj) – a relation between ci and cj in the dictionary

Measures: BDA using Wiktionary and Wikipedia; Extended Lesk using WordNet (Banerjee and Pedersen, 2003); Gloss Vectors using WordNet (Patwardhan and Pedersen, 2006).

SLIDE 34

Definition-based Measures

Wiktionary-based Similarity Measure
Input: terms C, flag UseWikipedia, number of features β
Output: similarity matrix S [C × C]

1. D ← get_wiktionary_definitions(C)
2. if UseWikipedia then D ← D ∪ get_wikipedia_definitions(C)
3. F ← construct_fmatrix(C, D, β)
4. F ← pmi(F)
5. S ← cos(F)
6. S ← update_similarity(S)
7. return S
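The core idea of a definition-based measure can be shown with a minimal sketch: represent each term by the bag of words of its gloss and compare the bags with cosine. The glosses below are made-up stand-ins for `get_wiktionary_definitions(C)`, and there is no PMI step or `update_similarity` here; this only illustrates the bag-of-words comparison.

```python
import math
from collections import Counter

GLOSSES = {  # hypothetical stand-in for fetched dictionary definitions
    "car":   "a road vehicle powered by an engine",
    "truck": "a large road vehicle used to carry goods",
    "sugar": "a sweet crystalline substance obtained from plants",
}

def bow(text):
    """Bag-of-words vector of a gloss, as a word -> count mapping."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def gloss_similarity(ci, cj):
    return cosine(bow(GLOSSES[ci]), bow(GLOSSES[cj]))
```

Terms whose definitions share words ("road vehicle") come out more similar than unrelated ones.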

SLIDE 35

Combined Measures

Similarity Fusion: Scmb = (1/N) Σ_{i=1}^{N} Si

Relation Fusion
Input: similarity matrices produced by N measures {S1, ..., SN}, kNN threshold k
Output: combined similarity matrix Scmb

1. for i = 1, ..., N do
     Ri ← threshold(Si, k, γ = 0)
     Ri ← relation_matrix(Ri)
2. Scmb ← (1/N) Σ_{i=1}^{N} Ri
3. return Scmb

where relation_matrix sets rij = 1 if ⟨ci, t, cj⟩ ∈ Ri and rij = 0 otherwise.
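The two fusion strategies can be sketched side by side. This is a toy rendering under stated assumptions: `knn_binarize` stands in for the slide's threshold + relation_matrix pair, and the two 3×3 matrices are invented.

```python
import numpy as np

def similarity_fusion(matrices):
    """Average the raw similarity matrices: Scmb = (1/N) * sum(Si)."""
    return sum(matrices) / len(matrices)

def knn_binarize(S, k_percent):
    """Binary relation matrix: rij = 1 iff cj is among the top-k% most
    similar terms to ci (gamma = 0, diagonal excluded)."""
    n = S.shape[0]
    top = max(1, round(n * k_percent / 100))
    R = np.zeros_like(S)
    for i in range(n):
        neighbours = [j for j in np.argsort(-S[i]) if j != i][:top]
        R[i, neighbours] = 1.0
    return R

def relation_fusion(matrices, k_percent=50):
    """Average the binarized relation matrices instead of the raw scores."""
    return sum(knn_binarize(S, k_percent) for S in matrices) / len(matrices)

S1 = np.array([[1.0, 0.9, 0.1],
               [0.9, 1.0, 0.2],
               [0.1, 0.2, 1.0]])
S2 = np.array([[1.0, 0.2, 0.8],
               [0.2, 1.0, 0.3],
               [0.8, 0.3, 1.0]])
S_sim = similarity_fusion([S1, S2])
S_rel = relation_fusion([S1, S2], k_percent=33)
```

Relation fusion discards the score magnitudes of the individual measures, so each entry of its output is the fraction of measures that voted for the relation.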

SLIDE 36

Combined Measures

Which of the 34 single measures should we combine? We consider three groups of measures:

- Group4 = WN-Resnik, BDA-3-5000, SDA-21-100000, Def-WktWiki-1000
- Group8 = Group4 + WN-WuPalmer, LSA-Tasa, Def-GlossVec., and Def-Ext.Lesk
- Group14 = Group8 + WN-LeacockChodorow, WN-Lin, WN-JiangConrath, NGD-Factiva, NGD-Yahoo, and NGD-GoogleWiki

SLIDE 39

Evaluation with Human Judgments

term ci       term cj     human sim. s   sim. ŝ   human rank r   sim. rank r̂
tiger         cat         7.35           0.85     1              3
book          paper       7.46           0.95     2              2
computer      keyboard    7.62           0.81     3              1
...           ...         ...            ...      ...            ...
possibility   girl        1.94           0.25     64             65
sugar         approach    0.88           0.05     65             23

Human judgments datasets:
- WordSim353 (Finkelstein, 2002) – 353 pairs
- Miller Charles (1991) – 30 pairs
- Rubenstein Goodenough (1965) – 65 pairs

Pearson's correlation: ρ = cov(s, ŝ) / (σ(s) σ(ŝ))

Spearman's correlation: r = cov(r, r̂) / (σ(r) σ(r̂))
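The two statistics differ only in what they correlate: Pearson uses the raw scores, Spearman uses their ranks. A minimal sketch (no tie handling, ranks assigned in ascending order; the example lists reuse values from the table above):

```python
import math

def pearson(x, y):
    """Pearson's correlation: cov(x, y) / (sigma(x) * sigma(y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

def ranks(x):
    """Rank of each value, 1 = smallest (direction does not matter as long
    as both lists are ranked the same way)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Spearman's correlation: Pearson computed on the ranks."""
    return pearson(ranks(x), ranks(y))

human = [7.35, 7.46, 7.62, 1.94, 0.88]   # human scores s
system = [0.85, 0.95, 0.81, 0.25, 0.05]  # measure scores s-hat
rho = pearson(human, system)
r = spearman(human, system)
```

Any strictly monotonic relation gives Spearman's correlation exactly 1, even when Pearson's does not.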

SLIDE 40

Evaluation with Semantic Relations

target term ci   relatum term cj   relation type t
judge            adjudicate        syn
judge            arbitrate         syn
judge            asessor           syn
judge            chancellor        syn
judge            gendarmerie       syn
judge            sheriff           syn
...              ...               ...
judge            pc                random
judge            fare              random
judge            lemon             random

The number of correct and random relations is equal for each target term!

Semantic relations datasets:
- BLESS (Baroni and Lenci, 2011) – 26554 relations (hyper, coord, mero, event, attri, random)
- SN (Panchenko, ?) – 14682 relations (syn, random)

SLIDE 41

Evaluation with Semantic Relations

Let:
- R – all semantic relations that are not random
- R̂ – extracted relations
- k – kNN threshold

Evaluation Metrics:
- Precision = |R ∩ R̂| / |R̂|
- Recall = |R ∩ R̂| / |R|
- F1 = 2 · Precision · Recall / (Precision + Recall)
- MAP(M) = (1/M) Σ_{k=1}^{M} Precision(k)
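These metrics can be computed over a ranked relation list, as in the aficionado example on the next slide. The sketch below is an illustrative assumption of how Precision(k) and MAP over kNN cutoffs might be coded; the label list encodes the 14 aficionado relations in decreasing similarity order.

```python
def precision_at(ranked, k_percent):
    """Precision over the top-k% of relations ranked by similarity."""
    top = ranked[: max(1, round(len(ranked) * k_percent / 100))]
    return sum(1 for label in top if label == "correct") / len(top)

def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if p + r else 0.0

def mean_average_precision(ranked, cutoffs):
    """MAP(M) = (1/M) * sum of Precision(k) over M cutoffs."""
    return sum(precision_at(ranked, k) for k in cutoffs) / len(cutoffs)

# aficionado example: 5 syn, 1 random, 2 syn, then 6 random relations
ranked = ["correct"] * 5 + ["random"] + ["correct"] * 2 + ["random"] * 6
p50 = precision_at(ranked, 50)   # 6 of the top 7 are correct
```

At the 50% cutoff the top 7 relations contain one random item, giving Precision(50%) = 6/7 ≈ 0.86, matching the worked example.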

SLIDE 42

Example: Evaluation with Semantic Relations

Precision(50%) = 6/7 ≈ 0.86

target word   relatum word   relation type   sim
aficionado    enthusiast     syn             0.07197
aficionado    fan            syn             0.05195
aficionado    admirer        syn             0.01964
aficionado    addict         syn             0.01326
aficionado    devotee        syn             0.01163
aficionado    foundling      random          0.00777
aficionado    fanatic        syn             0.00414
aficionado    adherent       syn             0.00353
aficionado    capital        random          0.00232
aficionado    statute        random          0.00029
aficionado    blot           random          0.00025
aficionado    meddler        random          0.00005
aficionado    enlargement    random          0.00003
aficionado    bawdyhouse     random          0.00000

SLIDE 43

Results on the Human Judgments Datasets

SLIDE 44

Results on the Semantic Relations Datasets

SLIDE 45

Precision-Recall Curves

Figure: PR graphs of (on the left) the best single and combined measures; (on the right) Wiktionary measures.

SLIDE 46

Precision-Recall Curves

Figure: PR graph of four combined measures.

SLIDE 47

Conclusion:

The best single measures:
- WordNet-based measure WN-Resnik
- Bag-of-words distributional measure BDA-3-5000
- Syntactic distributional measure SDA-21-100000
- Wiktionary measure Def-WktWiki-1000

The best combined measure:
- Relation fusion of 8 measures, Comb-Rel-810
- Very close to the combined measures using 14 measures

SLIDE 50

Further Research:

More Sophisticated Combination Methods:
- Unsupervised feature combination:
  - Bag-of-words features of Distributional Analysis + Wikipedia/Wiktionary/WordNet definitions
  - Feature tensor: jointly co-occurring DA features; tensor decompositions for better fusion
  - Similarity tensor: yet another similarity fusion technique
- Supervised linear combination of pairwise similarities
- Supervised linear combination of the features used by the single measures

SLIDE 53

Further Research:

Evaluation:
- Domain-specific terms and relations – Agrovoc, MeSH, etc.
- An application-based evaluation – query expansion

Methods:
- Corpus-based: DA with n-grams, surface patterns, LSA, LDA, syntactic tree kernels
- Web-based: more experiments with Google hits
- Knowledge-based: SimRank, random walks, and the like on the Wikipedia/Wiktionary/WordNet category lattice
- Surface-based: edit distance, longest common substring, etc.
- Relation types: a supervised model trained on a set of hyponyms, synonyms, etc.

SLIDE 54

Questions

Thank you! Questions?
