Learning Context Effects in Triadic Closure
Kiran Tomlinson (joint work with Austin R. Benson)
SINM 2020

Slides: bit.ly/lcl-slides | Preprint: bit.ly/lcl-paper | Code: bit.ly/lcl-code | Data: bit.ly/lcl-data

What factors drive edge formation?
Preferential attachment (Barabási & Albert, Science 1999)
Homophily (McPherson et al., Annual Review of Sociology 2001; Papadopoulos et al., Nature 2012)
Fitness (Bianconi & Barabási, Europhysics Letters 2001; Caldarelli et al., Physical Review Letters 2002)
Triadic closure (Rapoport, Bulletin of Mathematical Biophysics 1953; Jin et al., Physical Review E 2001)
Edge formation as discrete choice: a node selects a new neighbor from a choice set according to its preferences over node features (similarity, in-degree, fitness, …).
(Overgoor et al., SINM ’19 & WWW ’19; Gupta & Porter, arXiv 2020; Bruch & Feinberg, Annual Review of Sociology 2017)
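The discrete-choice framing can be sketched as a multinomial logit (MNL) over candidate nodes. A minimal sketch; the feature columns and preference weights here are illustrative, not values from the talk:

```python
import numpy as np

def mnl_choice_probs(theta, X):
    """Multinomial logit: Pr(choose j) ∝ exp(theta^T x_j) over the choice set.

    theta : (d,) preference vector over node features
    X     : (n, d) feature matrix, one row per candidate node
    """
    u = X @ theta
    u = u - u.max()              # shift for numerical stability
    expu = np.exp(u)
    return expu / expu.sum()

# Hypothetical choice set: three candidates with features
# [log in-degree, shared neighbors]
X = np.array([[2.0, 0.0],
              [0.5, 3.0],
              [1.0, 1.0]])
theta = np.array([0.5, 1.0])     # illustrative preference weights
probs = mnl_choice_probs(theta, X)
```

Under the MNL, the relative odds of two candidates depend only on their own features, regardless of who else is in the choice set; context effects are precisely deviations from that assumption.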
Context effects in consumer choice (Huber et al., Journal of Consumer Research 1982; Simonson, Journal of Consumer Research 1989; Simonson & Tversky, Journal of Marketing Research 1992)

Example (compromise effect): in the set {$10, $15, $20}, the $15 option is the appealing middle ground; shift the set to {$15, $20, $25} and the $20 option takes that role.

The linear context logit (LCL) captures such effects: Pr(i | C) ∝ exp((θ + A x_C)^T x_i), where θ is the vector of base preferences, A is the context effect matrix, and x_C is the mean feature vector of the choice set C.
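The LCL choice probability can be written down directly. A minimal numpy sketch, with illustrative feature values; note that setting A = 0 recovers a plain multinomial logit:

```python
import numpy as np

def lcl_choice_probs(theta, A, X):
    """Linear context logit: Pr(i | C) ∝ exp((theta + A x_C)^T x_i),
    where x_C is the mean feature vector of the choice set C.

    theta : (d,) base preference vector
    A     : (d, d) context effect matrix
    X     : (n, d) features of the candidates in C
    """
    x_C = X.mean(axis=0)               # mean features of the choice set
    u = X @ (theta + A @ x_C)          # context-adjusted utilities
    u = u - u.max()                    # shift for numerical stability
    expu = np.exp(u)
    return expu / expu.sum()

X = np.array([[1.0, 0.0],
              [0.0, 1.0]])
theta = np.array([1.0, 1.0])
A = np.zeros((2, 2))                   # no context effects: plain logit
probs = lcl_choice_probs(theta, A, X)
```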
Datasets (bit.ly/lcl-data): email-enron, email-eu, email-w3c, wiki-talk, reddit-hyperlink, bitcoin-alpha, bitcoin-otc, mathoverflow, college-msg, facebook-wall, sms-a, sms-b, sms-c
Estimation: maximum likelihood to infer the LCL

ℓ(θ, A; D) = ∑_{(i,C)∈D} [ (θ + A x_C)^T x_i − log ∑_{j∈C} exp((θ + A x_C)^T x_j) ]   (concave)

Node features (left to right, top to bottom):
1. in-degree
2. shared neighbors
3. reciprocal weight
4. send recency
5. receive recency
6. reciprocal recency

[Figure: learned context effect matrix A across L1 regularization levels (red: +, blue: −, white: 0; column acts on row); LCL negative log-likelihood (lower = better); likelihood-ratio test vs. MNL with significance threshold p < 0.001.]

Example learned context effects:
“Popularity matters less when choosing from close connections.”
“Close connections matter more when choosing from the popular.”
“Popularity matters less when your inbox is full of recent emails.”
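Since the log-likelihood is concave in (θ, A), standard convex solvers apply. A minimal sketch on synthetic data (the L1-regularized fits shown in the talk would add a penalty term this sketch omits; data and dimensions here are made up):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, choices, d):
    """Negative LCL log-likelihood over (winner index, candidate features) pairs."""
    theta, A = params[:d], params[d:].reshape(d, d)
    nll = 0.0
    for i, X in choices:
        u = X @ (theta + A @ X.mean(axis=0))
        m = u.max()                      # log-sum-exp for stability
        nll -= u[i] - (m + np.log(np.exp(u - m).sum()))
    return nll

# Synthetic choice data drawn from a known logit (true A = 0)
rng = np.random.default_rng(0)
d, true_theta = 2, np.array([1.0, -1.0])
choices = []
for _ in range(200):
    X = rng.normal(size=(4, d))          # 4 candidates per choice set
    p = np.exp(X @ true_theta)
    p /= p.sum()
    choices.append((rng.choice(4, p=p), X))

res = minimize(neg_log_likelihood, np.zeros(d + d * d),
               args=(choices, d), method="L-BFGS-B")
theta_hat = res.x[:d]                    # recovered base preferences
```

The likelihood-ratio test against the MNL then compares this maximized likelihood with that of the restricted A = 0 fit.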
Kiran Tomlinson and Austin R. Benson. Learning Interpretable Feature Context Effects in Discrete Choice. arXiv:2009.03417, September 2020.
bit.ly/lcl-paper
Slides: bit.ly/lcl-slides | Preprint: bit.ly/lcl-paper | Code: bit.ly/lcl-code | Data: bit.ly/lcl-data