SLIDE 1 Mining the Temporal Statistics of Query Terms for Searching Social Media Posts
Jinfeng Rao Ferhan Ture Xing Niu Jimmy Lin
ICTIR’17 Amsterdam
SLIDE 2
Task: Ad-hoc Search in the Social Media Domain
- Input: an interest profile (~topic), i.e., a user's standing query, over a stream of tweets.
- Output: a ranked list of tweets.
- Example query MB001: "BBC World Service staff cuts"
[Figure: interest profiles matched against the tweet stream to produce ranked lists]
SLIDE 3 Background
- Challenges for social media search
  - Posts are very short (140 characters for tweets).
  - Posts are written in a highly concise way and can be quite noisy.
  - Many abbreviations, misspellings, typos, emojis, hashtags, etc.
- Time is an important relevance signal
  - Relevant posts tend to cluster around the time breaking news happens.
  - Example query MB001 from TREC 2011: "BBC World Service staff cuts"
  - [Figure: distribution of relevant docs (ground truth); the x axis denotes the number of days prior to query time; the height of a bar denotes the number of relevant docs in that time interval.]
SLIDE 4 Combine Lexical and Temporal Evidence
- Moving window, Dakka et al. TKDE'12 [2]
- Kernel density estimation (KDE), Efron et al. SIGIR'14 [3]
    f̂_ω(x) = (1/(nh)) Σ_{i=1}^{n} ω_i K((x − x_i)/h)
- Recurrent neural networks, Rao et al. Neu-IR'17 [4]
- However, these works all require two-stage retrieval:
  - Initial retrieval: estimate the ground truth distribution (pseudo trend).
  - Second retrieval: rerank docs with the estimated pseudo trend.
- [Figure: pseudo trend estimated over the timeline]
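The weighted KDE above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: `weighted_kde` is a hypothetical name, a Gaussian kernel is assumed for K, and the weights stand in for the uniform-, score-, or rank-based schemes the baselines use.

```python
import numpy as np

def weighted_kde(timestamps, weights, grid, bandwidth):
    """Weighted kernel density estimate with a Gaussian kernel:
    f_hat(x) = (1/h) * sum_i w_i * K((x - x_i)/h), weights normalized to 1."""
    x = np.asarray(timestamps, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so the density integrates to 1
    grid = np.asarray(grid, dtype=float)
    u = (grid[:, None] - x[None, :]) / bandwidth   # kernel argument per (grid, sample)
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel values
    return (k * w[None, :]).sum(axis=1) / bandwidth
```

With rank-based weights (as in IRDr), documents retrieved at higher ranks simply contribute larger `weights` entries; the estimator itself is unchanged.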
SLIDE 5 Research Question
- An example of ground truth and term trends for query MB127
  "hagel nomination filibustered" from the TREC 2013 topic set.
- [Figure: ground truth vs. term trends; the two show a strong correlation.]
- Term trends: term frequencies in the collection, counted for each 5-minute interval.
- Research question: can we make use of the temporal statistics of query terms
  (term trends) to predict the ground truth?
SLIDE 6
Approach: Temporal Modeling via Regression
- [Figure: term trends approximating the ground truth]
- Goal: approximate the ground truth (Y) by taking a weighted sum of all term trends (f_t).
SLIDE 7 Term Importance Modeling
- Bursty terms tend to be more informative.
- We use entropy to measure the importance of terms.
- Given the counts {c1, c2, …, cn} of a particular term t (unigram/bigram) over time bins,
  compute the entropy of the normalized count distribution.
- Lower entropy = burstier term trend = more important.
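The entropy measure can be sketched directly from the definition; `trend_entropy` is an illustrative name, and natural log is assumed (any base preserves the ordering).

```python
import math

def trend_entropy(counts):
    """Shannon entropy of a term's temporal count distribution.
    Bursty terms concentrate counts in a few bins, giving low entropy."""
    total = sum(counts)
    probs = (c / total for c in counts if c > 0)  # skip empty bins
    return -sum(p * math.log(p) for p in probs)
```

A flat trend like `[20, 20, 20, 20, 20]` attains the maximum entropy log(5), while a bursty trend like `[0, 0, 90, 10, 0]` scores much lower, and is therefore treated as more important.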
SLIDE 8 Approach: Temporal Modeling via Regression
Two questions in this non-linear regression modeling:
- Q1: How to model the weights of different query terms?
- Q2: How to differentiate the contributions of unigrams and bigrams?
- Q1 solution: exponential mapping from entropy to term weight.
- Q2 solution: assume unigram weight u_i; then the bigram weight is (1 − u_i),
  where R_i is the difference between the maximum unigram entropy and the maximum
  bigram entropy. Intuition: R_i > 0 ⇒ max(unigram_entropy) > max(bigram_entropy) ⇒ u_i > 0.5.
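One plausible parameterization of the two solutions above is sketched below; the exact functional forms are in the paper, so both function names and the sigmoid choice for u_i are assumptions that merely satisfy the slide's stated intuition (exponential decay in entropy; u_i > 0.5 exactly when R_i > 0).

```python
import math

def term_weight(entropy, gamma=1.0):
    # Exponential mapping: lower entropy (burstier trend) -> larger weight.
    # gamma is a hypothetical scaling parameter.
    return math.exp(-gamma * entropy)

def unigram_share(max_uni_entropy, max_bi_entropy):
    # R_i = max unigram entropy - max bigram entropy; a sigmoid keeps
    # u_i in (0, 1) and yields u_i > 0.5 exactly when R_i > 0.
    r_i = max_uni_entropy - max_bi_entropy
    return 1.0 / (1.0 + math.exp(-r_i))
```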
SLIDE 9 Approach: Temporal Modeling via Regression
- Problem reformulation:
- Objective Loss:
which can be solved with a gradient descent algorithm (more details in the paper).
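The gradient descent step can be illustrated with an unconstrained least-squares sketch: fit weights so that the weighted sum of term trends approximates the target. This drops the paper's entropy-based parameterization of the weights, so treat it as a toy version of the objective, with `fit_trend_weights` a hypothetical name.

```python
import numpy as np

def fit_trend_weights(term_trends, target, lr=0.1, steps=2000):
    """Fit weights a so that a @ F approximates the target trend,
    minimizing mean squared error by plain gradient descent.
    term_trends F: (num_terms, num_bins); target y: (num_bins,)."""
    F = np.asarray(term_trends, dtype=float)
    y = np.asarray(target, dtype=float)
    a = np.zeros(F.shape[0])
    for _ in range(steps):
        resid = a @ F - y                      # error in each time bin
        a -= lr * 2.0 * (F @ resid) / y.size   # gradient of the MSE
    return a
```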
SLIDE 10 Combine Term Trend with Pseudo Trend
- Combine term trend and pseudo trend in a linear ranking model:
- Two ways to estimate the ground truth distribution:
  - Document-level: pseudo trend through an initial retrieval.
  - Term-level: regression over term trends.
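The linear ranking model can be sketched as below. The log-space combination follows the style of KDE temporal feedback, but `rank_score` and the interpolation weights `alpha`/`beta` are illustrative assumptions, not the paper's exact scoring function.

```python
import math

def rank_score(ql_score, pseudo_density, term_density, alpha=0.3, beta=0.3):
    """Hypothetical linear ranking model: the query-likelihood (log)
    score plus log temporal densities at the document's timestamp."""
    eps = 1e-9  # guard against zero density
    return (ql_score
            + alpha * math.log(pseudo_density + eps)
            + beta * math.log(term_density + eps))
```

When two documents have the same lexical score, the one posted at a time bin with higher estimated density ranks higher.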
SLIDE 11 Experimental Setup
- Topic set: TREC Microblog Track 2013 and 2014, 115 topics in total.
- Collection: Tweets2013 (~243 million tweets)
- Metrics: Mean Average Precision (AP) and Precision at 30 (P30)
- Three data splits:
- Odd-even: odd-numbered topics (57) for training, even-numbered topics (58) for testing
- Even-odd: switch train/test split
- Cross: 4-fold cross validation
SLIDE 12 Baselines
- 1. Query likelihood (QL)
- 2. Recency prior, Li et al. CIKM'03 [1]
- 3. Moving window, Dakka et al. TKDE'12 [2]
- 4. Kernel Density Estimation (KDE), Efron et al. SIGIR’14 [3]
- Uniform-based weighting (IRDu)
- Score-based weighting (IRDs)
- Rank-based weighting (IRDr)
- Oracle (upper bound)
SLIDE 13 Main Results
- Conclusions:
- KDE with rank-based weights (IRDr) is the strongest baseline.
- Our approach (Reg-IRDr) significantly outperforms all baselines, and even approaches the upper bound in some splits.
SLIDE 14
Randomized Experiments
Average improvement over QL baseline summarized over 30 random train/test splits.
SLIDE 15
Per-Topic Analysis
Per-topic P30 improvement against the Query Likelihood (QL) and the best KDE baseline (IRDr).
SLIDE 16 Analysis of the Best-Performing Topic 144
- How do term trend signals help?
- [Figure: red = ground truth distribution; green = pseudo trend estimated by the best KDE method (IRDr); blue = term trends.]
- Conclusion: a combination of pseudo trend (KDE) and term trend (our approach) provides a more accurate estimate of the ground truth distribution.
SLIDE 17 Conclusion
- We are the first to study temporal statistics of query terms for social media search.
- Our learning-to-rank and regression models show this new signal is effective.
- For efficiency, use our term trend modeling technique alone.
- For effectiveness, use the combination of pseudo trend and term trend modeling.
SLIDE 18
Thanks for listening! Any questions?
SLIDE 19
Reference
1. Xiaoyan Li and W. Bruce Croft. 2003. Time-Based Language Models. In CIKM. 469–475.
2. Wisam Dakka, Luis Gravano, and Panagiotis G. Ipeirotis. 2012. Answering General Time-Sensitive Queries. TKDE.
3. Miles Efron, Jimmy Lin, Jiyin He, and Arjen de Vries. 2014. Temporal Feedback for Tweet Search with Non-Parametric Density Estimation. In SIGIR. 33–42.
4. Jinfeng Rao, Hua He, Haotian Zhang, Ferhan Ture, Royal Sequiera, Salman Mohammed, and Jimmy Lin. 2017. Integrating Lexical and Temporal Signals in Neural Ranking Models for Social Media Search. In SIGIR Workshop on Neural Information Retrieval (Neu-IR).