Introduction to Information Retrieval
http://informationretrieval.org
IIR 12: Language Models for IR
Hinrich Schütze, Institute for Natural Language Processing


A probabilistic language model

    w      P(w | q1)       w      P(w | q1)
    STOP   0.2             toad   0.01
    the    0.2             said   0.03
    a      0.1             likes  0.02
    frog   0.01            that   0.04
    ...                    ...

This is a one-state probabilistic finite-state automaton – a unigram language model – and the state emission distribution for its one state q1. STOP is not a word, but a special symbol indicating that the automaton stops.

String: frog said that toad likes frog STOP
P(string) = 0.01 · 0.03 · 0.04 · 0.01 · 0.02 · 0.01 · 0.2 = 0.0000000000048 = 4.8 · 10^-12
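To make the computation concrete, here is a minimal Python sketch of the slide's unigram model, assuming a plain dictionary representation; the emission probabilities are the ones shown above, and the names (`M`, `string_probability`) are illustrative, not from the source.

```python
# Unigram language model: one state, one emission distribution P(w | q1).
# The probabilities are the (partial) distribution from the slide.
M = {"STOP": 0.2, "the": 0.2, "a": 0.1, "frog": 0.01,
     "toad": 0.01, "said": 0.03, "likes": 0.02, "that": 0.04}

def string_probability(model, tokens):
    """Probability of a token sequence (ending in STOP) under a unigram model."""
    p = 1.0
    for t in tokens:
        p *= model.get(t, 0.0)  # unseen tokens get probability 0 (no smoothing yet)
    return p

print(string_probability(M, "frog said that toad likes frog STOP".split()))
# ≈ 4.8e-12, matching the slide
```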

A different language model for each document

    language model of d1                  language model of d2
    w      P(w | .)   w      P(w | .)     w      P(w | .)   w      P(w | .)
    STOP   .2         toad   .01          STOP   .2         toad   .02
    the    .2         said   .03          the    .15        said   .03
    a      .1         likes  .02          a      .08        likes  .02
    frog   .01        that   .04          frog   .01        that   .05
    ...               ...                 ...               ...

query: frog said that toad likes frog STOP

P(query | Md1) = 0.01 · 0.03 · 0.04 · 0.01 · 0.02 · 0.01 · 0.2 = 0.0000000000048 = 4.8 · 10^-12
P(query | Md2) = 0.01 · 0.03 · 0.05 · 0.02 · 0.02 · 0.01 · 0.2 = 0.0000000000120 = 12 · 10^-12

P(query | Md1) < P(query | Md2): thus, document d2 is "more relevant" to the query "frog said that toad likes frog STOP" than d1 is.
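Continuing the sketch above (it reuses `string_probability`), this ranks the two document models by query likelihood; `M_d1` and `M_d2` are illustrative names for the partial tables from the slide.

```python
# Emission distributions for the two document models (partial, from the slide).
M_d1 = {"STOP": 0.2, "the": 0.2, "a": 0.1, "frog": 0.01,
        "toad": 0.01, "said": 0.03, "likes": 0.02, "that": 0.04}
M_d2 = {"STOP": 0.2, "the": 0.15, "a": 0.08, "frog": 0.01,
        "toad": 0.02, "said": 0.03, "likes": 0.02, "that": 0.05}

query = "frog said that toad likes frog STOP".split()
scores = {name: string_probability(m, query)
          for name, m in [("d1", M_d1), ("d2", M_d2)]}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
# [('d2', ≈1.2e-11), ('d1', ≈4.8e-12)]: d2 ranks above d1
```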

Outline

1. Statistical language models
2. Statistical language models in IR
3. Discussion

Using language models in IR

Each document is treated as (the basis for) a language model. Given a query q, we rank documents based on P(d | q), which Bayes' rule lets us rewrite as

    P(d \mid q) = \frac{P(q \mid d)\, P(d)}{P(q)}

P(q) is the same for all documents, so we can ignore it. P(d) is the prior – often treated as the same for all d, but we can give a higher prior to "high-quality" documents, e.g., those with high PageRank. P(q | d) is the probability of q given d. Under the assumptions we made, ranking documents according to P(q | d) P(d) and ranking according to P(d | q) are equivalent.

How to compute P(q | d)

We will make the same conditional independence assumption as in BIM:

    P(q \mid M_d) = P(\langle t_1, \ldots, t_{|q|} \rangle \mid M_d) = \prod_{1 \le k \le |q|} P(t_k \mid M_d)

(|q|: length of q; t_k: the token occurring at position k in q)

This is equivalent to:

    P(q \mid M_d) = \prod_{\text{distinct term } t \text{ in } q} P(t \mid M_d)^{\mathrm{tf}_{t,q}}

(tf_{t,q}: term frequency, i.e., # occurrences, of t in q)

This is a multinomial model (omitting the constant multinomial-coefficient factor).
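As a sanity check, here is a sketch (reusing `M_d1` from the earlier snippet) showing that the position-wise product and the distinct-term power form compute the same value; the function names are illustrative.

```python
from collections import Counter

def query_likelihood_tokens(model, query_tokens):
    """Position-wise product: prod over positions k of P(t_k | M_d)."""
    p = 1.0
    for t in query_tokens:
        p *= model.get(t, 0.0)
    return p

def query_likelihood_tf(model, query_tokens):
    """Equivalent form: prod over distinct terms t of P(t | M_d) ** tf_{t,q}."""
    p = 1.0
    for t, tf in Counter(query_tokens).items():
        p *= model.get(t, 0.0) ** tf
    return p

q = "frog said that toad likes frog STOP".split()
assert abs(query_likelihood_tokens(M_d1, q) - query_likelihood_tf(M_d1, q)) < 1e-20
```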

Parameter estimation

Missing piece: Where do the parameters P(t | M_d) come from?

Start with maximum likelihood estimates:

    \hat{P}(t \mid M_d) = \frac{\mathrm{tf}_{t,d}}{|d|}

(|d|: length of d; tf_{t,d}: # occurrences of t in d)

We have a problem with zeros: a single t in the query with P(t | M_d) = 0 will make P(q \mid M_d) = \prod_t P(t \mid M_d) zero. We would give a single term in the query "veto power". For example, for the query [Michael Jackson top hits], a document about "Michael Jackson top songs" (but not using the word "hits") would have P(q | M_d) = 0. That's bad. We need to smooth the estimates to avoid zeros.
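A small sketch of the veto problem, using a hypothetical document matching the slide's example and reusing `Counter` and `query_likelihood_tokens` from above:

```python
def mle_model(doc_tokens):
    """Maximum likelihood estimate: P(t | M_d) = tf_{t,d} / |d|."""
    counts = Counter(doc_tokens)
    return {t: c / len(doc_tokens) for t, c in counts.items()}

doc = "michael jackson top songs".split()   # hypothetical document text
q = "michael jackson top hits".split()
print(query_likelihood_tokens(mle_model(doc), q))
# 0.0 – "hits" never occurs in the document, so it vetoes the whole query
```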

Smoothing

Key intuition: a nonoccurring term is possible (even though it didn't occur), but no more likely than would be expected by chance in the collection.

Notation: M_c: the collection model; cf_t: the number of occurrences of t in the collection; T = \sum_t \mathrm{cf}_t: the total number of tokens in the collection.

    \hat{P}(t \mid M_c) = \frac{\mathrm{cf}_t}{T}

We will use \hat{P}(t \mid M_c) to "smooth" P(t | d) away from zero.

Jelinek-Mercer smoothing

    P(t \mid d) = \lambda P(t \mid M_d) + (1 - \lambda) P(t \mid M_c)

Mixes the probability from the document with the general collection frequency of the word.

High value of λ: "conjunctive-like" search – tends to retrieve documents containing all query words.
Low value of λ: more disjunctive, suitable for long queries.
Tuning λ is important for good performance.
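A minimal sketch of Jelinek-Mercer scoring, building on `mle_model` above; `collection_model` and `jm_score` are illustrative names, not from the source.

```python
def collection_model(docs):
    """Collection model: P(t | M_c) = cf_t / T over all documents."""
    cf = Counter(t for d in docs for t in d)
    T = sum(cf.values())
    return {t: c / T for t, c in cf.items()}

def jm_score(doc_tokens, query_tokens, M_c, lam=0.5):
    """Query likelihood with Jelinek-Mercer smoothing:
    P(q | d) = prod_k [lam * P(t_k | M_d) + (1 - lam) * P(t_k | M_c)]."""
    M_d = mle_model(doc_tokens)
    p = 1.0
    for t in query_tokens:
        p *= lam * M_d.get(t, 0.0) + (1 - lam) * M_c.get(t, 0.0)
    return p
```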

Jelinek-Mercer smoothing: Summary

    P(q \mid d) \propto \prod_{1 \le k \le |q|} \bigl( \lambda P(t_k \mid M_d) + (1 - \lambda) P(t_k \mid M_c) \bigr)

What we model: the user has a document in mind and generates the query from this document. P(q | d) is the probability that the document that the user had in mind was in fact this one.

Example

Collection: d1 and d2
d1: Jackson was one of the most talented entertainers of all time
d2: Michael Jackson anointed himself King of Pop
Query q: Michael Jackson

Use the mixture model with λ = 1/2:

P(q | d1) = [(0/11 + 1/18) / 2] · [(1/11 + 2/18) / 2] ≈ 0.003
P(q | d2) = [(1/7 + 1/18) / 2] · [(1/7 + 2/18) / 2] ≈ 0.013

Ranking: d2 > d1
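The sketch above reproduces this worked example (lowercasing so that "Jackson" and "jackson" match; `jm_score` and `collection_model` are the illustrative helpers defined earlier):

```python
d1 = "Jackson was one of the most talented entertainers of all time".lower().split()
d2 = "Michael Jackson anointed himself King of Pop".lower().split()
M_c = collection_model([d1, d2])
q = "michael jackson".split()
print(jm_score(d1, q, M_c, lam=0.5))  # ≈ 0.0028, the slide's 0.003
print(jm_score(d2, q, M_c, lam=0.5))  # ≈ 0.0126, the slide's 0.013
```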

Dirichlet smoothing

    P(t \mid d) = \frac{\mathrm{tf}_{t,d} + \alpha \hat{P}(t \mid M_c)}{L_d + \alpha}

(L_d: length of d)

The background distribution P(t | M_c) is the prior for P(t | d). Intuition: before having seen any part of the document, we start with the background distribution as our estimate; as we read the document and count terms, we update the background distribution. The weighting factor α determines how strong an effect the prior has.
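A corresponding sketch of Dirichlet-smoothed scoring, reusing `Counter` from above; the helper name and the default α are assumptions (values of α around 2000 are a common choice in the literature, not a value from these slides).

```python
def dirichlet_score(doc_tokens, query_tokens, M_c, alpha=2000):
    """Query likelihood with Dirichlet smoothing:
    P(t | d) = (tf_{t,d} + alpha * P(t | M_c)) / (L_d + alpha)."""
    tf = Counter(doc_tokens)
    L_d = len(doc_tokens)
    p = 1.0
    for t in query_tokens:
        p *= (tf.get(t, 0) + alpha * M_c.get(t, 0.0)) / (L_d + alpha)
    return p
```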

Jelinek-Mercer or Dirichlet?

Dirichlet performs better for keyword queries; Jelinek-Mercer performs better for verbose queries. Both models are sensitive to the smoothing parameters, so you shouldn't use these models without parameter tuning.

Sensitivity of Dirichlet to smoothing parameter

[Figure: retrieval performance as a function of µ, the Dirichlet smoothing parameter (called α on the previous slides).]

Vector space (tf-idf) vs. LM

    Rec.               tf-idf precision   LM precision   %chg
    0.0                0.7439             0.7590         +2.0
    0.1                0.4521             0.4910         +8.6
    0.2                0.3514             0.4045         +15.1 *
    0.4                0.2093             0.2572         +22.9 *
    0.6                0.1024             0.1405         +37.1 *
    0.8                0.0160             0.0432         +169.6 *
    1.0                0.0028             0.0050         +76.9
    11-point average   0.1868             0.2233         +19.6 *

(* marks a statistically significant difference.)

The language modeling approach always does better in these experiments, but note that where the approach shows significant gains is at higher levels of recall.

Summary: IR language models
