  1. INF3800/INF4800 Søketeknologi (Search Technology), 2017.01.16

  2. Lecturer: Aleksander Øhrn, Professor II, aleksaoh@ifi.uio.no

  3. Teaching assistants: Camilla Emina Stenberg (camilest@student.matnat.uio.no) and Eirik Isene (eirikise@ifi.uio.no)

  4. Syllabus (pensum): http://nlp.stanford.edu/IR-book/information-retrieval-book.html +

  5. Introduction

  6. The Sweet Spot: the intersection of Distributed Systems, Information Retrieval, and Language Technology

  7. Web Search

  8. alltheweb.com 1999-2003

  9. Enterprise Search: much more than intranets

  10. Data Centers: alltheweb.com, 2000

  11. Data Centers: Microsoft, 2010. http://www.youtube.com/watch?v=K3b5Ca6lzqE http://www.youtube.com/watch?v=PPnoKb9fTkA

  12. Search Platform Anatomy: The 50,000-Foot View. Content flows through crawler → document processing → indexer → index; queries flow through front end → query processing → search (against the index) → result processing; plus a data mining component.
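
As a rough illustration, the anatomy above can be sketched as a chain of small functions. Everything here is invented for illustration (stub crawler, trivial parsing, toy index); it is not the API of any real platform.

```python
# A purely illustrative sketch of the pipeline; the crawler is stubbed and
# all names are invented, not taken from any real search platform.

def crawl(seed_urls):
    """Crawler: fetch raw documents from the content sources (stubbed)."""
    for url in seed_urls:
        yield {"url": url, "raw": "<p>hello search world</p>"}

def process_document(doc):
    """Document processing: format/encoding/language detection, parsing, ..."""
    doc["text"] = doc["raw"].replace("<p>", " ").replace("</p>", " ")
    return doc

def build_index(docs):
    """Indexer: build a toy term -> [url, ...] inverted index."""
    index = {}
    for doc in docs:
        for term in doc["text"].lower().split():
            index.setdefault(term, []).append(doc["url"])
    return index

def search(index, query):
    """Query processing + search: look each query term up in the index."""
    return {term: index.get(term.lower(), []) for term in query.split()}

index = build_index(process_document(d) for d in crawl(["http://example.com"]))
print(search(index, "search"))  # {'search': ['http://example.com']}
```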

  13. Scaling
  • Content Volume – How many documents are there? How large are the documents?
  • Content Complexity – How many fields does each document have? How complex are the field structures?
  • Query Traffic – How many queries per second are there? What is the latency per query?
  • Update Frequency – How often does the content change?
  • Indexing Latency – How quickly must new data become searchable?
  • Query Complexity – How many query terms are there? What is the type and structure of the query terms?

  14. Scaling: scale with content volume by partitioning the data, and scale with query traffic by replicating the partitions.
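
A minimal sketch of the two scaling axes, assuming hash-based partitioning and random replica selection (both invented for illustration):

```python
import hashlib
import random

NUM_PARTITIONS = 4  # grows with content volume
NUM_REPLICAS = 3    # grows with query traffic

def partition_of(doc_id: str) -> int:
    """Content volume: each document lives in exactly one partition."""
    return int(hashlib.md5(doc_id.encode()).hexdigest(), 16) % NUM_PARTITIONS

def replica_of(partition: int) -> tuple[int, int]:
    """Query traffic: any replica of a partition can serve a query."""
    return (partition, random.randrange(NUM_REPLICAS))

# A query fans out to one replica of every partition (one "row" of nodes):
print([replica_of(p) for p in range(NUM_PARTITIONS)])
print(partition_of("doc-42"))  # the single partition that indexes this doc
```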

  15. Crawling The Web

  16. Processing The Content
  • Format detection: HTML, PDF, Word, Excel, PowerPoint, XML, Zip, …
  • Encoding detection: UTF-8, ISCII, KOI8-R, Shift-JIS, ISO-8859-1, …
  • Language detection: English, Polish, Danish, Japanese, Norwegian, …
  • Parsing: title, headings, body, navigation, ads, footnotes, …
  • Tokenization: “buljongterning”, “30,000”, “L’Hôpital’s rule”, “台湾研究”, …
  • Character normalization: Øhrn, Ohrn, Oehrn, Öhrn, …
  • Lemmatization: go, went, gone; car, cars; silly, sillier, silliest
  • Decompounding: “Rindfleischetikettierungsüberwachungsaufgabenübertragungsgesetz”, …
  • Entity extraction: persons, companies, locations, dates, quotations, …
  • Relationship extraction: who said what, who works where, events, what happened when, …
  • Sentiment analysis: positive or negative, liberal or conservative, …
  • Classification: Sports, Health, World, Politics, Entertainment, Spam, Offensive Content, …
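
A toy sketch of three of these stages; the folding table and lemma dictionary are invented stand-ins for real language resources:

```python
import unicodedata

FOLD = {"Ø": "O", "ø": "o"}   # Ø does not decompose under NFKD, so map it explicitly
LEMMAS = {"went": "go", "gone": "go", "cars": "car", "sillier": "silly"}

def normalize_chars(text: str) -> str:
    """Character normalization: Öhrn -> Ohrn, Øhrn -> Ohrn, ..."""
    text = "".join(FOLD.get(c, c) for c in text)
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def tokenize(text: str) -> list[str]:
    """Tokenization: naive whitespace splitting; real tokenizers must handle
    numbers like "30,000", clitics like "L'Hôpital's", and unsegmented CJK."""
    return text.lower().split()

def lemmatize(token: str) -> str:
    """Lemmatization: map inflected forms to a base form (go/went/gone -> go)."""
    return LEMMAS.get(token, token)

print([lemmatize(t) for t in tokenize(normalize_chars("Öhrn went home"))])
# ['ohrn', 'go', 'home']
```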

  17. Creating The Index
  Word     Document  Position
  tea      4         22
  tea      4         32
  tea      4         76
  tea      8         3
  teacart  8         7
  teach    2         102
  teach    2         233
  teach    8         77
  teacher  2         57
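
A minimal positional inverted index in the spirit of the table above; the documents here are toy inputs, so the postings differ from the slide's:

```python
from collections import defaultdict

def build_index(docs: dict[int, str]) -> dict[str, list[tuple[int, int]]]:
    """Map each word to a sorted list of (document, position) postings."""
    index = defaultdict(list)
    for doc_id, text in sorted(docs.items()):
        for position, word in enumerate(text.lower().split()):
            index[word].append((doc_id, position))
    return index

docs = {2: "they teach and teach well", 4: "tea tea tea", 8: "tea teacart"}
index = build_index(docs)
print(index["tea"])    # [(4, 0), (4, 1), (4, 2), (8, 0)]
print(index["teach"])  # [(2, 1), (2, 3)]
```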

  18. Deploying The Index

  19. Processing The Query. Example queries: “I am looking for fish restaurants near Majorstua”, “LED TVs between $1000 and $2000”, “hphotos-snc3 fbcdn”, “brintney speers pics”, “23445 + 43213”

  20. Searching The Content: assess relevancy as we go along. http://www.stanford.edu/class/cs276/handouts/lecture2-dictionary.pdf
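
The linked CS276 handout covers dictionaries and postings lists; below is a sketch of the standard merge of two sorted postings lists, assessing a relevancy score as each matching document is found. The summed-frequency score is an invented stand-in for a real ranking function.

```python
def intersect_and_score(postings_a, postings_b):
    """postings_a/postings_b: sorted lists of (doc_id, term_frequency)."""
    hits, i, j = [], 0, 0
    while i < len(postings_a) and j < len(postings_b):
        doc_a, tf_a = postings_a[i]
        doc_b, tf_b = postings_b[j]
        if doc_a == doc_b:
            hits.append((doc_a, tf_a + tf_b))  # toy score: summed frequencies
            i, j = i + 1, j + 1
        elif doc_a < doc_b:
            i += 1
        else:
            j += 1
    return sorted(hits, key=lambda h: -h[1])

print(intersect_and_score([(1, 3), (4, 1), (7, 2)], [(4, 2), (7, 5), (9, 1)]))
# [(7, 7), (4, 3)]
```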

  21. Searching The Content: Federation. The federator performs query processing, dispatching, merging, and result processing; the search nodes perform searching and caption generation. “Divide and conquer.”
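
A minimal scatter-gather sketch of federation, with stubbed search nodes and a score-based merge; all names and scores are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def search_node(node_id, query):
    """Stand-in for a remote search node returning (score, hit) pairs."""
    return [(1.0 / (node_id + rank + 1), f"node{node_id}-hit{rank}")
            for rank in range(3)]

def federate(query, num_nodes=4, k=5):
    with ThreadPoolExecutor() as pool:               # dispatch ("divide")
        partials = pool.map(lambda n: search_node(n, query), range(num_nodes))
    merged = sorted((hit for part in partials for hit in part), reverse=True)
    return merged[:k]                                # merge ("conquer")

print(federate("example query"))
```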

  22. Searching The Content: Tiering
  • Organize the search nodes in a row into multiple tiers.
  • Top-tier nodes may have fewer documents and run on better hardware.
  • Keep the good stuff in the top tiers.
  • Only fall through to the lower tiers if not enough good hits are found in the top tiers.
  • Analyze query logs to decide which documents belong in which tiers.
  “All search nodes are equal, but some are more equal than others.”
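
A sketch of the fall-through logic described above; the thresholds and the tier stub are invented for illustration:

```python
from dataclasses import dataclass

MIN_GOOD_HITS = 2   # invented threshold
GOOD_SCORE = 0.5    # invented threshold

@dataclass
class Hit:
    doc: str
    score: float

class Tier:
    def __init__(self, hits): self.hits = hits
    def search(self, query): return list(self.hits)

def tiered_search(tiers, query):
    """Query the top tier first; fall through only while good hits are scarce."""
    results = []
    for tier in tiers:
        results.extend(tier.search(query))
        if sum(1 for r in results if r.score >= GOOD_SCORE) >= MIN_GOOD_HITS:
            break  # enough good hits in the upper tiers; stop falling through
    return sorted(results, key=lambda r: -r.score)

tiers = [Tier([Hit("a", 0.9)]),
         Tier([Hit("b", 0.7), Hit("c", 0.2)]),
         Tier([Hit("d", 0.1)])]
print(tiered_search(tiers, "q"))  # stops after tier 2; tier 3 is never queried
```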

  23. Searching The Content: Context Drilling. Contexts, from widest to most superior:
  • Body, headings, title, click-through queries, anchor texts
  • Headings, title, click-through queries, anchor texts
  • Title, click-through queries, anchor texts
  • Click-through queries, anchor texts
  “If the result set is too large, only consider the superior contexts.”

  24. Relevancy. Signals that combine into a relevancy score:
  • Match context: anchor texts, click-through queries, tags, …; title, anchor texts, headings, body, …; crowdsourced annotations
  • Document quality: page rank, link cardinality, item profit margin, popularity, …
  • Basic statistics: term frequency, inverse document frequency, completeness in superior contexts, proximity, …
  • Timeliness: freshness, date of publication, buzz factor, …
  “Maximize the normalized discounted cumulative gain (NDCG).”
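
NDCG, as named on the slide, can be computed directly: the discounted cumulative gain of the ranking, normalized by the DCG of the ideal (relevance-sorted) ranking. A short sketch using the common log2 discount:

```python
import math

def dcg(relevances):
    return sum(rel / math.log2(rank + 2)            # rank 0 -> log2(2)
               for rank, rel in enumerate(relevances))

def ndcg(relevances):
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Graded relevance labels of the top 5 results, in ranked order:
print(ndcg([3, 2, 3, 0, 1]))  # ~0.97 for this near-ideal ranking
```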

  25. Processing The Results
  • Faceted browsing: What are the distributions of data across the various document fields? “Local” versus “global” metadata.
  • Result arbitration: Which results from which sources should be displayed in a federation setting? How should the SERP layout be rendered?
  • Unsupervised clustering: Can we automatically organize the result set by grouping similar items together?
  • Last-minute security trimming: Does the user still have access to each result?
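
A sketch of facet counting for faceted browsing, i.e. the distribution of field values across the result set; the field names and documents are invented:

```python
from collections import Counter

results = [
    {"brand": "Acme", "color": "black"},
    {"brand": "Acme", "color": "white"},
    {"brand": "Globex", "color": "black"},
]

def facets(results, field):
    """Count the distribution of a field's values across the result set."""
    return Counter(doc[field] for doc in results if field in doc)

print(facets(results, "brand"))  # Counter({'Acme': 2, 'Globex': 1})
print(facets(results, "color"))  # Counter({'black': 2, 'white': 1})
```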

  26. Data Mining

  27. Applications

  28. Spellchecking: http://www.google.com/jobs/britney.html

  29. Spellchecking. Example query: “britnay spears vidios”. Generate candidates per term (britney/bridney/birtney, shears/speaks, videos/vidoes/vidies), then find the best path.
  1. Generate a set of candidates per query term using approximate matching techniques. Score each candidate according to, e.g., “distance” from the query term and usage frequency.
  2. Find the best path in the lattice using the Viterbi algorithm. Use, e.g., candidate scores and bigram statistics to guide the search.
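
A sketch of step 2, running Viterbi over the candidate lattice; the candidate scores and bigram probabilities are invented for illustration:

```python
import math

candidates = [
    {"britney": 0.8, "bridney": 0.1, "birtney": 0.1},  # for "britnay"
    {"spears": 0.7, "shears": 0.2, "speaks": 0.1},     # for "spears"
    {"videos": 0.8, "vidoes": 0.1, "vidies": 0.1},     # for "vidios"
]

def bigram(prev, word):
    """Toy bigram model: strongly favor the famous phrase."""
    favored = {("britney", "spears"): 0.9, ("spears", "videos"): 0.9}
    return favored.get((prev, word), 0.01)

def viterbi(candidates):
    # best[word] = (log-probability of the best path ending in word, path)
    best = {w: (math.log(p), [w]) for w, p in candidates[0].items()}
    for column in candidates[1:]:
        new_best = {}
        for word, p in column.items():
            new_best[word] = max(
                (score + math.log(bigram(prev, word)) + math.log(p),
                 path + [word])
                for prev, (score, path) in best.items())
        best = new_best
    return max(best.values())[1]

print(viterbi(candidates))  # ['britney', 'spears', 'videos']
```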

  30. Entity Extraction. Levels of abstraction over the sentence “Richard ate some bad curry”:
  • Surface: Richard ate some bad curry
  • Part of speech: N/proper V/past/eat DET ADJ N/singular
  • Semantic: MAN … FOOD
  1. Logically annotate the text with zero or more computed layers of meta data. The original surface form of the text can be viewed as trivial meta data.
  2. Apply a pattern matcher or grammar over selected layers. Use, e.g., handcrafted rules or machine-trained models. Extract the surface forms that correspond to the matching patterns.
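
A toy sketch of layered annotation and pattern matching over a selected layer; the layers and patterns are invented stand-ins for real grammars or trained models:

```python
tokens = ["Richard", "ate", "some", "bad", "curry"]
layers = {
    "surface":  tokens,
    "pos":      ["N/proper", "V/past", "DET", "ADJ", "N/singular"],
    "semantic": ["MAN", "EAT", None, None, "FOOD"],
}

def match(layer, pattern):
    """Return surface forms wherever `pattern` matches consecutively on `layer`."""
    tags, hits = layers[layer], []
    for i in range(len(tags) - len(pattern) + 1):
        if all(t == p for t, p in zip(tags[i:i + len(pattern)], pattern)):
            hits.append(" ".join(tokens[i:i + len(pattern)]))
    return hits

# Extract surface forms by matching patterns on a chosen annotation layer:
print(match("semantic", ["MAN", "EAT"]))    # ['Richard ate']
print(match("pos", ["ADJ", "N/singular"]))  # ['bad curry']
```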

  31. Sentiment Analysis. “What is the current perception of my brand?” “I want to stay at a hotel whose user reviews have a definite positive tone.” “What are the most emotionally charged issues in American politics right now?” http://research.microsoft.com/en-us/projects/blews/
  1. To construct a sentiment vocabulary, start by defining a small seed set of known polar opposites.
  2. Expand the vocabulary by, e.g., looking at the context around the seeds in a training corpus.
  3. Use the expanded vocabulary to build a classifier. Apply special heuristics to take care of, e.g., negations and irony.
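
A toy sketch of steps 1 and 2, expanding a seed vocabulary from co-occurrence in a tiny corpus; the seeds and corpus are invented, and real systems use far more robust context statistics:

```python
from collections import Counter

seeds = {"good": +1, "great": +1, "bad": -1, "awful": -1}  # polar opposites

corpus = [
    "the room was great and the view was stunning",
    "the food was bad and frankly disgusting",
    "a stunning hotel with good service",
]

def expand(seeds, corpus):
    """Assign each unseen word the net polarity of the seeds it co-occurs with."""
    votes = Counter()
    for sentence in corpus:
        words = sentence.split()
        polarity = sum(seeds.get(w, 0) for w in words)  # context polarity
        for w in words:
            if w not in seeds:
                votes[w] += polarity
    return {w: (+1 if v > 0 else -1) for w, v in votes.items() if v != 0}

print(expand(seeds, corpus).get("stunning"))    # 1
print(expand(seeds, corpus).get("disgusting"))  # -1
```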

  32. Contextual Search. Examples of structured queries over annotated text:
  • “Dates and locations related to D-Day.” → xml:sentence:(”d-day” and (scope(date) or scope(location)))
  • “Sentences where someone says something positive about Adidas.” → xml:sentence:(“adidas” and sentiment:@degree:>0)
  • “Paragraphs that discuss a company merger or acquisition.” → xml:paragraph:(string(“merger”, linguistics=“on”) and scope(company) and scope(price))
  • “Sentences where the acronym MIT is defined.” → xml:sentence:acronym:(@base:”mit” and scope(@definition))
  • “Paragraphs that contain quotations by Alan Greenspan, where he mentions a monetary amount.” → xml:paragraph:quotation:(@speaker:”greenspan” and scope(price))
  • Persons that appear in documents that contain the word {soccer}, versus persons that appear in paragraphs that contain the word {soccer}. Example from Wikipedia.
