
DATA MINING LECTURE 11 Link Analysis Ranking PageRank -- Random Walks - PowerPoint PPT Presentation

  1. DATA MINING LECTURE 11 Link Analysis Ranking PageRank -- Random walks HITS Absorbing Random Walks and Label Propagation

  2. Network Science • A number of complex systems can be modeled as networks (graphs). • The Web • (Online) Social Networks • Biological systems • Communication networks (internet, email) • The Economy • We cannot truly understand such complex systems unless we understand the underlying network. • Everything is connected; studying individual entities gives only a partial view of a system • Data mining for networks is a very popular area • Applications to the Web are among the success stories of network data mining.

  3. How to organize the web • First try: Manually curated Web Directories

  4. How to organize the web • Second try: Web Search • Information Retrieval investigates: finding relevant docs in a small and trusted set, e.g., newspaper articles, patents, etc. (“needle-in-a-haystack”) • Limitations of keywords (synonyms, polysemy, etc.) • But: the Web is huge, full of untrusted documents, random things, web spam, etc. • Everyone can create a web page of high production value • Rich diversity of people issuing queries • Dynamic and constantly-changing nature of web content

  5. How to organize the web • Third try (the Google era): using the web graph • Shift from relevance to authoritativeness • It is not only important that a page is relevant, but also that it is important on the web • For example, what kind of results would we like to get for the query “greek newspapers”?

  6. Link Analysis • Not all web pages are equal on the web. • What is the simplest way to measure the importance of a page on the web?

  7. Link Analysis Ranking • Use the graph structure in order to determine the relative importance of the nodes • Applications: Ranking on graphs (Web, Twitter, FB, etc) • Intuition: An edge from node p to node q denotes endorsement • Node p endorses/recommends/confirms the authority/centrality/importance of node q • Use the graph of recommendations to assign an authority value to every node

  8. Rank by Popularity • Rank pages according to the number of incoming edges (in-degree, degree centrality). [Figure: the example graph] 1. Red Page 2. Yellow Page 3. Blue Page 4. Purple Page 5. Green Page

  9. Popularity • It is not only important how many pages link to you, but how important the pages that link to you are. • Good authorities are pointed to by good authorities • Recursive definition of importance

  10. PAGERANK

  11. PageRank • Good authorities should be pointed to by good authorities • The value of a node is the value of the nodes that point to it. • How do we implement that? • Assume that we have a unit of authority to distribute to all nodes. • Initially each node gets 1/n amount of authority • Each node distributes its authority value to its neighbors • The authority value of each node is the sum of the authority fractions it collects from its neighbors.

  12. The PageRank algorithm Think of the nodes in the graph as containers with a capacity of 1 liter. We distribute a liter of liquid equally to all containers.

  13. The PageRank algorithm The edges act like pipes that transfer liquid between nodes.

  14. The PageRank algorithm The edges act like pipes that transfer liquid between nodes. The contents of each node are distributed to its neighbors.

  15. The PageRank algorithm The edges act like pipes that transfer liquid between nodes. The contents of each node are distributed to its neighbors.

  16. The PageRank algorithm The edges act like pipes that transfer liquid between nodes. The contents of each node are distributed to its neighbors.

  17. The PageRank algorithm The system will reach an equilibrium state where the amount of liquid in each node remains constant.

  18. The PageRank algorithm The amount of liquid in each node determines the importance of the node. Large quantity means large incoming flow from nodes with large quantity of liquid.

  19. PageRank • Good authorities should be pointed to by good authorities • The value of a node is the value of the nodes that point to it. • How do we implement that? • Assume that we have a unit of authority to distribute to all nodes. • Initially each node gets 1/n amount of authority • Each node distributes its authority value to its neighbors • The authority value of each node is the sum of the authority fractions it collects from its neighbors: w_v = Σ_{u→v} w_u / d_out(u), where w_v is the PageRank value of node v • Recursive definition

  20. Example [Figure: the 5-node example graph]
  w1 = 1/3 w4 + 1/2 w5
  w2 = 1/2 w1 + w3 + 1/3 w4
  w3 = 1/2 w1 + 1/3 w4
  w4 = 1/2 w5
  w5 = w2
  w_v = Σ_{u→v} w_u / d_out(u)

  21. Computing PageRank weights • A simple way to compute the weights is by iteratively updating them • PageRank Algorithm: Initialize all PageRank weights to 1/n. Repeat: w_v = Σ_{u→v} w_u / d_out(u). Until the weights do not change. • This process converges
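The iterative scheme above can be sketched directly in Python. The edge dictionary below is an assumed encoding of the lecture's five-node example graph (derived from the equations on the example slides); node u sends w[u] / d_out(u) to each of its out-neighbors:

```python
# Iterative PageRank (no damping), a minimal sketch for the assumed
# 5-node example graph: node u -> list of its out-neighbors.
out_edges = {
    1: [2, 3],
    2: [5],
    3: [2],
    4: [1, 2, 3],
    5: [1, 4],
}

def pagerank(out_edges, iterations=500):
    n = len(out_edges)
    w = {v: 1.0 / n for v in out_edges}          # initialize all weights to 1/n
    for _ in range(iterations):
        new_w = {v: 0.0 for v in out_edges}
        for u, targets in out_edges.items():
            share = w[u] / len(targets)          # u splits its weight equally
            for v in targets:
                new_w[v] += share                # v collects fractions from in-neighbors
        w = new_w
    return w

w = pagerank(out_edges)
# Converges to (2/11, 3/11, 3/22, 3/22, 3/11), the equilibrium of the example.
```

Note that the total weight stays 1 throughout: each iteration only redistributes the existing weight, which is the "constant amount of fluid" intuition from the next slides.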

  22. Example w_v = Σ_{u→v} w_u / d_out(u) [Figure: the 5-node example graph]
  w1 = 1/3 w4 + 1/2 w5
  w2 = 1/2 w1 + w3 + 1/3 w4
  w3 = 1/2 w1 + 1/3 w4
  w4 = 1/2 w5
  w5 = w2
        w1    w2    w3    w4    w5
  t=0   0.2   0.2   0.2   0.2   0.2
  t=1   0.16  0.36  0.16  0.1   0.2
  t=2   0.13  0.28  0.11  0.1   0.36
  t=3   0.22  0.22  0.1   0.18  0.28
  t=4   0.2   0.27  0.17  0.14  0.22
  Think of the weight as a fluid: there is a constant amount of it in the graph, but it moves around until it stabilizes.

  23. Example w_v = Σ_{u→v} w_u / d_out(u) [Figure: the 5-node example graph]
  w1 = 1/3 w4 + 1/2 w5
  w2 = 1/2 w1 + w3 + 1/3 w4
  w3 = 1/2 w1 + 1/3 w4
  w4 = 1/2 w5
  w5 = w2
        w1    w2    w3    w4    w5
  t=25  0.18  0.27  0.13  0.13  0.27
  Think of the weight as a fluid: there is a constant amount of it in the graph, but it moves around until it stabilizes.

  24. Random Walks on Graphs • The algorithm defines a random walk on the graph • Random walk: • Start from a node chosen uniformly at random with probability 1/n. • Pick one of the outgoing edges uniformly at random • Move to the destination of the edge • Repeat. • The Random Surfer model • Users wander on the web, following links.
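The random surfer can also be simulated directly: over a long walk, the fraction of time spent at each node approximates its PageRank weight. A sketch, using the same assumed edge list for the lecture's five-node example:

```python
import random

# Assumed encoding of the 5-node example graph: node -> out-neighbors.
out_edges = {1: [2, 3], 2: [5], 3: [2], 4: [1, 2, 3], 5: [1, 4]}

def surf(out_edges, steps=200_000, seed=0):
    rng = random.Random(seed)
    nodes = list(out_edges)
    visits = {v: 0 for v in nodes}
    current = rng.choice(nodes)                   # start uniformly at random
    for _ in range(steps):
        current = rng.choice(out_edges[current])  # follow a random out-link
        visits[current] += 1
    return {v: count / steps for v, count in visits.items()}

freq = surf(out_edges)
# Visit frequencies approximate the PageRank weights (2/11, 3/11, 3/22, 3/22, 3/11).
```

This only works as-is because every node in the example has outgoing edges; dead ends and spider traps need the fixes discussed later in the PageRank literature.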

  25. Example • Step 0 [Figure: random walk position on the example graph]

  26. Example • Step 0

  27. Example • Step 1

  28. Example • Step 1

  29. Example • Step 2

  30. Example • Step 2

  31. Example • Step 3

  32. Example • Step 3

  33. Example • Step 4…

  34. Random walk • Question: what is the probability p_i^t of being at node i after t steps? [Figure: the 5-node example graph]
  p_1^t = 1/3 p_4^{t-1} + 1/2 p_5^{t-1},  p_1^0 = 1/5
  p_2^t = 1/2 p_1^{t-1} + p_3^{t-1} + 1/3 p_4^{t-1},  p_2^0 = 1/5
  p_3^t = 1/2 p_1^{t-1} + 1/3 p_4^{t-1},  p_3^0 = 1/5
  p_4^t = 1/2 p_5^{t-1},  p_4^0 = 1/5
  p_5^t = p_2^{t-1},  p_5^0 = 1/5
  The equations are the same as those for the PageRank computation.

  35. Markov chains • A Markov chain describes a discrete-time stochastic process over a set of states S = {s_1, s_2, …, s_n} according to a transition probability matrix P = {P_ij} • P_ij = probability of moving to state j when at state i • Matrix P has the property that the entries of each row sum to 1: Σ_j P[i,j] = 1. A matrix with this property is called stochastic. • State probability distribution: the vector q^t = (q_1^t, q_2^t, …, q_n^t) stores the probability of being at state s_i after t steps • Memorylessness property: the next state of the chain depends only on the current state and not on the past of the process (first-order MC) • Higher-order MCs are also possible • Markov Chain Theory: after infinitely many steps the state probability vector converges to a unique distribution if the chain is irreducible and aperiodic
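Both properties on this slide are easy to check numerically. The tiny 2-state chain below is an illustrative assumption (not from the slides); it is irreducible and aperiodic, so two different starting distributions evolve to the same stationary vector:

```python
import numpy as np

# A toy 2-state stochastic matrix (illustrative assumption, not the lecture's graph).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
assert np.allclose(P.sum(axis=1), 1.0)   # stochastic: each row sums to 1

def evolve(q0, P, steps=100):
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        q = q @ P                        # q^{t+1} = q^t P
    return q

# Two different initial distributions converge to the same stationary
# distribution, as the theory predicts for irreducible, aperiodic chains.
a = evolve([1.0, 0.0], P)
b = evolve([0.0, 1.0], P)
```

For this P the stationary distribution solves q = qP, giving q = (5/6, 1/6).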

  36. Random walks • Random walks on graphs correspond to Markov Chains • The set of states S is the set of nodes of the graph G • The transition probability matrix is the probability that we follow an edge from one node to another: P[i,j] = 1/d_out(i) if there is an edge i→j, and 0 otherwise • We can compute the vector q^t at step t using a vector-matrix multiplication: q^{t+1} = q^t P

  37. An example [Figure: the 5-node example graph]
  A = [ 0  1  1  0  0
        0  0  0  0  1
        0  1  0  0  0
        1  1  1  0  0
        1  0  0  1  0 ]
  P = [ 0    1/2  1/2  0    0
        0    0    0    0    1
        0    1    0    0    0
        1/3  1/3  1/3  0    0
        1/2  0    0    1/2  0 ]
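Following this slide's convention, P is just A with each row divided by its sum, and iterating q ← qP recovers the PageRank weights. A numpy sketch (assuming the adjacency matrix above):

```python
import numpy as np

# Adjacency matrix of the example graph (rows = source node, cols = target).
A = np.array([
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
], dtype=float)

# Normalize each row by its sum, so P[i, j] = 1/d_out(i) when there is an edge.
P = A / A.sum(axis=1, keepdims=True)

q = np.full(5, 1 / 5)      # start from the uniform distribution
for _ in range(500):
    q = q @ P              # q^{t+1} = q^t P
# q converges to (2/11, 3/11, 3/22, 3/22, 3/11) ~ (0.18, 0.27, 0.14, 0.14, 0.27),
# matching the equilibrium reached by the iterative computation.
```

This row-normalization assumes every node has at least one outgoing edge; a row of zeros (a dead end) would need special handling.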
