CS347 Lecture 6 (April 25, 2001), Prabhakar Raghavan
Today's topic: link-based ranking in web search engines

Web idiosyncrasies
- Distributed authorship: millions of people creating pages with their own style, grammar, vocabulary, opinions, facts, falsehoods.
- Not all have the purest motives in providing high-quality information; commercial motives drive spamming.
- The open web is largely a marketing tool: IBM's home page does not contain the word "computer".

More web idiosyncrasies
- Some pages have little or no text (GIFs may embed text).
- Variety of languages, lots of distinct terms: over 100M distinct terms.
- Long lists of links.
- Size: 1B pages, each with ~1K terms, growing at a few million pages/day.

Link analysis
Two basic approaches:
- Universal, query-independent ordering on all web pages, based on link analysis. Of two pages meeting a text query, one will always win over the other, regardless of the query.
- Query-specific ordering on web pages. Of two pages meeting a query, the relative ordering may vary from query to query.

Query-independent ordering
- First generation: using link counts as simple measures of popularity.
- Two basic suggestions (a counting sketch appears after the power-iteration slide below):
  - Undirected popularity: each page gets a score = the number of its in-links plus the number of its out-links.
  - Directed popularity: score of a page = the number of its in-links.

Query processing
- First retrieve all pages meeting the text query (say, "venture capital").
- Order these by their link popularity (either variant on the previous slide).

Spamming simple popularity
Exercise: how do you spam each of the following heuristics so your page gets a high score?
- Each page gets a score = the number of its in-links plus the number of its out-links.
- Score of a page = the number of its in-links.

Pagerank scoring
- Imagine a browser doing a random walk on web pages: start at a random page, and at each step go out of the current page along one of the links on that page, equiprobably (e.g., with probability 1/3 each if the page has three out-links).
- In the steady state, each page has a long-term visit rate; use this as the page's score.

Not quite enough
- The web is full of dead ends: the random walk can get stuck in them, and then it makes no sense to talk about long-term visit rates.

Teleporting
- At each step, with probability 10%, jump to a random web page.
- With the remaining probability (90%), go out on a random link; if there is no out-link, stay put in this case.

Result of teleporting
- Now the walk cannot get stuck locally.
- There is a long-term rate at which any page is visited (not obvious; we will show this).
- How do we compute this visit rate?

Markov chains
- A Markov chain consists of n states, plus an n x n transition probability matrix P.
- At each step, we are in exactly one of the states.
- For 1 <= i, j <= n, the matrix entry P_ij tells us the probability of j being the next state, given we are currently in state i. (P_ii > 0 is OK.)
- Clearly, for all i, sum_{j=1}^{n} P_ij = 1.
- Markov chains are abstractions of random walks.
- Exercise: represent the teleporting random walk from three slides ago as a Markov chain (one answer is sketched after the power-iteration slide below).

Ergodic Markov chains
- A Markov chain is ergodic if you have a path from any state to any other, and you can be in any state at every time step with non-zero probability. (Example of a chain that is not ergodic: two states that deterministically swap, so the walk is in one state at even steps and the other at odd steps.)
- For any ergodic Markov chain, there is a unique long-term visit rate for each state: the steady-state distribution.
- Over a long time period, we visit each state in proportion to this rate; it doesn't matter where we start.

Probability vectors
- A probability vector x = (x_1, ..., x_n) tells us where the walk is at any point.
- E.g., (0 0 0 ... 1 ... 0 0 0), with the 1 in position i (1 <= i <= n), means we're in state i.
- More generally, the vector x = (x_1, ..., x_n) means the walk is in state i with probability x_i, with sum_{i=1}^{n} x_i = 1.

Change in probability vector
- If the probability vector is x = (x_1, ..., x_n) at this step, what is it at the next step?
- Recall that row i of the transition probability matrix P tells us where we go next from state i.
- So from x, our next state is distributed as xP.

Computing the visit rate
- The steady state looks like a vector of probabilities a = (a_1, ..., a_n), where a_i is the probability that we are in state i.
- Example: a two-state chain in which state 1 stays put with probability 1/4 and moves to state 2 with probability 3/4, while state 2 moves to state 1 with probability 1/4 and stays put with probability 3/4. For this example, a_1 = 1/4 and a_2 = 3/4. (To check: solving a_1 = (1/4)a_1 + (1/4)a_2 together with a_1 + a_2 = 1 gives a_1 = 1/4.)
- How do we compute this vector in general?
- Let a = (a_1, ..., a_n) denote the row vector of steady-state probabilities. If our current position is described by a, then the next step is distributed as aP. But a is the steady state, so a = aP.
- Solving this matrix equation gives us a: a is the left eigenvector for P.

Another way of computing a
- Recall that, regardless of where we start, we eventually reach the steady state a.
- Start with any distribution, say x = (1 0 ... 0). After one step we're at xP; after two steps at xP^2, then xP^3, and so on. "Eventually" means: for large k, xP^k ~ a.
- Algorithm: multiply x by increasing powers of P until the product looks stable.
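A minimal sketch of the two first-generation popularity scores from the query-independent slides, assuming the web graph is given as a dict mapping each page to its list of out-links; the three-page graph is a made-up example, not from the lecture.

```python
from collections import defaultdict

def popularity_scores(out_links):
    """Return (directed, undirected) popularity scores for every page."""
    in_count = defaultdict(int)
    for page, targets in out_links.items():
        for t in targets:
            in_count[t] += 1               # count in-links
    directed = {p: in_count[p] for p in out_links}
    undirected = {p: in_count[p] + len(out_links[p]) for p in out_links}
    return directed, undirected

toy_graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(popularity_scores(toy_graph))
# directed:   {'a': 1, 'b': 1, 'c': 2}   (in-links only)
# undirected: {'a': 3, 'b': 2, 'c': 3}   (in-links + out-links)
```

The exercise on spamming these heuristics is easy to see in this code: you raise your own undirected score just by adding out-links to your page, and you raise either score by creating pages that link to yours.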
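A sketch answering the Markov-chain exercise and running the power-iteration algorithm just described. The 10%/90% teleport split and the "stay put" rule for dead ends come from the slides; the three-page graph and the stopping tolerance are assumptions for illustration.

```python
import numpy as np

def transition_matrix(out_links, n, teleport=0.10):
    """Build the teleporting random walk's n x n matrix P."""
    P = np.zeros((n, n))
    for i in range(n):
        if out_links[i]:
            # w.p. 0.90 follow a random out-link, w.p. 0.10 teleport uniformly
            for j in out_links[i]:
                P[i, j] = (1 - teleport) / len(out_links[i])
            P[i, :] += teleport / n
        else:
            # dead end: teleport w.p. 0.10, otherwise "stay put in this case"
            P[i, :] = teleport / n
            P[i, i] += 1 - teleport
    return P

def steady_state(P, tol=1e-10):
    """Multiply x by increasing powers of P until the product looks stable."""
    x = np.zeros(P.shape[0])
    x[0] = 1.0                 # start with x = (1 0 ... 0)
    while True:
        nxt = x @ P            # one step: x -> xP
        if np.abs(nxt - x).sum() < tol:
            return nxt         # approximately the steady state a
        x = nxt

out_links = {0: [1, 2], 1: [2], 2: [0]}   # hypothetical 3-page web
P = transition_matrix(out_links, 3)
print(P.sum(axis=1))           # each row of P sums to 1
print(steady_state(P))         # the pagerank vector a
```

Each row of P sums to 1 and every entry is positive (thanks to teleporting), so the chain is ergodic and the iteration converges to the unique steady state regardless of the start vector.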
Pagerank summary
- Preprocessing: given the graph of links, build the matrix P; from it, compute a. The entry a_i is a number between 0 and 1: the pagerank of page i.
- Query processing: retrieve the pages meeting the query, and rank them by their pagerank. The order is query-independent.

The reality
- Pagerank is used in Google, but so are many other clever heuristics (more on these heuristics later).

Query-dependent link analysis
- In response to a query, instead of an ordered list of pages each meeting the query, find two sets of inter-related pages:
  - Hub pages are good lists of links on a subject, e.g., Bob's list of cancer-related links.
  - Authority pages occur recurrently on good hubs for the subject.

Hubs and Authorities
- Thus, a good hub page for a topic points to many authoritative pages for that topic.
- A good authority page for a topic is pointed to by many good hubs for that topic.
- A circular definition: we will turn this into an iterative computation (sketched at the end of this section).

The hope
[Figure: for the topic "long-distance telephone companies", hub pages (Alice's, Bob's) point to authority pages (AT&T, Sprint, MCI).]

High-level scheme
- Extract from the web a base set of pages that could be good hubs or authorities.
- From these, identify a small set of top hub and authority pages, using an iterative algorithm.

Base set
- Given a text query (say, "browser"), use a text index to get all pages containing "browser". Call this the root set of pages.
- Add in any page that either points to a page in the root set, or is pointed to by a page in the root set. Call this the base set.

Visualization
[Figure: the root set nested inside the larger base set.]

Assembling the base set
- The root set typically has 200-1000 nodes; the base set may have up to 5000 nodes.
- How do you find the base-set nodes? Follow out-links by parsing the root-set pages; get in-links (and out-links) from a connectivity server (see the sketch below).
- Actually, it suffices to …
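A sketch of the base-set assembly just described, assuming two hypothetical helpers that are not named in the lecture: out_links(page), standing in for parsing a root-set page, and in_links(page), standing in for querying a connectivity server. The 5000-node cap mirrors the typical base-set size on the slide.

```python
def build_base_set(root_set, out_links, in_links, cap=5000):
    """Expand a root set into a base set by one hop of links in each direction."""
    base = set(root_set)
    for page in root_set:
        base |= set(out_links(page))   # pages the root set points to
        base |= set(in_links(page))    # pages pointing into the root set
        if len(base) >= cap:           # slides: base set may have up to ~5000 nodes
            break
    return base
```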
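The slides promise an iterative computation for hub and authority scores, but the excerpt cuts off before giving it. The sketch below uses the standard HITS-style updates (authority score = sum of the hub scores of pages pointing in; hub score = sum of the authority scores of pages pointed to), with per-round normalization; the details may differ from what the lecture goes on to present, and the five-page graph echoes the telephone-company figure only as a toy example.

```python
def hits(pages, out_links, iterations=50):
    """Iteratively compute hub and authority scores on the base set."""
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # authority: endorsed by good hubs pointing in
        auth = {p: sum(hub[q] for q in pages if p in out_links[q]) for p in pages}
        # hub: points at good authorities
        hub = {p: sum(auth[q] for q in out_links[p]) for p in pages}
        for d in (hub, auth):          # normalize so scores stay bounded
            norm = sum(v * v for v in d.values()) ** 0.5 or 1.0
            for p in d:
                d[p] /= norm
    return hub, auth

pages = ["alice", "bob", "att", "sprint", "mci"]
out_links = {"alice": {"att", "sprint", "mci"}, "bob": {"att", "mci"},
             "att": set(), "sprint": set(), "mci": set()}
hub, auth = hits(pages, out_links)
print(hub)    # Alice and Bob emerge as hubs
print(auth)   # AT&T, Sprint, MCI emerge as authorities
```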