CS6322: Information Retrieval
Sanda Harabagiu
Lecture 11: Crawling and web indexes

Today's lecture
- Crawling
- Connectivity servers

Basic crawler operation (Sec. 20.2)
- Begin with known "seed" URLs
- Fetch and parse them
  - Extract the URLs they point to
  - Place the extracted URLs on a queue
- Fetch each URL on the queue and repeat
(A code sketch of this loop appears after the robots.txt example below.)

Crawling picture (Sec. 20.2)
[Figure: seed pages feed the set of URLs crawled and parsed; extracted links populate the URL frontier, which borders the unseen Web.]

Simple picture – complications (Sec. 20.1.1)
- Web crawling isn't feasible with one machine
  - All of the above steps must be distributed
- Malicious pages
  - Spam pages
  - Spider traps – including dynamically generated ones
- Even non-malicious pages pose challenges
  - Latency/bandwidth to remote servers vary
  - Webmasters' stipulations: how "deep" should you crawl a site's URL hierarchy?
  - Site mirrors and duplicate pages
- Politeness – don't hit a server too often

What any crawler must do (Sec. 20.1.1)
- Be polite: respect implicit and explicit politeness considerations
  - Only crawl allowed pages
  - Respect robots.txt (more on this shortly)
- Be robust: be immune to spider traps and other malicious behavior from web servers

What any crawler should do (Sec. 20.1.1)
- Be capable of distributed operation: designed to run on multiple distributed machines
- Be scalable: designed to increase the crawl rate by adding more machines
- Performance/efficiency: permit full use of available processing and network resources

What any crawler should do (cont.) (Sec. 20.1.1)
- Fetch pages of "higher quality" first
- Continuous operation: continue fetching fresh copies of previously fetched pages
- Extensible: adapt to new data formats and protocols

Updated crawling picture (Sec. 20.1.1)
[Figure: as before, but with multiple crawling threads pulling URLs from the URL frontier at the boundary between the crawled-and-parsed URLs and the unseen Web.]

URL frontier (Sec. 20.2)
- Can include multiple pages from the same host
- Must avoid trying to fetch them all at the same time
- Must try to keep all crawling threads busy

Explicit and implicit politeness (Sec. 20.2)
- Explicit politeness: specifications from webmasters on what portions of a site can be crawled
  - robots.txt
- Implicit politeness: even with no specification, avoid hitting any site too often

Robots.txt (Sec. 20.2.1)
- Protocol for giving spiders ("robots") limited access to a website, originally from 1994
  - www.robotstxt.org/wc/norobots.html
- A website announces its request on what can(not) be crawled
  - For a server URL, create a file URL/robots.txt
  - This file specifies access restrictions

Robots.txt example (Sec. 20.2.1)
No robot should visit any URL starting with "/yoursite/temp/", except the robot called "searchengine":

    User-agent: *
    Disallow: /yoursite/temp/

    User-agent: searchengine
    Disallow:
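A minimal sketch of how a crawler can apply these rules, using Python's standard urllib.robotparser module. The rules are the ones from the example above, passed to parse() as a list of lines, so the sketch runs without any network access:

    # Minimal sketch: applying the robots.txt rules above with Python's
    # stdlib urllib.robotparser. parse() accepts the file's lines directly.
    from urllib import robotparser

    rules = [
        "User-agent: *",
        "Disallow: /yoursite/temp/",
        "",
        "User-agent: searchengine",
        "Disallow:",
    ]

    rp = robotparser.RobotFileParser()
    rp.parse(rules)

    # The wildcard record blocks the /yoursite/temp/ subtree for everyone...
    print(rp.can_fetch("*", "/yoursite/temp/page.html"))             # False
    # ...except "searchengine", whose empty Disallow permits everything.
    print(rp.can_fetch("searchengine", "/yoursite/temp/page.html"))  # True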
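Before turning to the detailed processing steps, here is a minimal sketch of the basic crawler loop from the "Basic crawler operation" slide. The fetch and extract_links callables are hypothetical stand-ins for an HTTP client and an HTML link extractor; politeness, robots.txt checks, and content dedup are deliberately omitted, since the slides that follow treat them separately.

    # Minimal sketch of the basic crawl loop: seed URLs -> fetch -> parse ->
    # queue newly extracted URLs -> repeat. "fetch" and "extract_links" are
    # hypothetical helpers (an HTTP client and an HTML link extractor).
    from collections import deque

    def crawl(seed_urls, fetch, extract_links, max_pages=1000):
        frontier = deque(seed_urls)   # URL frontier: a simple FIFO queue here
        queued = set(seed_urls)       # duplicate URL elimination
        fetched = {}
        while frontier and len(fetched) < max_pages:
            url = frontier.popleft()          # pick a URL from the frontier
            doc = fetch(url)                  # fetch the document at the URL
            fetched[url] = doc
            for link in extract_links(doc, base_url=url):  # parse; extract links
                if link not in queued:        # only queue URLs not seen before
                    queued.add(link)
                    frontier.append(link)
        return fetched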
Processing steps in crawling (Sec. 20.2.1)
- Pick a URL from the frontier (which one? – see the frontier discussion)
- Fetch the document at the URL
- Parse the fetched document
  - Extract links from it to other docs (URLs)
- Check whether the URL's content has already been seen
  - If not, add it to the indexes
- For each extracted URL
  - Ensure it passes certain URL filter tests (e.g., only crawl .edu, obey robots.txt, etc.)
  - Check if it is already in the frontier (duplicate URL elimination)

Basic crawl architecture (Sec. 20.2.1)
[Figure: WWW → DNS → Fetch → Parse → Content seen? (doc fingerprints) → URL filter (robots filters) → Dup URL elim (URL set) → URL frontier, which feeds back into Fetch.]

DNS (Domain Name System) (Sec. 20.2.2)
- A lookup service on the internet
  - Given a host name from a URL, retrieve its IP address
- Service provided by a distributed set of servers – thus, lookup latencies can be high (even seconds)
- Common OS implementations of DNS lookup are blocking: only one outstanding request at a time
- Solutions (see the caching sketch at the end of this section)
  - DNS caching
  - Batch DNS resolver – collects requests and sends them out together

Parsing: URL normalization (Sec. 20.2.1)
- When a fetched document is parsed, some of the extracted links are relative URLs
- E.g., at http://en.wikipedia.org/wiki/Main_Page we have a relative link to /wiki/Wikipedia:General_disclaimer, which is the same as the absolute URL http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
- During parsing, such relative URLs must be normalized (expanded); see the sketch at the end of this section

Content seen? (Sec. 20.2.1)
- Duplication is widespread on the web
- If the page just fetched is already in the index, do not process it further
- This is verified using document fingerprints or shingles (sketch at the end of this section)

Filters and robots.txt (Sec. 20.2.1)
- Filters – regular expressions for URLs to be crawled or not
- Once a robots.txt file is fetched from a site, it need not be fetched repeatedly
  - Doing so burns bandwidth and hits the web server
  - Cache robots.txt files

Duplicate URL elimination (Sec. 20.2.1)
- For a non-continuous (one-shot) crawl, test whether an extracted and filtered URL has already been passed to the frontier
- For a continuous crawl – see the details of the frontier implementation

Distributing the crawler (Sec. 20.2.1)
- Run multiple crawl threads, under different processes – potentially at different nodes
  - Geographically distributed nodes
- Partition the hosts being crawled among the nodes
  - A hash is used for the partition (sketch at the end of this section)
- How do these nodes communicate?

Communication between nodes (Sec. 20.2.1)
- The output of the URL filter at each node is sent to the Duplicate URL Eliminator at all nodes (the host hash from the previous slide determines which node each extracted URL is routed to)
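The sketches promised above follow, all in Python. First, the DNS-caching idea: resolve each host once through the blocking OS resolver, then serve repeat lookups from a local cache. TTL expiry and the batch resolver are omitted; this is a simplification, not a production design.

    # Minimal sketch of DNS caching. socket.gethostbyname is the blocking
    # stdlib lookup the slide mentions; the cache avoids repeating it for
    # hosts we have already resolved. (No TTL expiry here, for brevity.)
    import socket

    _dns_cache = {}

    def resolve(host):
        """Return the IP address for host, consulting the cache first."""
        if host not in _dns_cache:
            _dns_cache[host] = socket.gethostbyname(host)  # blocking lookup
        return _dns_cache[host]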
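Next, URL normalization. Python's urllib.parse.urljoin performs exactly the expansion described on the "Parsing: URL normalization" slide, shown here with that slide's Wikipedia example:

    # Expanding a relative link against the URL of the page it was found on.
    from urllib.parse import urljoin

    base = "http://en.wikipedia.org/wiki/Main_Page"
    link = "/wiki/Wikipedia:General_disclaimer"
    print(urljoin(base, link))
    # -> http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer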
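Next, the "Content seen?" test. The slide mentions both fingerprints and shingles; the sketch below uses an exact fingerprint (a SHA-1 hash of the page text), which only catches byte-identical pages. Catching near-duplicates requires shingling, which is beyond this sketch.

    # Minimal sketch of the "Content seen?" test: fingerprint each fetched
    # page and skip any page whose fingerprint is already recorded. Exact
    # hashing misses near-duplicates; real systems use shingles for those.
    import hashlib

    _fingerprints = set()

    def content_seen(page_text):
        """Return True if an identical page has already been processed."""
        fp = hashlib.sha1(page_text.encode("utf-8")).digest()
        if fp in _fingerprints:
            return True
        _fingerprints.add(fp)
        return False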
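Finally, the host partition used when distributing the crawler and routing URLs between nodes. The node count and the choice of MD5 are illustrative assumptions; the point is only that every node computes the same host-to-node mapping, so each node knows where to send the URLs that survive its URL filter.

    # Minimal sketch of hash-based host partitioning: each host is owned by
    # exactly one crawl node, so per-host politeness and duplicate URL
    # elimination can both be enforced locally at that node.
    import hashlib
    from urllib.parse import urlparse

    NUM_NODES = 4  # assumed cluster size

    def node_for(url):
        """Map a URL's host to the crawl node responsible for it."""
        host = urlparse(url).netloc.lower()
        digest = hashlib.md5(host.encode("utf-8")).digest()
        return int.from_bytes(digest[:4], "big") % NUM_NODES

    # URLs from the same host always land on the same node.
    assert node_for("http://example.com/a") == node_for("http://example.com/b")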