The Anatomy of a Large-Scale Hypertextual Web Search Engine

Sergey Brin and Lawrence Page
Computer Science Department, Stanford University, Stanford, CA 94305
[email protected] and [email protected]

Abstract

In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype, with a full text and hyperlink database of at least 24 million pages, is available at http://google.stanford.edu/

To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advances in technology and web proliferation, creating a web search engine today is very different from creating one three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses the question of how to build a practical large-scale system which can exploit that additional information. We also look at the problem of how to deal effectively with uncontrolled hypertext collections, where anyone can publish anything they want.

Keywords: World Wide Web, Search Engines, Information Retrieval, PageRank, Google

1. Introduction

(Note: There are two versions of this paper -- a longer full version and a shorter printed version. The full version is available on the web and the conference CD-ROM.)

The web creates new challenges for information retrieval. The amount of information on the web is growing rapidly, as is the number of new users inexperienced in the art of web research. People are likely to surf the web using its link graph, often starting with high-quality human-maintained indices such as Yahoo! or with search engines. Human-maintained lists cover popular topics effectively, but they are subjective, expensive to build and maintain, slow to improve, and unable to cover all esoteric topics. Automated search engines that rely on keyword matching usually return too many low-quality matches. To make matters worse, some advertisers attempt to gain people's attention by taking measures meant to mislead automated search engines. We have built a large-scale search engine which addresses many of the problems of existing systems. It makes especially heavy use of the additional structure present in hypertext to provide much higher quality search results. We chose our system name, Google, because it is a common spelling of googol, or 10^100, and it fits well with our goal of building very large-scale search engines.

1.1 Web Search Engines -- Scaling Up: 1994 - 2000

Search engine technology has had to scale dramatically to keep up with the growth of the web. In 1994, one of the first web search engines, the World Wide Web Worm (WWWW) [McBryan 94], had an index of 110,000 web pages and web-accessible documents. As of November 1997, the top search engines claim to index from 2 million (WebCrawler) to 100 million web documents (from Search Engine Watch).
It is foreseeable that by the year 2000, a comprehensive index of the Web will contain over a billion documents. At the same time, the number of queries search engines handle has grown incredibly too. In March and April 1994, the World Wide Web Worm received an average of about 1500 queries per day. In November 1997, Altavista claimed it handled roughly 20 million queries per day. With the increasing number of users on the web and of automated systems which query search engines, it is likely that top search engines will handle hundreds of millions of queries per day by the year 2000. The goal of our system is to address many of the problems, both in quality and scalability, introduced by scaling search engine technology to such extraordinary numbers.

1.2 Google: Scaling with the Web

Creating a search engine which scales even to today's web presents many challenges. Fast crawling technology is needed to gather the web documents and keep them up to date. Storage space must be used efficiently to store indices and, optionally, the documents themselves. The indexing system must process hundreds of gigabytes of data efficiently. Queries must be handled quickly, at a rate of hundreds to thousands per second.

These tasks are becoming increasingly difficult as the Web grows. However, hardware performance and cost have improved dramatically to partially offset the difficulty. There are, however, several notable exceptions to this progress, such as disk seek time and operating system robustness. In designing Google, we have considered both the rate of growth of the Web and technological changes. Google is designed to scale well to extremely large data sets. It makes efficient use of storage space to store the index. Its data structures are optimized for fast and efficient access (see section 4.2). Further, we expect that the cost to index and store text or HTML will eventually decline relative to the amount that will be available (see Appendix B). This will result in favorable scaling properties for centralized systems like Google.

1.3 Design Goals

1.3.1 Improved Search Quality

Our main goal is to improve the quality of web search engines. In 1994, some people believed that a complete search index would make it possible to find anything easily. According to Best of the Web 1994 -- Navigators, "The best navigation service should make it easy to find almost anything on the Web (once all the data is entered)." However, the Web of 1997 is quite different. Anyone who has used a search engine recently can readily testify that the completeness of the index is not the only factor in the quality of search results. "Junk results" often wash out any results that a user is interested in. In fact, as of November 1997, only one of the top four commercial search engines finds itself (returns its own search page in response to its name in the top ten results). One of the main causes of this problem is that the number of documents in the indices has been increasing by many orders of magnitude, but the user's ability to look at documents has not.
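
The link-based ranking behind these quality improvements is PageRank (see Keywords). The full version of the paper defines it by the recurrence PR(A) = (1 - d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)), where T1..Tn are the pages that link to A, C(T) is the number of links going out of T, and d is a damping factor. As a minimal sketch of that recurrence only -- the damping factor of 0.85, the iteration count, and the toy three-page graph below are illustrative assumptions, not values from this excerpt -- it can be computed by simple iteration over an in-memory link graph:

    # Minimal PageRank sketch -- illustrative only, not Google's production code.
    # links maps each page to the list of pages it links to; d is the damping factor.
    def pagerank(links, d=0.85, iterations=50):
        pr = {page: 1.0 for page in links}  # start every page at rank 1
        for _ in range(iterations):
            new_pr = {}
            for page in links:
                # Sum contributions from every page t that links to this page,
                # each divided by t's out-link count C(t).
                incoming = sum(pr[t] / len(links[t])
                               for t in links if page in links[t])
                new_pr[page] = (1 - d) + d * incoming
            pr = new_pr
        return pr

    # Hypothetical three-page web: A and B link to each other and to C;
    # C links back to A.
    toy_graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A"]}
    print(pagerank(toy_graph))

A page ranks highly when highly ranked pages link to it, which is exactly the kind of hypertext structure that keyword-matching engines ignore and that "junk results" cannot easily counterfeit.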

