Stanford CS 276 - Introducing Information Retrieval and Web Search

Contents
• Information Retrieval
• Unstructured (text) vs. structured (database) data in the mid-nineties
• Unstructured (text) vs. structured (database) data today
• Basic assumptions of Information Retrieval
• The classic search model
• How good are the retrieved docs?
• Unstructured data in 1620
• Term-document incidence matrices
• Incidence vectors
• Answers to query
• Bigger collections
• Can’t build the matrix
• Inverted index
• Inverted index construction
• Initial stages of text processing
• Indexer steps: Token sequence
• Indexer steps: Sort
• Indexer steps: Dictionary & Postings
• Where do we pay in storage?
• The index we just built
• Query processing: AND
• The merge
• Intersecting two postings lists (a “merge” algorithm)
• Boolean queries: Exact match
• Example: WestLaw http://www.westlaw.com/
• Boolean queries: More general merges
• Merging
• Query optimization
• Query optimization example
• More general optimization
• Exercise
• Query processing exercises
• Phrase queries
• A first attempt: Biword indexes
• Longer phrase queries
• Issues for biword indexes
• Solution 2: Positional indexes
• Positional index example
• Processing a phrase query
• Proximity queries
• Positional index size
• Rules of thumb
• Combination schemes
• IR vs. databases: Structured vs unstructured data
• Unstructured data
• Semi-structured data

Introduction to Information Retrieval
Introducing Information Retrieval and Web Search

Information Retrieval
• Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers).
– These days we frequently think first of web search, but there are many other cases:
• E-mail search
• Searching your laptop
• Corporate knowledge bases
• Legal information retrieval

Unstructured (text) vs. structured (database) data in the mid-nineties
[Figure omitted]
Unstructured (text) vs. structured (database) data today
[Figure omitted]

Basic assumptions of Information Retrieval (Sec. 1.1)
• Collection: a set of documents.
– Assume it is a static collection for the moment.
• Goal: retrieve documents with information that is relevant to the user’s information need and helps the user complete a task.

The classic search model (Sec. 1.1)
[Figure: the user’s task (“Get rid of mice in a politically correct way”) gives rise to an info need (“Info about removing mice without killing them”); the need may be a misconception. The need is formulated as a query (“how trap mice alive”); the formulation may be a misformulation. The search engine runs the query over the collection, returns results, and the user may refine the query.]

How good are the retrieved docs? (Sec. 1.1)
• Precision: fraction of retrieved docs that are relevant to the user’s information need.
• Recall: fraction of relevant docs in the collection that are retrieved.
• More precise definitions and measurements to follow later.

Term-document incidence matrices

Unstructured data in 1620 (Sec. 1.1)
• Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
• One could grep all of Shakespeare’s plays for Brutus and Caesar, then strip out lines containing Calpurnia. Why is that not the answer?
– Slow (for large corpora).
– NOT Calpurnia is non-trivial.
– Other operations (e.g., find the word Romans near countrymen) are not feasible.
– Ranked retrieval (best documents to return): later lectures.

Term-document incidence matrix (Sec. 1.1)

            Antony and  Julius  The      Hamlet  Othello  Macbeth
            Cleopatra   Caesar  Tempest
Antony          1         1       0        0       0        1
Brutus          1         1       0        1       0        0
Caesar          1         1       0        1       1        1
Calpurnia       0         1       0        0       0        0
Cleopatra       1         0       0        0       0        0
mercy           1         0       1        1       1        1
worser          1         0       1        1       1        0

Entry is 1 if the play contains the word, 0 otherwise. Query: Brutus AND Caesar BUT NOT Calpurnia.

Incidence vectors (Sec. 1.1)
• So we have a 0/1 vector for each term.
• To answer the query: take the vectors for Brutus, Caesar, and Calpurnia (complemented) and bitwise AND them:
  110100 AND 110111 AND 101111 = 100100
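The bitwise AND over incidence vectors can be sketched in a few lines. This is a toy illustration, not part of the original slides; the term vectors are hardcoded from the incidence matrix above, with the leftmost bit corresponding to the first play:

```python
# Toy Boolean retrieval over the term-document incidence matrix.
plays = ["Antony and Cleopatra", "Julius Caesar", "The Tempest",
         "Hamlet", "Othello", "Macbeth"]

# One 0/1 vector per term, leftmost bit = first play.
vec = {
    "Brutus":    0b110100,
    "Caesar":    0b110111,
    "Calpurnia": 0b010000,
}
mask = (1 << len(plays)) - 1  # all six document bits set

# Brutus AND Caesar AND NOT Calpurnia
result = vec["Brutus"] & vec["Caesar"] & (~vec["Calpurnia"] & mask)

# Read off the plays whose bit is set.
hits = [p for i, p in enumerate(plays)
        if result & (1 << (len(plays) - 1 - i))]
print(hits)  # ['Antony and Cleopatra', 'Hamlet']
```

The complement of Calpurnia’s vector must be masked to six bits, since Python integers have no fixed width; the result 100100 picks out exactly the two plays quoted in “Answers to query” below.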
Answers to query (Sec. 1.1)
• Antony and Cleopatra, Act III, Scene ii
  Agrippa [Aside to Domitius Enobarbus]: Why, Enobarbus, / When Antony found Julius Caesar dead, / He cried almost to roaring; and he wept / When at Philippi he found Brutus slain.
• Hamlet, Act III, Scene ii
  Lord Polonius: I did enact Julius Caesar: I was killed i’ the Capitol; Brutus killed me.

Bigger collections (Sec. 1.1)
• Consider N = 1 million documents, each with about 1000 words.
• At an average of 6 bytes/word including spaces/punctuation, that is 6 GB of data in the documents.
• Say there are M = 500K distinct terms among these.

Can’t build the matrix (Sec. 1.1)
• A 500K × 1M matrix has half a trillion 0’s and 1’s.
• But it has no more than one billion 1’s (why?), so the matrix is extremely sparse.
• What’s a better representation? We only record the 1 positions.

The Inverted Index
The key data structure underlying modern IR.

Inverted index (Sec. 1.2)
• For each term t, we must store a list of all documents that contain t.
– Identify each doc by a docID, a document serial number.
• Can we use fixed-size arrays for this? What happens if the word Caesar is added to document 14?
  Brutus    → 1, 2, 4, 11, 31, 45, 173, 174
  Caesar    → 1, 2, 4, 5, 6, 16, 57, 132
  Calpurnia → 2, 31, 54, 101

Inverted index (Sec. 1.2)
• We need variable-size postings lists.
– On disk, a continuous run of postings is normal and best.
– In memory, can use linked lists or variable-length arrays.
• Some tradeoffs in size/ease of insertion.
• The dictionary maps each term to its postings list; each posting holds a docID, and postings are sorted by docID (more later on why).

Inverted index construction (Sec. 1.2)
Documents to be indexed (“Friends, Romans, countrymen.”)
→ Tokenizer → token stream (Friends, Romans, Countrymen)
→ Linguistic modules → modified tokens (friend, roman, countryman)
→ Indexer → inverted index:
  friend     → 2, 4
  roman      → 1, 2
  countryman → 13, 16
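Because postings are sorted by docID, an AND query can be answered in linear time by the merge the outline calls “Intersecting two postings lists (a ‘merge’ algorithm)”. A minimal sketch, using the Brutus and Caesar postings above:

```python
def intersect(p1, p2):
    """Intersect two postings lists sorted by docID.

    The classic merge: both cursors only move forward, so the cost is
    O(len(p1) + len(p2)) comparisons.
    """
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])  # docID in both lists
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1                # advance the cursor on the smaller docID
        else:
            j += 1
    return answer

brutus = [1, 2, 4, 11, 31, 45, 173, 174]
caesar = [1, 2, 4, 5, 6, 16, 57, 132]
print(intersect(brutus, caesar))  # [1, 2, 4]
```

This linear merge is only possible because both lists are kept sorted by docID, which is the point the slides promise to return to.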
Initial stages of text processing
• Tokenization: cut the character sequence into word tokens. Deal with “John’s”, “a state-of-the-art solution”.
• Normalization: map text and query terms to the same form. You want U.S.A. and USA to match.
• Stemming: we may wish different forms of a root to match, e.g., authorize, authorization.
• Stop words: we may omit very common words (or not), e.g., the, a, to, of.

Indexer steps: Token sequence (Sec. 1.2)
• Produce the sequence of (modified token, document ID) pairs.
  Doc 1: I did enact Julius Caesar I was killed i’ the Capitol; Brutus killed me.
  Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious

Indexer steps: Sort (Sec. 1.2)
• Sort by terms, and then by docID. Core …
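The indexer steps above (token sequence, then sort, then dictionary & postings) can be sketched as a toy indexer over the two example documents. This is an illustration, not the slides’ implementation: the “linguistic modules” are stood in for by crude lowercasing and punctuation stripping.

```python
from collections import defaultdict

docs = {
    1: "I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious",
}

# Step 1: token sequence -- (modified token, docID) pairs.
# Crude normalization stands in for the real linguistic modules.
pairs = []
for doc_id, text in docs.items():
    for token in text.lower().replace(";", " ").replace(".", " ").split():
        pairs.append((token, doc_id))

# Step 2: sort by term, then by docID.
pairs.sort()

# Step 3: dictionary & postings -- merge duplicate pairs so each
# postings list holds each docID once, in sorted order.
index = defaultdict(list)
for term, doc_id in pairs:
    if not index[term] or index[term][-1] != doc_id:
        index[term].append(doc_id)

print(index["brutus"])   # [1, 2]
print(index["capitol"])  # [1]
```

Sorting first means duplicate (term, docID) pairs land next to each other, so the postings lists can be built in a single pass; this is the same reason the slides sort before building the dictionary.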

