CU-Boulder CSCI 5417 - Lecture 7

CSCI 5417: Information Retrieval Systems
Jim Martin
Lecture 7, 9/13/2011

Today
- Review
- Efficient scoring schemes
- Approximate scoring
- Evaluating IR systems

Normal Cosine Scoring

Speedups...
- Compute the cosines faster
- Don't compute as many cosines

Generic Approach to Reducing Cosines
- Find a set A of contenders, with K < |A| << N
- A does not necessarily contain the top K, but has many docs from among the top K
- Return the top K docs in A
- Think of A as pruning likely non-contenders

Impact-Ordered Postings
- We really only want to compute scores for docs whose wf_{t,d} is high enough
  - Low scores are unlikely to change the ordering or reach the top K
- So sort each postings list by wf_{t,d}
- How do we compute scores in this order and still pick off the top K?
- Two ideas follow

1. Early Termination
- When traversing term t's postings, stop early after either
  - a fixed number of docs, or
  - wf_{t,d} drops below some threshold
- Take the union of the resulting sets of docs from the postings of each query term
- Compute scores only for the docs in this union

2. IDF-Ordered Terms
- When considering the postings of the query terms, look at them in order of decreasing IDF
  - High-IDF terms are likely to contribute most to the score
- As the score contribution from each query term is added in, stop if doc scores are relatively unchanged
- (A small code sketch combining these two ideas follows below.)
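The two pruning ideas above fit together naturally: keep each postings list in impact (decreasing wf_{t,d}) order, traverse each list with early termination, and process query terms in decreasing-IDF order. Below is a minimal, hypothetical Python sketch of that combination; it is not from the lecture, the toy index and the names (`postings`, `idf`, `max_postings`, `wf_threshold`) are invented for illustration, and document-length normalization and the "stop once scores stabilize" check are omitted for brevity.

```python
# Hypothetical sketch: approximate top-K scoring with impact-ordered postings,
# per-list early termination, and IDF-ordered query-term processing.
# The toy index and all parameter values below are made up for illustration.
import heapq
from collections import defaultdict
from typing import Dict, List, Tuple

# Impact-ordered index: each postings list holds (doc_id, wf_t_d) pairs
# sorted by decreasing wf_t_d.
postings: Dict[str, List[Tuple[int, float]]] = {
    "wine":  [(3, 2.4), (7, 1.9), (1, 0.6), (9, 0.2)],
    "heart": [(7, 3.1), (2, 1.2), (3, 0.9)],
    "red":   [(1, 1.1), (3, 0.8), (7, 0.7), (5, 0.3)],
}
idf: Dict[str, float] = {"wine": 1.7, "heart": 2.3, "red": 0.4}

def approximate_top_k(query_terms, k=2, max_postings=3, wf_threshold=0.5):
    """Return roughly the top-k (score, doc_id) pairs for the query."""
    scores = defaultdict(float)  # doc_id -> accumulated partial score
    # IDF ordering: high-IDF terms contribute to the accumulators first.
    for term in sorted(query_terms, key=lambda t: idf.get(t, 0.0), reverse=True):
        for i, (doc_id, wf_t_d) in enumerate(postings.get(term, [])):
            # Early termination: cap the number of postings taken per term
            # and stop once the impact drops below the threshold.
            if i >= max_postings or wf_t_d < wf_threshold:
                break
            scores[doc_id] += idf[term] * wf_t_d
    # The docs that received any score play the role of the contender set A;
    # return the top k of them (no length normalization in this sketch).
    return heapq.nlargest(k, ((s, d) for d, s in scores.items()))

print(approximate_top_k(["red", "wine", "heart"], k=2))
```

The union of the truncated postings lists is exactly the key set of `scores`, so only those documents ever have a (partial) cosine computed, which is the point of the pruning.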
Evaluation

Evaluation Metrics for Search Engines
- How fast does it index?
  - Number of documents per hour
  - Realtime search
- How fast does it search?
  - Latency as a function of index size
- Expressiveness of the query language
  - Ability to express complex information needs
  - Speed on complex queries

Evaluation Metrics for Search Engines
- All of the preceding criteria are measurable: we can quantify speed and size; we can make expressiveness precise
- But the key really is user happiness
  - Speed of response and size of the index are factors
  - But blindingly fast, useless answers won't make a user happy
- What makes people come back?
- We need a way of quantifying user happiness

Measuring user happiness
- Issue: who is the user we are trying to make happy?
- Web engine: the user finds what they want and returns often to the engine
  - Can measure the rate of returning users
- eCommerce site: the user finds what they want and makes a purchase
  - Measure time to purchase, or the fraction of searchers who become buyers

Measuring user happiness
- Enterprise (company/government/academic): care about "user productivity"
  - How much time do my users save when looking for information?
- Many other criteria having to do with breadth of access, secure access, etc.

Happiness: Difficult to Measure
- The most common proxy for user happiness is the relevance of search results
- But how do you measure relevance?
- We will detail one methodology here, then examine its issues
- Relevance measurement requires 3 elements:
  1. A benchmark document collection
  2. A benchmark suite of queries
  3. A binary assessment of either Relevant or Not relevant for each query-doc pair
- Some work uses more-than-binary judgments, but that is not typical

Evaluating an IR System
- The information need is translated into a query
- Relevance is assessed relative to the information need, not the query
- E.g., information need: I'm looking for information on whether drinking red wine is more effective at reducing the risk of heart attacks than white wine.
- Query: wine red white heart attack effective
- You evaluate whether the doc addresses the information need, not whether it contains those words

Standard Relevance Benchmarks
- TREC: the National Institute of Standards and Technology (NIST) has run a large IR test-bed for many years
- Reuters and other benchmark doc collections are used
- "Retrieval tasks" are specified, sometimes as queries
- Human experts mark, for each query and each doc, Relevant or Irrelevant
  - At least for a subset of the docs that some system returned for that query

Unranked Retrieval Evaluation
- As with any such classification task, there are 4 possible system outcomes: a, b, c, and d
- a and d are correct responses; b and c are mistakes
  - False positives / false negatives
  - Type I / Type II errors

                  Relevant    Not Relevant
  Retrieved          a             b
  Not Retrieved      c             d

Accuracy/Error Rate
- Given a query, an engine classifies each doc as "Relevant" or "Irrelevant"
- Accuracy of an engine: the fraction of these classifications that is correct
  - Accuracy = (a + d) / (a + b + c + d)
  - i.e., the number of correct judgments out of all judgments made
- Why is accuracy useless for evaluating large search engines? (See the worked example after these slides.)

Unranked Retrieval Evaluation: Precision and Recall
- Precision: the fraction of retrieved docs that are relevant = P(relevant | retrieved)
- Recall: the fraction of relevant docs that are retrieved = P(retrieved | relevant)
- Precision P = a / (a + b)
- Recall R = a / (a + c)
  (a, b, c as in the contingency table above)

Precision/Recall
- You can get high recall (but low precision) by retrieving all docs for all queries!
- Recall is a non-decreasing function of the number of docs retrieved
  - That is, recall either stays the same or increases as you return more docs
- In most systems, precision decreases with the number of docs retrieved
  - Or, equivalently, as recall increases
- This is a fact with strong empirical confirmation (illustrated in the ranked-retrieval sketch below)

Difficulties in Using Precision/Recall
- Should average over large corpus/query ensembles
- Need human relevance assessments
  - People aren't really reliable assessors
- Assessments have to be binary
- Heavily skewed by collection-specific facts
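The contingency-table definitions above are easy to make concrete. The following hypothetical example (not from the lecture; all document counts and ids are invented) computes accuracy, precision, and recall for one query over a small collection, and shows why accuracy is nearly useless when relevant documents are rare.

```python
# Hypothetical example: accuracy, precision, and recall from the a/b/c/d
# contingency table. All ids and counts below are made up.

def unranked_metrics(retrieved: set, relevant: set, n_docs: int):
    a = len(retrieved & relevant)   # relevant and retrieved
    b = len(retrieved - relevant)   # retrieved but not relevant
    c = len(relevant - retrieved)   # relevant but not retrieved
    d = n_docs - a - b - c          # neither retrieved nor relevant
    accuracy = (a + d) / n_docs
    precision = a / (a + b) if retrieved else 0.0
    recall = a / (a + c) if relevant else 0.0
    return accuracy, precision, recall

n_docs = 10_000                     # collection size
relevant = set(range(10))           # 10 relevant docs for this query
retrieved = {0, 1, 2, 42, 99}       # engine returns 5 docs, 3 of them relevant

print(unranked_metrics(retrieved, relevant, n_docs))
# accuracy ~= 0.999, precision = 0.6, recall = 0.3

# Why accuracy misleads on large collections: an engine that returns nothing
# still gets accuracy 0.999 (it is "right" about the 9,990 non-relevant docs)
# even though its precision and recall are both 0.
print(unranked_metrics(set(), relevant, n_docs))
```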


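The claim above that recall is non-decreasing while precision typically falls as more documents are returned can also be seen directly in a ranked result list. Here is another small hypothetical sketch (the ranking and the relevance labels are invented):

```python
# Hypothetical sketch: precision@k and recall@k over a ranked result list.
# Recall never decreases as k grows; precision tends to fall (not strictly).

ranking = [3, 17, 5, 42, 8, 99, 11, 2]   # doc ids in ranked order (invented)
relevant = {3, 5, 8, 2, 61}              # doc 61 is relevant but never returned

for k in range(1, len(ranking) + 1):
    hits = sum(1 for doc_id in ranking[:k] if doc_id in relevant)
    precision_at_k = hits / k
    recall_at_k = hits / len(relevant)
    print(f"k={k}: P@{k} = {precision_at_k:.2f}   R@{k} = {recall_at_k:.2f}")
```

Retrieving the entire collection drives recall to 1.0 for any query with at least one relevant document, which is why recall alone (like accuracy) is not a useful target on its own.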
