
Slide outline (full lecture): CSCI 5417 Information Retrieval Systems (Jim Martin); Today 9/15; Evaluation; Typical (good) 11-point precisions; Yet more evaluation measures; Recall/Precision; Variance; Finally; From corpora to test collections; Pooling; TREC; Critique of Pure Relevance; Search Engines; Evaluation at large search engines; A/B testing; Query to think about; Sources of Errors (unranked); Retrieved/Not Relevant (b); Not Retrieved/Relevant (c); Ranked Results; Discussion Examples; Examples: Doc 1; Examples: Doc 2; So...; Break; Questions?; Readings; Improving Things; Relevance Feedback; Relevance Feedback: Example; Results for Initial Query; Results after Relevance Feedback; Theoretical Optimal Query; Relevance Feedback in vector spaces; Rocchio 1971 Algorithm (SMART); Positive vs. Negative Feedback; Ad hoc results for query "canine" (source: Fernando Diaz); User feedback: Select what is relevant (source: Fernando Diaz); Results after relevance feedback (source: Fernando Diaz); Relevance Feedback: Assumptions; Violation of Assumptions; Relevance Feedback: Practical Problems; Relevance Feedback: Problems; Relevance Feedback Summary; Pseudo Relevance Feedback; Query Expansion; Types of Query Expansion; Controlled Vocabulary; Thesaurus-based Query Expansion; Automatic Thesaurus Generation; Automatic Thesaurus Generation Discussion; Query Expansion: Summary; So...

CSCI 5417 Information Retrieval Systems
Jim Martin
Lecture 8, 9/15/2011

Today 9/15
- Finish evaluation discussion
- Query improvement
  - Relevance feedback
  - Pseudo-relevance feedback
  - Query expansion

Evaluation
Summary measures:
- Precision at a fixed retrieval level
  - Perhaps most appropriate for web search: all people want are good matches on the first one or two results pages
  - But it has an arbitrary parameter k
- 11-point interpolated average precision
  - The standard measure in the TREC competitions: take the precision at 11 recall levels varying from 0 to 1 in steps of 0.1, using interpolation (the value for recall 0 is always interpolated), and average them
  - Evaluates performance at all recall levels

Typical (good) 11-point precisions
SabIR/Cornell 8A1 11-point precision from TREC 8 (1999)
[Figure: interpolated precision/recall curve; x-axis: Recall, 0 to 1; y-axis: Precision, 0 to 1]

Yet more evaluation measures...
Mean average precision (MAP)
- Average of the precision values obtained for the top k documents, taken each time a relevant doc is retrieved
- Avoids interpolation and the use of fixed recall levels
- MAP for a query collection is the arithmetic average over queries
  - Macro-averaging: each query counts equally

Recall/Precision
Worked example: 10 relevant documents exist in the collection, and 4 of them appear in the top 10 results (R = relevant, N = not relevant). The last column shows the precision values that enter the average-precision computation; a small code sketch reproducing these numbers follows the table.

Rank  Rel?  Recall  Precision  AP term
 1    R     10%     100%       100
 2    N     10%      50%
 3    N     10%      33%
 4    R     20%      50%        50
 5    R     30%      60%        60
 6    N     30%      50%
 7    R     40%      57%        57
 8    N     40%      50%
 9    N     40%      44%
10    N     40%      40%

Average precision over the retrieved relevant documents = (100 + 50 + 60 + 57) / 4 = 66.75%, i.e., 0.6675
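The average-precision arithmetic above is simple enough to spell out in a few lines of code. The sketch below is illustrative only, not code from the lecture: the function names are mine, relevance judgments are assumed to be binary 0/1, and (as on the slide) the collection is assumed to contain 10 relevant documents for this query.

# Minimal sketch: precision@k, average precision, and 11-point
# interpolated average precision for the ranked list in the table above.

def precision_at_k(rels, k):
    """Fraction of the top k results that are relevant."""
    return sum(rels[:k]) / k

def average_precision(rels):
    """Mean of precision@k taken at each rank k where a relevant doc appears.
    As on the slide, this averages only over relevant docs actually retrieved;
    dividing by the total number of relevant docs instead would also penalize
    relevant docs that were never retrieved."""
    precisions = [precision_at_k(rels, k) for k, r in enumerate(rels, start=1) if r]
    return sum(precisions) / len(precisions) if precisions else 0.0

def eleven_point_interpolated_ap(rels, total_relevant):
    """Average of interpolated precision at recall = 0.0, 0.1, ..., 1.0, where
    interpolated precision at recall level r is the maximum precision observed
    at any rank whose recall is >= r."""
    points, hits = [], 0
    for k, r in enumerate(rels, start=1):
        hits += r
        points.append((hits / total_relevant, hits / k))   # (recall, precision)
    levels = []
    for tenth in range(11):
        candidates = [p for rec, p in points if rec >= tenth / 10]
        levels.append(max(candidates) if candidates else 0.0)
    return sum(levels) / 11

# Ranked list from the Recall/Precision slide: R N N R R N R N N N
rels = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
print(precision_at_k(rels, 10))                # 0.4
print(average_precision(rels))                 # ~0.668 (slide rounds each term and reports 0.6675)
print(eleven_point_interpolated_ap(rels, 10))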
Variance
- For a test collection, it is usual that a system does poorly on some information needs (e.g., MAP = 0.1) and excellently on others (e.g., MAP = 0.7)
- Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query
- That is, there are easy information needs and hard ones!

Finally
- All of these measures are used for distinct comparison purposes
  - System A vs. System B
  - System A (1.1) vs. System A (1.2)
  - Approach A vs. Approach B (e.g., vector space approach vs. probabilistic approaches)
  - Systems on different collections? System A on med vs. trec vs. web text?
- They don't represent absolute measures

From corpora to test collections
Still need:
- Test queries
  - Must be germane to the docs available
  - Best designed by domain experts
  - Random query terms are generally not a good idea
- Relevance assessments
  - Human judges, time-consuming
  - Human panels are not perfect

Pooling
- With large datasets it's impossible to really assess recall; you would have to look at every document
- So TREC uses a technique called pooling (a minimal sketch appears at the end of this preview):
  - Run a query on a representative set of state-of-the-art retrieval systems
  - Take the union of the top N results from these systems
  - Have the analysts judge the relevant docs in this set

TREC
- The TREC Ad Hoc task from the first 8 TRECs is the standard IR task
  - 50 detailed information needs a year
  - Human evaluation of pooled results returned
- More recently, other related tracks: Web track, HARD, Bio, Q/A
A TREC query (TREC 5):
<top>
<num> Number: 225
<desc> Description:
What is the main function of the Federal Emergency Management Agency (FEMA) and the funding level provided to meet emergencies? Also, what resources are available to FEMA such as people, equipment, facilities?
</top>

Critique of Pure Relevance
- Relevance vs. marginal relevance
  - A document can be redundant even if it is highly relevant
    - Duplicates
    - The same information from different sources
  - Marginal relevance is a better measure of utility for the user
- Using facts/entities as evaluation units more directly measures true relevance
  - But it is harder to create the evaluation set

Search Engines…
How does any of this apply to the big search engines?

Evaluation at large search engines
- Recall is difficult to measure for the web
- Search engines often use precision at top k, e.g., k = 10
- Or measures that reward you more for getting rank 1 right than for getting rank 10 right
  - NDCG (Normalized Discounted Cumulative Gain); see the sketch at the end of this preview
- Search engines also use non-relevance-based measures
  - Clickthrough on the first result
    - Not very reliable if you look at a single clickthrough, but pretty reliable in the aggregate
  - Studies of user behavior in the lab
  - A/B testing
  - Focus groups
  - Diary studies

A/B testing
- Purpose: test a single innovation
- Prerequisite: you have a system up and running
- Have most users use the old system
- Divert a small proportion of traffic (e.g., 1%) to the new system that includes the innovation
- Evaluate with an "automatic" measure like clickthrough on the first result
- Now we can directly see if the innovation does improve user happiness
- Probably the evaluation methodology that large search engines trust most

Query to think about
Information need: I'm looking for information on whether drinking red wine is more effective at reducing your risk of heart attacks than white wine.
Query: wine red white heart attack effective
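The pooling procedure described under "Pooling" above is just set manipulation over per-system ranked lists. The sketch below is illustrative only, not an official TREC tool: the run layout (a dict mapping system name to a ranked list of document IDs) and the pool depth N are my assumptions.

# Minimal pooling sketch: the pool is the union of every system's top-N results.

def build_pool(runs, n=100):
    """runs: dict mapping system name -> ranked list of doc IDs for one query.
    Returns the set of doc IDs to hand to the human assessors."""
    pool = set()
    for ranked_docs in runs.values():
        pool.update(ranked_docs[:n])   # top N from this system
    return pool

runs = {
    "system_A": ["d12", "d7", "d3", "d99"],
    "system_B": ["d7", "d45", "d12", "d2"],
}
print(sorted(build_pool(runs, n=3)))   # ['d12', 'd3', 'd45', 'd7']
# Documents outside the pool are treated as not relevant when scoring.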
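NDCG, mentioned under "Evaluation at large search engines" above, can be stated concretely: sum the (graded) relevance gains discounted by a logarithm of the rank, then normalize by the score of the ideal ordering of the same gains. The sketch below uses one common formulation (gain / log2(rank + 1), so the rank-1 gain is undiscounted); the exact gain and discount functions vary across systems and are an assumption here, not something fixed by the lecture.

import math

# Minimal NDCG sketch (one common formulation; gain/discount choices vary).

def dcg(gains):
    """Discounted cumulative gain: gain at 1-based rank i is divided by log2(i + 1)."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg(gains, k=10):
    """DCG of the ranking divided by the DCG of the ideal (sorted) ranking."""
    ideal = sorted(gains, reverse=True)
    denom = dcg(ideal[:k])
    return dcg(gains[:k]) / denom if denom > 0 else 0.0

# Graded relevance of the top 5 results (e.g., 0 = not relevant ... 3 = perfect)
print(round(ndcg([3, 2, 0, 1, 2], k=5), 3))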

