UB CSE 574 - Two frameworks for analyzing learning algorithms

Machine Learning, Chapter 7, Part 3 (CSE 574, Spring 2004)

Two frameworks for analyzing learning algorithms

1. Probably Approximately Correct (PAC) framework
• Identify classes of hypotheses that can/cannot be learned from a polynomial number of training samples
• Finite hypothesis spaces
• Infinite hypothesis spaces (VC dimension)
• Define a natural measure of complexity for hypothesis spaces (the VC dimension) that allows bounding the number of training examples required for inductive learning
2. Mistake bound (MB) framework
• Number of training errors made by a learner before it determines the correct hypothesis

Different learning settings considered in COLT

1. How are training samples generated?
• Passive observation of random examples
• Active querying by the learner
2. Noise in the data
• Noisy
• Error-free
3. Definition of success
• Target concept must be learned exactly
• Target concept need only be learned probably and approximately
4. Assumptions made by the learner
• About the distribution of instances
• Whether C ⊆ H
5. Measure by which the learner is evaluated
• Number of training examples, number of mistakes, total time

Mistake Bound (MB) Model of Learning

• Problem setting:
• The learner receives a sequence of training examples
• Upon receiving each example x, the learner must predict the target value c(x) before it is shown the correct target value by the trainer
• How many mistakes will the learner make in its predictions before it learns the target concept?

In the MB model, learning proceeds in stages. In each stage:
1. The learner gets an unlabeled example
2. The learner predicts its classification
3. The learner is told the correct label
Goal: bound the total number of mistakes.

Practical use of the MB learning model

• Significant in practical settings where learning must be done while the system is in actual use, rather than in an off-line training stage
• Example: a system that learns to approve credit card purchases based on data collected during use
• How many mistakes does it make in approving credit card purchases before it learns?
• The total number of mistakes is more important than the number of training examples

Study of the MB model: problem setting

• Count the number of mistakes made before learning the target concept exactly (rather than only probably and approximately, as in PAC)
• DEFINITION: Algorithm A has mistake bound M for learning class C if A makes at most M mistakes on any sequence of examples consistent with some function in C

Mistake bound for the Find-S algorithm

• Find-S:
• Initialize h to the most specific hypothesis l1 ∧ ¬l1 ∧ l2 ∧ ¬l2 ∧ … ∧ ln ∧ ¬ln
• For each positive training instance x, remove from h any literal that is not satisfied by x
• Output hypothesis h
• The total number of mistakes can be at most n + 1: the initial h classifies everything as negative, so the first positive example is a mistake, and each subsequent mistake removes at least one of the n remaining literals

Mistake bound for the HALVING algorithm

• HALVING ALGORITHM: predict using a majority vote over all concepts in C consistent with the past data
• Candidate Elimination and List-Then-Eliminate are halving algorithms
• Candidate Elimination maintains a description of the version space, incrementally refining it as each new sample is encountered
• Assume a majority vote is used among all hypotheses in the current version space
• The total number of mistakes can be at most log2|H|: each mistake means the majority of the version space voted wrong, so at least half of the remaining hypotheses are eliminated (both algorithms are sketched below)
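To make the two bounds above concrete, here is a minimal Python sketch (illustrative code, not part of the original slides; the function names, the stream-of-(x, label)-pairs interface, and the literal encoding are all assumptions). find_s_online learns a conjunction of boolean literals over n attributes and should make at most n + 1 mistakes; halving_online keeps an explicit version space and predicts by majority vote, making at most log2|H| mistakes when the target is in the hypothesis list.

# Illustrative sketch (not from the slides): Find-S and the halving
# algorithm run in the online, mistake-bound setting. A hypothesis over
# n boolean attributes is a set of literals; the pair (i, True) is the
# literal l_i and (i, False) is its negation.

def find_s_online(stream, n):
    # Start from the most specific hypothesis l1 ^ ~l1 ^ ... ^ ln ^ ~ln,
    # which classifies every instance as negative.
    h = {(i, v) for i in range(n) for v in (True, False)}
    mistakes = 0
    for x, label in stream:
        pred = all(x[i] == v for (i, v) in h)
        if pred != label:
            mistakes += 1
        if label:
            # Generalize: drop every literal the positive example violates.
            h = {(i, v) for (i, v) in h if x[i] == v}
    return mistakes  # at most n + 1 when the target is a conjunction in H

def halving_online(stream, hypotheses):
    # hypotheses: list of functions h(x) -> bool, with the target among them.
    vs = list(hypotheses)  # current version space
    mistakes = 0
    for x, label in stream:
        votes = sum(h(x) for h in vs)
        pred = 2 * votes > len(vs)  # majority vote; ties predict False
        if pred != label:
            mistakes += 1  # majority was wrong: at least half of vs is removed
        vs = [h for h in vs if h(x) == label]
    return mistakes  # at most log2(len(hypotheses))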
Optimal mistake bound

• For any target concept c, let M_A(c) denote the maximum number of mistakes, over all possible training sequences, that learning algorithm A makes in exactly learning c
• Definition: the optimal mistake bound for C, denoted Opt(C), is the minimum over all possible learning algorithms A of M_A(C), where M_A(C) = max over c in C of M_A(c)
• Then VC(C) ≤ Opt(C) ≤ log2(|C|)

Weighted Majority algorithm

• A classifier combination method
• Takes a weighted vote among a pool of prediction algorithms, e.g., alternative hypotheses in H or alternative learning algorithms
• It begins by assigning each algorithm a weight of 1
• Whenever an algorithm misclassifies an example, its weight is multiplied by β, where 0 < β < 1

Weighted Majority algorithm: mistake bound

• Let A be any set of n prediction algorithms
• Let k be the minimum number of mistakes made by any algorithm in A on the training sequence
• Then the number of mistakes made by Weighted Majority over the same sequence is at most (k log2(1/β) + log2 n) / log2(2/(1+β)), which for β = 1/2 is about 2.4 (k + log2 n)
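The Weighted Majority update above translates directly into code. The following is a minimal sketch (hypothetical; the expert interface and function names are assumptions, with β = 1/2 as the default penalty factor):

# Illustrative sketch (not from the slides) of the Weighted Majority
# algorithm: a weighted vote over a pool of prediction algorithms, with
# each algorithm's weight multiplied by beta whenever it misclassifies.

def weighted_majority(stream, experts, beta=0.5):
    weights = [1.0] * len(experts)  # every algorithm starts with weight 1
    mistakes = 0
    for x, label in stream:
        preds = [e(x) for e in experts]
        w_true = sum(w for w, p in zip(weights, preds) if p)
        w_false = sum(w for w, p in zip(weights, preds) if not p)
        pred = w_true > w_false  # weighted majority vote; ties predict False
        if pred != label:
            mistakes += 1
        # Penalize each expert that got this example wrong.
        weights = [w * beta if p != label else w
                   for w, p in zip(weights, preds)]
    return mistakes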

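A quick, hypothetical usage example for the weighted_majority sketch above: three simple experts on two-bit inputs, with target concept c(x) = x0. The best expert then makes k = 0 mistakes, so with n = 3 the β = 1/2 bound of roughly 2.4 (k + log2 n) ≈ 3.8 should comfortably hold.

# Tiny illustrative run of the weighted_majority sketch (made-up data).
experts = [
    lambda x: x[0],       # predicts x0; this matches the target exactly
    lambda x: x[1],       # predicts x1
    lambda x: not x[0],   # predicts NOT x0
]
xs = [(a, b) for a in (False, True) for b in (False, True)] * 3
stream = [(x, x[0]) for x in xs]  # target concept c(x) = x0

m = weighted_majority(stream, experts, beta=0.5)
print("weighted majority mistakes:", m)  # stays well under the ~3.8 bound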
