UNCC ITCS 3153 - Lecture Notes

ITCS 3153 Artificial Intelligence
Lecture 24: Statistical Learning (Chapter 20)

Slide outline:
1. ITCS 3153 Artificial Intelligence
2. AI: Creating rational agents
3. Searching for explanation of observations
4. Running example: Candy
5. Statistics
6. Bayesian Learning
7. Prediction of an unknown quantity X
8. Details of Bayes' rule
9. Example
10. Slide 10
11. Prediction of 11th candy
12. Overfitting
13. Overfitting Example
14. Learning with Data
15. Slide 15

AI: Creating rational agents

The pursuit of autonomous, rational agents
• It's all about search
  – Varying amounts of model information
    - tree searching (informed/uninformed)
    - simulated annealing
    - value/policy iteration
  – Searching for an explanation of observations
    - Used to develop a model

Searching for explanation of observations

If I can explain observations… can I predict the future?
• Can I explain why ten coin tosses come up 6 heads and 4 tails?
  – Can I predict the 11th coin toss?

Running example: Candy

Surprise Candy
• Comes in two flavors
  – cherry (yum)
  – lime (yuk)
• All candy is wrapped in the same opaque wrapper
• Candy is packaged in large bags containing five different allocations of cherry and lime

Statistics

Given a bag of candy, what distribution of flavors will it have?
• Let H be the random variable corresponding to your hypothesis
  – H1 = all cherry, H2 = all lime, H3 = 50/50 cherry/lime
• As you open pieces of candy, let each observation of data D1, D2, D3, … be either cherry or lime
  – D1 = cherry, D2 = cherry, D3 = lime, …
• Predict the flavor of the next piece of candy (see the sketch after this slide)
  – If the data caused you to believe H1 was correct, you'd pick cherry
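A minimal sketch of this setup in Python, assuming the three hypotheses enumerated above with a uniform prior over them. (The slides say bags come in five allocations, but only three hypotheses are listed in this preview; the dictionary names and the prior values are illustrative assumptions, not part of the original notes.)

```python
# Each hypothesis maps a flavor to the probability that a randomly
# drawn piece of candy from the bag has that flavor.
HYPOTHESES = {
    "H1_all_cherry":  {"cherry": 1.0, "lime": 0.0},
    "H2_all_lime":    {"cherry": 0.0, "lime": 1.0},
    "H3_fifty_fifty": {"cherry": 0.5, "lime": 0.5},
}

# Assumed uniform prior P(h_i); the slides do not specify priors.
PRIOR = {name: 1.0 / len(HYPOTHESES) for name in HYPOTHESES}

# Observations from the slide: D1 = cherry, D2 = cherry, D3 = lime.
observations = ["cherry", "cherry", "lime"]
```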
Bayesian Learning

Use available data to calculate the probability of each hypothesis and make a prediction
• Because each hypothesis has an independent likelihood, we use all of their relative likelihoods when making a prediction
• Probabilistic inference using Bayes' rule:
  – P(h_i | d) ∝ P(d | h_i) P(h_i)
  – The probability of hypothesis h_i being the active one, given that you observed the sequence d, is proportional to the probability of seeing the data sequence d generated by hypothesis h_i, multiplied by the prior likelihood of hypothesis h_i being active. The constant of proportionality simply normalizes the posteriors so that they sum to 1.
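Continuing the sketch above, a possible implementation of this update: assuming candies in a bag are drawn independently, P(d | h_i) factors into a product of per-candy flavor probabilities, and the next flavor is predicted by weighting each hypothesis's flavor probability by its posterior. (The function names are my own, not from the notes.)

```python
def posterior(hypotheses, prior, data):
    """P(h_i | d) ∝ P(d | h_i) P(h_i); with i.i.d. draws, P(d | h_i)
    is the product of per-candy flavor probabilities under h_i."""
    unnormalized = {}
    for name, flavor_probs in hypotheses.items():
        likelihood = 1.0
        for flavor in data:
            likelihood *= flavor_probs[flavor]
        unnormalized[name] = likelihood * prior[name]
    total = sum(unnormalized.values())  # 1/alpha, the normalizing constant
    return {name: p / total for name, p in unnormalized.items()}

def predict_next(hypotheses, post, flavor):
    """P(D_next = flavor | d) = sum_i P(flavor | h_i) * P(h_i | d)."""
    return sum(probs[flavor] * post[name] for name, probs in hypotheses.items())

post = posterior(HYPOTHESES, PRIOR, observations)
# cherry, cherry, lime rules out H1 (all cherry) and H2 (all lime),
# since each assigns probability 0 to at least one observation,
# leaving all posterior mass on H3.
print(post)                                      # H3 gets posterior 1.0
print(predict_next(HYPOTHESES, post, "cherry"))  # 0.5
```

With these particular observations the update is degenerate, since two hypotheses assign zero probability to the data and are eliminated outright; with mixed bags the posterior would shift gradually instead of collapsing onto a single hypothesis.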

