CS 188: Artificial Intelligence
Fall 2006
Lecture 17: Bayes Nets III
10/26/2006
Dan Klein – UC Berkeley

Representing Knowledge

Inference
- Inference: calculating some statistic from a joint probability distribution
- Examples:
  - Posterior probability: P(Q | E1 = e1, ..., Ek = ek)
  - Most likely explanation: argmax_q P(Q = q | E1 = e1, ..., Ek = ek)
[figure: example network with nodes R, T, B, D, L, T′]

Reminder: Alarm Network
[figure: the burglary/earthquake alarm network]

Inference by Enumeration
- Given unlimited time, inference in BNs is easy
- Recipe:
  - State the marginal probabilities you need
  - Figure out ALL the atomic probabilities you need
  - Calculate and combine them
- Example: P(B | j, m) ∝ P(B, j, m) = Σ_e Σ_a P(B, e, a, j, m)

Example
- Where did we use the BN structure? We didn't!

Example
- In this simple method, we only need the BN to synthesize the joint entries:
  P(B, e, a, j, m) = P(B) P(e) P(a | B, e) P(j | a) P(m | a)

Normalization Trick
- Normalize: P(B | j, m) = P(B, j, m) / Σ_b P(b, j, m)

Inference by Enumeration?

Nesting Sums
- Atomic inference is extremely slow!
- Slightly clever way to save work: move the sums as far right as possible
- Example: P(B | j, m) ∝ P(B) Σ_e P(e) Σ_a P(a | B, e) P(j | a) P(m | a)

Evaluation Tree
- View the nested sums as a computation tree
- Still repeated work: calculate P(m | a) P(j | a) twice, etc.

Variable Elimination: Idea
- Lots of redundant work in the computation tree
- We can save time if we cache all partial results
- This is the basic idea behind variable elimination

Basic Objects
- Track objects called factors
- Initial factors are local CPTs
- During elimination, create new factors
- Anatomy of a factor:
  [figure: a factor, labeled with the variables introduced, the variables summed out, and the argument variables (always non-evidence variables); its table holds 4 numbers, one for each value of D and E]

Basic Operations
- First basic operation: joining factors
- Combining two factors:
  - Just like a database join
  - Build a factor over the union of the domains
- Example: joining f1(A, B) and f2(B, C) gives f(A, B, C) = f1(A, B) f2(B, C)

Basic Operations
- Second basic operation: marginalization
- Take a factor and sum out a variable
- Shrinks a factor to a smaller one
- A projection operation
- Example: summing B out of f(A, B) gives g(A) = Σ_b f(A, b)

Example
[worked example: variable elimination on the alarm network, step by step]

Variable Elimination
- What you need to know:
  - VE caches intermediate computations
  - Polynomial time for tree-structured graphs!
  - Saves time by marginalizing variables as soon as possible rather than at the end
- We will see special cases of VE later
- You'll have to implement the special cases

Approximations
- Exact inference is slow, especially when you have a lot of hidden nodes
- Approximate methods give you a (close) answer, faster

Sampling
- Basic idea:
  - Draw N samples from a sampling distribution S
  - Compute an approximate posterior probability
  - Show this converges to the true probability P
- Outline:
  - Sampling from an empty network
  - Rejection sampling: reject samples disagreeing with evidence
  - Likelihood weighting: use evidence to weight samples

Prior Sampling
[figure: sampling the Cloudy → Sprinkler, Rain → WetGrass network one node at a time, in topological order]

Prior Sampling
- This process generates samples with probability
  S_PS(x1, ..., xn) = Π_i P(xi | Parents(Xi)) = P(x1, ..., xn)
  i.e. the BN's joint probability
- Let the number of samples of an event be N_PS(x1, ..., xn)
- Then lim_{N→∞} N_PS(x1, ..., xn) / N = S_PS(x1, ..., xn) = P(x1, ..., xn)
- I.e., the sampling procedure is consistent

Example
- We'll get a bunch of samples from the BN:
  c, ¬s, r, w
  c, s, r, w
  ¬c, s, r, ¬w
  c, ¬s, r, w
  ¬c, s, ¬r, w
- If we want to know P(W):
  - We have counts <w:4, ¬w:1>
  - Normalize to get P(W) = <w:0.8, ¬w:0.2>
  - This will get closer to the true distribution with more samples
  - Can estimate anything else, too
  - What about P(C | ¬r)? P(C | ¬r, ¬w)?
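The sample-and-count procedure above is easy to turn into code. The sketch below is not from the lecture: the CPT values, the helper name prior_sample, and the choice of query P(W) are illustrative assumptions mirroring the Cloudy/Sprinkler/Rain/WetGrass example.

```python
import random

# Toy CPTs for the Cloudy -> {Sprinkler, Rain} -> WetGrass network.
# These probability values are illustrative assumptions, not the lecture's.
P_C = 0.5
P_S_given_C = {True: 0.1, False: 0.5}            # P(S=true | C)
P_R_given_C = {True: 0.8, False: 0.2}            # P(R=true | C)
P_W_given_SR = {(True, True): 0.99, (True, False): 0.90,
                (False, True): 0.90, (False, False): 0.01}  # P(W=true | S, R)

def prior_sample():
    """Sample every node in topological order, each given its sampled parents."""
    c = random.random() < P_C
    s = random.random() < P_S_given_C[c]
    r = random.random() < P_R_given_C[c]
    w = random.random() < P_W_given_SR[(s, r)]
    return c, s, r, w

# Estimate P(W) as in the slide: tally counts of w vs. ¬w and normalize.
N = 10000
count_w = sum(1 for _ in range(N) if prior_sample()[3])
print("P(w) ~", count_w / N)
```

With more samples the estimate converges to the true marginal, which is exactly the consistency claim made above.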
Rejection Sampling
- Let's say we want P(C):
  - No point keeping all samples around
  - Just tally counts of C outcomes
- Let's say we want P(C | s):
  - Same thing: tally C outcomes, but ignore (reject) samples which don't have S = s
  - E.g., of the five samples above, only c, s, r, w and ¬c, s, r, ¬w and ¬c, s, ¬r, w are kept
  - This is rejection sampling
  - It is also consistent (correct in the limit)

Likelihood Weighting
- Problem with rejection sampling:
  - If evidence is unlikely, you reject a lot of samples
  - You don't exploit your evidence as you sample
  - Consider P(B | a): if the alarm a is rare, nearly every sample is rejected
- Idea: fix evidence variables and sample the rest
- Problem: sample distribution not consistent!
- Solution: weight by probability of evidence given parents (see the code sketch below)

Likelihood Sampling
[figure: sampling the Cloudy → Sprinkler, Rain → WetGrass network with the evidence nodes fixed]

Likelihood Weighting
- Sampling distribution if z sampled and e fixed evidence:
  S_WS(z, e) = Π_i P(zi | Parents(Zi))
- Now, samples have weights:
  w(z, e) = Π_i P(ei | Parents(Ei))
- Together, weighted sampling distribution is consistent:
  S_WS(z, e) · w(z, e) = Π_i P(zi | Parents(Zi)) · Π_i P(ei | Parents(Ei)) = P(z, e)

Likelihood Weighting
- Note that likelihood weighting doesn't solve all our problems
- Rare evidence is taken into account for downstream variables, but not upstream ones
- A better solution is Markov-chain Monte Carlo (MCMC), more advanced
- We'll return to sampling for robot localization and tracking in dynamic models
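To make the weighting scheme concrete, here is a minimal likelihood-weighting sketch, again not from the lecture: it reuses the illustrative (assumed) CPT values from the prior-sampling snippet, and the query P(C = true | s, w) and helper name weighted_sample are assumptions chosen for illustration.

```python
import random

# Same illustrative CPTs as in the prior-sampling sketch (assumed values).
P_C = 0.5
P_S_given_C = {True: 0.1, False: 0.5}
P_R_given_C = {True: 0.8, False: 0.2}
P_W_given_SR = {(True, True): 0.99, (True, False): 0.90,
                (False, True): 0.90, (False, False): 0.01}

def weighted_sample(s_evidence=True, w_evidence=True):
    """Fix the evidence (S, W), sample the rest topologically, and weight
    by the probability of each evidence value given its sampled parents."""
    weight = 1.0
    c = random.random() < P_C                 # non-evidence: sampled
    s = s_evidence                            # evidence: fixed, not sampled
    weight *= P_S_given_C[c] if s else 1 - P_S_given_C[c]
    r = random.random() < P_R_given_C[c]      # non-evidence: sampled
    w = w_evidence                            # evidence: fixed, not sampled
    weight *= P_W_given_SR[(s, r)] if w else 1 - P_W_given_SR[(s, r)]
    return c, weight

# Estimate P(c | s, w): a weighted tally of C outcomes, then normalize.
N = 10000
num = den = 0.0
for _ in range(N):
    c, wt = weighted_sample()
    den += wt
    num += wt if c else 0.0
print("P(c | s, w) ~", num / den)
```

Note that no sample is ever thrown away: unlikely evidence just produces small weights, which is exactly the advantage over rejection sampling described above.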