CS 188: Artificial Intelligence, Spring 2006
Lecture 15: Bayes' Nets
3/9/2006
Dan Klein – UC Berkeley

Outline
- Rest of course:
  - Bayes nets
  - Speech recognition / HMMs
  - Reinforcement learning
  - Applications: NLP, vision, games
- Today: Bayes nets introduction

Models
- Models are descriptions of how (a portion of) the world works
- Models are always simplifications:
  - They may not account for every variable
  - They may not account for all interactions between variables
- Why worry about probabilistic models? We (or our agents) need to reason about unknown variables, given evidence
  - Example: explanation (diagnostic reasoning)
  - Example: prediction (causal reasoning)
  - Example: value of information

Reminder: CSPs
- CSPs were a kind of model:
  - They describe legal interactions between variables
  - Usually we just look for some legal assignment
  - But we can also reason using all assignments, or find assignments consistent with evidence
- Key idea of CSPs: model global behavior using local constraints
- Recurring idea in AI: compact local models interact to give efficient, interesting global behavior

Probabilistic Models
- A probabilistic model is a joint distribution over a set of variables
- Given a joint distribution, we can reason about unobserved variables given observations (evidence)
- General form of a query: P(Q | E_1 = e_1, ..., E_k = e_k), where Q is the stuff you care about and E_1 = e_1, ..., E_k = e_k is the stuff you already know
- This kind of posterior distribution is also called the belief function of an agent that uses this model

Bayes' Nets: Big Picture
- Two problems with generic probabilistic models:
  - Unless there are only a few variables, the joint is too big to represent explicitly
  - It is hard to estimate anything empirically about more than a few variables at a time
- Bayes' nets are a technique for describing complex joint distributions (models) using a bunch of simple, local distributions:
  - We describe how variables interact locally
  - Local interactions chain together to give global, indirect interactions
- For about 10 minutes, we'll be very vague about how these interactions are specified

Graphical Model Notation
- Nodes: variables (with domains); a node can be assigned (observed) or unassigned (unobserved)
- Arcs: interactions
  - Similar to constraints
  - Indicate "direct influence" between variables
- For now: imagine that arrows mean causation

Example: Coin Flips
- Variables X_1, X_2, ..., X_n: n independent coin flips
- No interactions between variables: absolute independence

Example: Traffic
- Variables:
  - R: it rains
  - T: there is traffic
- Model 1: independence
- Model 2: rain causes traffic (arc R -> T)
- Why is an agent using model 2 better?

Example: Traffic II
- Let's build a causal graphical model
- Variables:
  - T: traffic
  - R: it rains
  - L: low pressure
  - D: roof drips
  - B: ballgame
  - C: cavity

Example: Alarm Network
- Variables:
  - B: burglary
  - A: alarm goes off
  - M: Mary calls
  - J: John calls
  - E: earthquake!

Bayes' Net Semantics
- Let's formalize the semantics of a Bayes' net:
  - A set of nodes, one per variable X
  - A directed, acyclic graph
  - A conditional distribution for each node: a distribution over X for each combination of its parents' values
  - CPT: conditional probability table
  - Description of a noisy "causal" process
- A Bayes net = topology (graph) + local conditional probabilities

Probabilities in BNs
- Bayes' nets implicitly encode joint distributions as a product of local conditional distributions
- To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together:
    P(x_1, ..., x_n) = ∏_i P(x_i | parents(X_i))
- This lets us reconstruct any entry of the full joint (a worked example appears on the slide; see also the sketch below)
- Not every BN can represent every full joint: the topology enforces certain conditional independencies
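To make the product rule concrete, here is a minimal runnable sketch (mine, not part of the lecture); it hard-codes the two-node rain/traffic net whose CPT numbers appear in the traffic example later in this deck:

```python
# Minimal sketch (not from the lecture): the probability a two-node
# Bayes' net R -> T assigns to a full assignment is the product of one
# local conditional per node. CPT numbers are from the traffic example.

# Prior P(R): P(r) = 1/4, P(~r) = 3/4
P_R = {True: 1/4, False: 3/4}

# Conditional P(T | R), keyed by (t_value, r_value)
P_T_given_R = {
    (True, True): 3/4,    # P(t | r)
    (False, True): 1/4,   # P(~t | r)
    (True, False): 1/2,   # P(t | ~r)
    (False, False): 1/2,  # P(~t | ~r)
}

def joint_prob(r: bool, t: bool) -> float:
    """P(r, t) = P(r) * P(t | r): one local conditional per node."""
    return P_R[r] * P_T_given_R[(t, r)]

# Reconstruct every entry of the full joint from the local CPTs.
for r in (True, False):
    for t in (True, False):
        print(f"P(R={r}, T={t}) = {joint_prob(r, t)}")  # e.g. P(True, True) = 0.1875 = 3/16
```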
Example: Coin Flips
- Each flip X_1, X_2, ..., X_n gets the same one-node CPT: P(h) = 0.5, P(t) = 0.5
- Only distributions whose variables are absolutely independent can be represented by a Bayes' net with no arcs

Example: Traffic
- Net: R -> T
- P(R): r: 1/4, ¬r: 3/4
- P(T | R):
    given r: t: 3/4, ¬t: 1/4
    given ¬r: t: 1/2, ¬t: 1/2

Example: Alarm Network
- [CPTs for the burglary/alarm network are shown on the slide]

Example: Naïve Bayes
- Let's figure out what the Bayes' net for naïve Bayes is

Example: Traffic II
- Variables:
  - T: traffic
  - R: it rains
  - L: low pressure
  - D: roof drips
  - B: ballgame
- [Graph over R, T, B, D, L shown on the slide]

Size of a Bayes' Net
- How big is a joint distribution over N Boolean variables?
- How big is a Bayes' net if each node has at most k parents?
- Both give you the power to calculate any P(X_1, ..., X_N)
- BNs: huge space savings! (see the arithmetic below)
- Also easier to elicit local CPTs
- Also turns out to be faster to answer queries (next class)
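To put numbers on the space savings (my arithmetic, illustrative; the slide itself only poses the questions):

```latex
% Full joint over N Boolean variables vs. a Bayes' net where each node
% has at most k (Boolean) parents: each CPT has 2^k rows of 2 entries.
\[
\underbrace{2^N}_{\text{joint table entries}}
\qquad \text{vs.} \qquad
\underbrace{N \cdot 2^{k+1} = O(N \cdot 2^k)}_{\text{total CPT entries}}
\]
% Example: N = 20 variables, k = 3 parents each:
\[
2^{20} = 1{,}048{,}576 \qquad \text{vs.} \qquad 20 \cdot 2^{4} = 320 .
\]
```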


Building the (Entire) Joint
- We can take a Bayes' net and build the full joint distribution it encodes
- Typically there's no reason to do this, but it's important to know that you could!
- To emphasize: every BN over a domain implicitly represents some joint distribution over that domain

Example: Traffic
- Basic traffic net (R -> T); let's multiply out the joint:
- P(R): r: 1/4, ¬r: 3/4
- P(T | R): given r: t: 3/4, ¬t: 1/4; given ¬r: t: 1/2, ¬t: 1/2
- Resulting joint P(R, T):
          t      ¬t
    r    3/16   1/16
    ¬r   6/16   6/16

Example: Reverse Traffic
- Reverse causality? Net: T -> R
- P(T): t: 9/16, ¬t: 7/16
- P(R | T): given t: r: 1/3, ¬r: 2/3; given ¬t: r: 1/7, ¬r: 6/7
- This net multiplies out to the same joint:
          t      ¬t
    r    3/16   1/16
    ¬r   6/16   6/16
- (A quick check appears in the sketch below)
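A minimal sketch (mine, not from the lecture) that multiplies out both nets with exact fractions and confirms they encode the same joint:

```python
# Sketch (not from the lecture): multiply out the forward net R -> T and
# the reverse net T -> R from the slides; both encode the same joint.
from fractions import Fraction as F

# Forward net: P(R) and P(T | R)
P_R = {'r': F(1, 4), '~r': F(3, 4)}
P_T_given_R = {('t', 'r'): F(3, 4), ('~t', 'r'): F(1, 4),
               ('t', '~r'): F(1, 2), ('~t', '~r'): F(1, 2)}

# Reverse net: P(T) and P(R | T)
P_T = {'t': F(9, 16), '~t': F(7, 16)}
P_R_given_T = {('r', 't'): F(1, 3), ('~r', 't'): F(2, 3),
               ('r', '~t'): F(1, 7), ('~r', '~t'): F(6, 7)}

# Multiply out each net's joint: one local conditional per node.
forward = {(r, t): P_R[r] * P_T_given_R[(t, r)]
           for r in P_R for t in P_T}
reverse = {(r, t): P_T[t] * P_R_given_T[(r, t)]
           for r in P_R for t in P_T}

assert forward == reverse  # same four entries: 3/16, 1/16, 6/16, 6/16
for (r, t), p in sorted(forward.items()):
    print(f"P({r}, {t}) = {p}")
```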

Causality?
- When Bayes' nets reflect the true causal patterns:
  - Often simpler (nodes have fewer parents)
  - Often easier to think about
  - Often easier to elicit from experts
- But BNs need not actually be causal:
  - Sometimes no causal net exists over the domain (e.g., consider the variables Traffic and Drips)
  - We end up with arrows that reflect correlation, not causation
- What do the arrows really mean?
  - The topology may happen to encode causal structure
  - The topology really encodes conditional independencies

Creating Bayes' Nets
- So far, we have talked about how any fixed Bayes' net encodes a joint distribution
- Next: how to represent a fixed distribution as a Bayes' net
- Key ingredient: conditional independence
- The exercise we did in "causal" assembly of BNs was a kind of intuitive use of conditional independence; now we have to formalize the process
- After that: how to answer queries
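For reference, the standard definition of the key ingredient (not spelled out on these slides):

```latex
% X is conditionally independent of Y given Z (written X \perp Y \mid Z) iff
\[
\forall x, y, z : \quad P(x, y \mid z) = P(x \mid z)\, P(y \mid z),
\]
% or, equivalently, whenever P(y, z) > 0,
\[
P(x \mid y, z) = P(x \mid z).
\]
```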
