CS 188 Introduction to AI
Fall 2005    Stuart Russell    Final Solutions

1. (12 pts.) Some Easy Questions to Start With

(a) (2) True. This follows from the property that each variable is independent of its predecessors given its parents. Since X_1, ..., X_k have no parents, they are absolutely independent.
(b) (2) True. Since both players are perfectly rational, each player receives the minimax value of the game from their point of view. The other player has a choice of any optimal strategy (there may be several), but they all have the same value.
(c) (2) True. This follows from monotonicity.
(d) (2) False. Consider α ≡ A, β ≡ B, γ ≡ (A ∧ B).
(e) (2) False. They may contain identical ground literals.
(f) (2) False. New elements may be added to categories, but the set of categories almost never changes.

2. (15 pts.) Search

(a) (4) (iii) n^{2n}. There are n vehicles in n^2 locations, so roughly (ignoring the one-per-square constraint) there are (n^2)^n = n^{2n} states.
(b) (3) (iii) 5^n.
(c) (2) Manhattan distance, i.e., |(n − i + 1) − x_i| + |n − y_i|. This is exact for a lone vehicle.
(d) (2) Only (iii) min{h_1, ..., h_n}.
(e) (4) The explanation is nontrivial, as it requires two observations: first, the total work required to move all n vehicles is ≥ n · min{h_1, ..., h_n}; second, the total work we can get done per step is ≤ n. Hence, completing all the work requires at least n · min{h_1, ..., h_n} / n = min{h_1, ..., h_n} steps.

3. (16 pts.) Propositional Logic

(a) (4) S_{t+1} ⇔ [(S_t ∧ a_t) ∨ (¬S_t ∧ b_t)].
(b) (4) Because the agent can do exactly one action, we know that b_t ≡ ¬a_t, so we replace b_t throughout. We obtain four clauses:
    1: (¬S_{t+1} ∨ S_t ∨ ¬a_t)
    2: (¬S_{t+1} ∨ ¬S_t ∨ a_t)
    3: (S_{t+1} ∨ ¬S_t ∨ ¬a_t)
    4: (S_{t+1} ∨ S_t ∨ a_t)
(c) (8) The goal is (¬S_t ∧ a_t) ⇒ ¬S_{t+1}. Negated, this becomes three clauses: 5: ¬S_t; 6: a_t; 7: S_{t+1}. Resolving 5, 6, and 7 against 1, we obtain the empty clause.
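The refutation in 3(c) can also be checked mechanically. The following minimal Python sketch (not part of the original solutions) encodes clauses 1-7 as sets of signed literals, with S0, a0, S1 standing for S_t, a_t, S_{t+1}, and saturates the set under propositional resolution until the empty clause appears.

    from itertools import combinations

    # Signed literals: ("S0", True) means S_t, ("S0", False) means ¬S_t.
    # S0 = S_t, a0 = a_t, S1 = S_{t+1}; a clause is a frozenset of literals.
    def negate(lit):
        name, sign = lit
        return (name, not sign)

    def resolvents(c1, c2):
        """All clauses obtainable by resolving c1 and c2 on one complementary literal."""
        return [(c1 - {lit}) | (c2 - {negate(lit)}) for lit in c1 if negate(lit) in c2]

    def refutable(kb):
        """Saturate the clause set under resolution; True iff the empty clause is derived."""
        clauses = set(kb)
        while True:
            new = set()
            for c1, c2 in combinations(clauses, 2):
                for r in resolvents(c1, c2):
                    if not r:           # empty clause: contradiction found
                        return True
                    new.add(r)
            if new <= clauses:          # nothing new: saturated without contradiction
                return False
            clauses |= new

    kb = [
        frozenset({("S1", False), ("S0", True),  ("a0", False)}),  # clause 1
        frozenset({("S1", False), ("S0", False), ("a0", True)}),   # clause 2
        frozenset({("S1", True),  ("S0", False), ("a0", False)}),  # clause 3
        frozenset({("S1", True),  ("S0", True),  ("a0", True)}),   # clause 4
        frozenset({("S0", False)}),                                # clause 5: ¬S_t
        frozenset({("a0", True)}),                                 # clause 6: a_t
        frozenset({("S1", True)}),                                 # clause 7: S_{t+1}
    ]
    print(refutable(kb))   # True, so (¬S_t ∧ a_t) ⇒ ¬S_{t+1} is entailed

Resolving clause 7 against clause 1, then clause 5, then clause 6 reproduces the hand derivation above.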
4. (15 pts.) Pruning in search trees

(a) (2) No pruning. In a max tree, the value of the root is the value of the best leaf. Any unseen leaf might be the best, so we have to see them all.
(b) (2) No pruning. An unseen leaf might have a value arbitrarily higher or lower than any other leaf, which (assuming non-zero outcome probabilities) means that there is no bound on the value of any incompletely expanded chance or max node.
(c) (2) No pruning. Same argument as in (a).
(d) (2) No pruning. Nonnegative values allow lower bounds on the values of chance nodes, but a lower bound does not allow any pruning.
(e) (2) Yes. If the first successor has value 1, the root has value 1 and all remaining successors can be pruned.
(f) (2) Yes. Suppose the first action at the root has value 0.6, and the first outcome of the second action has probability 0.5 and value 0; then all other outcomes of the second action can be pruned. (A small sketch following question 7 illustrates this bound-based pruning.)
(g) (3) (ii) Highest probability first. This gives the strongest bound on the value of the node, all other things being equal.

5. (8 pts.) MDPs

          π0    V^{π0}    π1    V^{π1}    π2
    S     a     6         a     6         a
    ¬S    a     4         b     5         b

6. (16 pts.) Probabilistic inference

(a) (3) (ii) and (iii). (For (iii), consider the Markov blanket of M.)
(b) (2) P(b, i, ¬m, g, j) = P(b) P(¬m) P(i | b, ¬m) P(g | b, i, ¬m) P(j | g) = .9 × .9 × .5 × .8 × .9 = .2916
(c) (4) Since B, I, M are fixed true in the evidence, we can treat G as having a prior of 0.9 and just look at the submodel with G, J:
    P(J | b, i, m) = α Σ_g P(J, g) = α [P(J, g) + P(J, ¬g)]
                   = α [⟨P(j, g), P(¬j, g)⟩ + ⟨P(j, ¬g), P(¬j, ¬g)⟩]
                   = α [⟨.81, .09⟩ + ⟨0, 0.1⟩] = ⟨.81, .19⟩
    That is, the probability of going to jail is 0.81. (A short enumeration check following question 7 reproduces this calculation.)
(d) (2) Intuitively, a person cannot be found guilty if not indicted, regardless of whether they broke the law and regardless of the prosecutor. This is what the CPT for G says; so G is context-specifically independent of B and M given I = false.
(e) (5) A pardon is unnecessary if the person is not indicted or not found guilty; so I and G are parents of P. One could also add B and M as parents of P, since a pardon is more likely if the person is actually innocent and if the prosecutor is politically motivated. (There are other causes of Pardon, such as LargeDonationToPresidentsParty, but such variables are not currently in the model.) The pardon (presumably) is a get-out-of-jail-free card, so P is a parent of J.

7. (18 pts.) Language and statistical learning

(a) (3) (i).
(b) (5) This has two parses. The first uses VP → VP Adverb, VP → Copula Adjective, Copula → is, Adjective → well, Adverb → well. Its probability is 0.2 × 0.2 × 0.8 × 0.5 × 0.5 = 0.008. The second uses VP → VP Adverb twice, VP → Verb, Verb → is, and Adverb → well twice. Its probability is 0.2 × 0.2 × 0.1 × 0.5 × 0.5 × 0.5 = 0.0005. The total probability is 0.0085. (The arithmetic is checked in a short sketch below.)
(c) (2) (i) and (ii).
(d) (2) True. There can only be finitely many ways to generate the finitely many strings of 10 words.
(e) (1) (ii) MAP learning. It cannot be Bayesian learning because it outputs only one hypothesis; it cannot be maximum likelihood because it takes complexity into account. If C(h) is linearly related to log P(h), then the algorithm is doing MAP learning.
(f) (5) The prior is represented by rules such as
        P(N_0 = A) :  S → A S_A
    where S_A means "rest of sentence after an A." Transitions are represented as, for example,
        P(N_{t+1} = B | N_t = A) :  S_A → B S_B
    and the sensor model is just the lexical rules, such as
        P(W_t = is | N_t = A) :  A → is
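The pruning claims in 4(e) and 4(f) rest on the same bound: with leaf values known to lie in [0, 1], a partially evaluated chance node is worth at most its accumulated expectation plus the unexamined probability mass. Below is a minimal Python sketch (not from the exam) of a max-over-chance-nodes evaluator using this bound; it reuses the numbers from 4(f) (first action worth 0.6, the second action's first outcome having probability 0.5 and value 0), while the second action's remaining outcomes are invented purely for illustration.

    # Leaf values are known to lie in [0, 1]; an action is a list of
    # (probability, leaf_value) pairs under a chance node.

    def chance_value(outcomes, alpha):
        """Expected value of a chance node, cut off once it provably cannot exceed alpha."""
        value, remaining = 0.0, 1.0
        for prob, leaf in outcomes:
            value += prob * leaf
            remaining -= prob
            # Upper bound: every unexamined outcome could be a leaf of value 1.
            if value + remaining <= alpha:
                print(f"  pruned: {remaining:.2f} probability mass never examined")
                return value + remaining   # an upper bound suffices; max will not pick this node
        return value

    def max_value(actions):
        best = float("-inf")
        for outcomes in actions:
            best = max(best, chance_value(outcomes, best))
        return best

    actions = [
        [(1.0, 0.6)],                          # first action: value 0.6, as in 4(f)
        [(0.5, 0.0), (0.3, 1.0), (0.2, 1.0)],  # second action: outcomes after the first are made up
    ]
    print(max_value(actions))                  # 0.6; the second action is abandoned after one outcome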

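The arithmetic in 6(b) and 6(c) can be reproduced from the CPT entries quoted in those answers alone; a small Python check (not part of the original solutions) follows. The only probabilities assumed are the ones appearing above: P(b) = 0.9, P(¬m) = 0.9, P(i | b, ¬m) = 0.5, P(g | b, i, ¬m) = 0.8, P(g | b, i, m) = 0.9, P(j | g) = 0.9, and P(j | ¬g) = 0.

    # CPT entries quoted in the solutions to 6(b) and 6(c).
    p_b       = 0.9    # P(b)
    p_not_m   = 0.9    # P(¬m)
    p_i       = 0.5    # P(i | b, ¬m)
    p_g_not_m = 0.8    # P(g | b, i, ¬m)
    p_g_m     = 0.9    # P(g | b, i, m), the "prior of 0.9" used in 6(c)
    p_j_g     = 0.9    # P(j | g)
    p_j_not_g = 0.0    # P(j | ¬g)

    # 6(b): chain-rule factorization along the network.
    joint = p_b * p_not_m * p_i * p_g_not_m * p_j_g
    print(f"P(b, i, ¬m, g, j) = {joint:.4f}")      # 0.2916

    # 6(c): sum out G in the two-variable submodel.
    p_jail = p_g_m * p_j_g + (1 - p_g_m) * p_j_not_g
    print(f"P(j | b, i, m)   = {p_jail:.2f}")      # 0.81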

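Similarly, the total in 7(b) is a sum of two products of rule probabilities. The lists below copy the factors exactly as they are multiplied out in that answer (the underlying grammar is on the question paper and is not reproduced here), so this sketch only checks the arithmetic.

    from math import prod

    # Rule-probability factors as multiplied out in the answer to 7(b), one list per parse.
    parse1 = [0.2, 0.2, 0.8, 0.5, 0.5]        # VP→VP Adverb, VP→Copula Adjective, Copula→is, Adjective→well, Adverb→well
    parse2 = [0.2, 0.2, 0.1, 0.5, 0.5, 0.5]   # VP→VP Adverb twice, VP→Verb, Verb→is, Adverb→well twice

    p1, p2 = prod(parse1), prod(parse2)
    print(f"{p1:.4f} + {p2:.4f} = {p1 + p2:.4f}")   # 0.0080 + 0.0005 = 0.0085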