CS 188 Spring 2009
Introduction to Artificial Intelligence
Final Exam

INSTRUCTIONS
• You have 3 hours.
• The exam is closed book, closed notes except a two-page crib sheet, double-sided.
• Please use non-programmable calculators only.
• Mark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation. All short answer sections can be successfully answered in a few sentences at most.

Last Name
First Name
SID
Login
GSI
Section Time
All the work on this exam is my own. (please sign)

For staff use only:
Q. 1    Q. 2    Q. 3    Q. 4    Q. 5    Q. 6    Total
/16     /15     /20     /10     /21     /18     /100

1. (12 points) Linear Naive Bayes

Recall that a Naive Bayes classifier with observed random variables F_i (i = 1, ..., n) and the query variable Y uses the classification rule:

    arg max_y P(y | f_1, ..., f_n) = arg max_y P(y) * prod_{i=1}^{n} P(f_i | y)

And a linear classifier (for example, a perceptron) uses the classification rule:

    arg max_y sum_{i=0}^{n} w_{y,i} * f_i

where f_0 = 1 is a bias feature.

(a) (8 pt) Consider a Naive Bayes classifier with binary-valued features, i.e. f_i ∈ {0, 1}. Prove that it is also a linear classifier, by defining weights w_{y,i} (for i = 0, ..., n) such that both decision rules above are equivalent. The weights should be expressed in terms of the Naive Bayes probabilities P(y) and P(f_i | y). You may assume that all the Naive Bayes probabilities are non-zero.

(b) (4 pt) For the training set below with binary features F_1 and F_2 and label Y, either name a smoothing method that would estimate a naive Bayes model that would correctly classify all training set data, or state that it is impossible (i.e., there is no smoothing method that would give appropriate probabilities).

    F_1   F_2   Y
    0     0     0
    0     1     1
    1     0     1
    1     1     0
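For intuition on part (a): taking the log of the Naive Bayes score turns the product over features into a sum, and with binary features each term is linear in f_i, so suitable weights make the two rules agree. The following sketch checks this numerically; every probability in it is made up for illustration and none of the numbers come from the exam.

```python
import numpy as np

# Hypothetical Naive Bayes parameters for a 2-class, 3-feature problem
# (illustrative numbers only, not from the exam).
p_y = np.array([0.4, 0.6])                    # P(y) for y in {0, 1}
p_f_given_y = np.array([[0.2, 0.7, 0.5],      # P(f_i = 1 | y = 0)
                        [0.9, 0.3, 0.5]])     # P(f_i = 1 | y = 1)

def nb_score(y, f):
    """Naive Bayes score P(y) * prod_i P(f_i | y) for a binary feature vector f."""
    likelihoods = np.where(f == 1, p_f_given_y[y], 1 - p_f_given_y[y])
    return p_y[y] * likelihoods.prod()

def linear_score(y, f):
    """Linear score w_{y,0} + sum_i w_{y,i} f_i, equal to log nb_score(y, f)."""
    # The bias weight absorbs log P(y) plus all of the log P(f_i = 0 | y) terms.
    w0 = np.log(p_y[y]) + np.log(1 - p_f_given_y[y]).sum()
    # Each feature weight is the log-odds of that feature being 1 under class y.
    w = np.log(p_f_given_y[y]) - np.log(1 - p_f_given_y[y])
    return w0 + w @ f

f = np.array([1, 0, 1])
# Since log is strictly increasing, both rules pick the same class.
assert np.argmax([nb_score(0, f), nb_score(1, f)]) == \
       np.argmax([linear_score(0, f), linear_score(1, f)])
```

Because the linear score here is exactly the log of the Naive Bayes score, and log is strictly increasing, the argmax over classes, and hence the classification, is unchanged.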
2. (15 points) Blind Connect Three

In Connect Three, players alternate dropping pieces into one of four columns. A player wins by having three consecutive pieces of their color either horizontally, vertically, or diagonally. Assume columns have infinite height. A dropped piece always occupies the lowest open space in that column.

You are playing the game blindfolded against a random opponent. You can't see the opponent's moves. However, you can always hear the opponent's piece sliding into place. When the opponent drops his piece along the edge of the board, it makes a zing sound; and when he drops it in one of the center two columns, it makes a zang sound. On the other hand, you know exactly the move which you have made. When a player wins, the referee stops the game and announces the winner.

We'll assume that you are representing your belief with a particle filter where each particle is a complete description of the board. Also, assume that you are the first player and that you are playing white. The only observations you get are the zing and the zang of the opponent pieces falling down.

(a) (8 pt) For each of the following parts, answer True if it is possible that the two particles are both present together in your particle filter immediately after a resampling step. If you answer False, you must justify your answer for full credit.
    i. (true or false)
    ii. (true or false)
    iii. (true or false)
    iv. (true or false)

For the next three sub-parts assume that you have exactly 10 particles in your particle filter, and after two rounds of moves you have the four particles and counts of particles shown below. Also, your utility is 1 if you win on the next move, and 0 otherwise.

    [Figure: four board configurations with particle counts count = 4, count = 3, count = 1, and count = 2.]

(b) (1 pt) What is your belief that column 1 is empty?

(c) (2 pt) What is your MEU, and which move(s) gives you this expected utility?

(d) (4 pt) What is the value of peeking through the blind-fold?

3. (15 points) MDP: Walk or Jump?

Consider the following MDP. It has states {0, 1, 2, 3, 4}, with 4 the starting state. In every state there are two actions: Walk (W) and Jump (J). The Walk action decreases the state by one. The Jump action has probability 1/2 of decreasing the state by two, and probability 1/2 of leaving the state unchanged. Actions will not decrease the state below zero: you will remain in state 0 no matter which action you take, and Jumping from state 1 leads to state 0 with probability 1/2 and state 1 with probability 1/2. The reward gained when taking an action is the distance travelled squared: R(s, a, s') = (s − s')^2. The discount factor is γ = 1/2.

(a) (2 pt) Draw the transition graph for this MDP.

(b) (3 pt) Compute V(2). What is the optimal action to take in state 2?

(c) (3 pt) Give an expression for V(4) in terms of V(3) and V(2).

Suppose that the MDP was connected in a ring: Walking from state 0 will take you to state 4, Jumping from 1 will take you to either 4 or leave you in 1, and Jumping from 0 will take you to either 3 or leave you in 0.

(d) (4 pt) Find V*(2).

(e) (3 pt) Determine the optimal policy for this MDP.

4. (20 points) Very Big V Structures

Let V_n be a Bayes net with nodes X(i,j) for all i + j ≤ n + 1, where both i ≥ 1 and j ≥ 1, and where the parents of X(i,j) are X(i−1,j) and X(i−1,j+1). Nodes X(1,j) have no parents. V_1, V_2, and V_3 are shown below.

    [Figure: the Bayes nets V_1 (node X(1,1)), V_2 (nodes X(1,1), X(1,2), X(2,1)), and V_3 (nodes X(1,1), X(1,2), X(1,3), X(2,1), X(2,2), X(3,1)).]

(a) (3 pt) For any V_n, give general conditions in terms of i, j, k, and ℓ that make X(i,j) independent of X(k,ℓ) (i.e., X(i,j) ⊥⊥ X(k,ℓ)), assuming k > i.

(b) (3 pt) For any V_n, give conditions in terms of i, j, k, and ℓ that make X(1,i) ⊥⊥ X(1,j) | X(k,ℓ) for j > i.

(c) (4 pt) How many variables are referenced in the largest factor needed to compute P(X(10,1)) in V_10 if variables are eliminated in an order that minimizes the size of the largest factor?
Hints: P(X(1,1)) references one variable, P(X(1,1), X(2,1)) references two, etc. Include factors created just before summing out a variable.

For the remaining parts of this question, assume each X(i,j) takes values true and false.

(d) (2 pt) Given the factors below for two variables in V_3, fill in the table for P(X(1,1) | ¬x(3,1)) or state that there is not enough information.

    X(1,1)     P(X(1,1))
    x(1,1)     1/3
    ¬x(1,1)    2/3

    X(1,1)     X(3,1)      P(X(3,1) | X(1,1))
    x(1,1)     x(3,1)      1/3
    x(1,1)     ¬x(3,1)     2/3
    ¬x(1,1)    x(3,1)      0
    ¬x(1,1)    ¬x(3,1)     1

    X(1,1)     X(3,1)      P(X(1,1) | ¬x(3,1))
    x(1,1)     ¬x(3,1)
    ¬x(1,1)    ¬x(3,1)

(e) (3 pt) In V_4, if P(X(1,h) = true) = 1/3 for all h, and P(X(i,j) | X(i−1,j), X(i−1,j+1)) is defined below for all i > 1, what is the joint probability of all variables when X(i,j) is true if and only if i + j is
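As a cross-check for part (d) above, Bayes' rule applied to the two given tables determines the requested posterior; a minimal Python sketch of that computation is below (the variable names are illustrative, not from the exam).

```python
from fractions import Fraction

# Question 4(d) cross-check: combine the prior P(X(1,1)) with the conditional
# P(not x(3,1) | X(1,1)) taken from the tables above, via Bayes' rule.
p_x11 = {True: Fraction(1, 3), False: Fraction(2, 3)}      # P(X(1,1))
p_not_x31 = {True: Fraction(2, 3), False: Fraction(1)}     # P(not x(3,1) | X(1,1))

# Unnormalized joint P(X(1,1) = v, not x(3,1)), then normalize by the evidence.
joint = {v: p_x11[v] * p_not_x31[v] for v in (True, False)}
evidence = sum(joint.values())                             # P(not x(3,1)) = 8/9
posterior = {v: joint[v] / evidence for v in (True, False)}
print(posterior)   # {True: Fraction(1, 4), False: Fraction(3, 4)}
```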

