CS 188
Spring 2010
Introduction to Artificial Intelligence
Final Exam

INSTRUCTIONS
• You have 3 hours.
• The exam is closed book, closed notes except a two-page crib sheet.
• Please use non-programmable calculators only.
• Mark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation. All short answer sections can be successfully answered in a few sentences AT MOST.

First name
Last name
SID
Login

For staff use only:
Q1. Search                      /14
Q2. Naive Bayes                 /15
Q3. Hidden Markov Models        /15
Q4. Machine Learning            /13
Q5. Markov Decision Processes   /10
Q6. Short answer                /33
Total                           /100

Q1. [14 pts] Search

For the following questions, please choose the best answer (only one answer per question). Assume a finite search space.

(a) [2 pts] Depth-first search can be made to return the same solution as breadth-first search using:
(i) Iterative deepening
(ii) A closed list/list of nodes that have been expanded
(iii) A heuristic function
(iv) This is not possible

(b) [2 pts] A* search can be made to perform a breadth-first search by setting (fill in correct values):
1. for all nodes, heuristic =
2. for all nodes, edge cost =

(c) [2 pts] You run A* search using a heuristic function which you know to be admissible and consistent. Your friend claims he has a search algorithm that is guaranteed not to expand more nodes than your algorithm (and in fact often expands far fewer in practice). He also tells you that his algorithm is guaranteed to find the optimal path.
Could the algorithm your friend claims to have exist? (circle one): yes  no
Explain:

(d) [2 pts] Depth-first search using a closed list/list of nodes that have been expanded is:
(i) Optimal (will find a shortest path to goal)
(ii) Complete (will find a path to goal if at least one exists)
(iii) Both optimal and complete
(iv) Neither optimal nor complete

Consider a grid, a portion of which is shown below:

[Figure: a portion of the grid; the image is not included in this preview.]

You would like to search for paths in this grid. Unlike in Pacman, it is possible to move diagonally as well as horizontally and vertically. The distance between neighboring grid squares (horizontally or vertically) is 1, and the distance between diagonally adjacent grid squares is √2.

(e) [2 pts] Is the Euclidean distance an admissible heuristic? The Euclidean distance between two points (x1, y1) and (x2, y2) is √((x2 − x1)² + (y2 − y1)²).

(f) [2 pts] The Manhattan distance is not an admissible heuristic. Can it be made admissible by adding weights to the x and y terms? The Manhattan distance between two points (x1, y1) and (x2, y2) is |x2 − x1| + |y2 − y1|. A weighted version with weights α and β would be α|x2 − x1| + β|y2 − y1|. Specify the (possibly empty) set of pairs of weights (α, β) such that the weighted Manhattan distance is admissible.

(g) [2 pts] Is the L∞ distance an admissible heuristic? The L∞ distance between two points (x1, y1) and (x2, y2) is max(|x2 − x1|, |y2 − y1|).
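For illustration (this sketch is not part of the original exam), the fill-in in Q1(b) can be checked in code: with heuristic = 0 for every node and edge cost = 1 for every edge, A*'s priority f(n) = g(n) + h(n) reduces to the depth of n, so nodes leave the frontier in breadth-first order. The example graph and helper names below are hypothetical.

```python
import heapq

def astar(start, goal, neighbors, h, cost):
    """Generic A* over a finite graph; returns a path or None."""
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, node, path)
    closed = set()                               # nodes already expanded
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nxt in neighbors(node):
            g2 = g + cost(node, nxt)
            heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Hypothetical graph: with h = 0 and unit edge costs, A* pops nodes in
# breadth-first order and returns a fewest-edges path, here S -> A -> G.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
print(astar('S', 'G', lambda n: graph[n], h=lambda n: 0, cost=lambda u, v: 1))
```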
Q2. [15 pts] Naive Bayes

Your friend claims that he can write an effective Naive Bayes spam detector with only three features: the hour of the day that the email was received (H ∈ {1, 2, …, 24}), whether it contains the word 'viagra' (W ∈ {yes, no}), and whether the email address of the sender is Known in his address book, Seen before in his inbox, or Unseen before (E ∈ {K, S, U}).

(a) [3 pts] Flesh out the following information about this Bayes net:
Graph structure:
Parameters:
Size of the set of parameters:

Suppose now that you labeled three of the emails in your mailbox to test this idea:

spam or ham?   H    W    E
spam           3    yes  S
ham            14   no   K
ham            15   no   K

(b) [2 pts] Use the three instances to estimate the maximum likelihood parameters.

(c) [2 pts] Using the maximum likelihood parameters, find the predicted class of a new datapoint with H = 3, W = no, E = U.

(d) [4 pts] Now use the three instances to estimate the parameters using Laplace smoothing with k = 2. Do not forget to smooth both the class prior parameters and the feature value parameters.

(e) [1 pt] Using the parameters obtained with Laplace smoothing, find the predicted class of a new datapoint with H = 3, W = no, E = U.

(f) [3 pts] You observe that you tend to receive spam emails in batches. In particular, if you receive one spam message, the next message is more likely to be a spam message as well. Describe a new graphical model which most naturally captures this phenomenon.
Graph structure:
Parameters:
Size of the set of parameters:

Q3. [15 pts] Hidden Markov Models

Consider the following Hidden Markov Model.

[Figure: a two-step HMM in which X1 → X2, X1 emits O1, and X2 emits O2.]

X1   Pr(X1)
0    0.3
1    0.7

Xt   Xt+1   Pr(Xt+1 | Xt)
0    0      0.4
0    1      0.6
1    0      0.8
1    1      0.2

Xt   Ot   Pr(Ot | Xt)
0    A    0.9
0    B    0.1
1    A    0.5
1    B    0.5

Suppose that O1 = A and O2 = B is observed.

(a) [3 pts] Use the Forward algorithm to compute the probability distribution Pr(X2, O1 = A, O2 = B). Show your work. You do not need to evaluate arithmetic expressions involving only numbers.

(b) [3 pts] Use the Viterbi algorithm to compute the maximum probability sequence X1, X2. Show your work.

For the next two questions, use the specified sequence of random numbers {ai}, generated independently and uniformly at random from [0, 1), to perform sampling. Specifically, to obtain a sample from a distribution over a variable Y ∈ {0, 1} using the random number ai, pick Y = 0 if ai < Pr(Y = 0), and pick Y = 1 if ai ≥ Pr(Y = 0). Similarly, to obtain a sample from a distribution over a variable Z ∈ {A, B} using the random number ai, pick Z = A if ai < Pr(Z = A), and pick Z = B if ai ≥ Pr(Z = A). Use the random numbers {ai} in order starting from a1, using a new random number each time a sample needs to be obtained.

(c) [3 pts] Use likelihood-weighted sampling to obtain 2 samples from the distribution Pr(X1, X2 | O1 = A, O2 = B), and then use these samples to estimate E[√X1 + 3X2 | O1 = A, O2 = B].

a1     a2     a3     a4     a5     a6     a7     a8     a9     a10
0.134  0.847  0.764  0.255  0.495  0.449  0.652  0.789  0.094  0.028

(d) [2 pts] [true or false] In general, particle filtering using a single particle is equivalent to rejection sampling in the case that there is no evidence. Explain your answer.

(e) [2 pts] [true or false] Performing particle filtering twice, each time with 50 particles, is equivalent to performing particle filtering once with 100 particles. Explain your answer.

(f) [2 pts] [true or false] Variable elimination is generally more accurate than the Forward algorithm. Explain your answer.

Q4. [13 pts] Machine Learning

(a) [4 pts] Indicate which, if any, of the statements are correct, and explain your answer. Naive Bayes trained using maximum-likelihood parameter estimation
(i) is guaranteed not to perform worse if more […]
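As an illustration of the computation Q2(d) and (e) ask for, here is a minimal Python sketch (not the official solution) of Laplace smoothing with k = 2 on the three labeled emails above. The feature-domain sizes (24 for H, 2 for W, 3 for E) come from the problem statement; all function names are hypothetical.

```python
data = [('spam', {'H': 3,  'W': 'yes', 'E': 'S'}),
        ('ham',  {'H': 14, 'W': 'no',  'E': 'K'}),
        ('ham',  {'H': 15, 'W': 'no',  'E': 'K'})]
domain_size = {'H': 24, 'W': 2, 'E': 3}
classes = ['spam', 'ham']
k = 2

def smoothed_prior(c):
    # (count of class c + k) / (total count + k * number of classes)
    n_c = sum(1 for label, _ in data if label == c)
    return (n_c + k) / (len(data) + k * len(classes))

def smoothed_likelihood(feat, value, c):
    # (count of value within class c + k) / (count of class c + k * |domain|)
    n_c = sum(1 for label, _ in data if label == c)
    n_cv = sum(1 for label, x in data if label == c and x[feat] == value)
    return (n_cv + k) / (n_c + k * domain_size[feat])

def predict(x):
    # Naive Bayes: argmax over classes of prior times product of likelihoods.
    def score(c):
        p = smoothed_prior(c)
        for feat, value in x.items():
            p *= smoothed_likelihood(feat, value, c)
        return p
    return max(classes, key=score)

print(predict({'H': 3, 'W': 'no', 'E': 'U'}))   # the datapoint from part (e)
```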

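Q3(a) and (b) can likewise be checked with a short script. This is a sketch built from the CPTs given in the problem, not the official worked solution: the forward pass computes Pr(X2, O1 = A, O2 = B), and replacing the sum with a max plus backpointers gives the Viterbi sequence.

```python
prior = {0: 0.3, 1: 0.7}                                    # Pr(X1)
trans = {0: {0: 0.4, 1: 0.6}, 1: {0: 0.8, 1: 0.2}}          # Pr(X_{t+1} | X_t)
emit  = {0: {'A': 0.9, 'B': 0.1}, 1: {'A': 0.5, 'B': 0.5}}  # Pr(O_t | X_t)
obs = ['A', 'B']

# Forward algorithm (part (a)): alpha[x] = Pr(X2 = x, O1 = A, O2 = B).
alpha = {x: prior[x] * emit[x][obs[0]] for x in (0, 1)}
alpha = {x2: emit[x2][obs[1]] * sum(alpha[x1] * trans[x1][x2] for x1 in (0, 1))
         for x2 in (0, 1)}
print(alpha)

# Viterbi (part (b)): the same recursion with sum replaced by max,
# keeping a backpointer to recover the maximizing sequence.
m1 = {x: prior[x] * emit[x][obs[0]] for x in (0, 1)}
m2, back = {}, {}
for x2 in (0, 1):
    best_x1 = max((0, 1), key=lambda x1: m1[x1] * trans[x1][x2])
    m2[x2] = emit[x2][obs[1]] * m1[best_x1] * trans[best_x1][x2]
    back[x2] = best_x1
x2_star = max((0, 1), key=lambda x: m2[x])
print([back[x2_star], x2_star])   # most probable sequence X1, X2
```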

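Finally, the likelihood-weighting protocol spelled out before Q3(c) transcribes almost literally into code: the evidence O1 = A, O2 = B is fixed rather than sampled, each hidden variable is drawn using the next number in the stream a1, a2, …, and each sample is weighted by the likelihood of the evidence. A sketch, assuming the target expectation reads E[√X1 + 3X2 | O1 = A, O2 = B]:

```python
import math

prior = {0: 0.3, 1: 0.7}                                    # Pr(X1)
trans = {0: {0: 0.4, 1: 0.6}, 1: {0: 0.8, 1: 0.2}}          # Pr(X_{t+1} | X_t)
emit  = {0: {'A': 0.9, 'B': 0.1}, 1: {'A': 0.5, 'B': 0.5}}  # Pr(O_t | X_t)

stream = iter([0.134, 0.847, 0.764, 0.255, 0.495,
               0.449, 0.652, 0.789, 0.094, 0.028])

def sample(dist):
    # As specified in the problem: pick 0 if a_i < Pr(0), else pick 1.
    return 0 if next(stream) < dist[0] else 1

samples = []
for _ in range(2):
    x1 = sample(prior)                 # sample X1 from the prior
    x2 = sample(trans[x1])             # sample X2 given X1
    w = emit[x1]['A'] * emit[x2]['B']  # weight = likelihood of the evidence
    samples.append((x1, x2, w))

num = sum(w * (math.sqrt(x1) + 3 * x2) for x1, x2, w in samples)
den = sum(w for _, _, w in samples)
print(samples, num / den)   # weighted estimate of the expectation
```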