UIUC, CS498, Section EA - Autumn 04 - Homework #3
Due Date: Tuesday, November 18, 2004, 12pm
November 9, 2004

1 Graphical Models (25%)

1. Consider a Bayesian network that describes the relationship between the genes of parents and children. The genes of every child are determined by the genes of its parents, with some noise. Pat and Sue are parents to Marge, Pat and Mary are parents to Fay, Mary and Bob are parents to Homer, Homer and Marge are parents to Bart, Lisa, and Maggie, Fay and Homer are parents to Sue, and Bart and Sue are parents to John.
(a) Draw the Bayesian network that describes these relationships.
(b) Draw a triangulated version of this network. Is it moralized?
(c) In this graph, are Homer and Marge d-separated by Bart?
(d) Are Homer and Marge d-separated by Bart, Lisa, Maggie?
(e) Are Homer and Marge d-separated by Pat, Bart, Lisa, Maggie?
(f) Are Homer and Marge independent?
(g) Are Homer and Marge independent given John?
2. Prove that whenever X and Y are vertex-separated by Z in an undirected graphical model, X and Y are independent given Z.
3. Give an example of an undirected graphical model that encodes conditional independence assumptions that cannot be captured by a directed graphical model (Bayesian network) on the same variables.
4. Give an example of a directed graphical model that encodes conditional independence assumptions that cannot be captured by an undirected graphical model on the same variables.

2 Exact Probabilistic Inference (25%)

1. Consider the Bayesian network in Figure 1. All variables have four states, except A, which has three.
(a) Calculate the size of the table P(A, B, C, D = d1).
(b) In the calculation of P(A|E = e1) the variables have been marginalized in the following order: B, C, D, F, G, H. Calculate the size of each table produced in the process, and sum the sizes up.
(c) Find an elimination order that yields a smaller sum of table sizes than the one you achieved in 1b.
(d) Construct a junction tree for this network.
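As a sanity check on the d-separation questions in Section 1: the parent lists there fully determine the DAG, so parts (c)-(e) can be tested mechanically. Below is a minimal sketch (in Python rather than Matlab, purely for illustration) of the standard criterion: X and Y are d-separated by Z iff they are disconnected in the moralized ancestral graph of X ∪ Y ∪ Z after deleting Z.

```python
from collections import defaultdict

# Parent lists transcribed from the problem statement in Section 1.
parents = {
    "Marge":  ["Pat", "Sue"],
    "Fay":    ["Pat", "Mary"],
    "Homer":  ["Mary", "Bob"],
    "Bart":   ["Homer", "Marge"],
    "Lisa":   ["Homer", "Marge"],
    "Maggie": ["Homer", "Marge"],
    "Sue":    ["Fay", "Homer"],
    "John":   ["Bart", "Sue"],
}

def ancestral_set(nodes):
    """All given nodes plus their ancestors."""
    seen, stack = set(), list(nodes)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents.get(n, []))
    return seen

def d_separated(x, y, z):
    """Test d-sep(x, y | z) via the moralized ancestral graph."""
    keep = ancestral_set({x, y} | set(z))
    adj = defaultdict(set)
    for child, ps in parents.items():
        if child not in keep:
            continue
        for p in ps:                      # drop edge directions
            adj[child].add(p); adj[p].add(child)
        for a in ps:                      # "marry" co-parents
            for b in ps:
                if a != b:
                    adj[a].add(b)
    # x, y are d-separated iff disconnected once z is removed
    frontier, seen = [x], {x} | set(z)
    while frontier:
        n = frontier.pop()
        if n == y:
            return False
        for m in adj[n] - seen:
            seen.add(m)
            frontier.append(m)
    return True
```

For example, `d_separated("Homer", "Marge", {"Bart"})` returns `False`: moralization marries Homer and Marge as co-parents of Bart, so conditioning on the common child cannot separate them.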
2. A BN has the potentials shown in Figure 2.
(a) Calculate P(A|D = y), P(B|D = y), P(C|D = y).
(b) Calculate P(B|C = y).

[Figure 1: Bayesian network over variables A, B, C, D, E, F, G, H; the graph itself is not shown in this preview.]

  A:      y     n        A:      y     n        B,C:    y,y   y,n   n,y   n,n
  B = y   0.2   0.6      C = y   0.1   0.5      D = y   0.3   0.9   0.2   0.6
  B = n   0.8   0.4      C = n   0.9   0.5      D = n   0.7   0.1   0.8   0.4
          P(B|A)                 P(C|A)                  P(D|B,C)

Figure 2: BN Potentials

3. One reason why variable elimination can be done in time that is exponential only in the treewidth of the graph is the commutativity of the operations of summation and product when these do not refer to the same terms (e.g., $\sum_{A,B,C} f(A,B)\, f(B,C) = \sum_{A,B} \big( f(A,B) \sum_C f(B,C) \big)$), provided the factors are non-negative (which holds for potentials representing probability distributions). A similar commutativity holds for the max function, namely $\max_{A,B,C} f(A,B)\, f(B,C) = \max_{A,B} \big( f(A,B) \max_C f(B,C) \big)$.
Create an algorithm for finding the most likely configuration of a set of random variables C, given some evidence E, and marginalizing over the rest of the random variables R. This is called MPE (Most Probable Explanation), and is defined formally as $\arg\max_C \sum_R \Pr(C, E, R)$.
Prove that your algorithm is correct.

3 Sampling (25%)

1. We are given an image of 100x100 pixels (256 gray-scale values per pixel), and are looking for an object in that image. We know that the object is particularly dark (with some variation in darkness across it) in comparison with the background, but we do not know where it is or what that darkness means in pixel values. However, we do know that if a pixel belongs to the object then an adjacent pixel belongs to the object w.h.p., and that significant changes in intensity across pixels increase the probability of the darker pixel being part of an object.
Close-by dark pixels also increase the probability of those pixels belonging to the object. Our task is to find the pixel that belongs to the object with the highest probability.
(a) Represent this problem with a generative-model Bayesian network (i.e., an object is at a position and size with some prior, and generates pixel values with some distribution as a result).
(b) Download the Bayes Net Toolbox for Matlab (from Kevin Murphy: http://www.ai.mit.edu/~murphyk/Software/BNT/bnt.html) and use any sampling algorithm from those in the package (e.g., Gibbs, likelihood weighting) to detect an object in an image of your choice (it can be something you find online, but can also be hand-made). Hand-tune the values for the CPTs. Please provide the image, the CPT values, the algorithm (which uses the toolbox as a subroutine), the output of the program, and the time taken.

4 Variational Approximation (25%)

Consider the multiple-cause model discussed in class. This is a model with binary latent (hidden) variables s_i, a real-valued observed vector y, and parameters \phi = \{\{\mu_i, \pi_i\}_{i=1}^K, \sigma^2\}:

p(s_1, \ldots, s_K \mid \pi) = \prod_{i=1}^K p(s_i) = \prod_{i=1}^K \pi_i^{s_i} (1 - \pi_i)^{1 - s_i}

p(y \mid s_1, \ldots, s_K, \mu, \sigma^2) = \mathcal{N}\Big( \sum_{i=1}^K s_i \mu_i,\ \sigma^2 I \Big)

Assume you have a data set of N i.i.d. observations of y, i.e., Y = \{y^{(1)}, \ldots, y^{(N)}\}.

General Matlab hint: wherever possible, avoid looping over the data points. Many (but not all) of these functions can be written using matrix operations.

1. Implement the fully factored (a.k.a. mean-field) variational approximation described in class. That is, for each data point y^{(n)}, approximate the posterior distribution over the hidden variables by a distribution

q_n(s) = \prod_{i=1}^K \lambda_{in}^{s_i} (1 - \lambda_{in})^{1 - s_i}

and find the λ's that maximize F holding φ fixed.
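For reference, setting ∂F/∂λ_in = 0 under the factorized q_n above yields the usual fixed-point update for this model (a derivation sketch, not taken from the lecture notes; the logistic function is written g(·) here to avoid a clash with the noise parameter σ):

```latex
g(x) = \frac{1}{1 + e^{-x}}, \qquad
\lambda_{in} \leftarrow g\!\left(
  \log\frac{\pi_i}{1-\pi_i}
  + \frac{1}{\sigma^2}\,\mu_i^{\top}\Big( y^{(n)} - \sum_{j \neq i} \lambda_{jn}\,\mu_j \Big)
  - \frac{\mu_i^{\top}\mu_i}{2\sigma^2}
\right)
```

The key observation is that $\mathbb{E}_{q_n}\big\|y^{(n)} - \sum_i s_i\mu_i\big\|^2 = \|y^{(n)} - \sum_i \lambda_{in}\mu_i\|^2 + \sum_i \lambda_{in}(1-\lambda_{in})\|\mu_i\|^2$ is linear in each individual λ_in, so each coordinate update exactly maximizes F in that coordinate and sequential updates never decrease F. Worth re-deriving yourself rather than taking on faith.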
Specifically, write a Matlab function

[lambda, F] = MeanField(Y, mu, sigma, pie, lambda0, maxsteps)

where lambda is N × K, F is the lower bound on the likelihood, Y is the N × D data matrix, mu is the D × K matrix of means, pie is the 1 × K vector of priors on s, lambda0 holds the initial values for lambda, and maxsteps is the maximum number of steps of the fixed-point equations. You might also want to set a convergence criterion so that, if F changes by less than epsilon, the iterations halt. Hand in: code.
2. Apply your algorithm to derive an answer to the same problem as in question 3. Hand in: same as question
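The deliverable must be Matlab, but as an illustration only, here is a rough NumPy sketch of the same scheme. Variable names mirror the requested MeanField signature; the per-coordinate update is the standard mean-field fixed point for this model (applied sequentially so that F never decreases), and F is the usual bound E_q[log p(Y, S)] + H(q). Treat it as a sketch under those assumptions, not a reference solution.

```python
import numpy as np

def mean_field(Y, mu, sigma, pie, lambda0, maxsteps, tol=1e-6):
    """Mean-field fixed-point iterations for the binary multiple-cause model.

    Y: (N, D) data, mu: (D, K) means, sigma: noise std, pie: (K,) priors,
    lambda0: (N, K) initial lambdas.  Returns (lambda, list of F values).
    """
    Y = np.asarray(Y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    N, D = Y.shape
    K = mu.shape[1]
    pie = np.asarray(pie, dtype=float).reshape(K)
    lam = np.clip(np.asarray(lambda0, dtype=float), 1e-6, 1 - 1e-6).copy()
    logit_pi = np.log(pie) - np.log1p(-pie)
    mu2 = np.sum(mu ** 2, axis=0)          # ||mu_i||^2 for each cause i
    s2 = float(sigma) ** 2

    def free_energy():
        # F = E_q[log p(Y, S)] + H(q), the lower bound on the log-likelihood
        R = Y - lam @ mu.T                 # expected residuals, (N, D)
        l = np.clip(lam, 1e-12, 1 - 1e-12)
        prior = np.sum(l * np.log(pie) + (1 - l) * np.log1p(-pie))
        entropy = -np.sum(l * np.log(l) + (1 - l) * np.log(1 - l))
        exp_sq = np.sum(R ** 2) + np.sum(lam * (1 - lam) * mu2)
        lik = -0.5 * N * D * np.log(2 * np.pi * s2) - exp_sq / (2 * s2)
        return prior + lik + entropy

    Fs = [free_energy()]
    for _ in range(maxsteps):
        R = Y - lam @ mu.T
        for i in range(K):                 # sequential updates keep F ascending
            # y^(n) - sum_{j != i} lam_jn mu_j  equals  R_n + lam_in mu_i
            partial = R + np.outer(lam[:, i], mu[:, i])
            arg = logit_pi[i] + partial @ mu[:, i] / s2 - mu2[i] / (2 * s2)
            new = 1.0 / (1.0 + np.exp(-arg))
            R += np.outer(lam[:, i] - new, mu[:, i])
            lam[:, i] = new
        Fs.append(free_energy())
        if abs(Fs[-1] - Fs[-2]) < tol:
            break
    return lam, Fs
```

The residual matrix R is updated incrementally after each coordinate so the whole sweep stays vectorized over data points, in the spirit of the "avoid looping over data points" hint; a direct Matlab translation of this structure also satisfies the requested signature.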

