10-701/15-781, Fall 2006, Midterm

• There are 7 questions in this exam (11 pages including this cover sheet).
• Questions are not equally difficult.
• If you need more room to work out your answer to a question, use the back of the page and clearly mark on the front of the page if we are to look at what's on the back.
• This exam is open book and open notes. Computers, PDAs, cell phones are not allowed.
• You have 1 hour and 20 minutes. Good luck!

Name:
Andrew ID:

Q   Topic                                            Max. Score   Score
1   Conditional Independence, MLE/MAP, Probability   12
2   Decision Tree                                    12
3   Neural Network and Regression                    18
4   Bias-Variance Decomposition                      12
5   Support Vector Machine                           12
6   Generative vs. Discriminative Classifier         20
7   Learning Theory                                  14
    Total                                            100

1 Conditional Independence, MLE/MAP, Probability (12 pts)

1. (4 pts) Show that Pr(X, Y | Z) = Pr(X | Z) Pr(Y | Z) if Pr(X | Y, Z) = Pr(X | Z).

2. (4 pts) If a data point y follows the Poisson distribution with rate parameter θ, then the probability of a single observation y is

   p(y | θ) = θ^y e^(−θ) / y!,   for y = 0, 1, 2, ....

You are given data points y_1, ..., y_n independently drawn from a Poisson distribution with parameter θ. Write down the log-likelihood of the data as a function of θ.

3. (4 pts) Suppose that in answering a question in a multiple choice test, an examinee either knows the answer, with probability p, or he guesses, with probability 1 − p. Assume that the probability of answering a question correctly is 1 for an examinee who knows the answer and 1/m for the examinee who guesses, where m is the number of multiple choice alternatives. What is the probability that an examinee knew the answer to a question, given that he has correctly answered it?

2 Decision Tree (12 pts)

The following data set will be used to learn a decision tree for predicting whether students are lazy (L) or diligent (D) based on their weight (Normal or Underweight), their eye color (Amber or Violet), and the number of eyes they have (2, 3, or 4).

Weight   Eye Color   Num. Eyes   Output
N        A           2           L
N        V           2           L
N        V           2           L
U        V           3           L
U        V           3           L
U        A           4           D
N        A           4           D
N        V           4           D
U        A           3           D
U        A           3           D

The following numbers may be helpful as you answer this problem without using a calculator:
log2 0.1 = −3.32, log2 0.2 = −2.32, log2 0.3 = −1.73, log2 0.4 = −1.32, log2 0.5 = −1.

*You don't need to show the derivation for your answers in this problem.

1. (3 pts) What is the conditional entropy H(EyeColor | Weight = N)?

2. (3 pts) What attribute would the ID3 algorithm choose to use for the root of the tree (no pruning)?

3. (4 pts) Draw the full decision tree learned for this data (no pruning).

4. (2 pts) What is the training set error of this unpruned tree?
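The entropy and information-gain arithmetic in Question 2 can be double-checked numerically. The short Python sketch below recomputes H(EyeColor | Weight = N) and the ID3 information gains from the table above; the variable and function names are illustrative, not part of the exam.

```python
from collections import Counter
from math import log2

# (Weight, EyeColor, NumEyes) -> Output, transcribed from the table above.
data = [
    ("N", "A", 2, "L"), ("N", "V", 2, "L"), ("N", "V", 2, "L"),
    ("U", "V", 3, "L"), ("U", "V", 3, "L"), ("U", "A", 4, "D"),
    ("N", "A", 4, "D"), ("N", "V", 4, "D"), ("U", "A", 3, "D"),
    ("U", "A", 3, "D"),
]

def entropy(labels):
    """Shannon entropy (base 2) of a list of discrete values."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def cond_entropy(attr_idx, rows):
    """H(Output | attribute): entropy of the label, weighted over attribute values."""
    n = len(rows)
    h = 0.0
    for v in set(r[attr_idx] for r in rows):
        subset = [r[3] for r in rows if r[attr_idx] == v]
        h += len(subset) / n * entropy(subset)
    return h

# Question 2.1: entropy of the eye-color distribution among normal-weight rows.
normal_eyes = [r[1] for r in data if r[0] == "N"]
print("H(EyeColor | Weight=N) =", entropy(normal_eyes))

# Question 2.2: ID3 picks the root with the largest gain H(Output) - H(Output | attr).
h_out = entropy([r[3] for r in data])
for idx, name in [(0, "Weight"), (1, "EyeColor"), (2, "NumEyes")]:
    print(name, "gain =", h_out - cond_entropy(idx, data))
```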
3 Neural Network and Regression (18 pts)

Consider a two-layer neural network to learn a function f : X → Y, where X = ⟨X1, X2⟩ consists of two attributes. The weights w1, ..., w6 can be arbitrary. There are two possible choices for the function implemented by each unit in this network:

• S: signed sigmoid function S(a) = sign[σ(a) − 0.5] = sign[1/(1 + exp(−a)) − 0.5]
• L: linear function L(a) = c·a

where in both cases a = Σ_i w_i X_i.

1. (4 pts) Assign proper activation functions (S or L) to each unit in the following graph so this neural network simulates a linear regression: Y = β1 X1 + β2 X2.

2. (4 pts) Assign proper activation functions (S or L) to each unit in the following graph so this neural network simulates a binary logistic regression classifier: Y = arg max_y P(Y = y | X), where

   P(Y = 1 | X) = exp(β1 X1 + β2 X2) / (1 + exp(β1 X1 + β2 X2)),
   P(Y = −1 | X) = 1 / (1 + exp(β1 X1 + β2 X2)).

3. (3 pts) Following problem 3.2, derive β1 and β2 in terms of w1, ..., w6.

4. (4 pts) Assign proper activation functions (S or L) to each unit in the following graph so this neural network simulates a boosting classifier which combines two logistic regression classifiers, f1 : X → Y1 and f2 : X → Y2, to produce its final prediction: Y = sign[α1 Y1 + α2 Y2]. Use the same definition as in problem 3.2 for f1 and f2.

5. (3 pts) Following problem 3.4, derive α1 and α2 in terms of w1, ..., w6.

4 Bias-Variance Decomposition (12 pts)

1. (6 pts) Suppose you have regression data generated by a polynomial of degree 3. Characterize the bias and variance of the estimates of the following models on the data with respect to the true model by circling the appropriate entry.

                                       Bias       Variance
Linear regression                      low/high   low/high
Polynomial regression with degree 3    low/high   low/high
Polynomial regression with degree 10   low/high   low/high

2. Let Y = f(X) + ε, where ε has mean zero and variance σ_ε². In k-nearest neighbor (kNN) regression, the prediction of Y at a point x0 is given by the average of the values of Y at the k neighbors closest to x0.

(a) (2 pts) Denote the ℓ-th nearest neighbor to x0 by x(ℓ) and its corresponding Y value by y(ℓ). Write the prediction f̂(x0) of the kNN regression for x0 in terms of y(ℓ), 1 ≤ ℓ ≤ k.

(b) (2 pts) What is the behavior of the bias as k increases?

(c) (2 pts) What is the behavior of the variance as k increases?

5 Support Vector Machine (12 pts)

Consider a supervised learning problem in which the training examples are points in 2-dimensional space. The positive examples are (1, 1) and (−1, −1). The negative examples are (1, −1) and (−1, 1).

1. (1 pt) Are the positive examples linearly separable from the negative examples in the original space?

2. (4 pts) Consider the feature transformation φ(x) = [1, x1, x2, x1·x2], where x1 and x2 are, respectively, the first and second coordinates of a generic example x. The prediction function is y(x) = w^T φ(x) in this feature space. Give the coefficients, w, of a maximum-margin decision surface separating the positive examples from the negative examples. (You should be able to do this by inspection, without any significant computation.)

3. (3 pts) Add one training example to the graph so that the total five examples can no longer be linearly separated in the feature space φ(x) defined in problem 5.2.

4. (4 pts) What kernel K(x, x′) does this feature transformation φ correspond to?

6 Generative vs. Discriminative Classifier (20 pts)

Consider the binary classification problem where the class label Y ∈ {0, 1} and each training example X has 2 binary attributes X1, X2 ∈ {0, 1}.

In this problem, we will always assume that X1 and X2 are conditionally independent given Y, that the class priors are P(Y = 0) = P(Y = 1) = 0.5, and that the conditional probabilities are as follows:

P(X1 | Y)   X1 = 0   X1 = 1
Y = 0       0.7      0.3
Y = 1       0.2      0.8

P(X2 | Y)   X2 = 0   X2 = 1
Y = 0       0.9      0.1
Y = 1       0.5      0.5

The expected error rate is the probability that a classifier gives an incorrect prediction on a new example drawn from the same distribution.
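As a concrete illustration of the setup in Question 6, here is a minimal Python sketch that encodes the class prior and the two conditional probability tables given above, then computes the Bayes-optimal prediction and its expected error rate under this generative model; the function names are illustrative, not part of the exam.

```python
from itertools import product

prior = {0: 0.5, 1: 0.5}
# Conditional probability tables, keyed by (attribute value, y).
p_x1 = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # P(X1 = x1 | Y = y)
p_x2 = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.5, (1, 1): 0.5}  # P(X2 = x2 | Y = y)

def joint(x1, x2, y):
    # X1 and X2 are conditionally independent given Y, as the problem assumes.
    return prior[y] * p_x1[(x1, y)] * p_x2[(x2, y)]

def predict(x1, x2):
    # Bayes-optimal classifier: pick the y with the larger posterior
    # (equivalently, the larger joint probability).
    return max((0, 1), key=lambda y: joint(x1, x2, y))

# Expected error rate: total probability mass of (x1, x2, y) triples
# on which the classifier's prediction disagrees with the true label.
err = sum(joint(x1, x2, y)
          for x1, x2, y in product((0, 1), (0, 1), (0, 1))
          if predict(x1, x2) != y)
print("Bayes-optimal expected error rate:", err)
```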
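Returning to Question 5, the feature map φ(x) = [1, x1, x2, x1·x2] lends itself to a quick numerical check. The sketch below compares the explicit inner product ⟨φ(x), φ(x′)⟩ against one candidate closed form for the kernel (a hypothesis introduced here, not given in the exam), and prints the four training points in feature space.

```python
import itertools
import random

def phi(x):
    """The feature map from problem 5.2: phi(x) = [1, x1, x2, x1*x2]."""
    x1, x2 = x
    return [1.0, x1, x2, x1 * x2]

def phi_kernel(x, xp):
    # K(x, x') computed directly as the inner product <phi(x), phi(x')>.
    return sum(a * b for a, b in zip(phi(x), phi(xp)))

def candidate_kernel(x, xp):
    # Hypothesis to test: K(x, x') = (1 + x1*x1') * (1 + x2*x2').
    return (1 + x[0] * xp[0]) * (1 + x[1] * xp[1])

# Compare the two on random points; they agree iff the hypothesis is right.
random.seed(0)
pts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(5)]
assert all(abs(phi_kernel(x, xp) - candidate_kernel(x, xp)) < 1e-9
           for x, xp in itertools.product(pts, pts))
print("candidate kernel matches <phi(x), phi(x')> on all sampled pairs")

# The four training points: positives (1,1), (-1,-1); negatives (1,-1), (-1,1).
# Printing their images shows how the x1*x2 coordinate behaves on each class.
for x in [(1, 1), (-1, -1), (1, -1), (-1, 1)]:
    print(x, "->", phi(x))
```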