Review
• Parallel importance sampling
  ‣ bias due to 1/normalizer
  ‣ particle filter = recursive parallel IS
• MCMC
  ‣ randomized search for high P(x)
  ‣ burn-in, mixing
  ‣ approx. iid: { Xt, Xt+Δ, Xt+2Δ, Xt+3Δ, … }
  ‣ use to construct estimator of E_P(g(X))

Review
• Metropolis-Hastings
  ‣ way to design chain w/ stationary dist'n P(X)
  ‣ proposal distribution Q(X′ | X)
  ‣ e.g., random walk N(X′ | X, σ²I)
  ‣ accept w.p. min(1, [P(X′) Q(X | X′)] / [P(X) Q(X′ | X)])
  ‣ tension btwn long moves, high accept rate

MH algorithm
• Initialize X1 arbitrarily
• For t = 1, 2, …:
  ‣ Sample X′ ~ Q(X′ | Xt)
  ‣ Compute p = [P(X′) Q(Xt | X′)] / [P(Xt) Q(X′ | Xt)]
  ‣ With probability min(1, p), set Xt+1 := X′
  ‣ else Xt+1 := Xt
• Note: sequence X1, X2, … will usually contain duplicates
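A minimal Matlab sketch of this loop (the names mh_sample and logp are illustrative, not from the slides). It assumes a handle logp returning the log of the target density, which may be unnormalized since the normalizer cancels in p, and the symmetric random-walk proposal from the review slide, for which the Q terms cancel as well. Saved as a file mh_sample.m:

% mh_sample.m: random-walk Metropolis(-Hastings).
% logp:  handle returning log of an (unnormalized) target density
% x1:    starting point (row vector); sigma: proposal std; T: chain length
function X = mh_sample(logp, x1, sigma, T)
  d = numel(x1);
  X = zeros(T, d);
  X(1,:) = x1(:)';
  for t = 1:T-1
    xprop = X(t,:) + sigma*randn(1,d);     % X' ~ N(X' | Xt, sigma^2 I)
    p = exp(logp(xprop) - logp(X(t,:)));   % p = P(X')/P(Xt); Q terms cancel
    if rand < min(1, p)
      X(t+1,:) = xprop;                    % accept
    else
      X(t+1,:) = X(t,:);                   % reject: Xt is duplicated
    end
  end
end

For instance, with a standard 2-D Gaussian as a stand-in target (not the f(X, Y) plotted on the next slides): X = mh_sample(@(x) -0.5*sum(x.^2), [0 0], 0.25, 1000); est = mean(X(101:end,1).^2); drops a burn-in of 100 and estimates E(g(X)) for g applied to the first coordinate.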
MH example
[Figure: surface plot of the target density f(X, Y) over [−1, 1]²]

MH example
[Figure: scatter of MH samples over [−1, 1]²]

In example
• g(x) = x²
• True E(g(X)) = 0.28…
• Proposal: Q(x′ | x) = N(x′ | x, 0.25²I)
• Acceptance rate 55–60%
• After 1000 samples, minus burn-in of 100:
  final estimate 0.282361
  final estimate 0.271167
  final estimate 0.322270
  final estimate 0.306541
  final estimate 0.308716

Gibbs sampler
• Special case of MH
• Divide X into blocks of r.v.s B(1), B(2), …
• Proposal Q:
  ‣ pick a block i uniformly
  ‣ sample XB(i) ~ P(XB(i) | X¬B(i))
• Useful property: acceptance rate p = 1

Gibbs example
[Figure: Gibbs sampling trajectory on a 2-D target; moves are axis-aligned]

Gibbs example
[Figure: Gibbs samples on a second 2-D target]

Gibbs failure example
[Figure: a 2-D target on which Gibbs mixes poorly]

Relational learning
• Linear regression, logistic regression: attribute-value learning
  ‣ set of i.i.d. samples from P(X, Y)
• Not all data is like this
  ‣ an attribute is a property of a single entity
  ‣ what about properties of sets of entities?

Application: document clustering

Application: recommendations

Latent-variable models

Best-known LVM: PCA
• Suppose Xij, Uik, Vjk all ~ Gaussian
  ‣ yields principal components analysis
  ‣ or probabilistic PCA
  ‣ or Bayesian PCA

PCA: the picture
[Figure]

Mean subtraction
‣ Uik ~ N(0, ν²)
‣ Vjk ~ N(0, ν²)
‣ Xij ~ N(Ui⋅Vj, σ²)

>> mu = mean(X(:));        % grand mean
>> colmu = mean(X - mu);   % column means, after grand centering
>> rowmu = mean(X' - mu)'; % row means, after grand centering
>> X = X - mu - repmat(colmu, size(X,1), 1) - repmat(rowmu, 1, size(X,2));

Data weights
• Let Wij = 1 if Xij is observed, 0 if missing
• Likelihood ⋅ prior = ∏ij N(Xij | Ui⋅Vj, σ²)^Wij ⋅ ∏ik N(Uik | 0, ν²) ⋅ ∏jk N(Vjk | 0, ν²)
• More generally, Wij ≥ 0

PCA: cartoon example
          Movie
  User    1  2  3  4  5  6  …
  A       1  1  0  0  1  0  …
  B       0  1  1  0  0  0  …
  C       1  1  0  1  1  0  …
  D       1  0  0  1  1  0  …
  E       0  1  0  1  0  0  …
  F       0  1  1  1  0  1  …
  …

PCA: cartoon example
[Figure: data matrix X (rows x1, x2, …, xn) ≈ compressed matrix U (rows u1, u2, …, un) × basis matrix Vᵀ (rows v1, …, vk)]
• rows of Vᵀ span the low-rank space

Interpreting PCA
[Figure: the factorization again; rows of U correspond to users (basis weights), rows of Vᵀ to basis vectors over movies]
• Basis vectors represent movies that vary together
• Weights say how much each user cares about each type of movie

Another use of PCA
[Figure: face images from Groundhog Day, extracted by Cambridge face DB project]

Image matrix
[Figure: image matrix; rows are images, columns are pixels]

Result of factoring
[Figure: rows of U are per-image basis weights; rows of Vᵀ are basis vectors over pixels]
• Basis vectors are often called "eigenfaces"

Eigenfaces
[Figure; image credit: AT&T Labs Cambridge]

PCA: finding the MLE
• PCA:
  ‣ Uik ~ N(0, ν²)
  ‣ Vjk ~ N(0, ν²)
  ‣ Xij ~ N(Ui⋅Vj, σ²)
  ‣ σ/ν → 0 (the prior flattens out relative to the noise, so the MAP estimate becomes the MLE)

PCA & SVD
• The singular value decomposition is
  ‣ X = R Σ Sᵀ
  ‣ R, S orthonormal; Σ ≥ 0 diagonal
  ‣ All matrices can be expressed this way
  ‣ See svd, svds in Matlab
• So, PCA is U = RΣ, V = S, keeping only the k largest singular values and their vectors

PageRank
• SVD is pretty useful: turns out to be main computational step in other models too
• A famous one: PageRank
  ‣ Given: web graph (V, E)
  ‣ Predict: which pages are important

PageRank: adjacency matrix
[Figure: adjacency matrix A of a small web graph]

Random surfer model
‣ W.p. α: follow an outgoing link from the current page, chosen uniformly
‣ W.p. (1−α): jump to a page chosen uniformly from the whole web
‣ Intuition: page is important if a random surfer is likely to land there

Stationary distribution
[Figure: bar chart of stationary probabilities for pages A, B, C, D]

Thought experiment
• What if A is symmetric?
  ‣ note: we're going to stop distinguishing A, Aᵀ
• So, stationary dist'n for symmetric A is proportional to node degree, i.e., the ranking just counts each page's links
• What do people do instead? → spectral embedding (next)

Spectral embedding
• Another famous model: spectral embedding (and its cousin, spectral clustering)
• Embedding: assign low-D coordinates to vertices (e.g., web pages) so that similar nodes in graph ⇒ nearby coordinates
  ‣ A, B similar = random surfer tends to reach the same places when starting from A or B

Where does random surfer reach?
• Given: a graph with transition matrix A
• Start from distribution π
  ‣ after 1 step: P(j | π, 1-step) = (πᵀA)j
  ‣ after 2 steps: P(j | π, 2-step) = (πᵀA²)j
  ‣ after t steps: (πᵀAᵗ)j

Similarity
• A, B similar = random surfer tends to reach the same places when starting from A or B
• P(j | π, t-step) = (πᵀAᵗ)j, and for symmetric A = UΣUᵀ we have Aᵗ = UΣᵗUᵀ
  ‣ If π has all mass on i: P(j | i, t-step) = (Aᵗ)ij
  ‣ Compare i & j: compare rows i and j of UΣᵗ; these rows are the embedding coordinates
  ‣ Role of Σᵗ: downweights directions with small eigenvalues, so for moderate t only the first few coordinates matter

Role of Σᵗ (real data)
[Figure: eigenvalue spectrum raised to powers t = 1, 3, 5, 10; larger t leaves only the top few eigenvalues significant]

Example: dolphins
• 62-dolphin social network near Doubtful Sound, New Zealand
  ‣ Aij = 1 if dolphin i friends dolphin j (Lusseau et al., 2003)
[Figure: sparsity pattern of the 62×62 adjacency matrix; nz = 318]

Dolphin network
[Figure: two 2-D layouts of the network, spectral embedding vs. random embedding]

Spectral clustering
• Use your favorite clustering algorithm on coordinates from spectral embedding (sketch below)
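A Matlab sketch of the embedding step, under stated assumptions: A is a symmetric 0/1 adjacency matrix (like the dolphin network), and we use the symmetrically normalized adjacency, which has the same eigenvalues as the random-surfer transition matrix. The function name and the parameters k and t are illustrative choices, not values from the slides:

% spectral_embed.m: low-D coordinates from the graph's top eigenvectors.
% A: symmetric adjacency matrix; k: embedding dimension; t: walk length
function coords = spectral_embed(A, k, t)
  d = sum(A, 2);                          % node degrees
  N = diag(d.^-0.5) * A * diag(d.^-0.5);  % normalized adjacency: same
                                          % spectrum as the random-walk matrix
  [U, S] = eigs(N, k);                    % top-k eigenpairs, N ≈ U*S*U'
  coords = U * S^t;                       % Sigma^t damps small eigenvalues
end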
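And "your favorite clustering algorithm" in this sketch could simply be k-means on those coordinates (assumes Matlab's Statistics Toolbox; the embedding dimension, walk length, and cluster count are illustrative guesses):

>> coords = spectral_embed(A, 3, 5);           % 3 coordinates, t = 5
>> labels = kmeans(coords, 4);                 % cluster the embedded nodes
>> gscatter(coords(:,2), coords(:,3), labels)  % plot dims 2-3; dim 1 is
                                               % ~sqrt(degree), uninformative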