CS 294-2  Density Matrices, von Neumann Entropy                    3/7/07
Spring 2007                                                    Lecture 13

In this lecture, we will discuss the basics of quantum information theory. In particular, we will discuss mixed quantum states, density matrices, von Neumann entropy, and the trace distance between mixed quantum states.

1  Mixed Quantum State

So far we have dealt with pure quantum states

\[ |\psi\rangle = \sum_x \alpha_x |x\rangle. \]

This is not the most general state we can think of. We can also consider a probability distribution over pure states, such as $|0\rangle$ with probability 1/2 and $|1\rangle$ with probability 1/2. Another possibility is the state

\[ \begin{cases} |+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) & \text{with probability } 1/2 \\[4pt] |-\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle) & \text{with probability } 1/2 \end{cases} \]

In general, we can think of a mixed state $\{p_i, |\psi_i\rangle\}$ as a collection of pure states $|\psi_i\rangle$, each with an associated probability $p_i$, subject to the conditions $0 \le p_i \le 1$ and $\sum_i p_i = 1$. One context in which mixed states arise naturally is quantum protocols, where two players share an entangled (pure) quantum state. Each player's view of their own quantum register is then a probability distribution over pure states (realized when the other player measures their register). Another reason we consider mixed states is that quantum states are hard to isolate, and hence often become entangled with the environment.

2  Density Matrix

Now we consider the result of measuring a mixed quantum state. Suppose we have a mixture of quantum states $|\psi_i\rangle$, each with probability $p_i$. Each $|\psi_i\rangle$ can be represented by a vector in $\mathbb{C}^{2^n}$, and thus we can associate with it the outer product $|\psi_i\rangle\langle\psi_i| = \psi_i \psi_i^*$, which is a $2^n \times 2^n$ matrix:

\[
\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{pmatrix}
\begin{pmatrix} \bar{a}_1 & \bar{a}_2 & \cdots & \bar{a}_N \end{pmatrix}
=
\begin{pmatrix}
a_1\bar{a}_1 & a_1\bar{a}_2 & \cdots & a_1\bar{a}_N \\
a_2\bar{a}_1 & a_2\bar{a}_2 & \cdots & a_2\bar{a}_N \\
\vdots & \vdots & \ddots & \vdots \\
a_N\bar{a}_1 & a_N\bar{a}_2 & \cdots & a_N\bar{a}_N
\end{pmatrix}.
\]

We can now take the average of these matrices, weighted by the probabilities, and obtain the density matrix of the mixture $\{p_i, |\psi_i\rangle\}$:

\[ \rho = \sum_i p_i\, |\psi_i\rangle\langle\psi_i|. \]

We give some examples. Consider the mixed state $|0\rangle$ with probability 1/2 and $|1\rangle$ with probability 1/2. Then

\[ |0\rangle\langle 0| = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \]

and

\[ |1\rangle\langle 1| = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\begin{pmatrix} 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \]

Thus in this case

\[ \rho = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1| = \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix}. \]

Now consider another mixed state, this time consisting of $|+\rangle$ with probability 1/2 and $|-\rangle$ with probability 1/2. This time we have

\[ |+\rangle\langle +| = \frac{1}{2}\begin{pmatrix} 1 \\ 1 \end{pmatrix}\begin{pmatrix} 1 & 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \]

and

\[ |-\rangle\langle -| = \frac{1}{2}\begin{pmatrix} 1 \\ -1 \end{pmatrix}\begin{pmatrix} 1 & -1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}. \]

Thus in this case the off-diagonal entries cancel, and we get

\[ \rho = \frac{1}{2}|+\rangle\langle +| + \frac{1}{2}|-\rangle\langle -| = \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix}. \]

Note that the two density matrices we computed are identical, even though the mixed states we started with were different. Hence we see that it is possible for two different mixed states to have the same density matrix. Nonetheless, the density matrix of a mixture completely determines the effect of making a measurement on the system:

Theorem 13.1: Suppose we measure a mixed state $\{p_j, |\psi_j\rangle\}$ in an orthonormal basis $\{|\beta_k\rangle\}$. Then the outcome is $|\beta_k\rangle$ with probability $\langle\beta_k|\rho|\beta_k\rangle$.

Proof: We denote the probability of measuring $|\beta_k\rangle$ by $\Pr[k]$. Then

\[ \Pr[k] = \sum_j p_j \big|\langle\psi_j|\beta_k\rangle\big|^2 = \sum_j p_j \langle\beta_k|\psi_j\rangle\langle\psi_j|\beta_k\rangle = \Big\langle \beta_k \Big| \sum_j p_j |\psi_j\rangle\langle\psi_j| \Big| \beta_k \Big\rangle = \langle\beta_k|\rho|\beta_k\rangle. \qquad \Box \]

We list several properties of the density matrix:

1. $\rho$ is Hermitian, so its eigenvalues are real and its eigenvectors are orthogonal.

2. If we measure in the standard basis, the probability of outcome $i$ is $\Pr[i] = \rho_{i,i}$. Moreover, the eigenvalues of $\rho$ are non-negative: suppose $\lambda$ and $|e\rangle$ are a corresponding eigenvalue and eigenvector. If we measure in the eigenbasis, we have $\Pr[e] = \langle e|\rho|e\rangle = \lambda\langle e|e\rangle = \lambda$, and since $\Pr[e]$ is a probability, $\lambda \ge 0$.

3. $\operatorname{tr}\rho = 1$. This is because if we measure in the standard basis, $\rho_{i,i} = \Pr[i]$, but also $\sum_i \Pr[i] = 1$, so $\sum_i \rho_{i,i} = \sum_i \Pr[i] = 1$.
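To see these definitions in action, here is a short numerical check, a sketch in Python with NumPy rather than part of the original notes (the helper name density_matrix is ours): it builds the density matrices of the $\{|0\rangle, |1\rangle\}$ and $\{|+\rangle, |-\rangle\}$ mixtures, confirms they are identical, and verifies properties 1-3 together with the measurement rule of Theorem 13.1.

    import numpy as np

    def density_matrix(states, probs):
        # Illustrative helper (not from the notes):
        # rho = sum_i p_i |psi_i><psi_i|, built from outer products.
        return sum(p * np.outer(psi, psi.conj()) for psi, p in zip(states, probs))

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)
    ketp = (ket0 + ket1) / np.sqrt(2)   # |+>
    ketm = (ket0 - ket1) / np.sqrt(2)   # |->

    rho1 = density_matrix([ket0, ket1], [0.5, 0.5])
    rho2 = density_matrix([ketp, ketm], [0.5, 0.5])
    assert np.allclose(rho1, rho2)      # both equal I/2: same density matrix

    # Properties 1-3: Hermitian, non-negative eigenvalues, unit trace.
    assert np.allclose(rho1, rho1.conj().T)
    assert np.all(np.linalg.eigvalsh(rho1) >= -1e-12)
    assert np.isclose(np.trace(rho1).real, 1.0)

    # Theorem 13.1: Pr[k] = <beta_k|rho|beta_k>, e.g. in the basis {|+>, |->}.
    for beta in (ketp, ketm):
        assert np.isclose((beta.conj() @ rho1 @ beta).real, 0.5)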
Consider the following two mixtures and their density matrices (writing $c_\theta = \cos\theta$, $s_\theta = \sin\theta$):

\[
\begin{aligned}
\cos\theta\,|0\rangle + \sin\theta\,|1\rangle \ \text{w.p. } 1/2 \;&\longrightarrow\; \frac{1}{2}\begin{pmatrix} c_\theta \\ s_\theta \end{pmatrix}\begin{pmatrix} c_\theta & s_\theta \end{pmatrix} = \frac{1}{2}\begin{pmatrix} c_\theta^2 & c_\theta s_\theta \\ c_\theta s_\theta & s_\theta^2 \end{pmatrix} \\
\cos\theta\,|0\rangle - \sin\theta\,|1\rangle \ \text{w.p. } 1/2 \;&\longrightarrow\; \frac{1}{2}\begin{pmatrix} c_\theta \\ -s_\theta \end{pmatrix}\begin{pmatrix} c_\theta & -s_\theta \end{pmatrix} = \frac{1}{2}\begin{pmatrix} c_\theta^2 & -c_\theta s_\theta \\ -c_\theta s_\theta & s_\theta^2 \end{pmatrix}
\end{aligned}
\]

whose sum is $\begin{pmatrix} \cos^2\theta & 0 \\ 0 & \sin^2\theta \end{pmatrix}$; and

\[
\begin{aligned}
|0\rangle \ \text{w.p. } \cos^2\theta \;&\longrightarrow\; \cos^2\theta \begin{pmatrix} 1 \\ 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \end{pmatrix} = \cos^2\theta \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \\
|1\rangle \ \text{w.p. } \sin^2\theta \;&\longrightarrow\; \sin^2\theta \begin{pmatrix} 0 \\ 1 \end{pmatrix}\begin{pmatrix} 0 & 1 \end{pmatrix} = \sin^2\theta \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
\end{aligned}
\]

whose sum is likewise $\begin{pmatrix} \cos^2\theta & 0 \\ 0 & \sin^2\theta \end{pmatrix}$. Thus, since the two mixtures have identical density matrices, they are indistinguishable.

3  Von Neumann Entropy

We will now show that if two mixed states are represented by different density matrices, then there is a measurement that distinguishes them. Suppose we have two mixed states, with density matrices $A$ and $B$ such that $A \ne B$. We can ask: what is a good measurement to distinguish the two states? We can diagonalize the difference $A - B$ to get $A - B = E \Lambda E^*$, where $E$ is the matrix of orthonormal eigenvectors. Then if $e_i$ is an eigenvector with eigenvalue $\lambda_i$, $\lambda_i$ is the difference in the probabilities of measuring $e_i$:

\[ \Pr\nolimits_A[i] - \Pr\nolimits_B[i] = \lambda_i. \]

We can define the distance between the two probability distributions (with respect to a basis $E$) as

\[ |D_A - D_B|_E = \sum_i \big| \Pr\nolimits_A[i] - \Pr\nolimits_B[i] \big|. \]

If $E$ is the eigenbasis, then

\[ |D_A - D_B|_E = \sum_i |\lambda_i| = \operatorname{tr}|A - B| = \|A - B\|_{\mathrm{tr}}, \]

which is called the trace distance between $A$ and $B$.

Claim: Measuring with respect to the eigenbasis $E$ (of the matrix $A - B$) is optimal, in the sense that it maximizes the distance $|D_A - D_B|_E$ between the two probability distributions.

Before we prove this claim, we introduce the following definition and lemma, the latter without proof.

Definition: Let $\{a_i\}_{i=1}^N$ and $\{b_i\}_{i=1}^N$ be two non-increasing sequences such that $\sum_i a_i = \sum_i b_i$. The sequence $\{a_i\}$ is said to majorize $\{b_i\}$ if for all $k$,

\[ \sum_{i=1}^k a_i \ge \sum_{i=1}^k b_i. \]

Lemma (Schur): The eigenvalues of any Hermitian matrix majorize its diagonal entries (when both are sorted in non-increasing order).

Now we can prove the Claim.

Proof: Since we can reorder the eigenvectors, we can assume $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$. Note that $\operatorname{tr}(A - B) = 0$, so we must have $\sum_i \lambda_i = 0$. Splitting the $\lambda_i$'s into two groups, the positive ones and the negative ones, we must have

\[ \sum_{\lambda_i > 0} \lambda_i = \frac{1}{2}\|A - B\|_{\mathrm{tr}}, \qquad \sum_{\lambda_i < 0} \lambda_i = -\frac{1}{2}\|A - B\|_{\mathrm{tr}}. \]

Thus

\[ \max_k \sum_{i=1}^k \lambda_i = \frac{1}{2}\|A - B\|_{\mathrm{tr}}. \]

Now consider measuring in another basis $F$. Then the matrix $A - B$ is represented as $H = F(A - B)F^*$; let $\mu_1 \ge \mu_2 \ge \cdots \ge \mu_n$ be the diagonal entries of $H$, sorted in non-increasing order. Since $\operatorname{tr} H = 0$ as well, a similar argument shows that

\[ \max_k \sum_{i=1}^k \mu_i = \frac{1}{2} \sum_{i=1}^n |\mu_i| = \frac{|D_A - D_B|_F}{2}. \]

But by Schur's lemma the $\lambda_i$'s majorize the $\mu_i$'s, so we must have

\[ |D_A - D_B|_F \le |D_A - D_B|_E = \|A - B\|_{\mathrm{tr}}. \qquad \Box \]

Let $H(X)$ be the Shannon entropy of a random variable $X$ that takes value $x_i$ with probability $p_i$, for $i = 1, \ldots, n$:

\[ H(\{p_i\}) = \sum_i p_i \log \frac{1}{p_i}. \]
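The trace-distance argument above also lends itself to a numerical sanity check. The sketch below is again ours, not part of the original notes (the helper names trace_distance and dist_in_basis are hypothetical): it computes $\|A - B\|_{\mathrm{tr}}$ for two single-qubit density matrices, confirms that measuring in the eigenbasis of $A - B$ attains it, and checks that a random orthonormal basis does no better, as the Claim asserts.

    import numpy as np

    def trace_distance(A, B):
        # Illustrative helper: ||A - B||_tr is the sum of the absolute
        # values of the eigenvalues of the Hermitian matrix A - B.
        return np.sum(np.abs(np.linalg.eigvalsh(A - B)))

    def dist_in_basis(A, B, F):
        # |D_A - D_B|_F: total variation distance between the outcome
        # distributions when measuring A and B in the orthonormal basis
        # given by the columns of F (Theorem 13.1 gives the diagonals).
        pA = np.real(np.diag(F.conj().T @ A @ F))
        pB = np.real(np.diag(F.conj().T @ B @ F))
        return np.sum(np.abs(pA - pB))

    A = np.array([[0.75, 0], [0, 0.25]], dtype=complex)   # a diagonal mixture
    B = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # the pure state |+><+|

    # The eigenbasis E of A - B attains the trace distance ...
    _, E = np.linalg.eigh(A - B)
    assert np.isclose(dist_in_basis(A, B, E), trace_distance(A, B))

    # ... and a random orthonormal basis never exceeds it (the Claim).
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    assert dist_in_basis(A, B, Q) <= trace_distance(A, B) + 1e-12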