Entropy in the Quantum World

Panagiotis Aleiferis
EECS 598, Fall 2001

Outline
– Entropy in the classic world
– Theoretical background
  – Density matrix
  – Properties of the density matrix
  – The reduced density matrix
– Shannon's entropy
– Entropy in the quantum world
  – Definition and basic properties
  – Some useful theorems
– Applications
  – Entropy as a measure of entanglement
– References

Entropy in the classic world ("Murphy's laws")
– 1st law of thermodynamics: energy is conserved, ΔU = ΔQ - ΔW.
– 2nd law of thermodynamics: "There is some degradation of the total energy U in the system, some non-useful heat, in any thermodynamic process."
  Rudolf Clausius (1822-1888)

Why does heat always flow from warm to cold?

Ludwig Boltzmann (1844-1906):
– "When energy is degraded, the atoms become more disordered, the entropy increases!"
– "At equilibrium, the system will be in its most probable state and the entropy will be maximum."
– The more disordered the energy, the less useful it can be!

  S = k log W

All possible microstates of 4 coins:
  Four heads:              W = 1
  Three heads, one tails:  W = 4
  Two heads, two tails:    W = 6
  One heads, three tails:  W = 4
  Four tails:              W = 1

Boltzmann statistics – 5 dipoles in an external
field
– Each dipole contributes energy 0 if aligned with the field and U if anti-aligned. With n of the 5 dipoles anti-aligned, E_n = nU and g_n = C(5,n):
  E_0 = 0,   g_0 = 1,    P_0 ∝ 1·exp(0)
  E_1 = U,   g_1 = 5,    P_1 ∝ 5·exp(-U/kT)
  E_2 = 2U,  g_2 = 10,   P_2 ∝ 10·exp(-2U/kT)
  E_3 = 3U,  g_3 = 10,   P_3 ∝ 10·exp(-3U/kT)
  E_4 = 4U,  g_4 = 5,    P_4 ∝ 5·exp(-4U/kT)
  E_5 = 5U,  g_5 = 1,    P_5 ∝ 1·exp(-5U/kT)

General relations of Boltzmann statistics
– For a system in equilibrium at temperature T:
  P_i = g_i exp(-E_i/kT) / Σ_n g_n exp(-E_n/kT)
– Statistical entropy:
  S = -k Σ_i P_i ln P_i

Theoretical Background

The density matrix ρ
– In most cases we do NOT completely know the exact state of the system; we can only estimate the probabilities P_i that the system is in the states |ψ_i⟩.
– Our system is then in an "ensemble" of pure states {P_i, |ψ_i⟩}.
– Define:  ρ ≡ Σ_i P_i |ψ_i⟩⟨ψ_i|,  so that tr(ρ) = 1.

Properties of the density matrix
– tr(ρ) = 1.
– ρ is a positive operator (positive means ⟨v|ρ|v⟩ is real and non-negative for every |v⟩).
– If a unitary operator U is applied, the density matrix transforms as ρ(t_1) = U ρ(t_0) U†.
– ρ corresponds to a pure state if and only if tr(ρ²) = 1.
– ρ corresponds to a mixed state if and only if tr(ρ²) < 1.
– If we choose the energy eigenfunctions as our basis set, then H and ρ are both diagonal, i.e. H_nm = E_n δ_nm and ρ_nm = ρ_n δ_nm.
– In any other representation ρ may or may not be diagonal, but generally it will be symmetric, i.e. ρ_nm = ρ_mn. Detailed balance is essential so that equilibrium is maintained (i.e. the probabilities do NOT explicitly depend on time).

The reduced density matrix
– What happens if we want to describe a subsystem of a composite system?
– Divide our system AB into parts A and B.
– Reduced density matrix for the subsystem A:  ρ_A ≡ tr_B(ρ_AB), where tr_B is the "partial trace over subsystem B":
  tr_B(|a_1⟩⟨a_2| ⊗ |b_1⟩⟨b_2|) = |a_1⟩⟨a_2| tr(|b_1⟩⟨b_2|) = ⟨b_2|b_1⟩ |a_1⟩⟨a_2|

Shannon's entropy

Definition
– How much information do we gain, on average, when we learn the value of a random variable X?
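The Boltzmann relations above (P_i = g_i exp(-E_i/kT) / Σ_n g_n exp(-E_n/kT) and S = -k Σ_i P_i ln P_i) can be evaluated numerically for the 5-dipole example. A minimal Python sketch; the ratio U/kT = 1 and the choice k = 1 are illustrative assumptions, not values from the slides:

```python
from math import comb, exp, log

N = 5      # number of dipoles
x = 1.0    # assumed ratio U/kT, chosen only for illustration

# Level n = number of anti-aligned dipoles: E_n = n*U, degeneracy g_n = C(N, n)
weights = [comb(N, n) * exp(-n * x) for n in range(N + 1)]
Z = sum(weights)                    # partition function
P = [w / Z for w in weights]        # P_n = g_n exp(-E_n/kT) / Z

# Statistical entropy S = -k sum_i p_i ln p_i over all 2^N microstates,
# where each of the g_n microstates in level n has probability P_n / g_n (k = 1)
S = -sum(comb(N, n) * (P[n] / comb(N, n)) * log(P[n] / comb(N, n))
         for n in range(N + 1))

print(sum(P))   # ≈ 1: the level probabilities are normalized
print(S)        # between 0 and N·ln 2 (the infinite-temperature maximum)
```

At x = 0 (infinite temperature) every microstate is equally likely and S reaches k ln 2^N, matching S = k log W for W = 2^5 microstates.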
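The density-matrix properties listed above (tr(ρ) = 1, and the purity test tr(ρ²) distinguishing pure from mixed states) can be checked directly. A dependency-free Python sketch; the 50/50 ensemble of |0⟩ and |+⟩ is an assumed example, not one from the slides:

```python
from math import sqrt

def density_matrix(ensemble):
    """rho = sum_i P_i |psi_i><psi_i| for an ensemble [(P_i, psi_i), ...]."""
    n = len(ensemble[0][1])
    rho = [[0j] * n for _ in range(n)]
    for p, psi in ensemble:
        for i in range(n):
            for j in range(n):
                rho[i][j] += p * psi[i] * psi[j].conjugate()
    return rho

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def purity(rho):
    """tr(rho^2) = sum_ij rho_ij * rho_ji."""
    n = len(rho)
    return sum(rho[i][j] * rho[j][i] for i in range(n) for j in range(n))

ket0 = [1 + 0j, 0j]
plus = [1 / sqrt(2) + 0j, 1 / sqrt(2) + 0j]        # |+> = (|0> + |1>)/sqrt(2)

pure = density_matrix([(1.0, ket0)])               # a pure state
mixed = density_matrix([(0.5, ket0), (0.5, plus)]) # a 50/50 mixture

print(trace(mixed).real)     # ≈ 1: tr(rho) = 1
print(purity(pure).real)     # 1: pure state  -> tr(rho^2) = 1
print(purity(mixed).real)    # ≈ 0.75: mixed state -> tr(rho^2) < 1
```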
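The partial trace defined above can be sketched the same way. Taking as an assumed example the entangled Bell state (|00⟩ + |11⟩)/√2 for the composite system AB, the reduced state of subsystem A comes out maximally mixed, even though the whole is pure:

```python
from math import sqrt

# Bell state (|00> + |11>)/sqrt(2); amplitudes in the basis |00>,|01>,|10>,|11>
psi = [1 / sqrt(2), 0.0, 0.0, 1 / sqrt(2)]
rho_AB = [[a * b for b in psi] for a in psi]   # |psi><psi| (real amplitudes)

def partial_trace_B(rho, dA=2, dB=2):
    """(rho_A)_{a,a'} = sum_b rho_{(a,b),(a',b)} -- trace over subsystem B."""
    return [[sum(rho[a * dB + b][a2 * dB + b] for b in range(dB))
             for a2 in range(dA)]
            for a in range(dA)]

rho_A = partial_trace_B(rho_AB)
print(rho_A)     # ≈ [[0.5, 0], [0, 0.5]]: the maximally mixed state I/2

purity_A = sum(rho_A[i][j] * rho_A[j][i] for i in range(2) for j in range(2))
print(purity_A)  # ≈ 0.5 < 1: the subsystem of a pure entangled state is mixed
```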
  Equivalently: what is the uncertainty, on average, about X before we learn its value?
– If {p_1, p_2, …, p_n} is the probability distribution of the n possible values of X:
  H(X) ≡ H(p_1, p_2, …, p_n) ≡ -Σ_i p_i log_2 p_i   (bits)
– By definition 0·log_2 0 ≡ 0 (events with zero probability do not contribute to the entropy).
– The entropy H(X) depends only on the respective probabilities of the individual events X_i!
– Why is the entropy defined this way? It gives the minimal physical resources required to store information so that at a later time the information can be reconstructed ("Shannon's noiseless coding theorem").

Example of Shannon's noiseless coding theorem
– Code 4 symbols {1, 2, 3, 4} with probabilities {1/2, 1/4, 1/8, 1/8}.
– Code without compression:  1 → 00, 2 → 01, 3 → 10, 4 → 11  (2 bits per symbol).
– But what happens if we use this code instead?  1 → 0, 2 → 10, 3 → 110, 4 → 111
– Average string length for the second code:
  (1/2)·1 + (1/4)·2 + (1/8)·3 + (1/8)·3 = 7/4 bits
– Note:  H(1/2, 1/4, 1/8, 1/8) = (1/2)log_2 2 + (1/4)log_2 4 + (1/8)log_2 8 + (1/8)log_2 8 = 7/4 !!!

Joint and conditional entropy
– A pair (X, Y) of random variables.
– Joint entropy of X and Y:  H(X,Y) ≡ -Σ_{x,y} p(x,y) log_2 p(x,y)
– Entropy of X conditional on knowing Y:  H(X|Y) ≡ H(X,Y) - H(Y)

Mutual information
– How much do X and Y have in common?
– Mutual information of X and Y:  H(X:Y) ≡ H(X) + H(Y) - H(X,Y)
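The entropy formula and the noiseless-coding example above can be checked in a few lines of Python (the probabilities and both codes are taken from the slides):

```python
from math import log2

def H(probs):
    """Shannon entropy in bits; 0*log2(0) is taken to be 0 by convention."""
    return -sum(p * log2(p) for p in probs if p > 0)

p = [1/2, 1/4, 1/8, 1/8]
entropy = H(p)                                   # 7/4 bits

code = {1: "0", 2: "10", 3: "110", 4: "111"}     # the compressed code
avg_len = sum(pi * len(code[s]) for s, pi in zip([1, 2, 3, 4], p))

print(entropy, avg_len)   # 1.75 1.75 -- this code exactly meets the entropy bound
```

The dyadic probabilities make the match exact; for a general distribution the entropy is only a lower bound on the average code length.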
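The joint, conditional, and mutual quantities defined above can be illustrated on a small distribution. The joint distribution below is an assumed toy example, not one from the slides:

```python
from math import log2

def H(probs):
    """Shannon entropy in bits, skipping zero-probability events."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Assumed toy joint distribution p(x, y) for X, Y in {0, 1}
pxy = {(0, 0): 1/2, (0, 1): 1/4, (1, 0): 0.0, (1, 1): 1/4}

H_XY = H(pxy.values())                                 # joint entropy H(X,Y)
pX = [pxy[0, 0] + pxy[0, 1], pxy[1, 0] + pxy[1, 1]]    # marginal of X
pY = [pxy[0, 0] + pxy[1, 0], pxy[0, 1] + pxy[1, 1]]    # marginal of Y

H_X_given_Y = H_XY - H(pY)        # H(X|Y) = H(X,Y) - H(Y)
I_XY = H(pX) + H(pY) - H_XY       # H(X:Y) = H(X) + H(Y) - H(X,Y)

print(H_XY, H_X_given_Y, I_XY)    # I_XY > 0 here: X and Y are correlated
```

For this distribution H(X,Y) = 1.5 bits and H(X|Y) = 0.5 bits; if X and Y were independent, H(X,Y) would equal H(X) + H(Y) and the mutual information would vanish.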