
Lecture 2: The Dot Product, Orthogonal and Orthonormal Sets, the Gram-Schmidt Algorithm
AMath 352
Wed., Mar. 31

Dot Product or Inner Product

For u, v ∈ Rⁿ:

    u · v = ⟨u, v⟩ = Σ_{j=1}^{n} u_j v_j.

This is the standard (Euclidean) inner product.

An inner product is anything that satisfies:
1. ⟨u, v⟩ = ⟨v, u⟩,
2. ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩,
3. ⟨u, αv⟩ = α⟨u, v⟩,
4. ⟨u, u⟩ ≥ 0, with equality only if u = 0.

For example, ⟨u, v⟩_w := Σ_{j=1}^{n} w_j u_j v_j, with w_j > 0 for j = 1, ..., n, is a weighted inner product.

Norms

An inner product gives rise to a norm: ‖u‖ = ⟨u, u⟩^{1/2}. For the Euclidean inner product, this means that

    ‖u‖ = sqrt( Σ_{j=1}^{n} u_j² ).

A norm is anything that satisfies:
1. ‖u‖ ≥ 0, with equality only if u = 0,
2. ‖αu‖ = |α| · ‖u‖,
3. ‖u + v‖ ≤ ‖u‖ + ‖v‖ (triangle inequality).

For example, ‖u‖_w := sqrt( Σ_{j=1}^{n} w_j u_j² ), with w_j > 0 for j = 1, ..., n, is a weighted norm.

Orthogonal and Orthonormal Vectors

Two vectors u and v are orthogonal if ⟨u, v⟩ = 0. They are orthonormal if also ‖u‖ = ‖v‖ = 1.

Example: (1, 2) and (4, −2) are orthogonal; (1/√5)(1, 2) and (1/√20)(4, −2) are orthonormal.

The Gram-Schmidt Algorithm

Given a set of linearly independent vectors v_1, ..., v_m, construct an orthonormal set q_1, ..., q_m where

    span(q_1, ..., q_k) = span(v_1, ..., v_k),   k = 1, ..., m.

Normalize the first vector:

    q_1 = (1/‖v_1‖) v_1  ⇒  ‖q_1‖ = (1/‖v_1‖) ‖v_1‖ = 1.

Orthogonalize the second vector:

    q̃_2 = v_2 − ⟨v_2, q_1⟩ q_1
    ⇒ ⟨q̃_2, q_1⟩ = ⟨v_2, q_1⟩ − ⟨v_2, q_1⟩⟨q_1, q_1⟩ = 0.

Normalize the second vector:

    q_2 = (1/‖q̃_2‖) q̃_2.

The Gram-Schmidt Algorithm, Cont.

Orthogonalize the third vector:

    q̃_3 = v_3 − ⟨v_3, q_1⟩ q_1 − ⟨v_3, q_2⟩ q_2
    ⇒ ⟨q̃_3, q_1⟩ = ⟨v_3, q_1⟩ − ⟨v_3, q_1⟩⟨q_1, q_1⟩ − ⟨v_3, q_2⟩⟨q_2, q_1⟩
                 = ⟨v_3, q_1⟩ − ⟨v_3, q_1⟩ · 1 − ⟨v_3, q_2⟩ · 0 = 0;
      ⟨q̃_3, q_2⟩ = ⟨v_3, q_2⟩ − ⟨v_3, q_1⟩⟨q_1, q_2⟩ − ⟨v_3, q_2⟩⟨q_2, q_2⟩
                 = ⟨v_3, q_2⟩ − ⟨v_3, q_1⟩ · 0 − ⟨v_3, q_2⟩ · 1 = 0.

Normalize:

    q_3 = (1/‖q̃_3‖) q̃_3.

In general: given a linearly independent set v_1, ..., v_m, set q_1 = (1/‖v_1‖) v_1. For j = 2, 3, ..., m,

    q̃_j = v_j − Σ_{i=1}^{j−1} ⟨v_j, q_i⟩ q_i,   q_j = (1/‖q̃_j‖) q̃_j.

Example with the Gram-Schmidt Algorithm

    v_1 = (2, 0, 0),   v_2 = (1, 3, 0),   v_3 = (−1, 4, 5).

    q_1 = (1/√(2² + 0² + 0²)) (2, 0, 0) = (1, 0, 0).

    q̃_2 = (1, 3, 0) − (1·1 + 3·0 + 0·0)(1, 0, 0) = (0, 3, 0),
    q_2 = (1/3)(0, 3, 0) = (0, 1, 0).

Example with the Gram-Schmidt Algorithm, Cont.

    q̃_3 = (−1, 4, 5) − (−1·1)(1, 0, 0) − (4·1)(0, 1, 0)
        = (−1, 4, 5) + (1, 0, 0) − (0, 4, 0) = (0, 0, 5),
    q_3 = (1/5)(0, 0, 5) = (0, 0, 1).

Summary

We covered more definitions:
- inner product,
- norm,
- orthogonal and orthonormal vectors.
And we covered an important algorithm: the Gram-Schmidt algorithm.
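The weighted inner product and weighted norm from the lecture can be sketched in a few lines. This is a minimal illustration assuming Python with NumPy; the weight and sample vectors below are made up for the example, not taken from the lecture.

```python
import numpy as np

def inner_w(u, v, w):
    """Weighted inner product <u, v>_w = sum_j w_j * u_j * v_j, with w_j > 0."""
    return float(np.sum(w * u * v))

def norm_w(u, w):
    """Weighted norm ||u||_w = sqrt(<u, u>_w)."""
    return float(np.sqrt(inner_w(u, u, w)))

# Hypothetical positive weights and sample vectors (illustrative only).
w = np.array([1.0, 2.0, 3.0])
u = np.array([1.0, -1.0, 2.0])
v = np.array([0.0, 2.0, 1.0])
print(inner_w(u, v, w))  # 1*1*0 + 2*(-1)*2 + 3*2*1 = 2.0
print(norm_w(u, w))      # sqrt(1*1 + 2*1 + 3*4) = sqrt(15)
```

Setting all weights to 1 recovers the standard Euclidean inner product and norm.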

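The lecture's orthogonality example, (1, 2) and (4, −2), can be checked numerically; the use of Python/NumPy here is an assumption for illustration.

```python
import numpy as np

# The lecture's example: (1, 2) and (4, -2) are orthogonal.
u = np.array([1.0, 2.0])
v = np.array([4.0, -2.0])
print(np.dot(u, v))  # 0.0

# Dividing each vector by its norm gives the orthonormal pair
# (1/sqrt(5))(1, 2) and (1/sqrt(20))(4, -2).
q1 = u / np.linalg.norm(u)
q2 = v / np.linalg.norm(v)
print(np.dot(q1, q2))                          # still 0.0
print(np.linalg.norm(q1), np.linalg.norm(q2))  # 1.0 1.0
```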

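The Gram-Schmidt steps above can be sketched as a short routine and run on the lecture's example vectors. This is a minimal classical Gram-Schmidt implementation assuming Python with NumPy; the function name and matrix layout (vectors as columns) are choices for the sketch, not from the lecture.

```python
import numpy as np

def gram_schmidt(V):
    """Classical Gram-Schmidt: orthonormalize the columns of V.

    Assumes the columns are linearly independent, as in the lecture.
    """
    V = np.asarray(V, dtype=float)
    n, m = V.shape
    Q = np.zeros((n, m))
    for j in range(m):
        # q~_j = v_j - sum_{i<j} <v_j, q_i> q_i
        q = V[:, j] - Q[:, :j] @ (Q[:, :j].T @ V[:, j])
        # q_j = q~_j / ||q~_j||
        Q[:, j] = q / np.linalg.norm(q)
    return Q

# Columns are the lecture's v_1 = (2,0,0), v_2 = (1,3,0), v_3 = (-1,4,5).
V = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0,  4.0],
              [0.0, 0.0,  5.0]])
Q = gram_schmidt(V)
print(Q)  # columns (1,0,0), (0,1,0), (0,0,1), matching q_1, q_2, q_3
```

For this input the routine reproduces the worked example: the orthonormal columns are the standard basis vectors, and Qᵀ Q is the identity.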