UIUC MATH 415 - lecture10

Review

• Goal: solve for u(x) in the boundary value problem (BVP)

    $-\frac{d^2 u}{dx^2} = f(x), \quad 0 \le x \le 1, \quad u(0) = u(1) = 0.$

• Replace u(x) by its values at equally spaced points in [0, 1]:

    $u_0 = 0, \quad u_1 = u(h), \quad u_2 = u(2h), \quad u_3 = u(3h), \quad \ldots, \quad u_n = u(nh), \quad u_{n+1} = 0,$

  at the grid points $0, h, 2h, 3h, \ldots, nh, 1$.

• Approximate the second derivative at these points by finite differences:

    $-\frac{d^2 u}{dx^2} \approx -\frac{u(x+h) - 2u(x) + u(x-h)}{h^2}.$

• This yields one linear equation at each point $x = h, 2h, \ldots, nh$. For $n = 5$, $h = \frac{1}{6}$:

    $\underbrace{\begin{pmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & -1 & 2 & -1 & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5 \end{pmatrix}}_{x} = \underbrace{\begin{pmatrix} h^2 f(h) \\ h^2 f(2h) \\ h^2 f(3h) \\ h^2 f(4h) \\ h^2 f(5h) \end{pmatrix}}_{b}$

• Compute the LU decomposition:

    $\begin{pmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & -1 & 2 & -1 & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & & & & \\ -\frac{1}{2} & 1 & & & \\ & -\frac{2}{3} & 1 & & \\ & & -\frac{3}{4} & 1 & \\ & & & -\frac{4}{5} & 1 \end{pmatrix} \begin{pmatrix} 2 & -1 & & & \\ & \frac{3}{2} & -1 & & \\ & & \frac{4}{3} & -1 & \\ & & & \frac{5}{4} & -1 \\ & & & & \frac{6}{5} \end{pmatrix}$

  That is what the LU decomposition of band matrices always looks like: the factors L and U are band matrices again.

Armin ([email protected])

LU decomposition vs matrix inverse

In many applications, we don't just solve Ax = b for a single b, but for many different b (think millions). Note, for instance, that in our example of "steady-state temperature distribution in a bar" the matrix A is always the same (it only depends on the kind of problem), whereas the vector b models the external heat (and thus changes for each specific instance).

• That's why the LU decomposition saves us from repeating lots of computation, in comparison with Gaussian elimination on [A b].

• What about computing A⁻¹ instead? We are going to see that this is a bad idea. (It usually is.)
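The band pattern of the LU factors above is easy to check numerically. Here is a minimal sketch in NumPy; the helper `lu_nopivot` is our own illustration, not a library routine, and skipping row exchanges is safe here because this A needs no pivoting:

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU factorization without pivoting: returns L, U with A = L @ U."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # the multiplier, recorded in L
            U[i, :] -= L[i, k] * U[k, :]  # eliminate the entry below the pivot
    return L, U

# The n = 5 matrix A from the boundary value problem above
A = np.diag(np.full(5, 2.0)) + np.diag(np.full(4, -1.0), 1) + np.diag(np.full(4, -1.0), -1)
L, U = lu_nopivot(A)

print(np.diag(L, -1))  # subdiagonal of L: -1/2, -2/3, -3/4, -4/5
print(np.diag(U))      # diagonal of U: 2, 3/2, 4/3, 5/4, 6/5
```

Note that elimination never creates entries outside the band: L and U come out bidiagonal, which is exactly what makes forward and backward substitution with them so cheap.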
Example 1. When using the LU decomposition to solve Ax = b, we employ forward and backward substitution:

    $Ax = b \;\xrightarrow{\;A = LU\;}\; Lc = b \text{ and } Ux = c.$

Here we have to solve, for each b,

    $\begin{pmatrix} 1 & & & & \\ -\frac{1}{2} & 1 & & & \\ & -\frac{2}{3} & 1 & & \\ & & -\frac{3}{4} & 1 & \\ & & & -\frac{4}{5} & 1 \end{pmatrix} c = b, \qquad \begin{pmatrix} 2 & -1 & & & \\ & \frac{3}{2} & -1 & & \\ & & \frac{4}{3} & -1 & \\ & & & \frac{5}{4} & -1 \\ & & & & \frac{6}{5} \end{pmatrix} x = c,$

by forward and backward substitution.

How many operations (additions and multiplications) are needed in the n × n case?

2(n − 1) for Lc = b, and 1 + 2(n − 1) for Ux = c. So, roughly, a total of 4n operations.

On the other hand,

    $A^{-1} = \frac{1}{6} \begin{pmatrix} 5 & 4 & 3 & 2 & 1 \\ 4 & 8 & 6 & 4 & 2 \\ 3 & 6 & 9 & 6 & 3 \\ 2 & 4 & 6 & 8 & 4 \\ 1 & 2 & 3 & 4 & 5 \end{pmatrix}.$

How many operations are needed to compute A⁻¹b? This time, A⁻¹ is dense, so we need roughly 2n² additions and multiplications.

• Large matrices met in applications usually are not random but have some structure (such as band matrices).

• When solving linear equations, we do not (try to) compute A⁻¹.
  ◦ It destroys the structure present in practical problems.
  ◦ As a result, it can be orders of magnitude slower,
  ◦ and require orders of magnitude more memory.
  ◦ It is also numerically unstable.
  ◦ LU decomposition can be adjusted to avoid these drawbacks.

A practice problem

Example 2. Above we computed the LU decomposition for n = 5. For comparison, here are the details for computing the inverse when n = 3. Do it for n = 5, and appreciate just how much computation has to be done.

Invert $A = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}$.

Solution. Row reduce [A | I]:

    $\left(\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array}\right)$

$R_2 \to R_2 + \frac{1}{2} R_1$:

    $\left(\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ 0 & \frac{3}{2} & -1 & \frac{1}{2} & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array}\right)$

$R_3 \to R_3 + \frac{2}{3} R_2$:

    $\left(\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ 0 & \frac{3}{2} & -1 & \frac{1}{2} & 1 & 0 \\ 0 & 0 & \frac{4}{3} & \frac{1}{3} & \frac{2}{3} & 1 \end{array}\right)$

$R_1 \to \frac{1}{2} R_1$, $R_2 \to \frac{2}{3} R_2$, $R_3 \to \frac{3}{4} R_3$:

    $\left(\begin{array}{ccc|ccc} 1 & -\frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 \\ 0 & 1 & -\frac{2}{3} & \frac{1}{3} & \frac{2}{3} & 0 \\ 0 & 0 & 1 & \frac{1}{4} & \frac{1}{2} & \frac{3}{4} \end{array}\right)$

$R_2 \to R_2 + \frac{2}{3} R_3$, then $R_1 \to R_1 + \frac{1}{2} R_2$:

    $\left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{3}{4} & \frac{1}{2} & \frac{1}{4} \\ 0 & 1 & 0 & \frac{1}{2} & 1 & \frac{1}{2} \\ 0 & 0 & 1 & \frac{1}{4} & \frac{1}{2} & \frac{3}{4} \end{array}\right)$

Hence,

    $\begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}^{-1} = \begin{pmatrix} \frac{3}{4} & \frac{1}{2} & \frac{1}{4} \\ \frac{1}{2} & 1 & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{2} & \frac{3}{4} \end{pmatrix}.$

Vector spaces and subspaces

We have already encountered vectors in Rⁿ. Now we discuss the general concept of vectors: in place of the space Rⁿ, we think of general vector spaces.

Definition 3.
A vector space is a nonempty set V of elements, called vectors, which may be added and scaled (multiplied with real numbers).

The two operations of addition and scalar multiplication must satisfy the following axioms for all u, v, w in V and all scalars c, d:

(a) u + v is in V
(b) u + v = v + u
(c) (u + v) + w = u + (v + w)
(d) there is a vector (called the zero vector) 0 in V such that u + 0 = u for all u in V
(e) there is a vector −u such that u + (−u) = 0
(f) cu is in V
(g) c(u + v) = cu + cv
(h) (c + d)u = cu + du
(i) (cd)u = c(du)
(j) 1u = u

tl;dr: A vector space is a collection of vectors which can be added and scaled (without leaving the space!), subject to the usual rules you would hope for, namely associativity, commutativity, and distributivity.

Example 4. Convince yourself that

    $M_{2\times 2} = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} : a, b, c, d \text{ in } \mathbb{R} \right\}$

is a vector space.

Solution. In this context, the zero vector is $0 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$.

Addition is componentwise:

    $\begin{pmatrix} a & b \\ c & d \end{pmatrix} + \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} a+e & b+f \\ c+g & d+h \end{pmatrix}$

Scaling is componentwise:

    $r \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} ra & rb \\ rc & rd \end{pmatrix}$

Addition and scaling satisfy the axioms of a vector space because they are defined component-wise and because ordinary addition and multiplication are associative, commutative, distributive, and so on.

Important note: we do not use matrix multiplication here!

Note: as a vector space, $M_{2\times 2}$ behaves precisely like R⁴; we could translate between the two via

    $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \longleftrightarrow \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix}.$

A fancy person would say that these two vector spaces are isomorphic.

Example 5. Let $P_n$ be the set of all polynomials of degree at most $n \ge 0$. Is $P_n$ a vector space?

Solution.
Members of $P_n$ are of the form

    $p(t) = a_0 + a_1 t + \cdots + a_n t^n,$

where $a_0, a_1, \ldots, a_n$ are in R and t is a variable. $P_n$ is a vector space.

Adding two polynomials:

    $[a_0 + a_1 t + \cdots + a_n t^n] + [b_0 + b_1 t + \cdots + b_n t^n] = (a_0 + b_0) + (a_1 + b_1) t + \cdots + (a_n + b_n) t^n$

So addition works "component-wise" again.

Scaling a polynomial:

    $r [a_0 + a_1 t + \cdots + a_n t^n] = (r a_0) + (r a_1) t + \cdots + (r a_n) t^n$

Scaling works "component-wise" as well.

Again: the vector space axioms are satisfied because addition and scaling are defined component-wise.

As in the previous example, we see that $P_n$ is isomorphic to $R^{n+1}$:

    $a_0 + a_1 t + \cdots + a_n t^n \longleftrightarrow \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{pmatrix}$

Example 6. Let V be the set of all polynomials of degree exactly 3. Is V a vector space?

Solution. No, because V does not contain the zero polynomial p(t) = 0.

Every vector space has to have a zero vector; this is an easy necessary (but not sufficient) criterion when thinking about whether a set is a vector space.
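The component-wise theme of Examples 4 and 5 can be made concrete in code: under the translations above, adding and scaling 2×2 matrices or polynomials is exactly adding and scaling vectors in R⁴ or R^(n+1). A small NumPy sketch (the `flat` helper below is our name for the isomorphism chosen in Example 4):

```python
import numpy as np

# M2x2 as a vector space: addition and scaling are componentwise
# (matrix multiplication plays no role here).
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
r = 2.5

def flat(M):
    """The translation M2x2 -> R^4 from Example 4: read off the entries a, b, c, d."""
    return M.reshape(4)

# Translating before or after adding/scaling gives the same answer;
# that compatibility is what "isomorphic" means operationally.
assert np.array_equal(flat(A + B), flat(A) + flat(B))
assert np.array_equal(flat(r * A), r * flat(A))

# P_n as a vector space: a0 + a1*t + ... + an*t^n is just its coefficient
# vector (a0, a1, ..., an) in R^(n+1).  Here n = 2.
p = np.array([1.0, 0.0, 2.0])  # 1 + 2t^2
q = np.array([0.0, 3.0, 1.0])  # 3t + t^2
print(p + q)   # coefficients of 1 + 3t + 3t^2
print(r * p)   # coefficients of 2.5 + 5t^2
```

The degree-exactly-3 set of Example 6 fails this setup immediately: the zero coefficient vector is not in the set, and adding $t^3$ and $-t^3$ already leaves it.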

