Stanford EE 364B - Numerical Linear Algebra Background

Convex Optimization — Boyd & Vandenberghe
9. Numerical linear algebra background

• matrix structure and algorithm complexity
• solving linear equations with factored matrices
• LU, Cholesky, LDL^T factorization
• block elimination and the matrix inversion lemma
• solving underdetermined equations

Matrix structure and algorithm complexity

cost (execution time) of solving Ax = b with A ∈ R^{n×n}
• for general methods, grows as n^3
• less if A is structured (banded, sparse, Toeplitz, ...)

flop counts
• flop (floating-point operation): one addition, subtraction, multiplication, or division of two floating-point numbers
• to estimate complexity of an algorithm: express number of flops as a (polynomial) function of the problem dimensions, and simplify by keeping only the leading terms
• not an accurate predictor of computation time on modern computers
• useful as a rough estimate of complexity

vector-vector operations (x, y ∈ R^n)
• inner product x^T y: 2n − 1 flops (or 2n if n is large)
• sum x + y, scalar multiplication αx: n flops

matrix-vector product y = Ax with A ∈ R^{m×n}
• m(2n − 1) flops (or 2mn if n large)
• 2N if A is sparse with N nonzero elements
• 2p(n + m) if A is given as A = UV^T, U ∈ R^{m×p}, V ∈ R^{n×p}

matrix-matrix product C = AB with A ∈ R^{m×n}, B ∈ R^{n×p}
• mp(2n − 1) flops (or 2mnp if n large)
• less if A and/or B are sparse
• (1/2)m(m + 1)(2n − 1) ≈ m^2 n if m = p and C symmetric

Linear equations that are easy to solve

diagonal matrices (a_ij = 0 if i ≠ j): n flops

    x = A^{-1} b = (b_1/a_11, ..., b_n/a_nn)

lower triangular (a_ij = 0 if j > i): n^2 flops

    x_1 := b_1/a_11
    x_2 := (b_2 − a_21 x_1)/a_22
    x_3 := (b_3 − a_31 x_1 − a_32 x_2)/a_33
    ...
    x_n := (b_n − a_n1 x_1 − a_n2 x_2 − ··· − a_{n,n−1} x_{n−1})/a_nn

called forward substitution

upper triangular (a_ij = 0 if j < i): n^2 flops via backward substitution

orthogonal matrices: A^{-1} = A^T
• 2n^2 flops to compute x = A^T b for general A
• less with structure, e.g., if A = I − 2uu^T with ||u||_2 = 1, we can compute x = A^T b = b − 2(u^T b)u in 4n flops

permutation matrices:

    a_ij = 1 if j = π_i, 0 otherwise

where π = (π_1, π_2, ..., π_n) is a permutation of (1, 2, ..., n)
• interpretation: Ax = (x_{π_1}, ..., x_{π_n})
• satisfies A^{-1} = A^T, hence cost of solving Ax = b is 0 flops

example:

    A = [ 0 1 0 ]       A^{-1} = A^T = [ 0 0 1 ]
        [ 0 0 1 ]                      [ 1 0 0 ]
        [ 1 0 0 ]                      [ 0 1 0 ]

The factor-solve method for solving Ax = b

• factor A as a product of simple matrices (usually 2 or 3):

    A = A_1 A_2 ··· A_k

  (A_i diagonal, upper or lower triangular, etc.)
• compute x = A^{-1} b = A_k^{-1} ··· A_2^{-1} A_1^{-1} b by solving k 'easy' equations

    A_1 x_1 = b,  A_2 x_2 = x_1,  ...,  A_k x = x_{k−1}

cost of factorization step usually dominates cost of solve step

equations with multiple righthand sides

    A x_1 = b_1,  A x_2 = b_2,  ...,  A x_m = b_m

cost: one factorization plus m solves

LU factorization

every nonsingular matrix A can be factored as

    A = PLU

with P a permutation matrix, L lower triangular, U upper triangular
cost: (2/3)n^3 flops

Solving linear equations by LU factorization.
given a set of linear equations Ax = b, with A nonsingular.
1. LU factorization. Factor A as A = PLU ((2/3)n^3 flops).
2. Permutation. Solve P z_1 = b (0 flops).
3. Forward substitution. Solve L z_2 = z_1 (n^2 flops).
4. Backward substitution. Solve U x = z_2 (n^2 flops).

cost: (2/3)n^3 + 2n^2 ≈ (2/3)n^3 for large n

sparse LU factorization

    A = P_1 L U P_2

• adding permutation matrix P_2 offers possibility of sparser L, U (hence, cheaper factor and solve steps)
• P_1 and P_2 chosen (heuristically) to yield sparse L, U
• choice of P_1 and P_2 depends on sparsity pattern and values of A
• cost is usually much less than (2/3)n^3; exact value depends in a complicated way on n, number of zeros in A, sparsity pattern

Cholesky factorization

every positive definite A can be factored as

    A = LL^T

with L lower triangular
cost: (1/3)n^3 flops

Solving linear equations by Cholesky factorization.
given a set of linear equations Ax = b, with A ∈ S^n_++.
1. Cholesky factorization. Factor A as A = LL^T ((1/3)n^3 flops).
2. Forward substitution. Solve L z_1 = b (n^2 flops).
3. Backward substitution. Solve L^T x = z_1 (n^2 flops).

cost: (1/3)n^3 + 2n^2 ≈ (1/3)n^3 for large n

sparse Cholesky factorization

    A = P L L^T P^T

• adding permutation matrix P offers possibility of sparser L
• P chosen (heuristically) to yield sparse L
• choice of P only depends on sparsity pattern of A (unlike sparse LU)
• cost is usually much less than (1/3)n^3; exact value depends in a complicated way on n, number of zeros in A, sparsity pattern

LDL^T factorization

every nonsingular symmetric matrix A can be factored as

    A = P L D L^T P^T

with P a permutation matrix, L lower triangular, D block diagonal with 1×1 or 2×2 diagonal blocks
cost: (1/3)n^3
• cost of solving symmetric sets of linear equations by LDL^T factorization: (1/3)n^3 + 2n^2 ≈ (1/3)n^3 for large n
• for sparse A, can choose P to yield sparse L; cost ≪ (1/3)n^3

Equations with structured sub-blocks

    [ A_11  A_12 ] [ x_1 ]   [ b_1 ]
    [ A_21  A_22 ] [ x_2 ] = [ b_2 ]        (1)

• variables x_1 ∈ R^{n_1}, x_2 ∈ R^{n_2}; blocks A_ij ∈ R^{n_i × n_j}
• if A_11 is nonsingular, can eliminate x_1: x_1 = A_11^{-1}(b_1 − A_12 x_2); to compute x_2, solve

    (A_22 − A_21 A_11^{-1} A_12) x_2 = b_2 − A_21 A_11^{-1} b_1

Solving linear equations by block elimination.
given a nonsingular set of linear equations (1), with A_11 nonsingular.
1. Form A_11^{-1} A_12 and A_11^{-1} b_1.
2. Form S = A_22 − A_21 A_11^{-1} A_12 and b̃ = b_2 − A_21 A_11^{-1} b_1.
3. Determine x_2 by solving S x_2 = b̃.
4. Determine x_1 by solving A_11 x_1 = b_1 − A_12 x_2.

dominant terms in flop count
• step 1: f + n_2 s (f is cost of factoring A_11; s is cost of solve step)
• step 2: 2 n_2^2 n_1 (cost dominated by product of A_21 and A_11^{-1} A_12)
• step 3: (2/3) n_2^3
total: f + n_2 s + 2 n_2^2 n_1 + (2/3) n_2^3

examples
• general A_11 (f = (2/3)n_1^3, s = 2n_1^2): no gain over standard method

    #flops = (2/3)n_1^3 + 2n_1^2 n_2 + 2n_2^2 n_1 + (2/3)n_2^3 = (2/3)(n_1 + n_2)^3

• block elimination is useful for structured A_11 (f ≪ n_1^3)
  for example, diagonal (f = 0, s = n_1): #flops ≈ 2n_2^2 n_1 + (2/3)n_2^3

Structured matrix plus low rank term

    (A + BC)x = b

• A ∈ R^{n×n}, B ∈ R^{n×p}, C ∈ R^{p×n}
• assume A has structure (Ax = b easy to solve)

first write
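The forward-substitution recursion above translates directly into code. This is a minimal NumPy sketch for illustration; in practice one would call an optimized triangular solver such as scipy.linalg.solve_triangular.

```python
import numpy as np

def forward_substitution(A, b):
    """Solve Ax = b for lower-triangular A by the recursion above (~n^2 flops)."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # x_i := (b_i - a_i1 x_1 - ... - a_{i,i-1} x_{i-1}) / a_ii
        x[i] = (b[i] - A[i, :i] @ x[:i]) / A[i, i]
    return x

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])
b = np.array([2.0, 7.0, 32.0])
x = forward_substitution(L, b)   # -> [1.0, 2.0, 3.0]
```

Backward substitution for an upper-triangular system is the same recursion run from i = n down to 1.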
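The 4n-flop trick for the Householder-type orthogonal matrix A = I − 2uu^T can be sketched as follows; the dense matrix A is formed here only to check the result, never in a real computation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
u = rng.standard_normal(n)
u /= np.linalg.norm(u)            # enforce ||u||_2 = 1
b = rng.standard_normal(n)

# x = A^T b = b - 2 (u^T b) u: one inner product plus an axpy, ~4n flops
x = b - 2 * (u @ b) * u

# dense A = I - 2 u u^T, built only for verification (costs O(n^2))
A = np.eye(n) - 2 * np.outer(u, u)
```

Since A is orthogonal, x solves Ax = b as well as x = A^T b.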
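The factor-solve method with multiple righthand sides is exactly what SciPy's LU routines expose: factor once at (2/3)n^3 flops, then each solve costs only about 2n^2.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
b1, b2 = rng.standard_normal(5), rng.standard_normal(5)

# factor once: lu_factor returns the packed L and U factors plus
# the pivot (permutation) information, i.e., A = PLU
lu, piv = lu_factor(A)

# each solve reuses the factorization: permutation, forward
# substitution, backward substitution
x1 = lu_solve((lu, piv), b1)
x2 = lu_solve((lu, piv), b2)
```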
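Sparse LU factorization A = P_1 L U P_2 is available through SciPy's SuperLU wrapper. A sketch on a made-up diagonally dominant sparse matrix (the matrix and sizes here are illustrative, not from the slides):

```python
import numpy as np
from scipy.sparse import random as sprandom, eye as speye
from scipy.sparse.linalg import splu

rng = np.random.default_rng(3)
n = 200
# random sparse matrix, shifted to be diagonally dominant (hence nonsingular)
A = (sprandom(n, n, density=0.01, random_state=3) + 10 * speye(n)).tocsc()

lu = splu(A)                 # SuperLU picks fill-reducing permutations heuristically
b = rng.standard_normal(n)
x = lu.solve(b)              # cheap solves, reusing the sparse factors
```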
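The Cholesky factor-solve procedure above maps onto SciPy's cho_factor/cho_solve pair: one (1/3)n^3 factorization followed by two n^2 triangular solves.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)    # a positive definite test matrix

c, low = cho_factor(A)          # step 1: A = L L^T
b = rng.standard_normal(4)
x = cho_solve((c, low), b)      # steps 2-3: forward then backward substitution
```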
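The four block-elimination steps can be sketched directly. Here A_11 is handled with a generic dense solve, so there is no flop saving; the point of the method is that in applications A_11 is structured (e.g., diagonal or banded) and the A_11 solves become cheap.

```python
import numpy as np

def solve_block_elimination(A11, A12, A21, A22, b1, b2):
    """Solve the 2x2 block system (1) by block elimination."""
    # step 1: A11^{-1} A12 and A11^{-1} b1 (one factorization, n2 + 1 solves)
    Z = np.linalg.solve(A11, np.column_stack([A12, b1]))
    A11inv_A12, A11inv_b1 = Z[:, :-1], Z[:, -1]
    # step 2: Schur complement S and modified righthand side
    S = A22 - A21 @ A11inv_A12
    b_tilde = b2 - A21 @ A11inv_b1
    # step 3: solve for x2
    x2 = np.linalg.solve(S, b_tilde)
    # step 4: back out x1
    x1 = np.linalg.solve(A11, b1 - A12 @ x2)
    return x1, x2

rng = np.random.default_rng(4)
n1, n2 = 4, 3
A11 = rng.standard_normal((n1, n1)) + 5 * np.eye(n1)
A12 = rng.standard_normal((n1, n2))
A21 = rng.standard_normal((n2, n1))
A22 = rng.standard_normal((n2, n2)) + 5 * np.eye(n2)
b1, b2 = rng.standard_normal(n1), rng.standard_normal(n2)
x1, x2 = solve_block_elimination(A11, A12, A21, A22, b1, b2)
```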

