Subgradient Methods for Constrained Problems

• projected subgradient method
• projected subgradient for dual
• subgradient method for constrained optimization

Prof. S. Boyd, EE364b, Stanford University


Projected subgradient method

solves the constrained optimization problem

    minimize    $f(x)$
    subject to  $x \in C$,

where $f : \mathbf{R}^n \to \mathbf{R}$ and $C \subseteq \mathbf{R}^n$ are convex

the projected subgradient method is given by

    $x^{(k+1)} = \Pi\left( x^{(k)} - \alpha_k g^{(k)} \right),$

where $\Pi$ is (Euclidean) projection onto $C$ and $g^{(k)} \in \partial f(x^{(k)})$

same convergence results as the ordinary subgradient method:
• for constant step size, converges to a neighborhood of the optimum (for $f$ differentiable and $\alpha$ small enough, converges)
• for diminishing nonsummable step sizes, converges

key idea: projection does not increase the distance to $x^\star$


Linear equality constraints

    minimize    $f(x)$
    subject to  $Ax = b$

the projection of $z$ onto $\{x \mid Ax = b\}$ is

    $\Pi(z) = z - A^T (AA^T)^{-1} (Az - b) = (I - A^T (AA^T)^{-1} A) z + A^T (AA^T)^{-1} b$

so the projected subgradient update is (using $Ax^{(k)} = b$)

    $x^{(k+1)} = \Pi\left( x^{(k)} - \alpha_k g^{(k)} \right) = x^{(k)} - \alpha_k (I - A^T (AA^T)^{-1} A) g^{(k)} = x^{(k)} - \alpha_k \Pi_{\mathcal{N}(A)}\left( g^{(k)} \right)$

i.e., the subgradient is projected onto the nullspace of $A$ before the step, so the iterates stay feasible


Example: least $\ell_1$-norm

    minimize    $\|x\|_1$
    subject to  $Ax = b$

a subgradient of the objective is $g = \mathbf{sign}(x)$, so the projected subgradient update is

    $x^{(k+1)} = x^{(k)} - \alpha_k (I - A^T (AA^T)^{-1} A) \, \mathbf{sign}(x^{(k)})$

problem instance with $n = 1000$, $m = 50$, step size $\alpha_k = 0.1/k$, $f^\star \approx 3.2$

[figure: $f^{(k)}_{\mathrm{best}} - f^\star$ (log scale, $10^{-2}$ to $10^1$) versus $k$, $0 \leq k \leq 3000$]


Projected subgradient for dual problem

(convex) primal:

    minimize    $f_0(x)$
    subject to  $f_i(x) \leq 0, \quad i = 1, \ldots, m$

solve the dual problem

    maximize    $g(\lambda)$
    subject to  $\lambda \succeq 0$

via the projected subgradient method:

    $\lambda^{(k+1)} = \left( \lambda^{(k)} - \alpha_k h \right)_+, \qquad h \in \partial(-g)(\lambda^{(k)})$


Subgradient of negative dual function

assume $f_0$ is strictly convex, and denote, for $\lambda \succeq 0$,

    $x^*(\lambda) = \operatorname{argmin}_z \left( f_0(z) + \lambda_1 f_1(z) + \cdots + \lambda_m f_m(z) \right)$

so that $g(\lambda) = f_0(x^*(\lambda)) + \lambda_1 f_1(x^*(\lambda)) + \cdots + \lambda_m f_m(x^*(\lambda))$

a subgradient of $-g$ at $\lambda$ is given by $h_i = -f_i(x^*(\lambda))$, so the projected subgradient method for the dual is

    $x^{(k)} = x^*(\lambda^{(k)}), \qquad \lambda_i^{(k+1)} = \left( \lambda_i^{(k)} + \alpha_k f_i(x^{(k)}) \right)_+$

• primal iterates $x^{(k)}$ are not feasible, but become feasible in the limit (sometimes a feasible, suboptimal $\tilde{x}^{(k)}$ can be found from $x^{(k)}$)
• dual function values $g(\lambda^{(k)})$ converge to $f^\star = f_0(x^\star)$

interpretation:
• $\lambda_i$ is the price for 'resource' $f_i(x)$
• price update $\lambda_i^{(k+1)} = \left( \lambda_i^{(k)} + \alpha_k f_i(x^{(k)}) \right)_+$:
  – increase price $\lambda_i$ if resource $i$ is over-utilized (i.e., $f_i(x) > 0$)
  – decrease price $\lambda_i$ if resource $i$ is under-utilized (i.e., $f_i(x) < 0$)
  – but never let prices become negative


Example

minimize a strictly convex quadratic ($P \succ 0$) over the unit box:

    minimize    $(1/2) x^T P x - q^T x$
    subject to  $x_i^2 \leq 1, \quad i = 1, \ldots, n$

• $L(x, \lambda) = (1/2) x^T (P + \mathbf{diag}(2\lambda)) x - q^T x - \mathbf{1}^T \lambda$
• $x^*(\lambda) = (P + \mathbf{diag}(2\lambda))^{-1} q$
• projected subgradient for the dual:

    $x^{(k)} = (P + \mathbf{diag}(2\lambda^{(k)}))^{-1} q, \qquad \lambda_i^{(k+1)} = \left( \lambda_i^{(k)} + \alpha_k \left( (x_i^{(k)})^2 - 1 \right) \right)_+$

problem instance with $n = 50$, fixed step size $\alpha = 0.1$, $f^\star \approx -5.3$; $\tilde{x}^{(k)}$ is a nearby feasible point for $x^{(k)}$

[figure: upper and lower bounds $f_0(\tilde{x}^{(k)})$ (upper) and $g(\lambda^{(k)})$ (lower) versus $k$, $5 \leq k \leq 40$]


Subgradient method for constrained optimization

solves the constrained optimization problem

    minimize    $f_0(x)$
    subject to  $f_i(x) \leq 0, \quad i = 1, \ldots, m,$

where $f_i : \mathbf{R}^n \to \mathbf{R}$ are convex

same update $x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}$, but now

    $g^{(k)} \in \begin{cases} \partial f_0(x^{(k)}) & f_i(x^{(k)}) \leq 0, \quad i = 1, \ldots, m \\ \partial f_j(x^{(k)}) & f_j(x^{(k)}) > 0 \end{cases}$

i.e., take an objective subgradient when the current point is feasible, and a subgradient of a violated constraint otherwise; define

    $f^{(k)}_{\mathrm{best}} = \min \{ f_0(x^{(i)}) \mid x^{(i)} \text{ feasible}, \ i = 1, \ldots, k \}$
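The methods above are short enough to sketch in code. Below is a minimal Python/NumPy sketch of the basic projected subgradient iteration $x^{(k+1)} = \Pi(x^{(k)} - \alpha_k g^{(k)})$; the oracle names f, subgrad, proj, and step are hypothetical placeholders supplied by the caller, not anything defined on the slides.

    import numpy as np

    def projected_subgradient(f, subgrad, proj, x0, step, iters=1000):
        # minimize convex f over C via x := Pi(x - alpha_k g), g in df(x);
        # f, subgrad, proj, step are caller-supplied (hypothetical) oracles
        x = proj(np.asarray(x0, dtype=float))   # start from a point in C
        x_best, f_best = x, f(x)
        for k in range(iters):
            x = proj(x - step(k) * subgrad(x))  # subgradient step, then project
            fx = f(x)
            if fx < f_best:                     # not a descent method, so
                x_best, f_best = x, fx          # remember the best iterate seen
        return x_best, f_best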
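As a concrete instance, here is a sketch of the least $\ell_1$-norm example, combining the affine projection $\Pi(z) = z - A^T(AA^T)^{-1}(Az - b)$ with the sign subgradient. Only the update rule and the step size $\alpha_k = 0.1/k$ come from the slides; the small random instance below stands in for the slides' $n = 1000$, $m = 50$ problem.

    import numpy as np

    # least l1-norm: minimize ||x||_1 subject to Ax = b
    # (small random instance; the slides use n = 1000, m = 50)
    rng = np.random.default_rng(0)
    m, n = 10, 50
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    AAt = A @ A.T
    def proj(z):
        # projection onto {x | Ax = b}: z - A^T (A A^T)^{-1} (A z - b)
        return z - A.T @ np.linalg.solve(AAt, A @ z - b)

    x = proj(np.zeros(n))            # least 2-norm solution, a feasible start
    for k in range(1, 3001):
        g = np.sign(x)               # subgradient of ||x||_1 at x
        x = proj(x - (0.1 / k) * g)  # step size alpha_k = 0.1/k, as on slides
    print(np.linalg.norm(x, 1))      # objective value of the final iterate

Since every iterate stays feasible, each step is exactly $x^{(k)} - \alpha_k (I - A^T(AA^T)^{-1}A)\,\mathbf{sign}(x^{(k)})$, as on the slides.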
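Similarly, a sketch of the dual projected subgradient method for the box-constrained quadratic example. The instance is again a small random one (the slides use $n = 50$), and clipping $x^{(k)}$ onto the box to get the nearby feasible point $\tilde{x}^{(k)}$ is one simple choice; the slides do not specify how $\tilde{x}^{(k)}$ is constructed.

    import numpy as np

    # minimize (1/2) x^T P x - q^T x  subject to  x_i^2 <= 1, i = 1, ..., n
    # (small random instance; the slides use n = 50 and alpha = 0.1)
    rng = np.random.default_rng(0)
    n = 20
    B = rng.standard_normal((n, n))
    P = B @ B.T + n * np.eye(n)      # P > 0, so f0 is strictly convex
    q = rng.standard_normal(n)

    lam = np.zeros(n)                # prices lambda >= 0
    alpha = 0.1                      # fixed step size, as on the slides
    for k in range(100):
        # x*(lambda) = (P + diag(2 lambda))^{-1} q minimizes the Lagrangian
        x = np.linalg.solve(P + np.diag(2 * lam), q)
        # price update: lambda_i := (lambda_i + alpha (x_i^2 - 1))_+
        lam = np.maximum(lam + alpha * (x ** 2 - 1), 0.0)

    x_feas = np.clip(x, -1.0, 1.0)   # a nearby feasible point (one choice)
    lower = 0.5 * x @ P @ x - q @ x + lam @ (x ** 2 - 1)  # g(lambda) <= f*
    upper = 0.5 * x_feas @ P @ x_feas - q @ x_feas        # f0(x~)    >= f*
    print(lower, upper)

The printed pair brackets $f^\star$, matching the upper and lower bound plot described above.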
Convergence

assumptions:
• an optimal $x^\star$ exists, and Slater's condition holds
• $\|g^{(k)}\|_2 \leq G$ and $\|x^{(1)} - x^\star\|_2 \leq R$

typical result: for $\alpha_k > 0$, $\alpha_k \to 0$, $\sum_{i=1}^{\infty} \alpha_i = \infty$, we have $f^{(k)}_{\mathrm{best}} \to f^\star$


Example: inequality form LP

LP with $n = 20$ variables, $m = 200$ inequalities, $f^\star \approx -3.4$; $\alpha_k = 1/k$ for the optimality step, Polyak's step size for the feasibility step

[figure: $f^{(k)}_{\mathrm{best}} - f^\star$ (log scale, $10^{-2}$ to $10^1$) versus $k$, $0 \leq k \leq 2500$]
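A sketch of this constrained subgradient method on an inequality form LP, in the spirit of the example above: an optimality step with $\alpha_k = 1/k$ when the current point is feasible, and a Polyak step on the most violated constraint otherwise. The code uses the slides' dimensions but random data, so it is illustrative rather than a reproduction of the slides' instance.

    import numpy as np

    # minimize c^T x  subject to  Ax <= b (inequality form LP)
    # random data with the slides' dimensions n = 20, m = 200
    rng = np.random.default_rng(0)
    n, m = 20, 200
    A = rng.standard_normal((m, n))
    b = rng.random(m)                # b > 0, so x = 0 is feasible
    c = rng.standard_normal(n)

    x = np.zeros(n)
    f_best = np.inf                  # best objective over feasible iterates
    for k in range(1, 2501):
        viol = A @ x - b             # constraint values f_i(x) = a_i^T x - b_i
        j = int(np.argmax(viol))
        if viol[j] <= 0:             # feasible: optimality step on f0 = c^T x
            f_best = min(f_best, c @ x)
            x = x - (1.0 / k) * c    # g = c in df0(x), step alpha_k = 1/k
        else:                        # infeasible: feasibility step on f_j
            g = A[j]                 # g in df_j(x)
            x = x - (viol[j] / (g @ g)) * g  # Polyak step toward f_j(x) = 0
    print(f_best)

With diminishing optimality steps and Polyak feasibility steps, $f^{(k)}_{\mathrm{best}}$ should approach $f^\star$, as in the plot described above.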

