10.34, Numerical Methods Applied to Chemical Engineering
Professor William H. Green
Lecture #17: Constrained Optimization

Cite as: William Green, Jr., course materials for 10.34 Numerical Methods Applied to Chemical Engineering, Fall 2006. MIT OpenCourseWare (http://ocw.mit.edu), Massachusetts Institute of Technology. Downloaded on [DD Month YYYY].

Topics: Constrained Optimization; Augmented Lagrangian; More Than One Constraint; Inequality Constraints; Karush-Kuhn-Tucker (KKT) conditions; Sequential Quadratic Programming (SQP)

Notation

"Second derivative of f(x)": we normally mean f_xx, the Hessian matrix

    H = \begin{pmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} \\ \frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} \end{pmatrix}

written ∇²f in Beers. But "second derivative" can also mean the Laplacian, the scalar Tr{H} = ∂²f/∂x_1² + ∂²f/∂x_2²; ∇²f in physics texts usually means ∇·(∇f).

Constrained Optimization

Equality constraints:  min_x f(x)  such that  g(x) = 0

We may be able to invert the constraint as x_N = G(x_1, x_2, ..., x_{N-1}). Then we can restate the minimization as

    min f(x_1, x_2, ..., x_{N-1}, G(x_1, x_2, ..., x_{N-1}))

Notice that x_N is gone: the constrained problem becomes unconstrained, and we can solve it with the previous methods.

The other way to do this: Lagrange multipliers.

Unconstrained: ∂f/∂x_n |_(x_min) = 0 at the minimum. Constrained problems do not work that way; the boundaries get in the way. Constrained:

    ∂f/∂x_n |_(const. min) = λ ∂g/∂x_n |_(const. min),   i.e.   ∇f|_(const. min) = λ ∇g|_(const. min)

The gradient of f equals 0 in directions parallel to the constraint, but not perpendicular to it.

Create a new function L(x, λ) = f(x) - λ g(x), where λ is unknown before you do the problem. Then ∇_x L = 0 at the constrained minimum, and setting ∂L/∂λ = 0 recovers g(x) = 0 there. The second derivatives are not necessarily all positive at the constrained minimum.

Augmented Lagrangian

    L_A = f(x) - λ g(x) + (1/(2μ[0])) (g(x))²

Minimize L_A over x, given initial guesses λ[0] and μ[0], to get x_min[0]. Then

    ∇_x L_A(x_min[0], λ[0]) = ∇f|_(x_min[0]) - λ[0] ∇g(x_min[0]) + (1/μ[0]) g(x) ∇g(x)
                            = ∇f - (λ[0] - g(x_min[0])/μ[0]) ∇g(x)

so the updated multiplier estimate is λ[1] = λ[0] - g(x_min[0])/μ[0]. As μ[0] shrinks, 1/(2μ[0]) gets large, magnifying the (g(x))² term and holding the constraint more strictly. Minimize L_A over x again using λ[1] to get a new x_min, and repeat.

In quantum mechanics, λ corresponds to orbital energies; most of the time, λ does not have a physical meaning. μ[0] is a mathematical trick.
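
As a concrete illustration of the loop just described, here is a minimal MATLAB sketch for a single equality constraint. The test problem, the fixed halving of μ, the iteration count, and the use of fminsearch for the inner unconstrained minimization are illustrative assumptions, not prescribed by the lecture.

    % Sketch: augmented Lagrangian for min f(x) s.t. g(x) = 0
    % Illustrative test problem: f(x) = (x1-2)^2 + (x2-1)^2, g(x) = x1 + x2 - 1
    f = @(x) (x(1)-2)^2 + (x(2)-1)^2;
    g = @(x) x(1) + x(2) - 1;

    lambda = 0;       % lambda[0]: initial multiplier guess
    mu     = 1;       % mu[0]: penalty parameter, shrunk each pass
    x      = [0; 0];  % starting point

    for k = 1:10
        % L_A = f(x) - lambda*g(x) + (1/(2*mu))*g(x)^2
        LA = @(x) f(x) - lambda*g(x) + g(x)^2/(2*mu);
        x  = fminsearch(LA, x);        % inner unconstrained minimization
        lambda = lambda - g(x)/mu;     % multiplier update: lambda[k+1] = lambda[k] - g(x_min)/mu
        mu = mu/2;                     % shrinking mu enforces the constraint more strictly
    end
    % For this test problem x approaches the constrained minimum (1, 0)
    % and lambda approaches the Lagrange multiplier -2.
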
More Than One Constraint

Suppose you have more than one constraint:

    g_1(x) = 0,  g_2(x) = 0,  g_3(x) = 0, ...

Make sure these are compatible, i.e. that there is a "feasible space": a set of x that satisfies all the constraints. Then

    L = f(x) - Σ_i λ_i g_i(x),   and   ∇L = 0   gives   ∇f = Σ_i λ_i ∇g_i

Inequality Constraints

Very common:

    min f(x)   s.t.   g(x) = 0,  h(x) ≥ 0

Active inequality constraints: h(x_min) = 0. Inactive inequality constraints: h(x_min) > 0. Usually we do not know whether the h's are active or inactive before doing a problem, but we must leave them in during the optimization process to allow the solution to be found:

    if h_j is active:   h_j = 0, κ_j ≥ 0, and ∇f = κ_j ∇h_j
    if h_j is inactive: h_j > 0 and κ_j = 0, so ∇h_j can be anything; it does not affect the problem

Either way, κ_j h_j(x_const. min) = 0.

Karush-Kuhn-Tucker (KKT) conditions

    L = f(x) - Σ_i λ_i g_i(x) - Σ_j κ_j h_j(x)
    ∇L(x_min) = 0
    g(x_min) = 0
    h(x_min) ≥ 0
    κ_j ≥ 0
    κ_j h_j = 0

To handle active/inactive constraints, add slack variables:

    h_j(x) ≥ 0   becomes   h_j(x) - s_j = 0,  s_j ≥ 0

Augmented method: the optimal slack is s_j = max{ h_j(x) - μ[k] κ_j[k], 0 }, and

    L_A = f(x) - Σ_i λ_i g_i - Σ_j κ_j h_j - Σ_j μ[k] (κ_j[k])² + (1/(2μ[0])) (g_i² + h_j² + (μ[0] κ_j)²)

Set F(x) = ∇L_A = 0 and solve with Newton's method, using Broyden to approximate the Hessian matrix; i.e. solve

    J_(L_A) Δx = -∇L_A

The Jacobian is messy:

    \begin{pmatrix} \left.\frac{\partial^2 L}{\partial x_i \partial x_j}\right|_{old} & -\left.\left(\frac{\partial c_j}{\partial x_m}\right)^T\right|_{old} \\ \left.\frac{\partial c_j}{\partial x_n}\right|_{old} & 0 \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta \lambda \end{pmatrix} = \begin{pmatrix} -\nabla f|_{x_{old}} \\ -c(x_{old}) \end{pmatrix}

If we want to solve

    min_p (1/2) p^T (∂²L/∂x_i∂x_j)|_(x_old) p + ∇f|_(x_old) · p
    such that   c_m(x_old) + Σ_j (∂c_m/∂x_j)|_(x_old) p_j = 0   for all m = 1, ..., N_constraints

we can easily get p (the same as Δx above). This is a "quadratic program".

Sequential Quadratic Programming (SQP)

Solve a sequence of these quadratic programs, updating x_old after each one. In MATLAB: fmincon.
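
To make the block linear system above concrete, here is a minimal MATLAB sketch of a single Newton/KKT step. The quadratic objective and single linear constraint are illustrative assumptions; because the objective is quadratic and the constraint linear, one step lands exactly on the constrained minimum.

    % Sketch: one Newton step on the KKT system for min f(x) s.t. c(x) = 0
    % Illustrative problem: f(x) = x1^2 + x2^2, c(x) = x1 + x2 - 1
    xold  = [0; 0];
    gradf = 2*xold;                  % gradient of f at xold
    H     = 2*eye(2);                % Hessian of L in x (here just the Hessian of f, since c is linear)
    A     = [1 1];                   % dc/dx, the constraint Jacobian at xold
    c     = xold(1) + xold(2) - 1;   % constraint value at xold

    % Block KKT matrix and right-hand side, as in the system above
    KKT = [H, -A'; A, 0];
    rhs = [-gradf; -c];
    sol = KKT \ rhs;                 % this linear solve is the "quadratic program" step

    dx   = sol(1:2);                 % step in x (the p of the QP subproblem)
    dlam = sol(3);                   % multiplier update
    xnew = xold + dx;                % here xnew = [0.5; 0.5], the constrained minimum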

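The SQP machinery is packaged in MATLAB's fmincon (Optimization Toolbox). Below is a minimal usage sketch on an illustrative test problem; note that fmincon expects nonlinear inequalities in the form c(x) ≤ 0, so the lecture's h(x) ≥ 0 is passed as -h(x) ≤ 0, and the explicit 'sqp' algorithm option assumes a reasonably recent MATLAB release (it postdates these 2006 notes).

    % Sketch: constrained minimization with fmincon using the SQP algorithm
    % Illustrative problem: min (x1-2)^2 + (x2-1)^2
    %   s.t.  g(x) = x1 + x2 - 1 = 0    (equality)
    %         h(x) = x1 >= 0            (inequality, passed as -x1 <= 0)
    fobj = @(x) (x(1)-2)^2 + (x(2)-1)^2;
    x0   = [0; 0];

    % nonlcon must return [c, ceq] with c(x) <= 0 and ceq(x) = 0
    nonlcon = @(x) deal(-x(1), x(1) + x(2) - 1);

    opts = optimoptions('fmincon', 'Algorithm', 'sqp');
    [xmin, fmin, exitflag, output, lagrange] = ...
        fmincon(fobj, x0, [], [], [], [], [], [], nonlcon, opts);
    % lagrange.eqnonlin and lagrange.ineqnonlin hold the multiplier estimates for ceq and c

For this test problem the minimizer is x_min = (1, 0); the inequality h(x) = x1 ≥ 0 is inactive there, so the returned lagrange.ineqnonlin is essentially zero, consistent with the complementarity condition κ_j h_j = 0.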
