Lagrangian Relaxations and Randomization

• Semidefinite relaxations
• Lagrangian relaxations for general QCQPs
• Randomization
• Bounds on suboptimality (MAXCUT)


Easy & Hard Problems

the classical view on complexity:

• linear is easy
• nonlinear is hard(er)

the EE364 (and correct) view:

• convex is easy
• nonconvex is hard(er)


Convex Optimization

    minimize    f_0(x)
    subject to  f_i(x) ≤ 0,  i = 1, ..., m

x ∈ R^n is the optimization variable; the functions f_i : R^n → R are convex, i.e.,

    f_i(λx + (1 − λ)y) ≤ λ f_i(x) + (1 − λ) f_i(y)

for all x, y and 0 ≤ λ ≤ 1

• includes LS, LP, QP, and many others
• like LS, LP, and QP, convex problems are fundamentally tractable


Nonconvex Problems

nonconvexity makes problems essentially intractable...

• sometimes this is the result of a bad problem formulation
• often, however, it arises from some natural limitation: fixed transaction costs, binary communications, ...


Example: Robust LP

    minimize    c^T x
    subject to  Prob(a_i^T x ≤ b_i) ≥ η,  i = 1, ..., m

the coefficient vectors a_i are IID, a_i ~ N(ā_i, Σ_i); η is the required reliability

• for fixed x, a_i^T x is N(ā_i^T x, x^T Σ_i x)
• so for η = 50%, the robust LP reduces to the LP

    minimize    c^T x
    subject to  ā_i^T x ≤ b_i,  i = 1, ..., m

  and so is easily solved
• what about other values of η, e.g., η = 10%? η = 90%?


Hint

(figure: the feasible set {x | Prob(a_i^T x ≤ b_i) ≥ η, i = 1, ..., m}, shown for η = 10%, η = 50%, and η = 90%)


That's right

the robust LP with reliability η = 90% is convex, and very easily solved; the robust LP with reliability η = 10% is not convex, and extremely difficult

moral: very difficult and very easy problems can look quite similar (to the untrained eye)


Nonconvex Problems

what can be done? ... we will use convex optimization results to:

• find bounds on the optimal value, by relaxation
• get "good" feasible points via randomization


Basic Problem

• we focus here on a specific class of problems: general QCQPs
• these have a vast range of applications...

the generic QCQP can be written as

    minimize    x^T P_0 x + q_0^T x + r_0
    subject to  x^T P_i x + q_i^T x + r_i ≤ 0,  i = 1, ..., m

• if all the P_i are p.s.d., this is a convex problem: use EE364...
• here, we suppose at least one P_i is not p.s.d.


Example: Boolean Least Squares

the Boolean least-squares problem is

    minimize    ‖Ax − b‖²
    subject to  x_i² = 1,  i = 1, ..., n

• a basic problem in digital communications
• we could check all 2^n possible values of x ...
• it is NP-hard, and very hard in practice
• many heuristics exist for computing approximate solutions


Example: Partitioning Problem

the two-way partitioning problem described in §5.1.4 of the EE364 reader:

    minimize    x^T W x
    subject to  x_i² = 1,  i = 1, ..., n

where W ∈ S^n, with W_ii = 0

• a feasible x corresponds to the partition

    {1, ..., n} = {i | x_i = −1} ∪ {i | x_i = 1}

• the coefficient W_ij can be interpreted as the cost of having elements i and j in the same set
• the objective is to find the partition with least total cost
• a classic particular instance is MAXCUT (W_ij ≥ 0)


Convex Relaxation

the original QCQP

    minimize    x^T P_0 x + q_0^T x + r_0
    subject to  x^T P_i x + q_i^T x + r_i ≤ 0,  i = 1, ..., m

can be rewritten as

    minimize    Tr(X P_0) + q_0^T x + r_0
    subject to  Tr(X P_i) + q_i^T x + r_i ≤ 0,  i = 1, ..., m
                X ⪰ xx^T
                Rank(X) = 1

the only nonconvex constraint is now Rank(X) = 1...


Convex Relaxation: Semidefinite Relaxation

• we can directly relax this last constraint, i.e., drop the nonconvex Rank(X) = 1 and keep only X ⪰ xx^T
• the resulting program,

    minimize    Tr(X P_0) + q_0^T x + r_0
    subject to  Tr(X P_i) + q_i^T x + r_i ≤ 0,  i = 1, ..., m
                X ⪰ xx^T

  gives a lower bound on the optimal value

tricky... can it be improved?
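Before answering that, here is a minimal numerical sketch of the relaxation above, in Python with CVXPY; the library choice, the toy instance (an indefinite P_0 with the single convex constraint x^T x ≤ 1), and all variable names are illustrative assumptions, not part of the original notes.

```python
# Minimal sketch: semidefinite relaxation of a small nonconvex QCQP.
# CVXPY and the random toy instance are illustrative assumptions.
import cvxpy as cp
import numpy as np

n = 5
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))
P0 = (B + B.T) / 2                 # symmetric and, in general, indefinite
q0 = rng.standard_normal(n)

# Z = [[X, x], [x^T, 1]] >= 0 is the Schur-complement form of X >= x x^T
Z = cp.Variable((n + 1, n + 1), PSD=True)
X, x = Z[:n, :n], Z[:n, n]

constraints = [Z[n, n] == 1,
               cp.trace(X) - 1 <= 0]   # relaxation of x^T x - 1 <= 0

prob = cp.Problem(cp.Minimize(cp.trace(P0 @ X) + q0 @ x), constraints)
prob.solve()                           # any SDP-capable solver
print("lower bound on the QCQP optimal value:", prob.value)
```

Every feasible x of the original problem yields a feasible pair (X, x) = (xx^T, x) with the same objective value, so prob.value can only underestimate the true optimum.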
Lagrangian Relaxation

from the original problem

    minimize    x^T P_0 x + q_0^T x + r_0
    subject to  x^T P_i x + q_i^T x + r_i ≤ 0,  i = 1, ..., m

we can form the Lagrangian

    L(x, λ) = x^T (P_0 + Σ_{i=1}^m λ_i P_i) x + (q_0 + Σ_{i=1}^m λ_i q_i)^T x + r_0 + Σ_{i=1}^m λ_i r_i

in the variables x ∈ R^n and λ ∈ R_+^m ...


Lagrangian Relaxation: Dual Function

the dual function can be computed explicitly as an (unconstrained) quadratic minimization problem, using

    inf_x x^T P x + q^T x + r = r − (1/4) q^T P† q    if P ⪰ 0 and q ∈ R(P)
                              = −∞                    otherwise

(P† is the pseudo-inverse of P, R(P) its range); we get

    inf_x L(x, λ) = −(1/4) (q_0 + Σ_{i=1}^m λ_i q_i)^T (P_0 + Σ_{i=1}^m λ_i P_i)† (q_0 + Σ_{i=1}^m λ_i q_i) + Σ_{i=1}^m λ_i r_i + r_0

where we recognize a Schur complement...


Lagrangian Relaxation: Dual

the dual of the QCQP is then given by

    maximize    γ + Σ_{i=1}^m λ_i r_i + r_0
    subject to  [ P_0 + Σ_{i=1}^m λ_i P_i         (q_0 + Σ_{i=1}^m λ_i q_i)/2 ]
                [ (q_0 + Σ_{i=1}^m λ_i q_i)^T/2                −γ             ]  ⪰ 0
                λ_i ≥ 0,  i = 1, ..., m

which is a semidefinite program in the variables γ ∈ R and λ ∈ R^m, and can be solved efficiently

let us look at what happens when we use semidefinite duality to compute the dual of this last program...


Lagrangian Relaxation: Bidual

taking the dual again, we get the SDP

    minimize    Tr(X P_0) + q_0^T x + r_0
    subject to  Tr(X P_i) + q_i^T x + r_i ≤ 0,  i = 1, ..., m
                [ X    x ]
                [ x^T  1 ]  ⪰ 0

in the variables X ∈ S^n and x ∈ R^n

• this is a convexification of the original program
• we have recovered the semidefinite relaxation in an "automatic" way


Lagrangian Relaxation: Boolean LS

using the same technique, the original Boolean LS problem

    minimize    ‖Ax − b‖²
    subject to  x_i² = 1,  i = 1, ..., n

relaxes to the SDP

    minimize    Tr(A^T A X) − 2 b^T A x + b^T b
    subject to  [ X    x ]
                [ x^T  1 ]  ⪰ 0
                X_ii = 1,  i = 1, ..., n

this program produces a lower bound on the optimal value of the original Boolean LS problem


Lagrangian Relaxation: Partitioning

the partitioning problem defined above is

    minimize    x^T W x
    subject to  x_i² = 1,  i = 1, ..., n

there is no linear term, so the variable x disappears entirely from the relaxation, which becomes

    minimize    Tr(W X)
    subject to  X ⪰ 0
                X_ii = 1,  i = 1, ..., n


Feasible points?

• Lagrangian relaxations only provide lower bounds on the optimal value
• how can we compute good feasible points?
• can we measure how suboptimal this lower bound is?


Randomization

the original QCQP

    minimize    x^T P_0 x + q_0^T x + r_0
    subject to  x^T P_i x + q_i^T x + r_i ≤ 0,  i = 1, ..., m

was relaxed into

    minimize    Tr(X P_0) + q_0^T x + r_0
    subject to  Tr(X P_i) + q_i^T x + r_i ≤ 0,  i = 1, ..., m
                [ X    x ]
                [ x^T  1 ]  ⪰ 0

• the last (Schur complement) constraint is equivalent to X ⪰ xx^T, so at the relaxation's optimum the matrix X − xx^T is a valid covariance matrix
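To make the randomization idea concrete, here is a minimal sketch for Boolean LS, again with CVXPY and a small random instance (both illustrative assumptions): solve the SDP relaxation above, draw samples z ~ N(x*, X* − x* x*^T), round each to a sign vector, and keep the best candidate.

```python
# Randomized rounding for Boolean least squares via its SDP relaxation.
# CVXPY and the random instance are illustrative assumptions.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, m_rows = 10, 20
A = rng.standard_normal((m_rows, n))
b = rng.standard_normal(m_rows)

# SDP relaxation: Z = [[X, x], [x^T, 1]] >= 0 with X_ii = 1
Z = cp.Variable((n + 1, n + 1), PSD=True)
X, x = Z[:n, :n], Z[:n, n]
objective = cp.trace(A.T @ A @ X) - 2 * (A.T @ b) @ x + b @ b
prob = cp.Problem(cp.Minimize(objective), [Z[n, n] == 1, cp.diag(X) == 1])
prob.solve()
lower_bound = prob.value

# sample z ~ N(x*, X* - x* x*^T) and round to {-1, +1}
x_star, X_star = x.value, X.value
cov = X_star - np.outer(x_star, x_star)       # PSD up to solver roundoff
best = np.inf
for _ in range(100):
    z = rng.multivariate_normal(x_star, cov, check_valid="ignore")
    x_hat = np.where(z >= 0, 1.0, -1.0)       # always feasible
    best = min(best, np.linalg.norm(A @ x_hat - b) ** 2)

print(f"SDP lower bound: {lower_bound:.3f}, best rounded value: {best:.3f}")
```

Each x_hat is feasible by construction, so best is an upper bound on the optimal value; together with lower_bound it sandwiches the unknown optimum and bounds the suboptimality of the returned point.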
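The same relax-then-randomize pattern applies to the partitioning problem, whose relaxation has no x variable at all; rounding the signs of Gaussian samples drawn with covariance X* is the hyperplane-rounding heuristic used for MAXCUT, where (a fact due to Goemans and Williamson, not derived in these notes) the rounded cut is within a factor 0.878 of optimal in expectation. A sketch under the same CVXPY assumption:

```python
# Relax-and-round for two-way partitioning: minimize x^T W x, x_i in {-1, 1}.
# CVXPY and the random cost matrix W are illustrative assumptions.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n = 12
G = rng.random((n, n))
W = (G + G.T) / 2
np.fill_diagonal(W, 0.0)                     # W in S^n with W_ii = 0

# SDP relaxation: the variable x drops out entirely
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(W @ X)),
                  [X >> 0, cp.diag(X) == 1])
prob.solve()
lower_bound = prob.value

# randomized rounding: sample z ~ N(0, X*) and take signs
X_star = X.value
best = np.inf
for _ in range(100):
    z = rng.multivariate_normal(np.zeros(n), X_star, check_valid="ignore")
    x_hat = np.where(z >= 0, 1.0, -1.0)
    best = min(best, x_hat @ W @ x_hat)

print(f"lower bound: {lower_bound:.3f}, best partition found: {best:.3f}")
```

Since the optimal value lies in the interval [lower_bound, best], the gap best − lower_bound bounds the suboptimality of the best partition found.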