MthSc-440/640: Linear Programming, Lecture 7
Pietro Belotti
Dept. of Mathematical Sciences, Clemson University
September 14, 2010

Reading for today: pages 45-53, textbook
Reading for Thursday: pages 54-65, textbook

Restricting the search to basic solutions

The simplex method iterates through a finite (modulo cycling) set of solutions, each defined by a basis. What about all the other (infinitely many) feasible solutions that satisfy the linear constraints?

The fundamental theorem of linear programming. If an LP problem max{c⊤x : Ax ≤ b, x ≥ 0}
◮ has no optimal solution ⇒ it is infeasible or unbounded;
◮ has a feasible solution ⇒ it has a basic feasible solution;
◮ has an optimal solution ⇒ it has a basic optimal solution.

How fast is the simplex method?

Consider an LP in standard form identified by (n, m, c, A, b).
◮ How much time does it take to solve this problem?
◮ "Time" is hard to measure (it depends on computer speed, etc.).
◮ However, the sequence of iterations doesn't change [1].
⇒ A better way to ask the question:

Q. How many simple steps (e.g. sums, products, etc.) are needed to find an optimal solution?

◮ The answer would be a function f(n, m, c, A, b), still too complicated.

A simpler (and more approachable) question:

Q. Given m and n, what is the maximum (i.e., worst-case) number of steps needed to solve an LP problem with n variables and m constraints? That is, what is

T(n, m) := max_{A,b,c} f(n, m, c, A, b)?

◮ This is called the worst-case complexity of an algorithm.

[1] Assuming we are given the same instructions.

Measuring an algorithm's complexity

◮ We are interested in performance for large input sizes.
⇒ We need only compare asymptotic growth rates, i.e.
we are interested in lim_{n→+∞} f(n), where f is a function of n, and n is the size of the problem.

◮ Consider algorithm A with running time f and algorithm B with running time g.
◮ We are interested in knowing

L = lim_{n→∞} f(n)/g(n).

◮ What are the four possibilities?
  ◮ L = 0: g grows faster than f.
  ◮ L = ∞: f grows faster than g.
  ◮ L = c > 0: f and g grow at the same rate.
  ◮ The limit doesn't exist.

Θ notation

◮ We now define the set

Θ(g) = {f : ∃ c1, c2, n0 > 0 such that c1 g(n) ≤ f(n) ≤ c2 g(n) ∀ n ≥ n0}.

◮ If f ∈ Θ(g), we say that f and g grow at the same rate, or that they are of the same order.
◮ Note that f ∈ Θ(g) ⇔ g ∈ Θ(f).
◮ If lim_{n→∞} f(n)/g(n) = c for some constant c > 0, then f ∈ Θ(g).

Big-O notation

◮ We now define the set of functions

O(g) = {f : ∃ c, n0 > 0 such that f(n) ≤ c g(n) ∀ n ≥ n0}.

◮ If f ∈ O(g), we say that "f is big-O of g", or that f grows no faster than g.
◮ Note that f ∈ O(g) ⇔ g ∈ Ω(f).

Big-Ω notation

Ω(g) = {f : ∃ constants c, n0 > 0 s.t. 0 ≤ c g(n) ≤ f(n) ∀ n ≥ n0}.

◮ f ∈ Ω(g) means that g is an asymptotic lower bound on f:
◮ f grows at least as fast as g.

Note:
◮ f ∈ Θ(g) ⇔ f ∈ O(g) and f ∈ Ω(g).
◮ f ∈ Ω(g) ⇔ g ∈ O(f).

The upshot:
◮ f ∈ O(g) is like "f ≤ g",
◮ f ∈ Ω(g) is like "f ≥ g",
◮ f ∈ Θ(g) is like "f = g".

Examples

Some functions in O(n^2):
◮ n^2
◮ n^2 + n
◮ n^2 + 1000n
◮ 1000n^2 + 1000n
◮ n
◮ n^1.9999
◮ n^2 / lg lg n

Some functions in Ω(n^2):
◮ n^2
◮ n^2 + n
◮ n^2 + 1000n
◮ 1000n^2 + 1000n
◮ n^3
◮ n^2.0001
◮ n^2 lg lg n

Which of these are in Θ(n^2)? (Exactly the first four, which appear in both lists.)

Commonly occurring functions

Polynomials
◮ f(n) = a_k n^k + · · · + a_1 n + a_0 is a polynomial of degree k (a_k ≠ 0).
◮ Polynomials f of degree k (with a_k > 0) are in Θ(n^k).

Exponentials
◮ A function in which n appears as an exponent on a constant, e.g. 2^n, is an exponential function.
◮ For all positive constants a and all b > 1, lim_{n→∞} n^a / b^n = 0.
⇒ All exponential functions grow faster than polynomials.

Comparing algorithms

[Figure: plot of n^50 and e^n for n from 0 to 6×10^5, value axis on a log scale from 10^0 to 10^300: e^n eventually dominates n^50.]

Measuring the difficulty of problems

◮ The difficulty of a problem can be judged by the (worst-case) running time of the best-known algorithm.
◮ Problems for which there is an algorithm
with polynomial running time (or better) are called polynomially solvable.
◮ Generally, these problems are considered to be easy.
◮ They define the complexity class P.
◮ In general, if a problem P admits a polynomial-time algorithm, i.e., if P ∈ P, that problem is "easy" (even if the corresponding complexity is f(n) = n^500).

Is Linear Programming "easy"?

The answer would be "Yes" if we knew an algorithm that, for any n and m, solves an LP in a number of steps polynomial in n and m.

This brings us back to our initial question: for given m and n, what is the maximum (i.e., worst-case) number of steps needed [by the simplex method] to solve an LP problem with n variables and m constraints?

Unfortunately, the worst-case complexity of the simplex algorithm is exponential.

The Klee-Minty example

When the simplex method uses the largest-coefficient rule, the LP

max 100x1 + 10x2 + x3
s.t. x1 ≤ 1
     20x1 + x2 ≤ 100
     200x1 + 20x2 + x3 ≤ 10,000
     x1, x2, x3 ≥ 0

is solved with 7 pivots, which explore all basic feasible solutions. In general, this LP with n variables and n constraints takes 2^n − 1 iterations:

max Σ_{j=1}^{n} 10^(n−j) xj
s.t. xi + 2 Σ_{j=1}^{i−1} 10^(i−j) xj ≤ 100^(i−1)  ∀ i = 1, 2, ..., n
     xi ≥ 0  ∀ i = 1, 2, . . .
, n.

We have to visit 2^n dictionaries before claiming optimality.

The Klee-Minty example (cont.)

Initial dictionary:
  x4 = 1 − x1
  x5 = 100 − 20x1 − x2
  x6 = 10000 − 200x1 − 20x2 − x3
  z  = 0 + 100x1 + 10x2 + x3

x1 enters, x4 leaves:
  x1 = 1 − x4
  x5 = 80 − x2 + 20x4
  x6 = 9800 − 20x2 − x3 + 200x4
  z  = 100 + 10x2 + x3 − 100x4

x2 enters, x5 leaves:
  x1 = 1 − x4
  x2 = 80 + 20x4 − x5
  x6 = 8200 − x3 − 200x4 + 20x5
  z  = 900 + x3 + 100x4 − 10x5

x4 enters, x1 leaves:
  x4 = 1 − x1
  x2 = 100 − 20x1 − x5
  x6 = 8000 + 200x1 − x3 + 20x5
  z  = 1000 − 100x1 + x3 − 10x5

x3 enters, x6 leaves:
  x4 = 1 − x1
  x2 = 100 − 20x1 − x5
  x3 = 8000 + 200x1 + 20x5 − x6
  z  = 9000 + 100x1 + 10x5 − x6

x1 enters, x4 leaves:
  x1 = 1 − x4
  x2 = 80 + 20x4 − x5
  x3 = 8200 − 200x4 + 20x5 − x6
  z  = 9100 − 100x4 + 10x5 − x6

x5 enters, x2 leaves:
  x1 = 1 − x4
  x5 = 80 − x2 + 20x4
  x3 = 9800 − 20x2 + 200x4 − x6
  z  = 9900 − 10x2 + 100x4 − x6

x4 enters, x1 leaves:
  x4 = 1 − x1
  x5 = 100 − 20x1 − x2
  x3 = 10000 − 200x1 − 20x2 − x6
  z  = 10000 − 100x1 − 10x2 − x6

Optimal. We needed 2^3 − 1 = 7 pivots and visited all 8 dictionaries, corresponding to all vertices of the polyhedron.

Complexity of the simplex method

Therefore, the simplex method has exponential worst-case complexity (bummer!).
◮ That does not mean that Linear Programming problems are not polynomially solvable.
◮ An algorithm for Linear Programming, known as the Ellipsoid Algorithm, has polynomial worst-case complexity.
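The 7-pivot count above can be reproduced mechanically. Below is a minimal sketch (my own code, not the lecture's) of a tableau simplex for max{c⊤x : Ax ≤ b, x ≥ 0} with b ≥ 0, using the largest-coefficient entering rule and the minimum-ratio leaving rule; exact Fraction arithmetic avoids rounding issues, and the name simplex_dantzig is my choice.

```python
from fractions import Fraction

def simplex_dantzig(c, A, b):
    """Tableau simplex with the largest-coefficient (Dantzig) entering rule.
    Maximizes c.x subject to A.x <= b, x >= 0, assuming b >= 0."""
    m, n = len(A), len(c)
    N = n + m  # original variables plus one slack per constraint
    # Constraint rows [A | I], right-hand sides, and reduced costs -c.
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if k == i else 0) for k in range(m)] for i in range(m)]
    rhs = [Fraction(bi) for bi in b]
    z = [Fraction(-cj) for cj in c] + [Fraction(0)] * m
    zval = Fraction(0)
    pivots = 0
    while True:
        e = min(range(N), key=lambda j: z[j])  # most negative reduced cost
        if z[e] >= 0:
            return zval, pivots                # optimal
        ratios = [(rhs[i] / T[i][e], i) for i in range(m) if T[i][e] > 0]
        if not ratios:
            raise ValueError("unbounded")
        r = min(ratios)[1]                     # minimum-ratio leaving row
        piv = T[r][e]                          # pivot on entry (r, e)
        T[r] = [t / piv for t in T[r]]
        rhs[r] /= piv
        for i in range(m):
            if i != r and T[i][e] != 0:
                f = T[i][e]
                T[i] = [T[i][j] - f * T[r][j] for j in range(N)]
                rhs[i] -= f * rhs[r]
        f = z[e]                               # update the objective row
        z = [z[j] - f * T[r][j] for j in range(N)]
        zval -= f * rhs[r]
        pivots += 1

# The n = 3 Klee-Minty instance from the lecture:
zstar, pivots = simplex_dantzig(
    [100, 10, 1],
    [[1, 0, 0], [20, 1, 0], [200, 20, 1]],
    [1, 100, 10000])
print(zstar, pivots)  # → 10000 7
```

Other pivot rules give different iteration counts on this family; the 2^n − 1 behavior shown here is specific to the largest-coefficient rule.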
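The claim that the 8 dictionaries cover all vertices of the polyhedron can be cross-checked by brute force, in the spirit of the fundamental theorem: enumerate every choice of 3 basic columns out of 6 in the equality form, solve the resulting 3×3 system, and keep the nonnegative (feasible) basic solutions. A sketch under the same assumptions (pure Python, exact arithmetic; the helper name solve3 is mine):

```python
from fractions import Fraction
from itertools import combinations

# Klee-Minty n = 3 in equality form A x = b, with slacks x4, x5, x6.
A = [[1,   0,  0, 1, 0, 0],
     [20,  1,  0, 0, 1, 0],
     [200, 20, 1, 0, 0, 1]]
b = [1, 100, 10000]

def solve3(M, rhs):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with Fractions.
    Returns None if the basis matrix is singular."""
    M = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for col in range(3):
        p = next((i for i in range(col, 3) if M[i][col] != 0), None)
        if p is None:
            return None                       # singular: no basic solution
        M[col], M[p] = M[p], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for i in range(3):
            if i != col and M[i][col] != 0:
                f = M[i][col]
                M[i] = [v - f * w for v, w in zip(M[i], M[col])]
    return [M[i][3] for i in range(3)]

bfs = set()
for cols in combinations(range(6), 3):        # choose 3 basic columns
    xB = solve3([[A[i][j] for j in cols] for i in range(3)], b)
    if xB is not None and min(xB) >= 0:       # basic AND feasible
        x = [Fraction(0)] * 6
        for j, v in zip(cols, xB):
            x[j] = v
        bfs.add(tuple(x))
print(len(bfs))  # → 8
```

The 8 basic feasible solutions found this way are exactly the vertices visited by the 8 dictionaries above (e.g. the optimal one, x = (0, 0, 10000) with slacks (1, 100, 0)); this enumeration grows as C(n+m, m), which is why the simplex method restricts the search instead of listing all bases.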