Geometric Interpretation
•The constraints determine the set of feasible solutions. This is a polyhedron, the higher-dimensional generalization of a 2-dimensional polygon.
•Finding the maximum of a linear objective function of the form Z = cx over this polyhedron essentially means finding a vertex of the polyhedron that lies farthest in the direction determined by the vector c.

Solving LP
[Figure: the optimum solution is attained at a vertex of the polyhedron, the farthest point in the direction of c.]
•In case there are only two variables, the problem can be represented graphically and solved in the plane.
•Graphical solution: after finding the polygonal boundary of the feasible domain D, as illustrated in the figure (not reproduced here), we "push" the line 3x1 + 3x2 = a, representing the objective function, as far as possible so that it still intersects D. The optimum is attained at a vertex of the polygon.
•If, however, as is typical in applications, there are many variables, this simple graphical approach does not work; more systematic methods are needed.
•Finding the optimal solution in Linear Programming takes relatively complex algorithms. Studying the details of LP algorithms is beyond the scope of this course, since in most cases the network designer can apply off-the-shelf commercial software. A lot of freeware is also available on the Internet.

Some Historical Perspective
•The first and most widely used LP algorithm is the Simplex Method of Dantzig, published in 1951.
The key idea of the method is to explore the vertices of the polyhedron, moving along edges, until an optimal vertex is reached.
•There are many variants of the Simplex Method, and they usually work fast in practice. In pathological worst cases, however, they may take exponential running time.
•It was a long-standing open problem whether linear programming could be solved by a polynomial-time algorithm at all in the worst case. The two most important discoveries in this area were the following:
–The first polynomial-time LP algorithm was published by Khachiyan in 1979. This result was considered a theoretical breakthrough, but it was not very practical.
–A practically better algorithm was found by Karmarkar in 1984. This is a so-called interior point method that starts from a point inside the polyhedron and proceeds towards the optimum in a step-by-step descent fashion. Many variants, improvements and implementations were later elaborated, and it now has practical performance similar to the Simplex Method, while guaranteeing polynomially bounded worst-case running time.
•It is interesting that, after more than half a century, a major problem is still open in the world of LP algorithms: does there exist an algorithm that solves LP such that the worst-case running time is bounded by a polynomial in the number of variables and constraints only, independently of how large the numbers occurring in them are? (Counting elementary arithmetic operations as single steps.) Such an algorithm is called a strongly polynomial-time algorithm. Neither Khachiyan's nor Karmarkar's method has this feature.
•LP solvers of different sorts are available as commercial software, or even as freeware. Thus, the network designer typically does not have to develop his/her own LP solver.
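Before reaching for a full solver, the two-variable geometric picture described earlier can be checked directly in code: every candidate optimum is a vertex, i.e. an intersection of two constraint boundary lines. The sketch below (plain Python, no LP library) enumerates those intersections for an assumed feasible polygon — the constraint numbers are illustrative, not taken from the slides' figure — and maximizes the slides' objective 3x1 + 3x2.

```python
from itertools import combinations

# Each constraint is a1*x1 + a2*x2 <= b; non-negativity is included
# explicitly as -x1 <= 0 and -x2 <= 0. These particular numbers are an
# assumed illustration -- the slides' own figure is not reproduced.
constraints = [
    (1.0, 0.0, 4.0),    # x1 <= 4
    (0.0, 1.0, 3.0),    # x2 <= 3
    (1.0, 2.0, 8.0),    # x1 + 2*x2 <= 8
    (-1.0, 0.0, 0.0),   # x1 >= 0
    (0.0, -1.0, 0.0),   # x2 >= 0
]

def solve_2d_lp(c1, c2, cons):
    """Maximize c1*x1 + c2*x2 by enumerating the polygon's vertices."""
    best = None
    for (a1, a2, b), (d1, d2, e) in combinations(cons, 2):
        det = a1 * d2 - a2 * d1
        if abs(det) < 1e-12:              # parallel boundaries: no vertex
            continue
        x1 = (b * d2 - a2 * e) / det      # Cramer's rule for the 2x2 system
        x2 = (a1 * e - b * d1) / det
        if all(p * x1 + q * x2 <= r + 1e-9 for p, q, r in cons):
            z = c1 * x1 + c2 * x2
            if best is None or z > best[0]:
                best = (z, x1, x2)
    return best

z, x1, x2 = solve_2d_lp(3.0, 3.0, constraints)  # the slides' objective 3x1 + 3x2
print(z, x1, x2)  # 18.0 4.0 2.0
```

"Pushing" the line 3x1 + 3x2 = a outward, as the graphical method does, ends exactly at this best vertex. With many variables the number of vertex candidates explodes combinatorially, which is why systematic methods such as the Simplex algorithm are needed.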
Once a problem is formulated as a linear programming task, off-the-shelf software can readily be used.

LP continued …..
•Linear constraints are of the form:
–a1x1 + a2x2 + a3x3 + ... >= minimum
–a1x1 + a2x2 + a3x3 + ... <= maximum
where minimum and maximum are constants.
•lp_solve can only handle linear constraints of this kind.

Linear Fractional Programming
•A fractional constraint has the form:
(a11x1 + a12x2 + a13x3 + ...) / (a21x1 + a22x2 + a23x3 + ...) >= minimum
•If the denominator is always positive, both sides can be multiplied by it without flipping the inequality:
a11x1 + a12x2 + a13x3 + ... - minimum * (a21x1 + a22x2 + a23x3 + ...) >= 0
(a11 - minimum * a21)x1 + (a12 - minimum * a22)x2 + (a13 - minimum * a23)x3 + ... >= 0
•If the denominator is always negative, the multiplication flips the inequality, and the same expression must be <= 0 instead.

LFP continued …..
•Either way, we now have a linear constraint:
b11x1 + b12x2 + b13x3 + ... >= 0, where b1j = a1j - minimum * a2j.

LFP continued …..
•Linear programming only accepts models with equations of the first degree.
•The objective function, however, has a numerator and a denominator, so at first sight this cannot be solved with pure linear programming.
•However, there is a trick to overcome this problem: the model can be transformed into another model that is purely linear.
•When a solution to the transformed model is found, the results can be recalculated back to the original model.
•There is only one condition to make this possible: the denominator must be strictly positive (or strictly negative, but in that case you can multiply numerator and denominator by -1 so that the denominator becomes positive):
d0 + d1x1 + d2x2 + d3x3 + ... > 0
•Again note the > sign: the denominator may not become zero either. If the transformed model returns a solution in which it is zero, that solution is invalid.

LFP continued …..
•Introduce a new variable y0 = 1 / (d0 + d1x1 + d2x2 + d3x3 + ...); it is well defined because the denominator is strictly positive.
•Now also make the following substitution:
yj = xj y0
•Also put the bi y0 term to the left. The transformed model is:
max c0 y0 + c1 y1 + c2 y2 + c3 y3 + ...
s.t. -bi y0 + ai1 y1 + ai2 y2 + ai3 y3 + ... <= 0
d0 y0 + d1 y1 + d2 y2 + d3 y3 + ... = 1
yj >= 0
All yj are variables (j starting from 0).

LFP continued …..
•This transformed model is an exact transformation of the original model, but with the advantage that it is a purely linear model.
•Note that the model has one extra variable (y0), whose coefficients in the matrix are the negatives of the right-hand sides (-bi y0).
•A constraint is also added requiring the constant term of the denominator times the new variable (d0 y0), plus the denominator terms involving the transformed variables, to equal 1.
•The transformed model uses the same aij's as the original. Its right-hand sides are all 0, except for the 1 in the new constraint.
•The objective function does not contain the denominator any more: it consists of the numerator coefficients cj, with c0 as the coefficient of the new variable y0.
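The Charnes-Cooper transformation described above is mechanical enough to sketch in code. The helper below builds the transformed LP data (the objective coefficients, the -bi y0 constraint rows, and the denominator row that must equal 1) from an original fractional model, and recovers x via xj = yj / y0. The example numbers are assumed for illustration, not taken from the slides.

```python
# A minimal sketch of the Charnes-Cooper transformation, assuming the
# original model is: max (c0 + c.x) / (d0 + d.x), s.t. A x <= b, x >= 0,
# with a strictly positive denominator. All example numbers are assumed.

def charnes_cooper(c, d, A, b):
    """Build the transformed LP over y = [y0, y1, ...]:
    max c0*y0 + c1*y1 + ...   s.t.  -bi*y0 + ai1*y1 + ... <= 0,
    d0*y0 + d1*y1 + ... = 1,  yj >= 0."""
    obj = list(c)                                      # numerator coefficients
    rows = [[-bi] + list(ai) for ai, bi in zip(A, b)]  # -bi*y0 moved to the left
    eq_row = list(d)                                   # denominator row, must = 1
    return obj, rows, eq_row

def recover_x(y):
    """Map a transformed solution back to the original: xj = yj / y0."""
    return [yj / y[0] for yj in y[1:]]

# Illustrative instance: max (1 + 2*x1 + x2) / (1 + x1 + x2), x1 + x2 <= 4.
obj, rows, eq_row = charnes_cooper([1, 2, 1], [1, 1, 1], [[1, 1]], [4])
print(obj, rows, eq_row)  # [1, 2, 1] [[-4, 1, 1]] [1, 1, 1]

# Sanity check: any feasible x maps to a y that satisfies d.y = 1, and
# dividing by y0 recovers x.
x = [4.0, 0.0]
y0 = 1.0 / (1 + x[0] + x[1])
y = [y0] + [xj * y0 for xj in x]
assert abs(sum(di * yi for di, yi in zip(eq_row, y)) - 1.0) < 1e-9
assert all(abs(a - b) < 1e-9 for a, b in zip(recover_x(y), x))
```

If the LP solver returns a solution with y0 = 0, the division back to x is undefined, which matches the warning above that such a solution is invalid.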