Chapter 7: Optimal Dispatch of Generation, Part I

7.1 Introduction

When generators are interconnected to supply various loads, the total generation capacity is usually much larger than the total load, so the allocation of load among the generators can be varied. Since it is important to keep the cost of electricity as low as possible, the optimal division of load, meaning the least expensive one, is desired. The largest cost of generation is the fuel cost; other expenses include labor, repairs, and maintenance. These are the economic factors. Factors not considered in this presentation include security, stability, and aesthetics.

7.2 Nonlinear Function Optimization: Unconstrained Parameter Optimization

A necessary condition for the function f(x) = f(x_1, x_2, \ldots, x_n) to have a minimum is obtained by setting the first partial derivatives to zero:

\frac{\partial f}{\partial x_i} = 0, \qquad i = 1, 2, \ldots, n

or, using the gradient operator,

\nabla f = \left[ \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right]^T = 0,

where \nabla f is known as the gradient of f. The second partial derivatives

H_{ij} = \frac{\partial^2 f}{\partial x_i \, \partial x_j}

form a symmetric matrix H called the Hessian matrix of the function f(x).

Suppose \nabla f(x^*) = 0, where x^* = (x_1^*, x_2^*, \ldots, x_n^*) is a local extremum. For this point to be a minimum, the Hessian matrix must be positive definite there. This can be checked by finding the eigenvalues of the Hessian matrix, all of which should be positive at x^*. Two Matlab operations are very useful in this process: the solution of a linear system Ax = b is found as x = A\b, and the eigenvalues of H are found using the function eig(H).

Example 7.1
Find the minimum of

f(x) = x_1^2 + 2x_2^2 + 3x_3^2 + x_1 x_2 + x_2 x_3 - 8x_1 - 16x_2 - 32x_3 + 110.

Setting the first derivatives to zero results in three linear algebraic equations:

\frac{\partial f}{\partial x_1} = 2x_1 + x_2 - 8 = 0
\frac{\partial f}{\partial x_2} = x_1 + 4x_2 + x_3 - 16 = 0
\frac{\partial f}{\partial x_3} = x_2 + 6x_3 - 32 = 0

or, in matrix form,

\begin{bmatrix} 2 & 1 & 0 \\ 1 & 4 & 1 \\ 0 & 1 & 6 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 8 \\ 16 \\ 32 \end{bmatrix}

Using Matlab:

    A = [2 1 0; 1 4 1; 0 1 6];
    b = [8; 16; 32];
    x = A\b
    eigenvalues = eig(A)

which returns

    x = [3.0000; 2.0000; 5.0000]
    eigenvalues = [1.5505; 4.0000; 6.4495]

The coefficient matrix A is also the Hessian of f, and since its eigenvalues are all positive, x^* = (3, 2, 5) is a minimum, in this case a global
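If Matlab is not at hand, Example 7.1 can be reproduced with a short Python/NumPy sketch (an equivalent computation, not part of the original text): numpy.linalg.solve plays the role of A\b, and numpy.linalg.eigvalsh computes the eigenvalues of the symmetric Hessian.

```python
import numpy as np

# Coefficient matrix of the linear equations from grad(f) = 0;
# for this quadratic f it is also the (constant) Hessian of f.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 6.0]])
b = np.array([8.0, 16.0, 32.0])

x_star = np.linalg.solve(A, b)   # stationary point, analogous to x = A\b
eigs = np.linalg.eigvalsh(A)     # eigenvalues of the symmetric Hessian, ascending

print(x_star)   # approximately [3, 2, 5]
print(eigs)     # approximately [1.5505, 4.0, 6.4495], all positive -> minimum
```

Since all three eigenvalues are positive, the Hessian is positive definite and the stationary point is confirmed as a minimum, matching the Matlab result.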
minimum, since there is only one extremum.

7.2.1 Constrained Parameter Optimization: Equality Constraints

Here we minimize f(x) subject to the constraints g_i(x) = 0, i = 1, 2, \ldots, k. Such problems can be solved using Lagrange multipliers \lambda_i; thus we let

\mathcal{L} = f + \sum_{i=1}^{k} \lambda_i g_i.

The necessary conditions for a constrained local minimum of \mathcal{L} are

\frac{\partial \mathcal{L}}{\partial x_i} = \frac{\partial f}{\partial x_i} + \sum_{j=1}^{k} \lambda_j \frac{\partial g_j}{\partial x_i} = 0

\frac{\partial \mathcal{L}}{\partial \lambda_i} = g_i = 0

Note that the last set of equations simply restates the original constraints.

Example 7.2
Use the Lagrange multiplier method to determine the minimum distance from the origin of the xy-plane to the circle described by

(x - 8)^2 + (y - 6)^2 = 25.

The (squared) distance to be minimized is f(x, y) = x^2 + y^2. First we use Matlab to display the two curves, from which it is clear that the answer is at (4, 3); note there is also a maximum at (12, 9):

    wt = 0:0.01:2*pi;
    z = 8 + j*6 + 5*(cos(wt) + j*sin(wt));
    x = 0:0.01:12; y = 6/8*x;
    plot(real(z), imag(z), x, y), grid
    xlabel('x'), ylabel('y')
    axis([0 14 0 14])

[Figure: the circle of radius 5 centered at (8, 6) and the line y = 3x/4 through the origin, intersecting at (4, 3) and (12, 9).]

Now we minimize f(x, y) subject to the constraint described by the circle equation. We first form the Lagrange function

\mathcal{L} = x^2 + y^2 + \lambda \left[ (x - 8)^2 + (y - 6)^2 - 25 \right].

The necessary conditions for an extremum are

\frac{\partial \mathcal{L}}{\partial x} = 2x + 2\lambda(x - 8) = 0, \quad \text{or} \quad 2x(1 + \lambda) = 16\lambda
\frac{\partial \mathcal{L}}{\partial y} = 2y + 2\lambda(y - 6) = 0, \quad \text{or} \quad 2y(1 + \lambda) = 12\lambda
\frac{\partial \mathcal{L}}{\partial \lambda} = (x - 8)^2 + (y - 6)^2 - 25 = 0

Dividing the first equation by the second yields y = (3/4)x. Using this in the third equation yields

\frac{25}{16} x^2 - 25x + 75 = 0.

Solving the quadratic gives x = 4 and x = 12. Thus the extrema are at the point (4, 3) with \lambda = 1 and at (12, 9) with \lambda = -3. From the figure it is clear that the first point is the minimum.

In many situations the equations are not only nonlinear but also cannot be solved analytically, so the solution has to be obtained by iteration. The most common iterative method is the Newton-Raphson method, applied below to the above problem. Solving the first two equations for x and y in terms of \lambda gives

x = \frac{8\lambda}{1 + \lambda}, \qquad y = \frac{6\lambda}{1 + \lambda}.

Using these values in the third equation yields a single nonlinear equation in \lambda:

f(\lambda) = \frac{100\lambda^2}{(1 + \lambda)^2} - \frac{200\lambda}{1 + \lambda} + 75 = 0.

This equation can be solved by
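As a numerical cross-check on Example 7.2 (a Python/NumPy sketch, not part of the original text), the quadratic can be solved and the multipliers recovered from the relations y = 3x/4 and 2x(1 + lambda) = 16*lambda derived above:

```python
import numpy as np

# Roots of the quadratic (25/16)x^2 - 25x + 75 = 0 from the constraint
xs = np.sort(np.roots([25.0 / 16.0, -25.0, 75.0]))   # expect x = 4 and x = 12

for x in xs:
    y = 0.75 * x                       # from dividing the stationarity equations
    lam = 2.0 * x / (16.0 - 2.0 * x)   # from 2x(1 + lam) = 16*lam
    d = np.hypot(x, y)                 # distance from the origin
    print(x, y, lam, d)
```

The two extrema come out at distances 5 and 15 from the origin, consistent with a circle whose center (8, 6) lies at distance 10 from the origin and whose radius is 5.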
iteration as follows, using the Newton-Raphson method. Retaining only the first-order term of a Taylor series expansion gives

\Delta\lambda^{(k)} = \frac{\Delta f^{(k)}}{\left( df/d\lambda \right)^{(k)}}, \qquad \lambda^{(k+1)} = \lambda^{(k)} + \Delta\lambda^{(k)},

where \Delta f^{(k)} is the residual at iteration k, given by \Delta f^{(k)} = 0 - f(\lambda^{(k)}). If this residual is zero the solution is exact; otherwise we require the residual to satisfy a very small convergence tolerance. The iteration is started with an estimated value of \lambda and continued until the specified accuracy is reached. Once \lambda is known, x and y can be computed from the equations above. One disadvantage of this method is the need to find the derivative of the function, which in this case is

\frac{df}{d\lambda} = \frac{200\lambda}{(1 + \lambda)^3} - \frac{200}{(1 + \lambda)^2}.

Given an initial estimate \lambda^{(0)}, we compute x^{(0)} and y^{(0)}, and hence the residual

\Delta f^{(0)} = -\left[ \left( x^{(0)} - 8 \right)^2 + \left( y^{(0)} - 6 \right)^2 - 25 \right].

Dividing by (df/d\lambda)^{(0)} gives \Delta\lambda^{(0)}, and thus the new value \lambda^{(1)} = \lambda^{(0)} + \Delta\lambda^{(0)}. The process is repeated until the error in evaluating f is less than some prescribed value. The following Matlab program performs the iteration:

    clear
    iter = 0;                 % Iteration counter
    Df = 10;                  % Residual Df is set to a high value
    Lambda = 0.4;             % Initial estimated value of Lambda
    disp('Iter      Df          J        DLambda      Lambda       x          y')
    while abs(Df) >= 0.0001   % Test for convergence
      iter = iter + 1;        % No. of iterations
      x = 8*Lambda/(Lambda + 1);
      y = 6*Lambda/(Lambda + 1);
      Df = (x - 8)^2 + (y - 6)^2 - 25;                      % Residual
      J = 200*Lambda/(Lambda + 1)^3 - 200/(Lambda + 1)^2;   % Derivative df/dLambda
      Delambda = -Df/J;       % Change in variable
      disp([iter, Df, J, Delambda, Lambda, x, y])
      Lambda = Lambda + Delambda;   % Successive ...
    end
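The same Newton-Raphson loop can be sketched in Python (an assumed transcription, not part of the original text, using the same starting value lambda = 0.4 and tolerance 10^-4 as the Matlab listing):

```python
lam = 0.4    # initial estimate of lambda
df = 10.0    # residual, initialized high so the loop is entered
it = 0

while abs(df) >= 1e-4:          # test for convergence
    it += 1
    x = 8.0 * lam / (lam + 1.0)
    y = 6.0 * lam / (lam + 1.0)
    df = (x - 8.0) ** 2 + (y - 6.0) ** 2 - 25.0                   # residual f(lambda)
    J = 200.0 * lam / (lam + 1.0) ** 3 - 200.0 / (lam + 1.0) ** 2  # df/dlambda
    lam += -df / J              # Newton step: lambda <- lambda - f/f'

# Recover the point on the circle from the converged multiplier
x = 8.0 * lam / (lam + 1.0)
y = 6.0 * lam / (lam + 1.0)
print(it, lam, x, y)   # converges to lambda = 1 and the point (4, 3)
```

Starting from lambda = 0.4, the iterates move monotonically toward the root lambda = 1 of f(lambda), recovering the minimum-distance point (4, 3) found analytically above.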